Proun stood for ‘project for the affirmation of the new’. It may remind us to remain sceptical in the face of claims that revolutionary change is upon us, requiring us to disrupt and break ‘traditional’ legal frameworks to better enable technological innovation. Let’s first find out what the current incarnation of law is all about.
This book aims to introduce law to computer scientists. For that reason, it serves as a textbook, providing an overview of the practice and study of law for a specific audience. Teaching law to computer scientists will always be an attempt, an ‘essay’, to bridge the disciplinary gaps between two scientific practices that each have their own methodological demands and constraints. This book probes the middle ground, aiming to present a reasonably coherent picture of the vocabulary and grammar of modern positive law. It is geared to those who have no wish to become lawyers but are nevertheless forced to consider the salience of legal rights and obligations with regard to the construction, maintenance and protection of computational artefacts. It aims to raise awareness and provide proper information about these legal rights and obligations, not just with regard to computer scientists themselves, but also with regard to those who will suffer or enjoy the results of their constructions. The latter is often considered under the heading of ethics; here it is studied from the perspective of law, explaining the legal rights and obligations involved. It is therefore not a matter of individual moral preferences or intellectual reflection, but a matter of confronting ‘what law does’ when such rights and obligations are violated.
In this introduction I will briefly situate the rise of modern positive law as an affordance of a specific information and communication technology (ICT), namely the printing press. This will be followed by an outline of the book.
Though many assume that law and computer science are miles apart as scientific disciplines and professional practices, this book takes another position. It is built on the fact that both law and computer science are about architecture, rather than merely about rules (and principles). Architecture refers to three aspects of both law and computing systems:
the fact of being constructed (artificial) rather than natural,
the relational and high-dimensional nature of whatever is constructed, and,
the double ecological nature of the construct:
a. as it has to survive in a specific (often dynamic) environment,
b. while the construction itself forms the environment for its inhabitants.
A house, a legal system and a computing system all have an architecture that determines how the various parts (rooms, legal domains, modules) hold together, interact and support each other. Architecture refers to physical, institutional and computational design that determines the strength and sustainability of the construct, involving both hardware (walls, books, silicon chips) and software (the mapping of space to functions such as eating, working, sleeping; the positivity of the law; the program or algorithm). The high-dimensionality of the architecture of both law and computer science implies that choices made at any point of the system will ripple through the entire system, resulting in bugs or new features, requiring vigilance as to the dynamics that are inherent in any complex construct, including network effects and unintended consequences. A supreme court that overrules precedent will cause numerous subtle or not so subtle changes in the interpretation of the law by lower courts that need to anticipate how their verdicts will fare. This will in turn trigger adaptations in the conduct of those subject to these courts and may also trigger interventions on the side of legislators or regulators. Law is a complex construct, with a plethora of interlinked, hyperlinked and deep-linked connections between its various nodes: treaties, statutes, case law, principles and policies, within and across legal domains such as private law, public law and criminal law.
Though we can hardly imagine what it is like to live in a world without text, text is a recent invention. Homo sapiens is thought to have emerged around 200,000 BC, while script was supposedly invented around 3100 BC. Most human societies have thus been oral, meaning that communication was mainly face-to-face. The architecture of ‘speakerspace’ societies is an affordance of human language. This obviously limits the reach of language as a means to hold together society, both in space (groupings were necessarily small) and in time (cross-generational learning depended on word of mouth and durable artefacts). These were non-state, mostly nomadic societies, their livelihood contingent upon hunting (game) and/or gathering (fruits and vegetables).
Anthropologists who spent time in oral societies describe a lifeworld where law, religion and economy are not merely entangled but non-existent as separable dimensions of society. Clearly, these societies have a normative order: they distinguish between interactions that are obligated, preferred, allowed or prohibited, which often depend on kinship, age, gender, time of day or year, and context (home, hunting, division of food, celebration, war). This normative order, however, is not externalised in the form of inscriptions on stone, papyrus or paper. The normativity that rules human interaction in oral society depends on speech and on living memory, aided by a number of mnemonic devices (from rhetorical repetition to artefacts that represent specific taboos or obligations). There is no external written declaration of the norms that govern what is deemed polite, sacrilegious, heroic, expedient or simply ‘proper’. One can neither defend oneself in reference to such externalised norms, nor throw them in the face of others. All normativity is, as it were, under the skin of those who are expected to live up to it. This means that the addressants and the addressees of norms are largely the same, requiring repeated assemblies to discuss, establish and apply such necessarily fluid norms. Being fluid, however, does not imply that such norms are flexible; they may be extremely rigid to compensate for the fluidity of human language (e.g. in the case of taboos), and societal consensus on the existence, interpretation and application of norms is often delegated to what ‘we’ (Western anthropologists) like to call priests or others qualified as endowed with special competences. Note that normativity in oral society mainly depends on the material affordances of the human voice and human memory.
There is no police force to implement legal norms and no independent court to contest the way one has been treated; no adjudication apart from negotiated dispute settlement based on voluntary jurisdiction.
As nomadic societies – in the course of centuries – transform into sedentary societies, the relationship with land and time changes due to the need to plough, sow and harvest. Planning is needed, storage is required, division of land enacted. The script first emerges as an inscription of numbers, to enable division of land and to count cattle. Sedentary or segmented societies develop into kingdoms and proto-states, with a specialised class of scribes or clerks that holds a factual monopoly on reading and writing. Often, neither the ruler nor those ruled can write or read, and the ruler often governs via his clerks (who are in his service and develop a system of written rules that is used to rule the subjects of the ruler). Note that the role of written ‘law’ in this era is of two kinds. On the one hand, kings attempt to impose various simple rules (taxes or tolls), moving their own position from being a primus inter pares to being in a position to subject others to their ‘general orders backed by threats’ (as one famous legal scholar said). On the other hand, kings require their clerks to detect and articulate what is often termed the ‘customary law’ that rules the relationships between their subjects. The result has for instance been called leges (e.g. the leges barbarorum) and used in royal courts as an authoritative though not binding testimony about the applicable law. The architecture of ‘manuscriptspace’ is an affordance of handwritten manuscripts.
The reach of handwritten manuscripts is far beyond that of orality, both in space (the same text can be copied and read across geographical distance) and in time (the text will survive its author and the very same text can be read by later generations). The distantiation this involves has curious implications for the interpretation of text; as a text emancipates from the tyranny of its author, its meaning will develop in response to subsequent readers that need to interpret the same text in new circumstances. The rigidity of written manuscripts, so much less ephemeral than spoken words, thus generates a need for iterative interpretation. This also results in the possibility to counter and contest specific interpretations. We can see this ‘at work’ in the famous medieval version of Roman law, the Digests. In the middle of the page one finds the primary text, as written by Roman jurisconsults. On the sides, on top and at the bottom, one finds glosses (commentaries) written by medieval lawyers who interpret the primary text in order to apply it to their contemporary society. These glosses were followed, over the course of centuries, by commentaries on the commentaries, generating a vivid discussion on points of law. In the end, the stability of text combined with the ambiguity of human language turns interpretation and contestation into a hallmark of the law, thus offering a very specific type of protection that is at the root of the legal protection offered by modern positive law.
Whereas written manuscripts had to be copied by hand, enabling both error and deliberate changes, the printing press delivered an even more unified text as copies are now ‘true’ copies. The proliferation of text and the comparative speed of producing identical copies deepen the distantiations in both time and space between text and author, author and reader, and, finally, meaning and text. This intensifies the quest for stable meaning in the face of increased opportunities to contest established interpretation. At the same time the proliferation of printed text (pamphlets, books, newspapers, magazines) invites attempts to systematise content, by way of indexing, developing tables of contents, including footnotes and bibliographies. The architecture of ‘bookspace’ is more complex, more systematic and hierarchical, and more explicitly interlinked than that of a ‘manuscriptspace’. The pressing need for systemisation demands taxonomies that are mutually exclusive; books must be categorised in terms of one topic/domain/discipline or another – to enable placing and retrieving them in a private or public library. In his seminal work on information, Gleick explains that abstract thought is contingent on written text, as it extends memory and other cognitive resources. Just like the development of counting, calculating and mathematics depends on notation (for instance, on the invention of ‘zero’), abstract thought depends on the sequential processing of written and printed text. This also affords written articulation of more complex frameworks of abstract (general) norms that share the affordances of text-driven abstraction: sequential processing and hierarchical ordering. 
The combination of the monopoly of violence and the concomitant ability to impose abstract legal norms on an abstract population (confined within geographical borders) thus afforded modern positive law: a law explicitly authored by a sovereign that commands obedience from its subjects (internal sovereignty) while protecting them from occupation or interference by other sovereigns (external sovereignty).
This has consequences for the nature of law (which, being artificial, is not fixed):
sovereigns can now impose general written rules on those subject to their jurisdiction, they can ‘rule by law’;
sovereigns thereby ‘posit’ the law, which has resulted in ‘positive law’, i.e. law that is valid in a specific jurisdiction;
customary laws are integrated in the legal order of positive law, meaning they must be recognized by the sovereign as valid law;
the easy proliferation of legal text requires systemisation in the form of elaborate legal codes (in continental law) and treatises (common law) that instigate a complex hierarchy of legal norms, clarifying which legal norm applies in what situation.
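For a computer-science audience, a first intuition of such a hierarchy of legal norms can be given in code. The sketch below is deliberately simplistic and purely illustrative: the ranks, norm types and conflict rule (lex superior: higher-ranking sources prevail) are hypothetical examples, and real conflict resolution also involves other maxims (lex specialis, lex posterior) and, above all, interpretation — law is emphatically not a decision tree.

```python
from dataclasses import dataclass

# Purely illustrative ranking of sources of law; not actual law.
RANK = {"constitution": 3, "statute": 2, "regulation": 1}

@dataclass
class Norm:
    source: str  # e.g. "constitution", "statute", "regulation"
    rule: str

def applicable(norms):
    """Naive lex superior: the norm from the highest-ranking source prevails."""
    return max(norms, key=lambda n: RANK[n.source])

conflict = [
    Norm("regulation", "data retention for 24 months"),
    Norm("statute", "data retention for 12 months"),
]
print(applicable(conflict).rule)  # the statute prevails over the regulation
```

The point of the sketch is only that the systemisation afforded by printed legal codes makes such an ordering of sources thinkable in the first place.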
The need for interpretation that is core to text-driven law results in an increasingly independent position for the courts. Originally, judges are appointed by the sovereign to speak the law in his name: rex est lex animata (the king is the living law). Kings thus feel free to intervene if a court rules against their wishes. However, as the proliferation of legal text requires study as well as experience, courts increasingly distance themselves from the author of the law (the king), providing a buffer zone between the ruler and those ruled. Montesquieu’s famous iudex est lex loquens (the court is the mouth of the law) signifies the end to the ‘rule by law’ of the sovereign, thus revoking the old adage of rex est lex animata. This signifies the beginnings of what we now term ‘the rule of law’, based on an internal division of sovereignty into legislative, administrative and adjudicative functions that provide for a system of checks and balances. Core to ‘the rule of law’ is indeed an independent judiciary that is capable of sustaining legal certainty, justice and the instrumentality of the law – if necessary against the arbitrary will of either the legislature or the administration.
One of the challenges that modern, positive law faces, is the transformation of the ICT-infrastructure from books and mass-media to a digital and computational ICT-infrastructure. Cyberspace refers to cyber (steering) and connects with cybernetics (remote control of one’s environment by means of feedback loops). This highlights that the new ICT-infrastructure is fundamentally different from speech, writing, printing and mass media. Cyberspace is not merely a digitized version of physical space but refers to an architecture with two novel characteristics: its hyperconnectivity and its computational pre-emptions. In cyberspace the inanimate environment begins to observe, infer, predict, and anticipate human behaviour, while also acting on its own inferences. The ICT-infrastructure does not merely predict the behaviour of its users but also measures and calculates how that behaviour changes when its own behaviour changes (e.g. A/B testing). This allows for fine-grained nudging or microtargeting, and for a whole range of automated decisions taken by robotic systems (self-driving cars), the Internet of Things (home automation) and for governmental and business decisions that directly or indirectly affect individuals or categories of people (behavioural advertising, credit rating, crime mapping, tax fraud detection). The architecture of cyberspace is thus data-driven and code-driven. With the advent of the Internet of Things (e.g. smart energy grids) and the expected integration of robotics in everyday life (e.g. connected cars) it becomes clear that cyberspace is ‘everyware’. Cyberspace is not a separate, virtual space but the emergent architecture of an onlife world. It is onlife for two reasons: first, because the difference between online and offline is becoming increasingly artificial, and, second, because the pre-emptive abilities of cyberphysical systems ‘animate’ our environment. Data-driven infrastructures behave as if our environment is alive.
Modern positive law is text-driven. It has developed in an environment driven by text, whose institutional framework is based on text, and whose societal trust and vigilance is contingent on the ‘force of law’. Written legal norms are part of a complex legal system that attaches specified legal effect when specified legal conditions apply. Both the conditions and the legal effect are grounded in text and part of the affordances of human language that are reinforced in printed text. This is related to the fact that speech acts can actually ‘do’ something, instead of merely describing something. A civil servant who declares a couple ‘man and wife’ (or husband and husband, or wife and wife), is not describing a state of affairs but actually ‘performs’ the marriage. As of that moment the legal effects that private law attributes to a lawful marriage apply, with far-reaching consequences for e.g. inheritance and liability for debts (depending on the applicable national law).
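The performative character of such legal speech acts can be paraphrased for programmers as state change rather than description: the declaration does not report that a marriage exists, it brings the marriage — and the legal effects the law attaches to it — into existence. The sketch below is a hypothetical simplification of that idea; the class, method names and the inheritance rule are invented for illustration and do not reflect any particular jurisdiction.

```python
# Illustrative sketch: a performative utterance as a state-changing operation.
class LegalRegister:
    def __init__(self):
        self.married = set()

    def declare_married(self, a, b):
        """The declaration constitutes the marriage rather than describing it."""
        self.married.add(frozenset((a, b)))

    def inherits_by_default(self, a, b):
        # Grossly simplified legal effect: spouses inherit from each other.
        return frozenset((a, b)) in self.married

reg = LegalRegister()
reg.declare_married("Alex", "Sam")
print(reg.inherits_by_default("Alex", "Sam"))  # True: the effect attaches by law
```

The analogy is limited — legal effects depend on interpretation and contestation, not on mechanical lookup — but it captures the if-conditions-then-effect structure of written legal norms.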
For several centuries, lawyers have been the architects of human societies, structuring economic markets (private law), punitive interventions (criminal law) and the competences of governments to decide crucial matters for their constituents (administrative and constitutional law). In many ways the state itself is a legal construct that defines the contours of everyday life and determines what counts as the public interest. Lawyers may think they still hold a monopoly on the constitution of the state and the foundational structure of society, but in a society that is increasingly rooted in cyberspace this can no longer be taken for granted. They now share this ‘monopoly’ with the architects of the internet, the web and all the different application layers. This especially bears on the computational backend systems that are hidden by user-friendly interfaces, while determining the choice architecture of their users.
This requires new ways of constructing law. If we value legal protection, we need to articulate it in the data- and code-driven ICT infrastructure that to a large extent makes and sustains contemporary human societies. This is not an easy quest and it will take some time to achieve anything like it. Time in itself, however, will not do the trick. Just like the rise of the Rule of Law in the era of the ‘bookspace’ was the result of pertinent political struggles, bringing cyberspace under the Rule of Law will require a concerted effort on the side of both lawyers and computer scientists (and, obviously, citizens, policy makers, politicians and the industry). In the meantime, it is pivotal that computer scientists get a taste of what law and legal protection are all about, if only to make sure that the systems they study, develop and maintain are compatible with current legal requirements.
As indicated above, computer scientists develop, protect and maintain computing systems in the broad sense of that term, whether hardware (a smartphone, a driverless car, a smart energy meter, a laptop or a server) or software (a program, an application programming interface or API, a module, code), or data (captured via cookies, sensors, APIs, or manual input). Computer scientists may be focused on security (e.g. cryptography), on embedded systems (e.g. the Internet of Things) or on data science (e.g. machine learning). They may be closer to mathematicians or to electrical or electronic engineers, or they may work on the cusp of hardware and software, mathematical proofs and empirical testing. Whatever their focus, this book targets ‘law in cyberspace’ from three angles. First, it answers the question ‘what law is’ by asking the question ‘what law does’. Second, having introduced the basic elements of the law, this book targets ‘domains of cyberlaw’ that are particularly relevant for computer science: privacy and data protection, cybercrime, copyright and private law liability. Third, the book discusses the ‘frontiers of law in an onlife world’, notably legal personhood for artificial agents, legal protection by design and computational law. Finally, the closing chapter addresses the relationship between law, code and ethics, with a focus on algorithmic fairness.
To prevent mistaking law for either a bag of independent rules or a rigid hierarchical system of decision trees, this book takes off with a discussion of the nature of modern positive law in the light of constitutional democracy, grounding the whole enterprise in a proper understanding of the nature of legal norms and legal reasoning (chapter 2). This is followed by an introduction to the major legal domains and the logic that informs them (chapter 3): private law, public law, and criminal law, ending with a basic explanation of international and supranational law (chapter 4).
These introductory chapters are crucial for a proper understanding of the more targeted legal domains in the second part of the book (on privacy and data protection, cybercrime, copyright and liability for faulty ICT). The dynamic nature of these targeted legal domains, resulting from the transformative and often volatile nature of our computational lifeworld, requires a foothold in the architecture of modern legal systems. Without a sound grounding of the core tenets of law and the Rule of Law, legal norms are easily subject to misinterpretation and may even contribute to confusion instead of a deeper understanding of how law actually operates.
Developing, protecting or maintaining computing systems will often trigger the applicability of the law, for instance when a software program is protected by copyright or patent, when security breaches are criminal offences, or when default settings are such that data protection law is systematically violated. This provides a practical reason to include law in the curriculum of computer science and a good reason to make sure that computer scientists have easy access to concise and correct information about legal domains that are relevant to their work. These legal domains are privacy and data protection (chapter 5), cybercrime (chapter 6) and copyright in cyberspace (chapter 7), as well as private law liability for faulty ICT (chapter 8).
This part of the book does not provide a comprehensive in-depth analysis of the domains of cyberlaw. That would take at least four textbooks, if not a proper law degree. The point is not to turn computer scientists into lawyers but to provide them with sufficient information about how these legal domains operate, what kind of questions they should ask when developing computational systems, how to read (often incorrect) headlines on legal issues and where to find accurate legal information and advice on legal rights and obligations.
Finally, this book probes three topics on the frontline of law and computer science. First, it investigates the issue of legal personhood for artificial agents (chapter 9), which refines the understanding of the concept of legal subjectivity and the notion of individual subjective rights. Second, this part of the book examines the concept of legal protection by design (chapter 10), of which data protection by design is a primary example. Finally, the book ends with a discussion of the distinctions between law, code and ethics, their interrelationships and their interaction (chapter 11).
In ‘the old days’ – the beginning of this century – an esteemed colleague of mine remarked that my focus on law and computer science was a niche topic for lawyers and legal philosophers. I intuitively guessed that this so-called niche topic would come into its own sooner rather than later. Just like international and European law was often considered a niche topic in the 1990s, the relationship between law and computer science will be pivotal for each and every legal domain as each and every practice develops data- and code-driven versions.
By now the tables have turned on lawyers, and they show a growing awareness of the impact of hyperconnected computing systems on the substance of law and on the protections offered by legal procedure. The European Parliament has proposed to consider attributing electronic personhood to certain types of artificial intelligence. The General Data Protection Regulation has imposed a legal obligation to implement data protection by design and default. Law firms, tech start-ups and academia are investing in ‘legal tech’ that some believe will revolutionize the law itself. This book traces the fault lines between modern positive law and its follow-up, arguing that text-driven law offers a type of protection that cannot be taken for granted in an onlife world. The idea, however, is not to reject the new onlife world. The real challenge is to figure out when to condone it, when to embrace it and when to decline and reject what is on offer. More precisely, the task is for lawyers and computer scientists to team up and develop a plurality of solutions in close collaboration with those who will suffer and/or enjoy the consequences of the new architecture.
Introductions to law at a basic level:
Glenn, H. Patrick. 2007. Legal Traditions of the World. Oxford: Oxford University Press.
Hage, Jaap, Antonia Waltermann, and Bram Akkermans, eds. 2017. Introduction to Law. 2nd ed. New York, NY: Springer.
Introduction to computer law, information law, information technology law:
Murray, Andrew. 2016. Information Technology Law: The Law and Society. 3rd ed. Oxford: Oxford University Press.
Bainbridge, David. 2007. Introduction to Information Technology Law. 6th ed. Trans-Atlantic Publications, Incorporated.
On the relationship between law, computers, internet, web and architecture:
Cohen, Julie E. 2012. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press.
Hildebrandt, Mireille. 2008. “A Vision of Ambient Law.” In Regulating Technologies, edited by Roger Brownsword and Karen Yeung. Oxford: Hart.
———. 2013. “The Rule of Law in Cyberspace.” http://works.bepress.com/mireille_hildebrandt/48.
———. 2015. Smart Technologies and the End(s) of Law. Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar.
———. 2016. “Law as Information in the Era of Data‐Driven Agency.” The Modern Law Review 79 (1): 1–30. https://doi.org/10.1111/1468-2230.12165.
———. 2017. “Law As Computation in the Era of Artificial Legal Intelligence. Speaking Law to the Power of Statistics.” SSRN Scholarly Paper ID 2983045. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2983045.
———. 2018. “Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics.” University of Toronto Law Journal, March. https://doi.org/10.3138/utlj.2017-0044.
Vismann, Cornelia, and Geoffrey Winthrop-Young. 2008. Files: Law and Media Technology. Meridian. Stanford, CA: Stanford University Press. http://www.loc.gov/catdir/toc/ecip081/2007039414.html.
On architecture and design in law, politics and morality:
Lessig, Lawrence. 2006. Code Version 2.0. New York: Basic Books.
Winograd, Terry. 1996. Bringing Design to Software. New York, NY: ACM Press.
On the implications of ICT infrastructures:
Eisenstein, Elizabeth. 2005. The Printing Revolution in Early Modern Europe. Cambridge; New York: Cambridge University Press.
Goody, Jack. 1986. The Logic of Writing and the Organization of Society. Cambridge; New York: Cambridge University Press.
Ihde, Don. 1990. Technology and the Lifeworld: From Garden to Earth. The Indiana Series in the Philosophy of Technology. Bloomington: Indiana University Press.
Ong, Walter. 1982. Orality and Literacy: The Technologizing of the Word. London/New York: Methuen.
On the move from online and offline to onlife:
Floridi, Luciano. 2014. The Onlife Manifesto - Being Human in a Hyperconnected Era. Springer. http://www.springer.com/philosophy/epistemology+and+philosophy+of+science/book/978-3-319-04092-9.
Hildebrandt, Mireille. 2015. Smart Technologies and the End(s) of Law. Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar.
On the Rule of Law:
Dworkin, Ronald. 1991. Law’s Empire. Glasgow: Fontana.
Waldron, Jeremy. 2010. “The Rule of Law and the Importance of Procedure.” New York University Public Law and Legal Theory Working Papers, October. http://lsr.nellco.org/nyu_plltwp/234.