Mel Bochner is deemed to be one of the founding fathers of conceptual art. His awareness of the ephemeral character of language in relation to space and time brings to mind that words do not speak for themselves, especially when the author is absent (as with written or printed text). The ingenuity of modern positive law lies in its ability to turn the uncertainty of the meaning of text into the hallmark of legal certainty, requiring interpretation and argumentation, and thus enabling the contestation of the norms that rule us.
Working with computing systems, whether developing, integrating or testing them, will often involve working with data. Sometimes this data will be personal data, and sometimes these systems will have a major impact on the private life of those targeted by these systems (think of data brokers, credit rating agencies), or those interacting with these systems (in the case of social networks, search engines). In this chapter we will investigate the legal domain of privacy and data protection, which entails a series of legal requirements for the development and design, for the default settings, and for the employment of computer architectures. This chapter can in no way provide a comprehensive overview of privacy and data protection, which would require two separate books at the least. However, the purpose of this book is not to turn computer scientists into lawyers. The purpose is to provide some real taste and true bite of the law on legal subjects highly relevant for computer science. Therefore, please check the references for further reading and for real world scenarios check with a practicing lawyer.
The right to privacy is a subjective right, attributed by objective law. This may be national (constitutional) law, international human rights law, or supranational law (EU fundamental rights law). In this chapter we will first confront the landscape of human rights law at the global, national and EU level, followed by a discussion of the concept of privacy. We will then inquire into the right to privacy, as guaranteed under the ECHR and the CFREU, and finally, we will target the new fundamental right to data protection, as guaranteed by the CFREU.
When tracing the history of human rights, we first encounter the English Bill of Rights of 1689, followed by the revolutionary French Déclaration des Droits de l’Homme et du Citoyen of 1789 and the US Bill of Rights of 1791. Though the famous Magna Charta of 1215 may seem an early example of a human rights charter, it did not attribute what we now call human rights. Instead, it ensured that the feudal lords were able to restrict the powers of the King, while protecting jurisdiction over their own subjects against royal interference. The era of the Magna Charta saw the struggle between a feudal society and attempts to institute overriding royal power; this was not yet the era of a powerful modern state that managed to subject each and every person on its territory to its jurisdiction.
The rise of the modern state must be situated in the beginning of what historians call the era of ‘Modernity’, around the 15th and 16th century. It was the rise of the modern, bureaucratic state that warranted new types of protection against the monopolistic powers of the King and his clerks (feeding on the impressive affordances of proliferating printed text, see above section 1.4). The idea of human rights coincides with the rise of sovereignty (see above section 1.4 and 4.1.2).
The human rights declarations of the 17th and 18th centuries provided those subject to the power of a sovereign state with an entitlement to civil and political rights, affirming their status as individual right bearers and constituents of the polity. Being subject to a sovereign became being a subject in law. It is hard to imagine how novel the attribution of such rights was, even if initially their enforcement was neither practical nor effective. Some attribute the power of this attribution to the ‘endowment effect’: if people come to believe they ‘have’ these rights, they will invest in ‘keeping’ them. If the struggle this entails succeeds, these rights will eventually be instituted as effective subjective rights. In due course, respect for human dignity and a new emphasis on the centrality of the individual reconfigured the idea of law and politics, laying the groundwork for the more ‘practical and effective’ human rights protection of the second half of the 20th century.
However, in the context of international law, human rights have mostly been citizens’ rights rather than human rights, depending on constitutional protection and citizenship, thus offering little protection for subjects of rogue states. After the atrocities of the second world war, states decided to elevate the protection of human rights to the level of international law, starting with the Universal Declaration of Human Rights of 1948. Though this declaration had no binding force, it was soon followed by various treaties at the global and regional level, aiming to finally institute human rights as enforceable subjective rights against the state.
Human rights law was originally focused on the protection of individual citizens against powerful states. We call these rights first generation human rights, and they are best described as subjective rights requiring that the state refrain from interfering with the legal good that is protected by such rights. This is why they are often called liberty rights. These legal goods are: privacy, non-discrimination, bodily integrity, freedom of movement, the presumption of innocence, a fair trial, freedom of expression, freedom of association, freedom of religion, and voting rights. Note that these legal goods are considered worthy of protection as public goods, because a society that does not protect them cannot support a viable democracy that depends on independence of thought and unhindered development of both individual and group identities. For that reason they are also called civil and political rights. The focus is on public goods that protect individual persons as autonomous agents in a democratic polity and on negative obligations of the state towards its citizens.
A second generation of human rights developed when it became clear that (1) non-interference is not always enough to protect such public goods, while (2) a number of other public goods were absent in the initial inventories of human rights. The public goods protected by second generation human rights concern employment, food and housing, social security, healthcare, and access to basic utilities such as electricity, postal services and public transport. These rights are often called social and economic rights. To actually provide these protected goods, a state cannot restrict itself to respecting liberty rights. Second generation human rights impose positive obligations on states to create and sustain the goods they must protect. This implies that second generation human rights address states with ‘instruction norms’, rather than providing citizens with directly enforceable subjective rights. To exercise a right to employment, an economic system must be in place that enables such a right, meaning that second generation human rights require states to build institutions capable of supporting economic welfare and a fair distribution of access to social and economic goods.
Taking note that second generation human rights are instruction norms to states, rather than directly enforceable individual rights, the latter part of the 20th century witnessed advocacy for a third generation of human rights. Here, we encounter rights to construct and develop group identities and rights to a sustainable environment. These rights have even less of a straightforward relationship with individual entitlement, focusing on the rights of groups (e.g. the right to self-determination for indigenous peoples which we encountered as a fundamental principle of international law) and obligations towards the natural environment on which human society depends (responsible innovation, sustainable development).
Before investigating the right to privacy as part of the first generation of human rights law, we will first inquire into the nature of privacy itself. The reason is that computer science has a specific relationship with privacy, notably in the context of digital security and cryptography. In this context, privacy is often seen as a subset of security, focused on hiding or removing the link between data and the identifiers that enable the identification of whoever the data refer to, or on encrypting the data to safeguard confidential content against eavesdropping. As a consequence, privacy protection is restricted to (1) anonymisation or pseudonymisation of personal data, by way of deleting or separating identifiers, and (2) hiding the content by means of encryption or other security measures. This generates research fields such as differential privacy and reidentification metrics, based on cryptography and key-management, k-anonymity, linkability metrics and so on.
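To give a flavour of one such metric, the following is a minimal, illustrative sketch of a k-anonymity check (the dataset, field names and helper function are invented for this example). A table is k-anonymous with respect to a set of quasi-identifiers if every combination of their values occurs at least k times, so that each person ‘hides’ among at least k-1 others:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    occurs at least k times in the dataset."""
    combos = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in combos.values())

# Toy dataset: 'zip' and 'age_band' are quasi-identifiers that, combined,
# may single out a person even after direct identifiers are removed.
records = [
    {"zip": "1011", "age_band": "30-40", "diagnosis": "flu"},
    {"zip": "1011", "age_band": "30-40", "diagnosis": "asthma"},
    {"zip": "1012", "age_band": "20-30", "diagnosis": "flu"},
]

# The combination ('1012', '20-30') occurs only once, so the dataset
# is not 2-anonymous: that record can be linked back to one person.
print(is_k_anonymous(records, ["zip", "age_band"], 2))  # False
```

Note that, as the rest of this section argues, passing such a technical check says nothing by itself about whether privacy in the broader legal sense is respected.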
Though this research is of crucial importance to protect privacy, one must not mistake issues of identifiability and confidentiality for issues of privacy as the latter concerns far more than mere technical identifiability or readability. Consider the following data points:
Your bank account
The taxes your mother pays
What kind of socks you wear
The logs of your surfing behaviour on the net
The pattern of your energy usage
The decision to have an abortion
The decision, or inclination to be a vegetarian
Should we qualify this data as part of the privacy of the person the data refers to? To answer this question, we need to check what falls within the value of, the interest in or the right to privacy:
when (under what conditions)?
with regard to whom (is data on my mother part of my privacy)?
where (are specific locations more privacy-sensitive than others)?
for what reason (what could make my socks relevant to my privacy)?
Many authors have made attempts to define privacy by summing up the common denominators of what is generally seen as falling within the scope of privacy. This turns out to be a questionable undertaking, because the concept is as elusive as it is pertinent. Another way of tackling the issue of understanding privacy is to define it in terms of family resemblance. The American privacy scholar and lawyer Daniel Solove made an insightful attempt to approximate the concept of privacy in terms of six categories that are partly overlapping, while thus covering much of what we intend when referring to privacy:
The right to be let alone
Limited access to self
Secrecy – concealment
Control over personal information
Personhood – protection of identity, dignity
Intimacy
Solove notes that some of these categories focus on goals, others on means, while they are in various ways interdependent. Taken separately, none of these definitions would exhaust the concept of privacy, being either too broad or too narrow. He warns that this is therefore not a taxonomy, which would assume mutually independent features of the same thing. On the contrary, the idea of a family resemblance means that privacy cannot be defined in terms of necessary and sufficient conditions, because there is no common core to the different conceptions of privacy. Instead, Wittgenstein’s notion of family resemblances enables us to take a pragmatic approach, recognizing the contextual, historical, dynamic nature of privacy, such as relating to family life, the body, or the home. This approach is bottom-up rather than abstract and acknowledges that, in the end, privacy is best seen as a set of practices rather than a formula. The concept of family resemblance was introduced as a way to understand the meaning of words by Wittgenstein in his Philosophical Investigations. The concept is very interesting for computer science as it explains why translating concepts into ontologies or a semantic web may entail a loss of meaning. I will therefore quote The Stanford Encyclopedia of Philosophy to elucidate this understanding of meaning:
There is no reason to look, as we have done traditionally—and dogmatically—for one, essential core in which the meaning of a word is located and which is, therefore, common to all uses of that word. We should, instead, travel with the word’s uses through ‘a complicated network of similarities overlapping and criss-crossing’ (PI 66).1 Family resemblance also serves to exhibit the lack of boundaries and the distance from exactness that characterize different uses of the same concept. Such boundaries and exactness are the definitive traits of form—be it Platonic form, Aristotelian form, or the general form of a proposition adumbrated in the Tractatus.2 It is from such forms that applications of concepts can be deduced, but this is precisely what Wittgenstein now eschews in favor of appeal to similarity of a kind with family resemblance.
To emphasize the elusive nature of privacy, we briefly follow Solove’s discussion of the categories enumerated above. A right to non-interference seems a pivotal shorthand for the right to privacy, as it clearly depicts the negative obligations of governments and others (vertical and horizontal effects of human rights law). Here, we think of privacy as the ‘right to be left alone’, where privacy is a liberty or freedom, in the sense of freedom from external constraints. This understanding of privacy is related to intimacy, to the idea of drawing boundaries around a small circle of people with whom one dares to expose oneself, sharing information that might otherwise be used to shame a person, to diminish or ridicule their agency. Intimacy relates to trust, not in the sense of confidence and security, but in the sense of trusting others enough to take the risk of being betrayed. One could ask what information is intimate, but this assumes that ‘intimacy’ is a property of information, whereas all depends on the situation, the context, and the roles played by intimate others. In some situations financial information, or information shared with a health insurance company may be intimate information, because it reveals to others what makes a person vulnerable to shame, ridicule or even to life-threatening manipulation.
If we then take together privacy as limited access, and secrecy, anonymity and solitude, we can refer to the idea of third-party disclosure. In the US, the Supreme Court decided in 1967,3 that once a person exposes personal data to a third party such as banks or other service providers, they have no reasonable expectation of privacy regarding access by the government. This so-called third-party doctrine reflects an approach to privacy that is radically different from the European approach, which does not presume that disclosing private information to one entity necessarily implies that other entities are now free to obtain and use such information. Note that the US have since enacted legislation requiring a warrant for access to specific data, thus providing specified protection for e.g. financial data and telephone data. We have already encountered the case of US v Jones (followed by Riley and Carpenter, see section 2.1.2, footnote 2), where the Supreme Court decided that police warrants were necessary in the case of GPS-trackers, information on a cell phone and cell-site records of a wireless carrier. These judgments may lead to the end of the third party doctrine, depending on subsequent case law.
The next category, control over information about oneself, is often portrayed as the core meaning of what Americans call informational privacy. This understanding clearly links to the notion of identifiability, as it relates to information about an identifiable person, thus also connecting this particular conception of privacy with the idea of privacy as a subset of digital security. Defining privacy in terms of control comes close to thinking of personally identifiable information (PII) as if it were the property of the person it concerns. PII is, just like informational privacy, a term used in the US, whereas in the EU we generally speak of data protection and personal data. Thinking of PII in terms of property creates a number of problems, as neither data nor information are rivalrous or exclusionary. One person ‘having’ certain information does not necessarily imply that others do not ‘have’ that same information, whereas one person possessing a book implies that others do not possess it. It is therefore important to distinguish between control over ‘access to’ and ‘usage of’ information on the one hand, and property rights in information on the other. The latter applies in the case of intellectual property rights (e.g. copyright or patent), but not in the case of personal data. Below, we will discuss to what extent EU data protection law provides control to data subjects (those to whom personal data refers), but we can already point out here that full control over one’s personal data ignores the relational nature of personal data. To illustrate the latter point, we can think of Robinson Crusoe and ask the question whether he had a name before Friday came to his island. We have a name to be singled out by others, to be addressed by others and to appear as a singular in-dividual person before others. This implies that, though we need some control over the sharing of our name, such control cannot be unlimited. Without fellows to address us, we effectively ‘have’ no name.
Finally, privacy is connected with personhood, with individuality, with dignity, and with autonomy. One could ask to what extent our personhood is private, noting that becoming a person depends on anticipating how others will frame us. Whereas the right to privacy is often seen as a liberty, as a right to be left alone, as a freedom from outside interference, privacy is also connected with a right to develop one’s own identity, to be treated as worthy of respect, and the freedom to make one’s own choices concerning e.g. life style, employment, education and political opinion. Here, privacy sits on the cusp of negative freedom (freedom from unreasonable constraint) and positive freedom (the freedom to construct one’s own identity). Indeed, this is how Agre and Rotenberg defined privacy, highlighting the interrelationship between negative and positive freedom. This also suggests that liberty and autonomy overlap and support each other. For instance, what has been called ‘decisional privacy’ (e.g. the right of a woman to decide about an abortion) clearly marks the nexus of positive freedom (to decide on an abortion) with negative freedom (to be free from unreasonable constraints on such a decision). The crux of Agre and Rotenberg’s definition resides in the requirement that people are free from unreasonable constraints, not just any constraints. In case law, legislation and doctrine the concept of ‘reasonable’ or ‘unreasonable’ is of prime importance. Instead of framing this as a source of uncertainty, because of its prima facie vagueness, this concept can be seen as an aid in aligning different conceptions of legal goods that warrant protection. Demanding that a duty of care is exercised in a reasonable way acknowledges that ‘a duty of care’ cannot be defined in the abstract, but is better understood in terms of family resemblances.
The duty of care of a mother, an employer, a manufacturer and a social network provider may not share any common element; they nevertheless align along the lines of reasonable expectations and proper checks and balances, considering the relevant context and the roles of the parties involved. Similarly, reasonable expectations of privacy depend on context, on roles played, on checks and balances and meaningful choice. This is not because privacy is a vague concept but because the practice of privacy is complex, requiring acuity to what is at stake for whom.
Though the reader may by now be wary of the dynamic and shifting borders of the concept of privacy, it is crucial to sustain awareness that privacy is a moving target. Defining privacy in terms of necessary and sufficient conditions would restrict protection to what happens to fall within their scope, easily rendering the concept both over- and under-inclusive. In the end, defining privacy is a decision to be taken when confronted with its violation. As Solove saliently writes in reference to a famous American philosopher:
‘[K]nowledge is an affair of making sure,’ Dewey observed, ‘not of grasping antecedently given sureties’.
This is what the courts must achieve, every time a case is brought before them.
After tracing the conceptual challenges of delineating privacy, I will briefly trace the relationship between privacy and technology. Some of us may think that privacy is a property of people in general, just like animals often display what ethologists call ‘critical distance’ from each other. Privacy, according to environmental psychologist Altman, is a matter of shaping and negotiating borders between self and others. It is not a property of a person, but of a relationship. Rather than being a matter of seclusion, Altman frames privacy as a continuous process of sharing and excluding, based on societal practices that are in turn dependent on technological affordances of the environment. In that sense, privacy can be detected in most human societies, though under different names and with very different constraints.
The right to privacy, however, is a recent historical artefact. As a subjective right, the right to privacy first surfaced at the end of the 19th century, in response to the proliferation of technologies such as photography and mass media. In a famous article in the Harvard Law Review, US legal scholars Samuel Warren and Louis Brandeis discussed the need to protect oneself against publication of photographs without permission, to enable social withdrawal. In that article they formulated the right to privacy as the right to be left alone, basically arguing for the existence of a privacy tort whenever this right was infringed upon without justification. Interestingly, privacy was thus introduced as a private law issue rather than a constitutional right. When Brandeis later served as justice in the Supreme Court, however, he argued that such a right to be left alone must be ‘read into’ the US Constitution, notably into the Bill of Rights, thus vouching for a right to privacy against the state. The rise of mass media and photography afforded massive dissemination of pictures taken, thus infringing the privacy of those concerned in a previously unprecedented manner. This, in turn, gave rise to defence mechanisms to safeguard one’s capability to withdraw from such exposure. This first appearance of a right to privacy fostered privacy as negative freedom: the right that others refrain from interference.
After the second world war, a new technological infrastructure surfaced to enable and improve public administration, in the form of computerized databases. This resulted in the collection and storage of myriad data relating to identifiable citizens, enabling government agencies to better target their constituency and to engage in what would now be termed ‘evidence based policy’. This, in turn, raised the question to whom this data belongs. In 1967, Alan Westin wrote a seminal work on Privacy and Freedom, taking a clear stand on the question of who should – by default – be capable of controlling access to data concerning individual persons. Privacy, he wrote, is:
the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.
This concept of informational privacy, as control over information, informs much of the debate about privacy and data protection in our current age. It is interesting to note that it emerged in counterpoint to the rise of databases in public administration, as well as private enterprise. The fact that data was collected, sorted and recorded, enabling retrieval as well as aggregation, gave rise to new types of transparency, and new types of threats to personal identity. This was related to the fact that in this era the data collected and stored was mostly stable data, allowing the mapping of both individuals and populations in a consistent and foreseeable way, without the kind of dynamic and unstructured big data capture that characterises the current era. This second appearance of the right to privacy fosters privacy as a positive freedom: the freedom to determine how personal information is shared and used.
After the rise of the internet and the world wide web, combined with the capture of big data and data-driven techniques to infer new information, the need for a more complex and contextual right to privacy seems obvious. Negative freedom will not do, as data abounds and is captured beyond one’s control on a permanent basis. For the same reason, positive freedom seems unattainable, as consent loses its meaning amidst the volume, variety and velocity of data capture, storage and use. A more practical and effective way of understanding privacy should therefore combine negative and positive freedom, while highlighting the relationship with identity-construction, not merely identification. The definition of Agre and Rotenberg, referred to above, may be the most apt for the era of pro-active and pre-emptive computing infrastructures, depicting the right to privacy as:
the right to be free of unreasonable constraints on the building of one’s identity.
For some readers, this may sound overly vague or complicated. To confront a complex, volatile, invasive and pre-emptive environment we will, however, need an understanding of privacy that goes beyond the hiding of personal data.
Privacy is a value, an interest, a right or a good. It can be analysed from an ethical perspective (as a value, a virtue or duty), from an economic perspective (as a utility, a preference or an interest), and from the perspective of political theory (as a public and a private good). In this work, we will focus on the legal perspective, tracing positive law on the subject of privacy. Below, we will discuss the right to privacy from the perspectives of constitutional, international and supranational law, ending with a discussion of art. 8 of the ECHR.
The right to privacy is a subjective right, attributed by objective law. The most obvious branch of objective law that attributes the subjective right to privacy is constitutional law, which often contains a section that aims to protect citizens against overly invasive powers of the state. Historically, human rights initially played out in the vertical relationship between state and citizens, not in the horizontal relations between private parties. The industrial revolution of the 19th century gave rise to powerful economic actors whose ability to infringe privacy, freedom of information and non-discrimination increasingly came to match the powers of the state.
This has led courts to recognise a so-called horizontal effect of constitutional rights such as privacy. This entails that protection against such infringements is a duty of the state, meaning that citizens can sue the state for failing to prohibit powerful players in the private sector from infringing these rights. This is called indirect horizontal effect, because it cannot be invoked directly against private parties. Depending on national jurisdiction, courts may also attribute direct horizontal effect, when qualifying a violation of privacy by e.g. a company as a tortious act in the context of private law. In that case, violation of privacy can be invoked directly against e.g. a private company.
In many states outside the Council of Europe, the Constitution provides the main protection against infringements of the right to privacy. This is the case, for instance, in the US, even though neither the 1787 US Constitution nor the 1791 Amendments to the US Constitution (known as the Bill of Rights) explicitly refer to a right to privacy. In the course of the 20th century, the Supreme Court of the US has nevertheless interpreted various articles of the Bill of Rights as safeguarding an individual right to privacy,4 notably based on the IV Amendment:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
This Amendment protects against:
‘unreasonable searches and seizures’ by the police,
which require ‘a warrant’
that may only be issued in the case of probable cause (concrete and objectifiable suspicion) and
must contain a reasonably detailed description of what may be searched or seized.
We can read these protections in terms of legal conditions and legal effect, by stating that ‘searches and seizures’ by government officials are only lawful if:
there is probable cause
a warrant has been issued
which contains limitations as to what is allowed
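Purely as an illustration of this if-conditions-then-effect structure, and emphatically not as a claim that legal reasoning is computable, the cumulative conditions above can be sketched as a boolean function (all names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SearchContext:
    probable_cause: bool        # concrete and objectifiable suspicion
    warrant_issued: bool        # a warrant has been issued
    within_warrant_scope: bool  # the search stays within the warrant's limits

def search_is_lawful(ctx: SearchContext) -> bool:
    """The conditions are cumulative: if any one of them fails,
    the legal effect 'lawful search' does not obtain."""
    return ctx.probable_cause and ctx.warrant_issued and ctx.within_warrant_scope

# A search backed by a warrant and probable cause, but exceeding the
# warrant's limitations, is not lawful.
print(search_is_lawful(SearchContext(True, True, False)))  # False
```

The sketch only captures the structure of the norm. Whether ‘probable cause’ obtains in a concrete case, or whether a search was ‘unreasonable’, is decided by a court through interpretation and argumentation, not by evaluating a predicate.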
As we have already seen in sections 2.1.2 and 5.1.2, the question here is (1) whether this right protects against violation of property rights (trespass) or also against violation of reasonable expectations of privacy that do not depend on property and (2) whether search and seizure of e.g. a mobile phone falls within the scope of the IV Amendment, as a phone is neither a person, a house, papers, nor effects.
In the US, constitutional protection of the right to privacy (which is also ‘read into’ other parts of the Bill of Rights) thus depends on national law, rather than international law. This has consequences for its applicability in the case of those who have no legal status in the US, as it may be unclear whether the Bill of Rights even applies to them. Another consequence is that the enforcement of rights against the state is dependent on that same state. In contrast, the European Convention on Human Rights (ECHR) offers a more layered architecture of legal protection, which is at least in part dependent on a European court that is not part of the state against which it aims to protect.
Protection of human rights requires a resilient system of checks and balances, i.e. a series of institutional safeguards to ensure that the state does not claim unreasonable exceptions and faces a stringently independent judiciary to keep the powers of the state ‘in check’. As noted above, the need to protect subjects of the state against the state gave rise to international human rights law, which provides an extra layer of checks and balances. Privacy is explicitly protected by art. 17 of the United Nations (UN) International Covenant on Civil and Political Rights (ICCPR) of 1966,5 and by art. 8 of the European Convention on Human Rights (ECHR) of 1950, two examples of international law.6 Both articles are similar; we quote art. 8 ECHR to give the reader a first taste:
1. Everyone has the right to respect for his private and family life, his home and his correspondence.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
The UN ICCPR has global application, with currently 178 signatories and 172 ratifications, but its enforcement mechanisms are relatively weak compared to the ECHR. In art. 34, the ECHR provides individuals within the jurisdiction of the 47 contracting parties with an individual right to complain to the European Court of Human Rights (ECtHR):
The Court may receive applications from any person, non-governmental organisation or group of individuals claiming to be the victim of a violation by one of the High Contracting Parties of the rights set forth in the Convention or the protocols thereto. The High Contracting Parties undertake not to hinder in any way the effective exercise of this right.
The ECHR, however, does not have global application, as it only applies within the jurisdictions of the contracting parties, the member states of the Council of Europe.
Since 2009, when the Charter of Fundamental Rights of the EU (CFREU) came into force, the protection of human rights has gained even more traction, adding a second European Court with competence to test legislation, decisions and actions against a catalogue of human rights. This protection, offered at the level of supranational law, is applicable whenever MSs ‘are implementing Union law’ (art. 51 CFREU). As human rights developed with the rise of the modern state, they further developed with the rise of supranational jurisdiction. The prevailing powers of the institutions of the EU demand countervailing powers in the form of supranational fundamental rights.
In this section we will discuss one of the most crucial legal rights of this book. The right to privacy that is articulated in art. 8 ECHR is not only relevant for bodily integrity, decisional privacy, and the other aspects of privacy, but also directly affects issues of cybercrime and copyright. This is because cybercrimes may violate privacy (hacking, data breaches), and copyright holders may violate privacy when disseminating their works (photographs, texts), but also because the investigative measures that aim to detect cybercrime and violations of copyright often infringe upon the right to privacy as protected in art. 8.
Here, we develop a first analysis of the legal conditions that art. 8 stipulates, how they are interpreted by the ECtHR, and the legal effects they generate.
Art. 8 consists of two paragraphs. The first paragraph concerns the question of whether privacy is infringed; the second clarifies under what conditions an infringement is justified.
1. Everyone has the right to respect for his private and family life, his home and his correspondence.
The legal effect generated by this paragraph is ‘an infringement of privacy’, and this infringement depends on the following alternative legal conditions:
Private life is not respected
Family life is not respected
The protection of one’s home is not respected
The confidentiality of one’s correspondence is not respected
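For readers who think in code, the alternative conditions above can be modelled as a simple disjunction: any one of the four conditions suffices to produce the legal effect of an infringement. The following is a minimal, purely illustrative sketch; the names and structure are my own assumptions, not an authoritative encoding of the law.

```python
# Illustrative sketch: the four ALTERNATIVE conditions of art. 8.1 ECHR.
# Any single condition holding is enough to establish an infringement.
# Names are hypothetical; this is not a decision procedure a court would use.

from dataclasses import dataclass


@dataclass
class Article8_1Facts:
    private_life_not_respected: bool = False
    family_life_not_respected: bool = False
    home_not_respected: bool = False
    correspondence_not_confidential: bool = False


def infringement_of_privacy(facts: Article8_1Facts) -> bool:
    """The legal effect 'infringement of privacy' follows if ANY condition holds."""
    return any([
        facts.private_life_not_respected,
        facts.family_life_not_respected,
        facts.home_not_respected,
        facts.correspondence_not_confidential,
    ])
```

The disjunctive structure (`any`) contrasts with the cumulative (`all`) structure of the justification test in the second paragraph, discussed below.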
The ECtHR takes the view that these concepts require a broad rather than a narrow interpretation, bringing a wide variety of situations, events, relationships and contexts under the protection of art. 8.
Private life can be at stake in the context of work, meaning that a search of an office space may be an infringement of privacy.7 Family life is at stake when a state prohibits members of a family from living together, for instance in the case of a refusal to provide a residence permit for a partner from another state, or of a parent wishing to further develop a relationship with their child despite not being married to the other parent. Protection of the home may become relevant when a person has taken residence in a house they neither own nor rent, meaning that the need to respect one’s home is not dependent upon ownership or contract. The confidentiality of communication has been interpreted to include letters, telephone conversations and, more recently, all types of internet-enabled communication that is not public. Privacy, as protected by art. 8, clearly concerns physical, spatial, contextual, decisional, communicative and informational privacy. Though art. 8 addresses the contracting states, its indirect horizontal effect has been recognised by the ECtHR, requiring states to ensure proper protection against violations by others than the state. Note that the individual complaint right of the ECHR can only be invoked against a state, not against a company. To invoke direct horizontal effect, a person needs to sue the tortfeasor in a national court.
Once the legal effect of an infringement has been established by the ECtHR, it will investigate whether the state has a valid justification.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
The legal effect of a valid justification is that, despite the infringement, art. 8 is not violated. This effect depends on the following cumulative legal conditions:
The infringement serves one or more of the following legitimate aims: national security, public safety or the economic well-being of the country, the prevention of disorder or crime, the protection of health or morals, or the protection of the rights and freedoms of others;
The infringement is in accordance with the law
The infringement is necessary in a democratic society.
The second paragraph of art. 8 requires a triple test, meaning that all three legal conditions must be met. These conditions can be formulated as follows, taking into account relevant case law. Any infringing measures taken by the state must:
have a legitimate aim
have a basis in law and
be proportional in relation to the aim served.
The articulation of legitimate aims in art. 8.2 is rather inclusive, which means that the ECtHR seldom finds reason to endorse the claim that the state lacked a legitimate aim.
Many of the cases where the ECtHR (the Court) finds that art. 8 has been violated concern the condition that the infringement must be ‘in accordance with the law’. This basically refers to the legality principle of constitutional law (3.3). The Court has – over the course of the years – developed three criteria to decide whether an infringement has a proper basis in law. The legal competence to take infringing measures must:
be accessible, i.e. knowable for the citizens to whom it will apply
be foreseeable, which means that the infringements it allows must be sufficiently specified
meet the required quality of law, including sufficient safeguards that limit the exercise of the competence in time and space, specifying the extent to which privacy may be infringed, and notably requiring independent oversight (e.g. warrants) in the case of more serious infringements
Note that the Court will not merely check legislative or regulatory provisions, but test practical arrangements and actual safeguards to establish whether the infringing measures were taken ‘in accordance with the law’. Throughout its case law, the ECtHR demands that the rights attributed in the ECHR are both ‘practical and effective’, stating that:8
[t]he Convention is intended to guarantee not rights that are theoretical or illusory but rights that are practical and effective (…).
If privacy is infringed with a legitimate aim, based on a legal competence that is accessible and foreseeable and that provides sufficient safeguards, the final test is a proportionality test. This entails that the ECtHR investigates whether the measure was necessary in a democratic society, which requires – according to the Court – a pressing social need to resort to such measures. Under this criterion the Court will examine the gravity, invasiveness and seriousness of the infringement in relation to the importance and seriousness of the aim served. This criterion basically requires that the measures taken can reasonably be expected to be effective, because a measure that is not effective cannot be necessary. The proportionality test includes a subsidiarity test: if another measure that is less infringing is feasible and sufficiently effective, the measure is not proportional.
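The cumulative structure of the triple test, including the three legality sub-criteria and the subsidiarity element of proportionality, can be sketched as boolean logic. This is an illustrative model under my own naming assumptions, not a formalisation the Court itself uses; real judicial assessment involves weighing, not boolean flags.

```python
# Illustrative sketch: the CUMULATIVE 'triple test' of art. 8.2 ECHR.
# All three conditions must hold for an infringement to be justified.
# Names and simplifications are hypothetical.

# The legitimate aims enumerated in art. 8.2 (paraphrased as labels).
LEGITIMATE_AIMS = {
    "national security",
    "public safety",
    "economic well-being of the country",
    "prevention of disorder or crime",
    "protection of health or morals",
    "protection of the rights and freedoms of others",
}


def in_accordance_with_the_law(accessible: bool, foreseeable: bool,
                               quality_safeguards: bool) -> bool:
    # The legality condition bundles three sub-criteria developed in case law;
    # all of them must be met.
    return accessible and foreseeable and quality_safeguards


def necessary_in_democratic_society(pressing_social_need: bool, effective: bool,
                                    no_less_infringing_alternative: bool) -> bool:
    # Proportionality includes subsidiarity: a feasible, sufficiently effective
    # but less infringing alternative defeats proportionality. An ineffective
    # measure cannot be necessary.
    return pressing_social_need and effective and no_less_infringing_alternative


def justified(aims_claimed: set, legality: bool, necessity: bool) -> bool:
    """Despite the infringement, art. 8 is not violated only if ALL conditions hold."""
    has_legitimate_aim = bool(aims_claimed & LEGITIMATE_AIMS)
    return has_legitimate_aim and legality and necessity
```

Note how a single failing condition, e.g. a lack of safeguards (as in Malone and Huvig & Kruslin, discussed below), makes the whole justification fail, whatever the aim.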
When developing computing architectures, whether in the context of databases, streaming data, machine-to-machine communication, knowledge discovery in databases, machine learning or cryptographic infrastructures, computer scientists lay the foundations for ICIs that enable the processing, storage, interlinking and inferencing of behavioural and other personal data. This may concern online clickstream behaviour, location and mobility data, energy usage behaviours, biometric gait behaviour, and a plethora of communication data, including both content and metadata. Governments, tasked with the investigation and prosecution of criminal offences and the protection of national and public security, have many incentives to gain access to such data. Apart from the struggle against serious crime and threats to national security, governments need to collect taxes, attribute social benefits, take precautionary measures regarding public health, and safeguard the economic welfare of the country. All these tasks fall within the scope of the legitimate aims enumerated in art. 8.2 ECHR. This raises the question under what conditions surveillance measures can be qualified as ‘in accordance with the law’ and, if so, when they are considered ‘proportional’ to the targeted aim.
Surveillance measures by the police may concern post-crime investigatory measures (to identify an offender after a crime has been committed) or pre-crime investigations (to prevent potential offending, or to foresee likely offences). To understand how the Court deals with various types of electronic surveillance, we will discuss two cases of post-crime surveillance and two cases of pre-crime surveillance (including surveillance by the intelligence services, which falls outside the domain of criminal law). This entails extensive quotation of the relevant case law, to show how the Court reasons, taking into account that the Court’s judgments bind the contracting parties and thus provide ‘practical and effective’ legal protection to those under the jurisdiction of the ECHR.
In 1984, in Malone v UK,9 the ECtHR determined that the UK was in breach of art. 8, where it allowed the interception of telephone conversations by the police upon warrant issued by the Secretary of State. The Court determined that for such a measure to be ‘in accordance with the law’, it must not merely have a basis in domestic law (meaning a legal competence), but must also be foreseeable and sufficiently limited as required by the rule of law:
68. Since the implementation in practice of measures of secret surveillance of communications is not open to scrutiny by the individuals concerned or the public at large, it would be contrary to the rule of law for the legal discretion granted to the executive to be expressed in terms of an unfettered power. Consequently, the law must indicate the scope of any such discretion conferred on the competent authorities and the manner of its exercise with sufficient clarity, (…).
When applying this interpretation, the Court finds that:
79. The foregoing considerations disclose that, at the very least, in its present state the law in England and Wales governing interception of communications for police purposes is somewhat obscure and open to differing interpretations. The Court would be usurping the function of the national courts were it to attempt to make an authoritative statement on such issues of domestic law (see, mutatis mutandis, the Deweer judgment of 27 February 1980, Series A no. 35, p. 28, in fine, and the Van Droogenbroeck judgment of 24 June 1982, Series A no. 50, p. 30, fourth sub-paragraph). The Court is, however, required under the Convention to determine whether, for the purposes of paragraph 2 of Article 8 (art. 8-2), the relevant law lays down with reasonable clarity the essential elements of the authorities’ powers in this domain.
Detailed procedures concerning interception of communications on behalf of the police in England and Wales do exist (see paragraphs 42-49, 51-52 and 54-55 above). What is more, published statistics show the efficacy of those procedures in keeping the number of warrants granted relatively low, especially when compared with the rising number of indictable crimes committed and telephones installed (see paragraph 53 above). The public have been made aware of the applicable arrangements and principles through publication of the Birkett report and the White Paper and through statements by responsible Ministers in Parliament (see paragraphs 21, 37-38, 41, 43 and 54 above).
Nonetheless, on the evidence before the Court, it cannot be said with any reasonable certainty what elements of the powers to intercept are incorporated in legal rules and what elements remain within the discretion of the executive. In view of the attendant obscurity and uncertainty as to the state of the law in this essential respect, the Court cannot but reach a similar conclusion to that of the Commission. In the opinion of the Court, the law of England and Wales does not indicate with reasonable clarity the scope and manner of exercise of the relevant discretion conferred on the public authorities. To that extent, the minimum degree of legal protection to which citizens are entitled under the rule of law in a democratic society is lacking.
80. In sum, as far as interception of communications is concerned, the interferences with the applicant’s right under Article 8 (art. 8) to respect for his private life and correspondence (see paragraph 64 above) were not "in accordance with the law".
In this case, Malone claimed that not only the interception of the content of his telephone conversations violated his right to privacy under the Convention, but also the capture of what we would now call metadata. The Court states with regard to this capture, which is known as ‘metering’:
83. The process known as ‘metering’ involves the use of a device (a meter check printer) which registers the numbers dialled on a particular telephone and the time and duration of each call (see paragraph 56 above). In making such records, the Post Office - now British Telecommunications - makes use only of signals sent to itself as the provider of the telephone service and does not monitor or intercept telephone conversations at all. From this, the Government drew the conclusion that metering, in contrast to interception of communications, does not entail interference with any right guaranteed by Article 8 (art. 8).
87. Section 80 of the Post Office Act 1969 has never been applied so as to "require" the Post Office, pursuant to a warrant of the Secretary of State, to make available to the police in connection with the investigation of crime information obtained from metering. On the other hand, no rule of domestic law makes it unlawful for the Post Office voluntarily to comply with a request from the police to make and supply records of metering (see paragraph 56 above). The practice described above, including the limitative conditions as to when the information may be provided, has been made public in answer to parliamentary questions (ibid.). However, on the evidence adduced before the Court, apart from the simple absence of prohibition, there would appear to be no legal rules concerning the scope and manner of exercise of the discretion enjoyed by the public authorities. Consequently, although lawful in terms of domestic law, the interference resulting from the existence of the practice in question was not ‘in accordance with the law’, within the meaning of paragraph 2 of Article 8 (art. 8-2) (see paragraphs 66 to 68 above).
Note that the ECtHR established that the practice of ‘metering’ is lawful under UK law, but in violation of art. 8.2 ECHR. Both the interception and the metering violate art. 8.2 because they are not ‘in accordance with the law’ as required by a treaty that binds the UK. This means that the UK has violated its legal obligations under the Convention and is now bound to ensure that these types of surveillance measures are based on a domestic law that both constitutes and sufficiently restricts its legal powers.
In 1990, in Huvig & Kruslin v France,10 the ECtHR determined that art. 8 was breached. The case concerned the interception of telephone conversations, as in the Malone case. The Court extensively refers to its considerations in the Malone judgment as to the requirement that such interceptions be ‘in accordance with the law’. It then states:
35. Above all, the system does not for the time being afford adequate safeguards against various possible abuses. For example, the categories of people liable to have their telephones tapped by judicial order and the nature of the offences which may give rise to such an order are nowhere defined. Nothing obliges a judge to set a limit on the duration of telephone tapping. Similarly unspecified are the procedure for drawing up the summary reports containing intercepted conversations; the precautions to be taken in order to communicate the recordings intact and in their entirety for possible inspection by the judge (who can hardly verify the number and length of the original tapes on the spot) and by the defence; and the circumstances in which recordings may or must be erased or the tapes be destroyed, in particular where an accused has been discharged by an investigating judge or acquitted by a court. The information provided by the Government on these various points shows at best the existence of a practice, but a practice lacking the necessary regulatory control in the absence of legislation or case-law.
36. In short, French law, written and unwritten, does not indicate with reasonable clarity the scope and manner of exercise of the relevant discretion conferred on the public authorities. This was truer still at the material time, so that Mr Kruslin did not enjoy the minimum degree of protection to which citizens are entitled under the rule of law in a democratic society (see the Malone judgment previously cited, Series A no. 82, p. 36, § 79). There has therefore been a breach of Article 8 (art. 8) of the Convention.
Note that in the Huvig & Kruslin judgment, the Court further details the nature of the restrictions that must be laid down by law, compared to the more general formulation in the Malone judgment.
Pre-crime surveillance (including surveillance by the intelligence services):
In 1978, in Klass v Germany,11 the ECtHR decided a case regarding surveillance measures taken by the secret services in Germany. I will quote the most relevant considerations from the judgment, which should clarify how the Court argues points of law and thus shapes the interpretation of legal conditions:
All five applicants claim that Article 10 para. 2 of the Basic Law (Grundgesetz) and a statute enacted in pursuance of that provision, namely the Act of 13 August 1968 on Restrictions on the Secrecy of the Mail, Post and Telecommunications (.., hereinafter referred to as ‘the G 10’), are contrary to the Convention.
They do not dispute that the State has the right to have recourse to the surveillance measures contemplated by the legislation; they challenge this legislation in that it permits those measures without obliging the authorities in every case to notify the persons concerned after the event, and in that it excludes any remedy before the courts against the ordering and execution of such measures.
Their application is directed against the legislation as modified and interpreted by the Federal Constitutional Court (Bundesverfassungsgericht).
The Court first discusses the admissibility of the complaint, raising the question whether the applicant is a victim of a violation by one of the contracting states.
33. (…) Article 25 (art. 25) [now art. 34, mh] does not institute for individuals a kind of actio popularis for the interpretation of the Convention; it does not permit individuals to complain against a law in abstracto simply because they feel that it contravenes the Convention. In principle, it does not suffice for an individual applicant to claim that the mere existence of a law violates his rights under the Convention; it is necessary that the law should have been applied to his detriment.
34. (…) The question arises in the present proceedings whether an individual is to be deprived of the opportunity of lodging an application with the Commission because, owing to the secrecy of the measures objected to, he cannot point to any concrete measure specifically affecting him. (…)
36. The Court points out that where a State institutes secret surveillance the existence of which remains unknown to the persons being controlled, with the effect that the surveillance remains unchallengeable, Article 8 (art. 8) could to a large extent be reduced to a nullity. It is possible in such a situation for an individual to be treated in a manner contrary to Article 8 (art. 8), or even to be deprived of the right granted by that Article (art. 8), without his being aware of it and therefore without being able to obtain a remedy either at the national level or before the Convention institutions. (…) The Court finds it unacceptable that the assurance of the enjoyment of a right guaranteed by the Convention could be thus removed by the simple fact that the person concerned is kept unaware of its violation. (…)
38. Having regard to the specific circumstances of the present case, the Court concludes that each of the applicants is entitled to ‘(claim) to be the victim of a violation’ of the Convention, even though he is not able to allege in support of his application that he has been subject to a concrete measure of surveillance.
This entails that the Court makes an exception to the requirement that applicants must claim and demonstrate in concrete terms to be a victim of a violation. Depending on the specific circumstances of the case at hand, the Court may decide to conduct an abstract test of the relevant legislation, attributing the status of ‘victim’ under what is now art. 34 ECHR to those who may have been subject to secret surveillance measures.
The Court then quotes relevant legislation, notably Art. 10 of the Basic Law of Germany:
(1) Secrecy of the mail, post and telecommunications shall be inviolable.
(2) Restrictions may be ordered only pursuant to a statute. Where such restrictions are intended to protect the free democratic constitutional order or the existence or security of the Federation or of a Land, the statute may provide that the person concerned shall not be notified of the restriction and that legal remedy through the courts shall be replaced by a system of scrutiny by agencies and auxiliary agencies appointed by the people’s elected representatives.
The Court begins by investigating whether the legislation that is contested by the applicants, constitutes an interference with art. 8.1 of the ECHR:
41. The first matter to be decided is whether and, if so, in what respect the contested legislation, in permitting the above-mentioned measures of surveillance, constitutes an interference with the exercise of the right guaranteed to the applicants under Article 8 para. 1 (art. 8-1). (…)
Furthermore, in the mere existence of the legislation itself there is involved, for all those to whom the legislation could be applied, a menace of surveillance; this menace necessarily strikes at freedom of communication between users of the postal and telecommunication services and thereby constitutes an ‘interference by a public authority’ with the exercise of the applicants’ right to respect for private and family life and for correspondence.
As is often the case, the Court takes a broad view of the scope of the first paragraph and decides that the legislation constitutes an infringement. The next question is whether the infringement is justified:
42. The cardinal issue arising under Article 8 (art. 8) in the present case is whether the interference so found is justified by the terms of paragraph 2 of the Article (art. 8-2).
The Court first tests whether the infringement is ‘in accordance with the law’:
43. In order for the ‘interference’ established above not to infringe Article 8 (art. 8), it must, according to paragraph 2 (art. 8-2), first of all have been ‘in accordance with the law’.
This requirement is fulfilled in the present case since the ‘interference’ results from Acts passed by Parliament, including one Act which was modified by the Federal Constitutional Court, in the exercise of its jurisdiction, by its judgment of 15 December 1970 (see paragraph 11 above).
In addition, the Court observes that, as both the Government and the Commission pointed out, any individual measure of surveillance has to comply with the strict conditions and procedures laid down in the legislation itself.’
This leads the Court to test whether the interference has a legitimate aim:
45. The G 10 defines precisely, and thereby limits, the purposes for which the restrictive measures may be imposed. It provides that, in order to protect against ‘imminent dangers’ threatening ‘the free democratic constitutional order’, ‘the existence or security of the Federation or of a Land’, ‘the security of the (allied) armed forces’ stationed on the territory of the Republic or the security of ‘the troops of one of the Three Powers stationed in the Land of Berlin’, the responsible authorities may authorise the restrictions referred to above (see paragraph 17).’
46. The Court, sharing the view of the Government and the Commission, finds that the aim of the G 10 is indeed to safeguard national security and/or to prevent disorder or crime in pursuance of Article 8 para. 2 (art. 8-2). In these circumstances, the Court does not deem it necessary to decide whether the further purposes cited by the Government are also relevant.
This brings the Court to test the final criterion of the triple test, investigating whether the interference is necessary in a democratic society. Below is an extensive quotation of (part of) the reasoning of the Court regarding the question whether the interference enabled by the legislation is proportional, considering what is at stake.
47. The applicants do not object to the German legislation in that it provides for wide-ranging powers of surveillance; they accept such powers, and the resultant encroachment upon the right guaranteed by Article 8 para. 1 (art. 8-1), as being a necessary means of defence for the protection of the democratic State.
The applicants consider, however, that paragraph 2 of Article 8 (art. 8-2) lays down for such powers certain limits which have to be respected in a democratic society in order to ensure that the society does not slide imperceptibly towards totalitarianism. In their view, the contested legislation lacks adequate safeguards against possible abuse.
49. As concerns the fixing of the conditions under which the system of surveillance is to be operated, the Court points out that the domestic legislature enjoys a certain discretion. It is certainly not for the Court to substitute for the assessment of the national authorities any other assessment of what might be the best policy in this field (…)
Nevertheless, the Court stresses that this does not mean that the Contracting States enjoy an unlimited discretion to subject persons within their jurisdiction to secret surveillance. The Court, being aware of the danger such a law poses of undermining or even destroying democracy on the ground of defending it, affirms that the Contracting States may not, in the name of the struggle against espionage and terrorism, adopt whatever measures they deem appropriate.
51. According to the G 10, a series of limitative conditions have to be satisfied before a surveillance measure can be imposed. (…)
52. The G 10 also lays down strict conditions with regard to the implementation of the surveillance measures and to the processing of the information thereby obtained. (…)
53. Under the G 10, while recourse to the courts in respect of the ordering and implementation of measures of surveillance is excluded, subsequent control or review is provided instead, in accordance with Article 10 para. 2 of the Basic Law, by two bodies appointed by the people’s elected representatives, namely, the Parliamentary Board and the G 10 Commission. (…)
54. The Government maintain that Article 8 para. 2 (art. 8-2) does not require judicial control of secret surveillance and that the system of review established under the G 10 does effectively protect the rights of the individual. The applicants, on the other hand, qualify this system as a ‘form of political control’, inadequate in comparison with the principle of judicial control which ought to prevail.
It therefore has to be determined whether the procedures for supervising the ordering and implementation of the restrictive measures are such as to keep the ‘interference’ resulting from the contested legislation to what is ‘necessary in a democratic society’.
55. Review of surveillance may intervene at three stages: when the surveillance is first ordered, while it is being carried out, or after it has been terminated. As regards the first two stages, the very nature and logic of secret surveillance dictate that not only the surveillance itself but also the accompanying review should be effected without the individual’s knowledge.
Consequently, since the individual will necessarily be prevented from seeking an effective remedy of his own accord or from taking a direct part in any review proceedings, it is essential that the procedures established should themselves provide adequate and equivalent guarantees safeguarding the individual’s rights.
In addition, the values of a democratic society must be followed as faithfully as possible in the supervisory procedures if the bounds of necessity, within the meaning of Article 8 para. 2 (art. 8-2), are not to be exceeded.
One of the fundamental principles of a democratic society is the rule of law, which is expressly referred to in the Preamble to the Convention (see the Golder judgment of 21 February 1975, Series A no. 18, pp. 16-17, para. 34). The rule of law implies, inter alia, that an interference by the executive authorities with an individual’s rights should be subject to an effective control which should normally be assured by the judiciary, at least in the last resort, judicial control offering the best guarantees of independence, impartiality and a proper procedure.
56. The Court considers that, in a field where abuse is potentially so easy in individual cases and could have such harmful consequences for democratic society as a whole, it is in principle desirable to entrust supervisory control to a judge.
Nevertheless, having regard to the nature of the supervisory and other safeguards provided for by the G 10, the Court concludes that the exclusion of judicial control does not exceed the limits of what may be deemed necessary in a democratic society.
58. In the opinion of the Court, it has to be ascertained whether it is even feasible in practice to require subsequent notification in all cases.
The activity or danger against which a particular series of surveillance measures is directed may continue for years, even decades, after the suspension of those measures.
Subsequent notification to each individual affected by a suspended measure might well jeopardise the long-term purpose that originally prompted the surveillance. Furthermore, as the Federal Constitutional Court rightly observed, such notification might serve to reveal the working methods and fields of operation of the intelligence services and even possibly to identify their agents.
In the Court’s view, in so far as the ‘interference’ resulting from the contested legislation is in principle justified under Article 8 para. 2 (art. 8-2) (see paragraph 48 above), the fact of not informing the individual once surveillance has ceased cannot itself be incompatible with this provision since it is this very fact which ensures the efficacy of the ‘interference’.
For these reasons the Court
1. holds unanimously that it has jurisdiction to rule on the question whether the applicants can claim to be victims within the meaning of Article 25 (art. 25) of the Convention;
2. holds unanimously that the applicants can claim to be victims within the meaning of the aforesaid Article (art. 25);
3. holds unanimously that there has been no breach of Article 8, Article 13 or Article 6 (art. 8, art. 13, art. 6) of the Convention.
This extensive quotation should contribute to a better understanding of the delicate and complex nature of the issues brought before the Court. This particular case (Klass) is a landmark case that functions as a building block for the reasoning in similar cases and requires the contracting states to incorporate necessary safeguards when developing and implementing legislation that enables surveillance by intelligence agencies.
In 2006 the ECtHR decided the case of Weber & Saravia v Germany,12 once again testing legislation regarding so-called strategic monitoring by intelligence services. In this case the Court specified in more detail what qualifies ‘interferences’ as being ‘in accordance with the law’. Though, after conducting the triple test, the Court decided that the contested legislation did not violate art. 8 ECHR, I will quote the minimum conditions the Court summed up for such interferences to qualify as being ‘in accordance with the law’.
95. In its case-law on secret measures of surveillance, the Court has developed the following minimum safeguards that should be set out in statute law in order to avoid abuses of power:
the nature of the offences which may give rise to an interception order;
a definition of the categories of people liable to have their telephones tapped;
a limit on the duration of telephone tapping;
the procedure to be followed for examining, using and storing the data obtained;
the precautions to be taken when communicating the data to other parties; and
the circumstances in which recordings may or must be erased or the tapes destroyed.
Since 2006 a number of cases have been decided on the issue of surveillance, whether in the context of post-crime or pre-crime measures or of measures taken by the intelligence services.13 This includes both concrete interferences and legislation that would enable such interferences. As recounted above, the latter is not normally open to scrutiny by the Court, as it concerns an abstract test of the compatibility of domestic law with the Convention. The Court, however, can make an exception when applicants claim that the nature of the legislation or practice is such that they cannot know whether or not they have been a victim of state surveillance.
With the above analyses, which closely follow the reasoning of the Court, readers should have sufficient analytical instruments to study, for instance, the case of Big Brother Watch and Others v. the United Kingdom of 2018.14 This case concerns complaints about the compatibility with art. 8 ECHR of three discrete regimes of mass surveillance in the UK: first, the regime for the bulk interception of communications under section 8(4) of the Regulation of Investigatory Powers Act 2000 (RIPA); second, the UK-US intelligence-sharing regime applied by the security service (MI5), the secret intelligence service (MI6) and the Government Communications Headquarters (GCHQ, which covers information and signals intelligence or ‘sigint’); and third, the regime for the acquisition of communications data under Chapter II of RIPA. The purpose of this work is not to provide an exhaustive overview of positive law in the realm of the right to privacy, but to provide computer scientists and students of computer science with a proper understanding of law as a scholarly discipline and a professional practice. In the end, the proof of the pudding is in the eating. The reader is invited and encouraged to do their own tastings of legal text, discovering the major impact of legal decision-making on potential violations of e.g. the right to privacy.
Since the Charter of Fundamental Rights of the European Union (CFREU, or ‘the Charter’) entered into force in 2009, the EU ‘has’ two fundamental rights regarding the processing of personal data:
Article 7 Respect for private and family life
Everyone has the right to respect for his or her private and family life, home and communications.
Article 8 Protection of personal data
Everyone has the right to the protection of personal data concerning him or her.
Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.
Compliance with these rules shall be subject to control by an independent authority.
This is a new situation in the realm of human rights, because no other Constitution or Human Rights Treaty attributes a right to the protection of personal data. Art. 52 of the Charter clarifies the relationship between art. 7 of the Charter and art. 8 of the ECHR, which both refer to the right to privacy.
3. In so far as this Charter contains rights which correspond to rights guaranteed by the Convention for the Protection of Human Rights and Fundamental Freedoms, the meaning and scope of those rights shall be the same as those laid down by the said Convention. This provision shall not prevent Union law providing more extensive protection.
This stipulates that art. 7 CFREU cannot be interpreted as providing less protection compared to art. 8 ECHR, but may be interpreted as attributing additional protection. To the extent that art. 8 CFREU corresponds to art. 8 ECHR, it can – similarly – not be interpreted as providing less protection than art. 8 ECHR, but it may provide additional protection.
Before diving deep into the General Data Protection Regulation (GDPR), which provides more detailed rules and principles for the processing of personal data, we will first investigate how the fundamental right to data protection compares to the fundamental right to privacy.
Some authors have argued that whereas, by default, the right to privacy is foremost an opacity right, data protection is a transparency right. As an opacity right, the right to privacy aims to safeguard a private sphere for individual citizens, where they can ward off interference by others, most notably the state. Here we recognise the idea that privacy is a liberty right: a negative right that obligates others to refrain from interference with the protected good. As a transparency right, the right to data protection aims to ensure that whenever personal data is processed (which includes collection, access, manipulation and any other usage), such processing is done in a transparent manner, in compliance with a set of conditions that should ensure fair and lawful processing.
Note that the opacity concerns the private sphere of an individual person, whereas the transparency concerns the state and other powerful actors when processing personal data. This accords with the core tenets of the Rule of Law, which hold that whereas government should be as transparent as possible, citizens should be shielded from intrusive transparency by the government.
Also, as discussed above, even though privacy is an opacity right that requires the state to refrain from interference (negative freedom), the right to privacy may nevertheless impose positive obligations on the state to enable individuals to exercise their right. Similarly, data protection is a transparency right that should enable individuals as well as others to act on their personal data (positive freedom), imposing a number of positive obligations on those who determine the purpose of processing; yet the right to data protection may nevertheless require that others abstain from processing personal data, thus imposing negative obligations on them.
Though one may be tempted to see the right to data protection as a subset of the right to privacy, this is not correct. Within the context of the EU, the right to privacy entails both more and less than the right to data protection. We can portray this as in figure 4 below.
Whenever the processing of personal data constitutes an interference with the right to privacy, there is an overlap. The right to privacy, however, also concerns interference with bodily integrity, decisional privacy, privacy of the home and correspondence when no processing of personal data is involved. This is where the right to privacy entails more than the right to data protection.
Similarly, the right to data protection also concerns the processing of personal data when there is no interference with the right to privacy. For instance, when one’s personal data are processed on one’s own request, e.g. the processing of an address or banking details to deliver goods and charge one’s account as a consequence of the sale of a book.
Note that if such data are subsequently used for other purposes, e.g. to support the business model of a webshop by way of targeted advertising, privacy may be at stake. Whether or not this is the case also relates to the fact that the right to privacy, as discussed above, is primarily at stake in the vertical relationship between a government and its citizens, whereas the right to data protection seems to be applicable to all those who process personal data. This is certainly the case for the GDPR.
The right to privacy can be invoked in a national court of law, for instance in the course of criminal or administrative proceedings. As discussed above, individual citizens have a right to present their claim to the ECtHR, which resides in Strasbourg, but this can only be done after exhausting national remedies. That means that if one fails to claim a violation of art. 8 ECHR at the national level, or if one fails to appeal against a judgment that denies such a violation, the application to the ECtHR will be inadmissible. See art. 34 and 35 ECHR:
ARTICLE 34 Individual applications
The Court may receive applications from any person, nongovernmental organisation or group of individuals claiming to be the victim of a violation by one of the High Contracting Parties of the rights set forth in the Convention or the Protocols thereto. The High Contracting Parties undertake not to hinder in any way the effective exercise of this right.
ARTICLE 35 Admissibility criteria
The Court may only deal with the matter after all domestic remedies have been exhausted, according to the generally recognised rules of international law, and within a period of six months from the date on which the final decision was taken.
Both the right to privacy and the right to data protection of the CFREU have direct application in the MSs of the EU. This means one can invoke them in a national court of law. If, however, a question is raised about the interpretation of the Charter, art. 267 Treaty on the Functioning of the EU (TFEU) stipulates that so-called preliminary questions can, or must, be referred to the CJEU (which resides in Luxembourg):
The Court of Justice of the European Union shall have jurisdiction to give preliminary rulings concerning:
(a) the interpretation of the Treaties;
(b) the validity and interpretation of acts of the institutions, bodies, offices or agencies of the Union;
Where such a question is raised before any court or tribunal of a Member State, that court or tribunal may, if it considers that a decision on the question is necessary to enable it to give judgment, request the Court to give a ruling thereon.
Where any such question is raised in a case pending before a court or tribunal of a Member State against whose decisions there is no judicial remedy under national law, that court or tribunal shall bring the matter before the Court.
Clearly, both European Courts play an important role with respect to national jurisdiction regarding human and fundamental rights. The case law of both Courts is a pivotal source of law and will remain pivotal throughout this work.
The history of data protection law goes back to the 1970s, when various countries enacted legislation to ensure fair processing of personal information by the government. An early example was the US Privacy Act of 1974,15 which instituted a set of fair practices for dealing with personal information.
In 1980, the global Organisation for Economic Co-operation and Development (OECD) issued the so-called Fair Information Principles (FIPs), as part of its (non-binding) Guidelines governing the protection of privacy and transborder flows of personal data:
Collection Limitation Principle
7. There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.
Data Quality Principle
8. Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date.
Purpose Specification Principle
9. The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfilment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.
Use Limitation Principle
10. Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with Paragraph 9 except:
a) with the consent of the data subject; or
b) by the authority of law.
Security Safeguards Principle
11. Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data.
Openness Principle
12. There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.
Individual Participation Principle
13. Individuals should have the right:
a) to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to them;
b) to have communicated to them, data relating to them
i. within a reasonable time;
ii. at a charge, if any, that is not excessive;
iii. in a reasonable manner; and
iv. in a form that is readily intelligible to them;
c) to be given reasons if a request made under subparagraphs (a) and (b) is denied, and to be able to challenge such denial; and
d) to challenge data relating to them and, if the challenge is successful to have the data erased, rectified, completed or amended.
Accountability Principle
14. A data controller should be accountable for complying with measures which give effect to the principles stated above.
The version quoted has been taken from the updated Guidelines of 2013. The update does not concern the FIPs themselves, but aims to strengthen worldwide enforcement and accountability. With an eye to the increased scale of data processing and the new techniques for data analytics, the OECD recommends a risk-based approach that is proactive rather than reactive when it comes to the rights and freedoms of those affected by the processing of personal data.
Since 1980, many states have enacted data protection legislation, often following the FIPs. The EU Data Protection Directive (DPD) of 1995 was a prime example of a legally binding implementation of the OECD Guidelines. Since May 2018 the DPD has been succeeded by the GDPR. Just like the updated OECD Guidelines, the basic rules and principles that underlie the GDPR are largely the same as those of the DPD. The difference regards enforcement and various obligations to engage in proactive compliance. Again, a practical and effective reinforcement of the accountability principle is the most significant change.
In the US data protection is part of the right to privacy (in Constitutional and tort law) and subject to sectoral legislation, notably with regard to finance, healthcare, special protection of children and consumer protection. There is no general law on data protection, apart from the 1974 Privacy Act (which only applies to Federal Agencies). This means that the protection of personal data varies with the context of processing. In commercial contexts, much of the actual protection depends on the competences of the Federal Trade Commission (FTC), based on section 5 of the FTC Act:
(1) Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful.
(2) The Commission is hereby empowered and directed to prevent persons, partnerships, or corporations, [except certain specified financial and industrial sectors] from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.
Based on this, the FTC is tasked with protecting consumer privacy and data security in commercial contexts. The notion of a reasonable expectation of privacy is a core concept, because consumer trust is pivotal for a well-functioning market in ecommerce. The FTC deals with violations on a case-by-case basis, but also issues so-called ‘rulings’ if it believes specific types of violations are prevalent. Such ‘rulings’ basically declare how the FTC will use its section 5 competence, thus encouraging companies to change their behaviour. The FTC is often qualified as ‘the regulator’ concerning informational privacy, due to its central role in US policy making regarding data protection.
In the EU, the situation is altogether different, due to the general applicability of EU data protection law, which does not depend on whether a violation can be framed as ‘an unfair or deceptive act in or affecting commerce’. In the next subsection, we will provide an extensive discussion of the core content of EU data protection law. One could say that whereas in the US the processing of personal information is allowed unless it has been explicitly restricted, in the EU any processing of personal data is conditioned by a set of rules and principles which impose obligations on those who process data and attribute rights to those whose personal data is at stake.
The General Data Protection Regulation (GDPR) is based on art. 16 of the Treaty on the Functioning of the EU (TFEU), which reads:
1. Everyone has the right to the protection of personal data concerning them.
2. The European Parliament and the Council, acting in accordance with the ordinary legislative procedure, shall lay down the rules relating to the protection of individuals with regard to the processing of personal data by Union institutions, bodies, offices and agencies, and by the Member States when carrying out activities which fall within the scope of Union law, and the rules relating to the free movement of such data. Compliance with these rules shall be subject to the control of independent authorities.
The GDPR protects the fundamental right to data protection as stipulated in art. 8 of the CFREU. However, the GDPR goes beyond this, explicitly aiming to protect all the fundamental rights and freedoms that are implicated by the processing of personal data. But this is not the only goal of the Regulation. At the same time, the Regulation aims to prevent different levels of data protection within the jurisdictions of the MSs from obstructing the internal market. So, harmonisation of protection to ensure a free flow of personal data is the second, equally important goal of the GDPR:
Article 1 Subject-matter and objectives
1.This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.
2.This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.
3.The free movement of personal data within the Union shall be neither restricted nor prohibited for reasons connected with the protection of natural persons with regard to the processing of personal data.
As discussed above (in section 4.3), the EU has developed from the European Economic Community (EEC), where the most important goal was the creation of an internal market, based on the ‘four freedoms’: free movement of capital, persons, goods and services. As paragraph 3 of art. 1 GDPR clarifies, this Regulation involves ‘full harmonisation’, which means that MSs are not allowed to provide either less or more protection than what is offered in the Regulation (with the exception of explicitly formulated discretion). Full harmonisation ensures the absence of obstructions of the internal market, due to different requirements in terms of data protection. The fact that the GDPR is a regulation instead of a directive confirms the wish to eradicate such obstructions, thus hoping to boost data-driven business across national borders.
So far, we have seen that the sources of law consist of legislation and treaties, case law, doctrine, customary law and fundamental principles. In the case of EU data protection law, we have the founding Treaties,16 the Charter, the GDPR, the Police Data Protection Directive (PDPD),17 the ePrivacy Directive (ePD),18 and a whole series of other Regulations and Directives that may at some point be relevant (but will not be discussed). Next to this we have the case law of the CJEU regarding data protection issues, decisions and policies of the supervisory authorities in the MSs and the European Data Protection Supervisor, and we have doctrinal treatises and journal articles which analyse and discuss the legislation, the case law and the underlying principles and practices.
In the case of EU data protection law, we have one more source of law, which has played an important role in the interpretation of the former DPD: the Opinions and Guidelines of the independent Art. 29 Working Party (Art. 29 WP).19 This was the advisory body (instituted by art. 29 of the DPD) that produced a great number of highly relevant interpretations of EU data protection law, which continue to function as an important source of law. Though its output was not binding, it has persuasive authority based on the experience and expertise of its members (the data protection supervisors of the MSs) and based on its official task, which was to advise on proper implementation of EU data protection law. Most of the Opinions, Guidelines and Recommendations of the Art. 29 WP are equally relevant under the GDPR, as the core principles and concepts have not changed.
The Art. 29 Working Party has been replaced, under the GDPR, by the independent European Data Protection Board (EDPB),20 instituted in arts. 68-76 GDPR, again consisting of the supervisory authorities of all the MSs of the EU and again tasked with advising on the correct interpretation of EU data protection law. The EDPB has further tasks in contributing to a harmonised approach among the national supervisors throughout the Union.
The material scope of the GDPR is limited to ‘the processing of personal data’ (art. 2(1)). The definition of ‘processing’, however, is very broad, as art. 4(2) reads:
‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction;
The GDPR does not apply to the processing of personal data within the context of a household, nor does it apply to the processing of personal data in the context of the prevention and prosecution of crime and threats to public security.21 The household exception will usually exempt the users of social networks, but not the providers (see below). With regard to the prevention and prosecution of crime, the Police Data Protection Directive (PDPD) is in force, based on art. 39 Treaty on European Union (TEU).22 Since the EU has no competence regarding public security (intelligence services), there is no EU legislation on the processing of personal data in the context of threats to public security. Note that the ECHR does apply to issues of public security, so insofar as privacy is infringed, measures can be tested against art. 8 ECHR (see above section 5.3.5, notably the cases of Klass, and Weber and Saravia).
Next to the exemptions of art. 2, art. 23 states that MSs may enact legislation to restrict the applicability of specific GDPR provisions, if such restrictions concern measures that are necessary in a democratic society and target a limited set of goals, such as national security, defence, public security, the prevention, investigation, detection and prosecution of criminal offences or of breaches of ethics for regulated professions, or an important objective of general public interest of a Member State or of the European Union, including monetary, budgetary and taxation matters. Such restrictions are only valid insofar as they respect the essence of the fundamental rights and freedoms. Note that though restrictions based on these goals are allowed if they pass the proportionality test (‘necessary in a democratic society’ clearly refers to art. 8.2 ECHR), they also require a basis in law.
The territorial scope of the GDPR is defined in art. 3:
1.This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.
Note that if a tech company has an establishment in the EU, the GDPR applies to the processing of personal data, even if the processing takes place elsewhere. At some point a tech company relocated its headquarters from Ireland to the US, because otherwise data subjects in countries outside the EU could appeal to the Irish data protection supervisor under the GDPR.
2.This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to:
(a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
(b) the monitoring of their behaviour as far as their behaviour takes place within the Union.
Here we see that if a company decides to offer goods or services (whether or not they are free) that involve the processing of personal data of data subjects in the Union, or monitor their behaviour in the Union, the GDPR applies, irrespective of whether the company is established in the Union. Note that this jurisdiction is limited to data subjects who are in the Union; it does not apply to EU citizens outside the Union, but does apply to non-EU citizens when they are in the Union.
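For a computing audience, the two-pronged logic of art. 3 can be sketched as a small decision function. This is a simplification for illustration only: the parameter names and the boolean framing are mine, not the Regulation’s, and the sketch ignores the many interpretive questions the CJEU and the EDPB have addressed (e.g. what counts as ‘offering’ or ‘monitoring’).

```python
def gdpr_applies(*, establishment_in_eu: bool,
                 processing_in_context_of_establishment: bool = False,
                 data_subjects_in_eu: bool = False,
                 offers_goods_or_services_to_them: bool = False,
                 monitors_their_behaviour_in_eu: bool = False) -> bool:
    """Simplified (illustrative) reading of art. 3(1)-(2) GDPR."""
    # Art. 3(1): establishment criterion; where the processing itself
    # takes place is irrelevant.
    if establishment_in_eu and processing_in_context_of_establishment:
        return True
    # Art. 3(2): targeting criterion; hinges on data subjects being
    # in the Union, not on their citizenship.
    if data_subjects_in_eu and (offers_goods_or_services_to_them
                                or monitors_their_behaviour_in_eu):
        return True
    return False

# A non-EU company monitoring the behaviour of people who are in the Union:
assert gdpr_applies(establishment_in_eu=False, data_subjects_in_eu=True,
                    monitors_their_behaviour_in_eu=True)
# The same company targeting EU citizens who are outside the Union:
assert not gdpr_applies(establishment_in_eu=False, data_subjects_in_eu=False,
                        offers_goods_or_services_to_them=True)
```

The two asserts mirror the point made in the text: presence in the Union, not EU citizenship, triggers the targeting criterion of art. 3(2).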
Art. 4(1) GDPR clarifies that
'personal data' means:
‘any information relating to
an identified or
identifiable natural person (“data subject”)’
where ‘an identifiable person’ is defined as
‘one who can be identified,
directly or indirectly,
in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;’
Many authors have pointed out that this entails a very broad view of ‘personal data’, potentially bringing nearly any data under the heading of personal data. This is especially the case as the combination of increased availability with increased searchability and linkability of massive amounts of data will enable identification and re-identification of data that would previously not have been considered personal data. At some point, data about the weather, about room temperature, about the arrival of a train may become personal data, when they can be related to a person who can be singled out. Recital 26 reads:
To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.
The criterion for determining whether data are ‘identifiable’, and thus personal, is whether it is ‘reasonably likely’ that a person can e.g. be singled out. The recital continues:
To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.
Here we see that the ‘reasonably likely’ criterion should be understood as an objective criterion, taking into account the costs, the time and effort, and the available technical means at the time of processing. In the case of Breyer v Germany,23 the CJEU decided that even a dynamic IP address may qualify as personal data, depending on whether the link with a specific person can be made. The case concerned government websites that processed dynamic IP addresses, keeping them longer than was necessary for providing access to the sites. What made this case special is that the ability to link the IP address to a specific person was not in the hands of the operators of the government websites but in the hands of internet service providers (ISPs). The CJEU found that because ISPs could be ordered by a court to provide information about the user of a dynamic IP address, this IP address should not be considered anonymous.
So personal data is any data that relates to an identifiable natural person (excluding legal persons such as corporations), and a data subject is the identifiable natural person to whom the data relate.
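The linkability concern discussed above can be made concrete with a minimal sketch (all names and data are hypothetical): a supposedly ‘anonymous’ dataset still contains quasi-identifiers (here postcode and birth year) that, joined with an auxiliary public register, single out one person, turning the record back into personal data.

```python
# Hypothetical 'anonymised' dataset: direct identifiers removed,
# but quasi-identifiers (postcode, birth_year) retained.
anonymous_records = [
    {"postcode": "1011", "birth_year": 1980, "diagnosis": "asthma"},
    {"postcode": "1011", "birth_year": 1955, "diagnosis": "diabetes"},
]

# Hypothetical auxiliary dataset, e.g. a public register.
public_register = [
    {"name": "A. Jansen", "postcode": "1011", "birth_year": 1980},
    {"name": "B. Smit", "postcode": "2513", "birth_year": 1955},
]

def link(anon, register):
    """Return (name, record) pairs where the quasi-identifiers
    match exactly one person in the register."""
    results = []
    for record in anon:
        matches = [p for p in register
                   if p["postcode"] == record["postcode"]
                   and p["birth_year"] == record["birth_year"]]
        if len(matches) == 1:  # singled out: identifiable after all
            results.append((matches[0]["name"], record))
    return results

print(link(anonymous_records, public_register))
# Only the first record links uniquely: A. Jansen's diagnosis is re-identified.
```

Because re-identification here requires only a trivial join with publicly available data, it is ‘reasonably likely’ in the sense of recital 26, and the records would count as personal data despite the removal of names.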
The material scope of the GDPR regards (as discussed above) the processing of personal data. This implies that to avoid applicability of the GDPR, one could ‘simply’ anonymise previously personal data. There are two caveats here. First, anonymisation is itself a form of processing, and thereby requires a valid legal ground (see below). Second, anonymisation is not easy, because the risk of re-identification easily turns ‘anonymous’ data into identifiable and thus personal data. In practice, anonymisation will often remove so much information from the data that it is no longer relevant for the purpose of processing. To better understand the difference between personal and anonymised data, we can best check the definition of ‘pseudonymisation’ in art. 4(5):
the processing of personal data in such a manner that
the personal data can no longer be attributed to a specific data subject
without the use of additional information,
provided that such additional information is kept separately and
is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person;
First, we see that pseudonymous data is defined as a subset of personal data. Second, it is defined as data from which any identifying information has been removed and stored separately, subject to technical and organisational measures that resist re-identification. Pseudonymisation is thus a way to comply with data protection law (by way of data minimisation), not a way to avoid its applicability. With regard to encryption, key management that enables a party other than the data subject to decrypt will mostly qualify as pseudonymisation, not as anonymisation.
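The structure that art. 4(5) describes can be sketched in a few lines of Python (the record and field names are purely illustrative): direct identifiers are replaced by a random token, and the lookup table mapping tokens back to identities is kept apart, under separate safeguards. Because that table allows re-identification, the working data remain personal data, merely pseudonymised.

```python
import secrets

# Hypothetical customer records; field names are illustrative only.
records = [
    {"name": "A. Jansen", "email": "a.jansen@example.org", "purchase": "book"},
    {"name": "B. Smit", "email": "b.smit@example.org", "purchase": "e-reader"},
]

def pseudonymise(records, identifying_fields):
    """Replace identifying fields with a random pseudonym.

    Returns (working_data, lookup_table). Per art. 4(5) GDPR, the
    lookup table must be stored separately, under technical and
    organisational safeguards; as long as it exists, the working
    data remain personal data."""
    working, lookup = [], {}
    for record in records:
        token = secrets.token_hex(8)  # random token, not derived from the data
        lookup[token] = {f: record[f] for f in identifying_fields}
        stripped = {k: v for k, v in record.items() if k not in identifying_fields}
        stripped["pseudonym"] = token
        working.append(stripped)
    return working, lookup

working_data, lookup_table = pseudonymise(records, {"name", "email"})
assert all("name" not in r and "email" not in r for r in working_data)
```

Note that replacing the random token with a keyed hash or encryption of the identifiers changes nothing in principle: whoever holds the key plays the role of the lookup table, which is why such schemes generally count as pseudonymisation rather than anonymisation.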
Art. 4(7) GDPR defines 'controller' as:
the natural or legal person, public authority, agency or any other body
which alone or jointly with others
determines the purposes and means of the processing of personal data;
The definition of ‘data controller’ is crucial, because the ‘data controller’ is both accountable and liable for compliance with all the obligations of the GDPR, including obligations to implement a pro-active approach to potential risks to the fundamental rights and freedoms of data subjects. The ‘data controller’ is basically defined as whoever determines the purpose of processing, whereby the CJEU checks who determines such purpose in practice, not merely on paper. The ‘data controller’ also determines the means of processing, but this can be outsourced to a data processor, defined as (art. 4(8)):
a natural or legal person, public authority, agency or any other body
which processes personal data on behalf of the controller;
Here, we clearly see that the data controller remains accountable for the choice of the means of processing, even if that choice is made by a processor. When the landmark case on the so-called right to be forgotten was decided in 2014 (Google Spain v Costeja),24 one of the most important issues was whether Google should be qualified as a data controller or a data processor. Google had argued that its search engine has no other function than to provide its users with automatically generated search results, thereby claiming that it is the user, not the service provider, who determines the purpose of the processing. Google argued that its search engine is merely a choice of means (the PageRank algorithm) employed in the service of users that decide the purpose of the search. The highest advisor of the CJEU (the Court), holding the office of Advocate General (AG) and required to provide a so-called Opinion (advice) to the Court, had taken the position that in the case of a search engine, the service provider is indeed a data processor, not the controller. Surprisingly, the Court (which is not bound by the Opinion of the AG) took another position, based on the fact that Google Spain (the subsidiary that sells advertising space on the search engine's pages directed to Spanish users) has its own business model and thereby determines the purpose of processing. If the Court had not qualified Google Spain as a data controller, it could never have required Google to de-reference the news item that Costeja wished to have erased.
Another pivotal case of 2018 concerned the fanpage of Wirtschaftsakademie,25 used to provide services in the realm of education. The fanpage was hosted on Facebook, which enabled the operator to obtain anonymous statistical details on the website visitors via the ‘Facebook Insights’ function, which Facebook offers free of charge under non-negotiable conditions. The CJEU decided that the operator of the fanpage was a joint controller, together with Facebook, as the statistics were obtained by processing cookies placed on the terminal equipment of the visitors. Since the purpose of the processing of such cookies is co-decided by the fanpage operator, the operator is jointly responsible for the necessary processing of personal data, even though it has no control over the data processing and was not given access to the data. Under the GDPR this would be based on art. 26, which reads that
Where two or more controllers jointly determine the purposes and means of processing, they shall be joint controllers. They shall in a transparent manner determine their respective responsibilities for compliance with the obligations under this Regulation, in particular as regards the exercising of the rights of the data subject and their respective duties to provide the information (…).
As in the case of Google Spain v Costeja, where the AG argued that Google was merely a processor, acting on request of the users of the search engine, one could argue that in this case, Facebook is acting as a processor for the fanpage operator who wishes to obtain the statistics. In line with the Court in Google Spain v Costeja, the Court decided that Facebook is the controller, not the processor. In this case, however, the fanpage operator – unlike the users of a search engine – is considered a joint controller.26
The processing of personal data is only allowed on the basis of one of six legal grounds. Contrary to what many people seem to think, consent is just one of those legal grounds and not necessarily the most obvious. Art. 6 GDPR reads:
a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes;
Under the DPD ‘for one or more specific purposes’ was not explicitly mentioned, though it was obvious from requirements detailed elsewhere. Under the GDPR it is explicitly clear that consent is only valid if the purpose has been specified. As art. 5 stipulates that data may only be processed if necessary for the specified purpose, this means that consent can only concern the processing of personal data that is necessary for the purpose that was communicated. All the other grounds stipulate that the processing must be necessary in relation to the ground.
Valid consent will be further discussed in a dedicated subsection below.
b) processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;
This entails that once the contract has been concluded and performed and the data are no longer necessary (goods or service delivered, invoice paid), they may no longer be processed on this ground. Further processing will require another ground, e.g. consent (for another purpose).
c) processing is necessary for compliance with a legal obligation to which the controller is subject;
Much processing is mandatory due to legal obligations, such as processing by the tax authority, social security agency, land registry or by commercial enterprises that must comply with e.g. employment, social security and tax legislation. Art. 6.3 stipulates that such processing must be based on Member State (MS) or Union law, which must contain the specific purpose(s) of processing and relevant limitations and safeguards.
d) processing is necessary in order to protect the vital interests of the data subject or of another natural person;
This ground must be understood as concerning life threatening situations, where e.g. medical data must be processed to save someone’s life.
e) processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller;
This ground is comparable to the c-ground, but here there is not a legal obligation but rather a legal competence or task that requires the processing of personal data. We can think of processing by various types of government agencies that provide support to those in need, or that need to collect information on energy usage to develop policies on the reduction of energy consumption. Note that to the extent that such information can rely on aggregated or otherwise anonymised data, the processing of personal data is not necessary and cannot be based on this ground. Art. 6.3 stipulates that such processing must be based on MS or Union law, which must contain the specific purpose(s) of processing and relevant limitations and safeguards.
f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.
Point (f) of the first subparagraph shall not apply to processing carried out by public authorities in the performance of their tasks.
The f-ground is important for processing carried out by the commercial sector, including financial institutions, social networks and search engines, and we may expect that value-added service providers in the context of smart homes, smart grids and connected cars will base the processing of data that is not necessary for the primary process (which may often be based on contract) on the f-ground. As the economic interests of a business, including its competitive edge and innovative potential, often depend on advertising revenue and/or the sale of personal data or inferred profiles, the f-ground is a tempting basis to the extent that the other grounds do not cover processing that is not necessary.
However, the f-ground requires a balancing test. As one can imagine, the business interests of a company cannot, by default, overrule the interests or fundamental rights and freedoms of data subjects whose behavioural data are used to generate income (thus e.g. enabling the so-called free services of social networks and search engines). This has two consequences:
The controller has to assess (before initiating the processing) whether its economic interests in processing personal data that are not necessary to provide a requested service can overrule the interests and rights of those whose data are used for micro-targeting or other ways of monetising the data.
The data subject can object to the processing based on their particular situation, in which case the controller must stop processing unless it can demonstrate compelling legitimate grounds that override the interests, rights and freedoms of the data subject. The right to object is based on art. 21 GDPR and also concerns the e-ground.
The balancing test required of the controller, entails the following considerations:27
the nature and source of the legitimate interest and whether the data processing is necessary for the exercise of a fundamental right, is otherwise in the public interest, or benefits from recognition in the community concerned;
the impact on the data subject and their reasonable expectations about what will happen to their data, as well as the nature of the data and how they are processed;
additional safeguards which could limit undue impact on the data subject, such as data minimisation, privacy-enhancing technologies, increased transparency, a general and unconditional right to opt-out, and data portability.
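The considerations above can be documented in a structured way. The sketch below is a hypothetical record format for such an assessment; the attribute names paraphrase the considerations and the screening logic is an illustrative assumption, not the legal test itself:

```python
from dataclasses import dataclass, field

# Hypothetical record format for documenting an art. 6(1)(f) balancing
# test; attribute names paraphrase the considerations listed above.
@dataclass
class BalancingTestRecord:
    legitimate_interest: str          # nature and source of the interest
    necessity_shown: bool             # is the processing necessary at all?
    impact_on_subject: str            # e.g. 'low', 'high'
    within_reasonable_expectations: bool
    safeguards: list = field(default_factory=list)  # e.g. pseudonymisation

    def favours_processing(self) -> bool:
        # Necessity is a minimum condition; safeguards may compensate
        # for processing outside the subject's reasonable expectations.
        return self.necessity_shown and (
            self.within_reasonable_expectations or len(self.safeguards) >= 2
        )
```

Recording the test in this form also serves the accountability principle of art. 5.2, since the controller must be able to demonstrate that the balancing took place before the processing started.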
The f-ground is often used to legitimate the advertising business model of free services. For instance, in the Google Spain v Costeja case discussed above, the CJEU concluded that Google was processing personal data based on its legitimate business interests. In this particular case, the CJEU considered two types of legitimate interests that might overrule Costeja’s interest in having a particular search result de-referenced: first, the interest of the controller; second, the interests of third parties, namely the users of the search engine.
First, the Court looks into the economic interest of Google Spain in sustaining its business model, because the right to erase and the right to object that Costeja invoked would involve costs on the side of Google (especially because many others may similarly submit requests to de-reference). The CJEU found that:
81 In the light of the potential seriousness of that interference, it is clear that it cannot be justified by merely the economic interest which the operator of such an engine has in that processing. (…)
The seriousness of the interference, in this case, was argued in considerations 37 and 38:
37 (…) the organisation and aggregation of information published on the internet that are effected by search engines with the aim of facilitating their users’ access to that information may, when users carry out their search on the basis of an individual’s name, result in them obtaining through the list of results a structured overview of the information relating to that individual that can be found on the internet enabling them to establish a more or less detailed profile of the data subject.
38 Inasmuch as the activity of a search engine is therefore liable to affect significantly, and additionally compared with that of the publishers of websites, the fundamental rights to privacy and to the protection of personal data, (…).
Second, the Court considered the legitimate interests of users of the search engine in having access to the search result that may be de-referenced:
81 (…) However, inasmuch as the removal of links from the list of results could, depending on the information at issue, have effects upon the legitimate interest of internet users potentially interested in having access to that information, in situations such as that at issue in the main proceedings a fair balance should be sought in particular between that interest and the data subject’s fundamental rights under Articles 7 and 8 of the Charter. Whilst it is true that the data subject’s rights protected by those articles also override, as a general rule, that interest of internet users, that balance may however depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life.
Here we see a clash between the freedom of information of search engine users and the right to data protection of the data subject, which requires some subtle balancing. Note, however, that the Court is not discussing the removal of content from the internet, but the de-referencing of a search result that links to such content.
The EDPB considers that in principle a controller must make up its mind which legal basis justifies a particular type of processing operations; controllers cannot, for instance, process personal data based on consent and then shift to the legitimate-interest ground after the data subject withdraws their consent.28 The EDPB also refers to art. 13.1 and 14.1, which require the controller to provide information about the purpose(s) and the legal basis for its processing operations, meaning that this should be clarified from the start.29
Next to, and thus on top of, having a legal ground, art. 5 of the GDPR stipulates a set of rules under the heading of ‘Principles relating to the processing of personal data’. Though the use of the term ‘principles’ could suggest that these are just some underlying assumptions, they are in fact rules that must be complied with. We will follow the wording of the article, discussing each paragraph along the way (the principles in bold are part of the article, emphasis is mine):
(a) processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’);
Though one may think that lawfulness merely refers to art. 6, which contains the legal basis, the term ‘lawfulness’ also refers to the bigger picture of the Rule of Law, as with the requirement that infringements of the right to privacy under art. 8 ECHR must be ‘in accordance with the law’. This means that a mere basis in law is not enough; lawfulness must be understood in qualitative terms, including respect for legitimate expectations, independent oversight and other checks and balances that ensure that the legal basis of art. 6 is valid (see also art. 6.3). Similarly, fairness refers to various balancing and proportionality tests, taking note of the relevant interests and fundamental rights that are at stake. Transparency is further detailed in art. 12, 13 and 14 GDPR.
(b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’);
Here, we can identify one of the most important principles of EU data protection law. The idea that data must be collected and processed for one or more legitimate purposes that have been made explicit and are sufficiently specified pervades the regulation, while at the same time pinpointing whoever – de facto – determines such purpose(s) as the responsible, accountable and liable entity (the data controller). Purpose is, in a way, the vanishing point of the architecture of EU data protection law.
Further processing for another purpose is allowed if the purpose is not incompatible with the initial purpose, as communicated to the data subject. To determine whether the new purpose is compatible, art. 6(4) provides the following indications: any link between the old and the new purpose, the context of collection and the relationship between controller and subject, the nature and sensitivity of the data, the potential consequences of further processing for the data subject, and the existence of appropriate safeguards, such as encryption or pseudonymisation. In case of consent for the new purpose, or a legal obligation that involves the new purpose, processing is based on that new ground and cannot be based on compatibility with the initial purpose.
Secondary usage (further processing) for scientific or statistical research or archiving in the public interest is considered compatible by default. The GDPR contains an extensive exception for such processing in art. 89, with further exceptions for medical research in e.g. art. 9.2(h). Recital 33 furthermore indicates that ‘[i]t is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research. Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose’.
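The art. 6(4) indications can be thought of as a checklist that a controller must work through before further processing for a new purpose. The following sketch is illustrative only: the factor names paraphrase the article, and requiring every factor to be assessed favourably is a design assumption, not the legal test:

```python
# Hypothetical checklist for the compatibility assessment of art. 6(4).
COMPATIBILITY_FACTORS = (
    "link_between_old_and_new_purpose",
    "context_and_relationship",
    "nature_and_sensitivity_of_data",
    "consequences_for_data_subjects",
    "appropriate_safeguards",  # e.g. encryption or pseudonymisation
)

def compatible_purpose(assessment: dict) -> bool:
    # A missing factor means the assessment is incomplete, so further
    # processing should not proceed on the compatibility route.
    return all(assessment.get(factor, False) for factor in COMPATIBILITY_FACTORS)
```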
(c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
Data minimisation is another core principle, which also underlies the principles of purpose limitation and storage limitation. In the DPD this principle was articulated as ‘adequate, relevant and not excessive’, whereas the criterion is now ‘adequate, relevant and limited to what is necessary’. This is a further restriction, moving towards strict proportionality and subsidiarity, and thereby also relates to the requirement to pseudonymise or anonymise the data as soon as possible. This principle links consent to necessity, as observed above. It also connects with the right to request erasure if processing is irrelevant for the given purpose.
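As a sketch of what data minimisation can mean in code, assume a hypothetical mapping from specified purposes to the fields necessary for each purpose; everything else is dropped before processing:

```python
# Illustrative mapping from specified purposes to the fields necessary
# for each purpose; both the purposes and the field sets are assumptions.
NECESSARY_FIELDS = {
    "invoicing": {"name", "address", "order_id", "amount"},
    "delivery": {"name", "address"},
}

def minimise(record: dict, purpose: str) -> dict:
    # Keep only what is necessary for the specified purpose; an unknown
    # purpose yields no data at all.
    allowed = NECESSARY_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Making the purpose an explicit parameter of every processing step mirrors the central role that purpose plays in the architecture of the GDPR.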
(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’);
Here the principle of accuracy is formulated as a legal obligation of the data controller, but this connects with the rights of erasure and rectification in the case that data are inaccurate.
(e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) subject to implementation of the appropriate technical and organisational measures required by this Regulation in order to safeguard the rights and freedoms of the data subject (‘storage limitation’);
Storage limitation basically requires that controllers engage in life cycle management of the personal data they process, removing them e.g. when the purpose is exhausted and processing is no longer relevant. The exception for scientific research and archiving, mentioned above, requires appropriate technical and organisational safeguards, taking into account the rights and freedoms of the data subject (which will vary depending on e.g. the nature of the data).
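Life cycle management of this kind can be sketched as a retention schedule keyed to the purpose of processing; the periods below are illustrative assumptions, not values prescribed by the GDPR:

```python
from datetime import datetime, timedelta

# Illustrative retention schedule keyed to the purpose of processing.
RETENTION = {
    "invoicing": timedelta(days=7 * 365),  # e.g. a tax-law retention duty
    "marketing": timedelta(days=365),
}

def expired(record: dict, now: datetime) -> bool:
    period = RETENTION.get(record["purpose"])
    return period is not None and now - record["collected"] > period

def purge(records: list, now: datetime) -> list:
    # Records whose retention period has lapsed must be erased
    # (or anonymised); here they are simply filtered out.
    return [r for r in records if not expired(r, now)]
```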
(f) processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).
This principle connects with the requirement of security by design of art. 32, and the legal obligation for controllers to notify supervisory authorities and data subjects of data breaches (art. 33, 34).
2. The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 (‘accountability’).
The accountability principle addresses the data controller as the focal point of responsibility, accountability and liability regarding compliance with the principles that pervade the GDPR. Accountability is further detailed in art. 30, which requires controllers to demonstrate and document compliance, while liability is further detailed in art. 79-83 about enforcement (including both administrative law fines and prohibitions, and private law compensation and injunctive relief). The roles and responsibilities of the controller (including joint controllers) and processor are further specified in art. 24, 26 and 28.
Unlike the DPD, the GDPR contains a separate article on consent. Art. 7 declares, under the heading of ‘Conditions for Consent’:
Where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data.
This concerns the burden of proof.
If the data subject's consent is given in the context of a written declaration which also concerns other matters, the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language. Any part of such a declaration which constitutes an infringement of this Regulation shall not be binding.
Note that consent may not be hidden in complicated wordy privacy policies, and must be ‘easily accessible’ as to its form (think of the user interface), ‘using clear and plain language’. If consent is part of an elaborate and incomprehensible Terms of Service that basically contain an implicit consent, such consent is not valid.
The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.
This means that if consent is given by ticking a box, it must be as easy to untick the box. If one has to explore every nook and corner of a website to figure out how to withdraw consent, the consent is not valid.
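A minimal sketch of a consent register that supports both the burden of proof (art. 7.1) and easy withdrawal (art. 7.3) could look as follows; the class and method names are hypothetical:

```python
from datetime import datetime

# Hypothetical consent register: granting and withdrawing are equally
# simple calls (art. 7.3), and the full event log allows the controller
# to demonstrate consent (art. 7.1).
class ConsentRegister:
    def __init__(self):
        # (subject, purpose) -> chronological list of (timestamp, granted?)
        self._log = {}

    def grant(self, subject: str, purpose: str, when: datetime) -> None:
        self._log.setdefault((subject, purpose), []).append((when, True))

    def withdraw(self, subject: str, purpose: str, when: datetime) -> None:
        # Withdrawal is logged, not deleted: processing before withdrawal
        # remains lawful and must stay demonstrable.
        self._log.setdefault((subject, purpose), []).append((when, False))

    def has_consent(self, subject: str, purpose: str) -> bool:
        events = self._log.get((subject, purpose), [])
        return bool(events) and events[-1][1]
```

Keeping consent per purpose, rather than one blanket flag, reflects the requirement that consent concerns ‘one or more specific purposes’.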
When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.
To better understand what this means, we can use recital 43:
In order to ensure that consent is freely given, consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation. Consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case, or if the performance of a contract, including the provision of a service, is dependent on the consent despite such consent not being necessary for such performance.
This seems to indicate that attempts to force consumers to choose between accessing a service and refusing consent for additional processing are unlawful, and such consent invalid. Additional processing refers to the processing of data that is not necessary for the provision of the service, or further processing of data after the purpose has been exhausted. One could guess that art. 7.4, read through the lens of recital 43, is the end of a specific type of business model that blocks access to a site if consent for unnecessary processing is not given.
Note that the legal ground must be communicated to the data subject when the processing commences (if data is collected from the data subjects, cf. art. 13), or within a reasonable time, at the latest within one month after obtaining the data (if data has not been obtained from the data subjects, cf. art. 14). Controllers cannot require consent and – after finding that the consent is not valid – claim that the processing is based on their legitimate interest; due to the inherent logic of the different grounds, controllers cannot claim to base the same processing operations on different grounds.
Art. 9 defines a set of data as requiring special treatment. These data are often called ‘sensitive data’ and are defined as: ‘data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation’.
By default, the processing of such data is prohibited. Strictly defined exceptions apply, notably based on explicit consent; specific rights and obligations in the field of employment and social security law; the vital interests of the data subject or of another natural person; processing in the context of not-for-profit bodies with a political, philosophical, religious or trade union aim; processing of personal data which are manifestly made public by the data subject; and processing necessary for legal claims, substantial public interest, preventive or occupational medicine, assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment or the management of health or social care systems and services, for public health, or for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes.
On top of that, art. 10 restricts the processing of personal data relating to criminal convictions and offences:
Processing of personal data relating to criminal convictions and offences or related security measures based on Article 6(1) shall be carried out only under the control of official authority or when the processing is authorised by Union or Member State law providing for appropriate safeguards for the rights and freedoms of data subjects. Any comprehensive register of criminal convictions shall be kept only under the control of official authority.
Art. 9 and 10 demonstrate that data protection is not just about the right to privacy, but also entails protection against discrimination on prohibited grounds. This is particularly relevant if inferences are made based on machine learning or other techniques to infer patterns from big data, because such inferences may include sensitive data. Social networks, advertising intermediaries or criminal justice authorities may infer racial or ethnic origin, political opinion or sexual preferences, which inferences may then be applied to identifiable persons that match the profile. Such inferencing may be inadvertent, but nevertheless result in decisions based on such inferences, for instance parole decisions based on a correlation between race and recidivism. In chapter 10 we will return to this point when discussing machine learning and profiling, including an analysis of GDPR provisions on profiling and automated decision making based on profiling.
In chapter 1 we have identified the text-driven nature of modern law, in contrast with the orality of prior normative orderings. The rise of data- and code-driven ICIs confronts the text-driven nature of the law with a number of problems. Merely writing down and enacting legal norms may not work if the defaults of the technical and organisational architecture of the onlife world generate a contradictory normativity, which renders compliance with legal norms difficult if not impossible. In other words, the technical architecture may present its users and inhabitants with a choice architecture that limits their understanding of the backend systems of the social networks they use, of their smart homes, connected cars and more.
Art. 25 of the GDPR requires that data controllers design the data processing operations in compliance with data protection law. Data protection by design (DPbD) may sound like Privacy by Design (PbD). However, the latter is based on an ethical duty, not necessarily on a legal obligation; PbD reflects the choice of a controller to respect the privacy of their users by way of a privacy-friendly design. Also, as privacy is not equivalent to data protection, PbD cannot be equated with DPbD, even though in practice the terminology is often used interchangeably.
DPbD is a new legal obligation (no such obligation applied under the DPD). In case of non-compliance the legal effect is liability for damages (private law liability, art. 82), unlawful processing (administrative fines, art. 83), or injunctive relief (private law injunction to stop unlawful processing with penalty payments for every day of non-compliance).
Under the heading of ‘data protection by design and default’ art. 25 stipulates:
1. Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.
Paragraph 1 describes ‘data protection by design’ (DPbD) as a set of technical and organisational measures that embed core data protection principles into the design of the data processing architecture. This should mitigate potential risks for the rights and freedoms of natural persons. The latter demonstrates the risk-based approach of the GDPR, which requires that controllers take a proactive approach when developing their computational backend systems. Note that art. 25 does not speak of the risks for rights and freedoms of data subjects, but of natural persons. This includes processing operations that impact other individuals, for instance when inferencing behavioural correlations that enable the influencing, exclusion or other types of targeting of others than the data subject. Relevant design measures are, for instance, pseudonymisation, but one can also think of user-friendly interfaces to enable easy withdrawal of consent (art. 7.3) or subject access requests (SARs) (based on art. 15.3). Both the withdrawal of consent and SARs will involve computational architectures in the backend systems that effectively halt the processing of data for which consent has been withdrawn, or provide the data that are being processed (where art. 15.3 stipulates that if a SAR is made via electronic means, the data shall be provided in a commonly used electronic format). The legal obligation to implement DPbD is no trivial matter and may result in a major overhaul of backend systems, involving substantial costs. Depending on the risks of abstaining from such measures for the rights and freedoms of natural persons, such costs will become part of the relevant business model.
2. The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual's intervention to an indefinite number of natural persons.
Paragraph 2 describes ‘data protection by default’, which is DPbD with regard to data minimisation. It demands that the architecture is constructed in such a way that no additional processing takes place beyond what is necessary for the specific purpose of the relevant processing operations. Again, compliance with this legal obligation will result in major reconfigurations of current backend systems, involving e.g. effective life-cycle management of personal data (including pseudonymisation, anonymisation and deletion of data).
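Such life-cycle management could be sketched as follows: records carry a collection timestamp and a purpose, each purpose has its own retention period, and a periodic sweep deletes what has expired, while a keyed hash illustrates one simple pseudonymisation technique. The retention periods and function names are hypothetical assumptions for illustration only.

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical purpose-bound retention periods.
RETENTION = {"billing": timedelta(days=365), "analytics": timedelta(days=30)}

def pseudonymise(identifier: str, secret: str) -> str:
    # Keyed hash: the link back to the identifier is only recoverable
    # with the separately held secret (one simple pseudonymisation
    # technique among several).
    return hashlib.sha256((secret + identifier).encode()).hexdigest()

def lifecycle_sweep(records: list, now: datetime) -> list:
    # Keep only records whose purpose-bound retention period has not
    # yet expired; everything else is dropped (deleted).
    return [r for r in records
            if now - r["collected"] <= RETENTION[r["purpose"]]]
```

The design choice here is that retention is bound to the purpose of processing rather than to the record as such, mirroring the article's demand that necessity is assessed "for each specific purpose".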
The third paragraph declares that an approved certification mechanism may contribute to demonstration of compliance with DPbD.
DPbD is closely related to another new compliance mechanism, the data protection impact assessment (DPIA), again exhibiting the risk-based, proactive approach that is favoured under the GDPR. Basically, controllers are obligated to assess potential violations of the GDPR when initiating new data-driven technologies. Art. 35 reads under the heading of ‘data protection impact assessment’ that:
1. Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. A single assessment may address a set of similar processing operations that present similar high risks.
The criterion that decides whether a controller must conduct a DPIA is that foreseen processing operations are ‘likely to result in a high risk to the rights and freedoms of natural persons’. Again, these risks are not restricted to data subjects, but extend to all natural persons. The assessment investigates the potential impact of envisaged processing operations, which assumes that these are indeed foreseen and mapped against impact on fundamental rights.
2. The controller shall seek the advice of the data protection officer, where designated, when carrying out a data protection impact assessment.
Art. 37-39 detail which types of controller must appoint a data protection officer (DPO), under what conditions (e.g. safeguards for independence) and with what tasks. One of the tasks of the DPO is to advise on the DPIA.
3. A data protection impact assessment referred to in paragraph 1 shall in particular be required in the case of:
(a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person;
(b) processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10; or
(c) a systematic monitoring of a publicly accessible area on a large scale.
Paragraph 3 sums up when a DPIA is mandatory, thus also giving an indication of what types of processing operations are considered high-risk.
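The triggers of paragraph 3 can be read as a simple disjunction: any one of the three conditions suffices to make a DPIA mandatory. A minimal sketch (the parameter names are our own shorthand for art. 35(3)(a)-(c), not legal terms):

```python
def dpia_required(systematic_extensive_profiling_with_effects: bool,
                  large_scale_special_categories: bool,
                  large_scale_public_monitoring: bool) -> bool:
    # Art. 35(3)(a)-(c): a DPIA is required if ANY trigger applies.
    # Note this list is non-exhaustive ("in particular"): paragraph 1's
    # general high-risk criterion can require a DPIA even when none of
    # these three conditions is met.
    return (systematic_extensive_profiling_with_effects
            or large_scale_special_categories
            or large_scale_public_monitoring)
```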
Paragraphs 4-6 stipulate that supervisory authorities shall publish a further list of the kind of processing operations where a DPIA is mandatory, and may publish a list of processing operations where a DPIA is not mandatory. Both lists will be shared with the European Data Protection Board (which has an important advisory function as to the interpretation of the GDPR, and is further defined in art. 68-76 GDPR).
7. The assessment shall contain at least:
(a) a systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller;
(b) an assessment of the necessity and proportionality of the processing operations in relation to the purposes;
(c) an assessment of the risks to the rights and freedoms of data subjects referred to in paragraph 1; and
(d) the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned.
Paragraph 7 provides a first indication of a template for the DPIA. The listing has a high level of abstraction, thus enabling adequate concretisation, depending on the types of processing operations, the context of processing, the nature of the data and so forth. Under (d) we recognise a reference to DPbD, whose purpose is to mitigate risks to the rights and freedoms of natural persons.
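One way to concretise this template is as a structured record whose fields map onto art. 35(7)(a)-(d); the field names below are our own illustrative assumptions, not prescribed by the Regulation.

```python
from dataclasses import dataclass

@dataclass
class DPIARecord:
    processing_description: str    # art. 35(7)(a): envisaged operations
    purposes: list                 # art. 35(7)(a): purposes, incl. any
                                   # legitimate interest pursued
    necessity_proportionality: str # art. 35(7)(b)
    risks: list                    # art. 35(7)(c): risks to rights and
                                   # freedoms of data subjects
    mitigating_measures: list      # art. 35(7)(d): safeguards, security
                                   # measures, DPbD measures

    def is_complete(self) -> bool:
        # Each of the four minimum elements must be filled in.
        return all([self.processing_description, self.purposes,
                    self.necessity_proportionality, self.risks,
                    self.mitigating_measures])
```

Because the legal listing is deliberately abstract, such a record would in practice be concretised per type of processing operation, context and nature of the data.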
Paragraph 8 states that approved codes of conduct (art. 40 GDPR) will be taken into account when assessing the impact.
9. Where appropriate, the controller shall seek the views of data subjects or their representatives on the intended processing, without prejudice to the protection of commercial or public interests or the security of processing operations.
Paragraph 9 emphasises the need to involve those who will suffer the consequences of the intended processing, both on the side of data subjects and on the side of the controller. In earlier versions of the GDPR, the need to involve data subjects in the assessment was articulated more forcefully. One can imagine that a robust architecture will fare well based on input from those who will be effectively affected.
Paragraph 10 provides an exception for processing based on a legal obligation or a public task or authority (art. 6.1 under c and e), whenever the enactment of such an obligation has been preceded by a general DPIA on the part of the legislator.
11. Where necessary, the controller shall carry out a review to assess if processing is performed in accordance with the data protection impact assessment at least when there is a change of the risk represented by processing operations.
In an innovative environment, where agile computing strategies lead to iterative changes in processing operations, the DPIA is best seen as a continuous process that keeps monitoring both the foreseeable risks and the appropriate mitigating measures.
The GDPR reinforces the accountability principle by initiating new legal obligations to further compliance, notably the obligation to implement DPbD and to conduct a DPIA. Apart from those, other legal obligations require technical and organisational compliance measures, such as easy withdrawal of consent (art. 7.3), provision of access by way of an electronic file (art. 15.3), obligations to employ pseudonymisation (art. 6.4(e), 25.1, 32.1(a), 40.2(d), 89), data portability rights (art. 20), security by design (art. 32) and more generally technical measures (e.g. art. 17.2). At the same time the GDPR requires that the controller keeps a proper administration to demonstrate compliance (art. 30), departing from the old regime (under the DPD) where controllers had to register their operations with the data protection supervisor.
Next to these novel obligations, the regulation takes enforcement seriously. One of the biggest failures of previous regimes of data protection law was a pervasive lack of enforcement, providing no incentive whatsoever to comply. The enforcement chapter of the GDPR, however, knits a tight network of enforcement activities, by individual persons, by non-profit organisations, and by the supervisors.
Chapter VIII provides the following enforcement mechanisms, under the heading of ‘Remedies, liability and penalties’:
Art. 77 and 78 provide the data subject with the right to lodge a complaint with a supervisory authority, and the right to an effective judicial remedy against legally binding decisions of a supervisory authority concerning them, including a remedy against the supervisory authority that does not handle their complaint.
Art. 79 provides the data subject with direct access to court, apart from their right to lodge a complaint with the supervisory authority. This will enable direct action against the controller or processor, for instance an action for injunctive relief, requiring a court order that prohibits unlawful processing.
Art. 80 stipulates that data subjects can mandate a not-for-profit body, organisation or association to exercise their rights under art. 77-79 on their behalf. If MS law allows, they can also mandate their right to receive compensation (art. 82). MS law may also provide that such a not-for-profit can lodge a complaint with the supervisory authority (as in art. 77) or with the court (as in art. 78 and 79).
Article 81 regulates suspension of proceedings and jurisdictional issues in the case of simultaneous or overlapping proceedings in different MSs.
Art. 82 provides a right to compensation in the case that any person (not just the data subject) has suffered material or non-material damage due to infringements of the GDPR. It stipulates that the controller is liable, adding liability for the processor if damage is caused by non-compliance with obligations directed specifically to the processor (or by its acting outside or contrary to lawful instructions by the controller). A controller or processor will be exempted from liability if they prove that they are not in any way responsible for the event that caused the damage. If more than one controller and/or processor is liable for the damage caused, joint and several liability applies (each must pay the full damage, but each can claim back from the others any damage paid beyond its own responsibility). The competent courts will be the same as those competent under MS law for claims based on art. 79.
Art. 83 stipulates that supervisory authorities shall impose ‘effective, proportionate and dissuasive’ administrative fines and details the general and specific conditions for such fines. Maximum fines can be 20 000 000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover of the preceding financial year, whichever is higher.
Finally, art. 84 requires that MSs lay down rules for other penalties, in particular for infringements not subject to the administrative fines of art. 83.
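The "whichever is higher" rule for maximum fines under art. 83 can be made explicit in a small sketch (the function name and parameters are our own, for illustration):

```python
def max_fine(annual_turnover_eur: float, is_undertaking: bool) -> float:
    """Maximum administrative fine under the highest tier of art. 83(5).

    Up to EUR 20 000 000, or, in the case of an undertaking, up to 4 %
    of total worldwide annual turnover of the preceding financial year,
    whichever is HIGHER.
    """
    cap = 20_000_000.0
    if is_undertaking:
        cap = max(cap, 0.04 * annual_turnover_eur)
    return cap
```

Note that for a large undertaking the turnover-based cap dominates: a firm with a worldwide annual turnover of one billion euro faces a maximum fine of forty million euro, not twenty.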
In this chapter we have explored human rights law and investigated the ‘workings’ of the right to privacy in the context of the ECHR, and the right to data protection in the context of the CFREU, as further protected by the GDPR. This cannot be more than a first impression of relevant applicable law. Many relevant provisions and much other legislation have not been discussed: the Police Data Protection Directive has not been addressed, the ePrivacy Directive (and its draft successor) have not been detailed, Convention 108 of the Council of Europe has been ignored, and a further exploration of the differences between EU and US law has been similarly left aside.
What I hope the reader will take from this chapter is the salient complexity of privacy and data protection law and the importance of ‘practical and effective’ legal remedies when rights are violated. Though complexity and practical effectiveness may sometimes be incompatible, more often they demonstrate the adaptive nature of legal protection in the face of an increasingly data-driven environment.
In chapter 10 we will return to the subject of EU data protection law with an eye to the increasingly code-driven nature of our environment, highlighting the unique EU data protection rights with regard to automated decisions based on the processing of personal data.
On the right to privacy:
Council of Europe, Guide on Article 8 of the European Convention on Human Rights. Right to respect for private and family life, home and correspondence, updated on 31 August 2018, https://www.echr.coe.int/Documents/Guide_Art_8_ENG.pdf
Korff, Douwe, The standard approach under articles 8 – 11 ECHR and art. 2 ECHR. 2008. https://www.pravo.unizg.hr/_download/repository/KORFF_-_STANDARD_APPROACH_ARTS_8-11_ART2.pdf
Mowbray, Alastair. 2005. ‘The Creativity of the European Court of Human Rights’. Human Rights Law Review 5 (1): 57–79. https://doi.org/10.1093/hrlrev/ngi003.
On the concept of family resemblance:
Biletzki, Anat and Matar, Anat, "Ludwig Wittgenstein", The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/sum2018/entries/wittgenstein/ (the quote in this chapter is taken from this entry).
On freedom from and freedom to:
Berlin, Isaiah. 1969. ‘Two Concepts of Liberty’. In Four Essays on Liberty, edited by Isaiah Berlin, 118–73. Oxford New York: Oxford University Press.
On data protection law:
Kuner, Christopher. 2007. European Data Protection Law: Corporate Regulation and Compliance. 2nd ed. Oxford University Press, USA (see updates per chapter at http://global.oup.com/booksites/content/9780199283859/updates/).
European Union Agency for Fundamental Rights (FRA). 2018. ‘Handbook on European Data Protection Law - 2018 Edition’. http://fra.europa.eu/en/publication/2018/handbook-european-data-protection-law.
Journal: International Data Privacy Law. Accessed 13 October 2018. https://global.oup.com/academic/product/international-data-privacy-law-20444001.
On privacy as control over personal information:
Westin, Alan. 1967. Privacy and Freedom. New York: Atheneum. (the quotation is taken from p. 7).
On privacy as freedom from unreasonable constraints on identity construction:
Agre, Philip E., and Marc Rotenberg. 2001. Technology and Privacy: The New Landscape. Cambridge, Massachusetts: MIT (quotation taken from p. 7).
On the difference between privacy and data protection:
Juliane Kokott and Christoph Sobotta. 2013. “The distinction between privacy and data protection in the jurisprudence of the CJEU and the ECtHR”, International Data Privacy Law (3) 4, 222-228, available at http://idpl.oxfordjournals.org/content/3/4/222.full?sid=a0d12330-d8f3-4387-a7dc-58905c9379a2
On data protection by design and on legal protection by design:
Hildebrandt, Mireille, and Laura Tielemans. 2013. ‘Data Protection by Design and Technology Neutral Law’. Computer Law & Security Review.
Hildebrandt, Mireille. 2017. ‘Saved by Design? The Case of Legal Protection by Design’. NanoEthics, August, 1–5. https://doi.org/10.1007/s11569-017-0299-0.