Trevor Paglen’s Autonomy Cube offers visitors of museums and exhibitions the opportunity to connect their mobile devices to the internet via a Tor router, which enables anonymous communication to the extent that it hides the provenance of messages. The Tor network is used to protect investigative journalism, dissidents in authoritarian states and the traffic of intelligence agencies. It is also used to commit cybercrime without being detected and to gather intelligence on individual citizens.
The more we become dependent upon data- and code-driven environments, the more serious the impact of cybercrime. Whereas individual damage or harm may be remedied by way of private law compensation, substantial damage to critical infrastructure, societal trust and economic welfare requires a complementary approach that re-establishes and confirms the normative foundations of societal intercourse. To some extent this is the task of administrative law, which imposes sanctions for violating legal norms that aim to protect what political theory and legal philosophy call ‘public goods’. In economics, the term ‘public goods’ refers to goods that are non-exclusionary, because they cannot be monopolised (such as the air we breathe), and non-rivalrous, because use by the one does not imply less use by another (such as information). In political theory and legal philosophy, the term refers to something that benefits society in general, whether that something is non-exclusionary or non-rivalrous or neither. We can think of welfare, public health, freedom of expression, universal access to electricity, education, or – at a higher level of abstraction – a fair distribution of income and access to other goods. In this – non-economic – sense, public goods are closely related to shared values, though note that the term ‘good’ refers to more than an aspiration or mental preference: it refers to the actual availability of the good. In legal philosophy, human rights are considered public goods. The GDPR is an example of an administrative law that protects public goods such as privacy, non-discrimination, and freedom of expression. The administrative approach, however, easily degenerates into paying a fee to exempt oneself from following the law. ‘Speeding on a public road is prohibited, and whoever speeds will be fined’ may turn into ‘speeding on a public road is allowed if one is willing to pay the fine’.
Criminal law is not about paying a price for violating societal norms. In the end it is about censure; it is about holding to account those who seriously flout and diminish respect for their fellow citizens, the public goods that sustain societal peace, and for the individual flourishing it enables. It is not merely about the disrespect for a particular person or those close to them, but about the indirect disintegration of societal trust such disrespect brings about. Criminal law is about punishing those who negate or ignore the shared normativity that societies thrive on. It is far more than a utilitarian calculus meant to deter homo economicus (the calculating human agent) from violating the law, by imposing costs that hopefully overrule the benefits. Neither is criminal law a way to shame vulnerable agents into ‘behaving’ themselves. Criminal law is about censure, about addressing fellow citizens as responsible agents instead of manipulable pawns.
The monopoly of violence prohibits private punishment or taking the law into one’s own hands. It thereby implies that a government that does not protect its citizens against crime will lose its footing. Criminal law does not merely provide competences, it also constitutes a task. A government that systematically forsakes punitive interventions when criminal offences are committed may raise fear about further breaches of the societal contract. This places a heavy burden on governments, as they need to provide safety and trust without, however, themselves violating safety and trust in the process of defending them. This goes for all criminal law interventions, whether investigatory or punitive.
In the case of cybercrime, competent authorities seem to face a moving target. Technological developments, both on the side of perpetrators and on the side of policing and forensics, often outwit prevalent and tested strategies for dealing with the special character of cybercrime. This chapter will first raise the question of what makes cybercrime ‘cyber’, followed by an introduction to the international and supranational legal frameworks that are meant to cope with cybercrime. Finally, we will provide a more detailed analysis of the Cybercrime Convention, including a reflection on the image of the weighing scale when it comes to balancing safety and security against rights and freedoms.
6.1 The problem of cybercrime
In the Internet Security Threat Report of 2018, Symantec reports:1
From the sudden spread of WannaCry and Petya/NotPetya, to the swift growth in coinminers, 2017 provided us with another reminder that digital security threats can come from new and unexpected sources. With each passing year, not only has the sheer volume of threats increased, but the threat landscape has become more diverse, with attackers working harder to discover new avenues of attack and cover their tracks while doing so.
According to Symantec, 1 in 13 web requests leads to malware, 24,000 malicious mobile apps are blocked on average each day, and 5.4 billion WannaCry attacks have been blocked. Compared to 2016, Symantec reports an 80% increase in malware attacks on Macs, a 46% increase in new ransomware variants, a 600% increase in attacks against IoT devices, a 13% overall increase in reported vulnerabilities, a 29% increase in industrial control system (ICS) related vulnerabilities and, finally, an 8,500% increase in coinminer detections.
Though these numbers raise many questions, e.g. as to the distribution between high-impact effects and mere nuisance, cybercrime is clearly a major threat to consumers, businesses, law enforcement, national security and critical infrastructure.
Symantec is focused on cybersecurity, which is not the same as cybercrime. We will now first examine what is meant by computer crime and cybercrime and how they relate to cybersecurity.
6.1.1 Computer crime
When computing systems were still stand-alone devices, what we now call cybercrime was framed as ‘computer crime’. In an effort to justify this as a special subdomain of law, it was structured as consisting of:
Crimes committed with a computer, where the computer was the instrument of crime;
Crimes committed against a computer, where the computer was the target of the crime;
Crimes committed in the context of computers, where ‘traditional’ crimes were committed in an environment where computers played an important role.
The computer ‘as an instrument’ concerned offences such as spam or phishing; the computer ‘as a target’ concerned the use of malware or distributed denial of service (DDoS) attacks; traditional crimes ‘in the context of computers’ would be digital identity fraud, online copyright infringements or online child pornography.
Another analytical distinction differentiates between:
Computer-assisted traditional crimes, where the nature of the crime is transformed due to the different nature of online environments;
New types of crimes, involving the confidentiality, integrity or availability of digital data or computing systems (here computer crime overlaps with digital security), involving both crimes with and crimes against a computer.
Whether old or new, the question remains what is ‘the difference that makes a difference’ between previously known criminal offences and computer or cyber offences.
The rise of the internet and the world wide web, the interconnection between computing systems (the resilient routing of packets across a network of nodes) and the hyperlinking of information across the network (resulting in an unprecedented explosion of content, communication and metadata), signified the shift from computer crime to cybercrime. We can safely say that we now live in a different world than two decades ago. This is related to the affordances of an unprecedented rise of computing power on the one hand (with the implied miniaturization of the carriers of digital data), and hyperconnectivity on the other (with the ensuing network effects).
This makes cybercrime different across six dimensions of human intercourse, in ways that are highly relevant for the criminal law: distance, scale, speed, distribution, invisibility and visibility.
Distance is implied in the ability to exercise all kinds of ‘remote control’, ignoring traditional, territorial borders, e.g. causing major issues for the force of law across different jurisdictions;
Scale is implied in the ability to automate scripts that can affect an enormous number of other automated systems, which can in turn easily multiply the reach of a message or malware, e.g. enabling massive spam and attacks;
Speed is implied in the combination of an exponential increase in computing power and hyperconnectivity, e.g. enabling the immediate or timed destruction of evidence (even at a distance) and an easy way out of criminal accountability;
Distribution is implied in the networked nature of the various stacks of the internet, the web and various application layers, across remote servers (cloud computing) and amalgamated in hardware that combines operating systems, firmware and different software and applications developed by different teams and companies, while default settings may be changed by the seller, by the buyer (e.g. a service provider) and/or by the end-user; this presents all those involved with seemingly insurmountable problems in the attribution of responsibility when things go wrong, e.g. in the case of self-driving cars or data-driven energy grids;
Invisibility is implied in the differentiation between backend systems that call the shots and frontend systems where end-users are presented with a choice architecture that hides the choices made in the backend, presenting huge issues for the foreseeability of one’s actions, for forensics and for the attribution of causality in the case of harm or other types of damage, e.g. in the case of manipulative micro-targeting of individual political opinion;
Visibility is implied where the collection, linkability and inferencing of ‘big data’ and the wonders of machine learning enable the ‘legibility’ of end-users in ways that render them vulnerable to e.g. identity theft, invisible nudging or manipulation, blackmailing, extortion and – in the case of children – grooming.
We could extend this list with corporate espionage, cyberwar and concerted attacks on critical infrastructure, e.g. that of energy supply or democratic institutions. Clearly, states, with their ‘traditional’ monopoly of violence and their ‘traditional’ ius puniendi, have been struggling to redefine the borderless and initially lawless realm of ‘cyberspace’, and to combat cybercrime at the level of policing, forensics and judicial cooperation. Because cybercrime does not stop at national borders, states are collaborating at the international and supranational level to come to terms with its transnational nature.
6.2 Cybercrime and public law
As discussed above, public law consists of constitutional law, international public law and administrative law. Constitutional law is relevant for cybercrime to the extent that it determines the right to a fair trial, the criminal law legality principle and the right to privacy (which is often at stake when states create and apply investigatory competences to combat cybercrime). International public law is relevant for cybercrime because the need to act across territorial borders has resulted in concerted efforts to conclude international treaties on cybercrime. Administrative law is relevant for cybercrime to the extent that supranational legislation on cybersecurity (notably EU directives) imposes duties on MSs to align their approach across national borders.
The most important treaty that has been initiated to combat cybercrime across territorial borders is the Cybercrime Convention (CC),2 initiated by the Council of Europe (CoE). Within the context of the EU, two directives are relevant, notably the Directive on Attacks against Information Systems,3 and the Directive (EU) 2016/1148 on Network and Information Security (NIS).4
6.2.1 The Cybercrime Convention
The CC was initiated by the CoE, though from the beginning some states outside the CoE were involved, notably the US, Canada, Japan and South Africa. It is the most global treaty on cybercrime concluded thus far. The treaty was signed on 23 November 2001 and entered into force on 1 July 2004, after 5 states had ratified, including at least 3 MSs of the CoE (in line with art. 36 of the CC). In the Netherlands the treaty entered into force on 1 March 2007, in Japan on 1 November 2012, and in the US on 1 January 2007 (treaties are in force in a contracting state once the treaty itself is in force and after the relevant state has ratified, see section 4.2.1 above). The status of accession and ratification on 30 October 2018 is: 4 signatures that have not yet been ratified and 61 ratifications.
The idea of the CC is (1) to agree on new competences to investigate cybercrime, adapted to the intricacies of cyber- as opposed to traditional crimes, and (2) to agree on joint definitions of criminal behaviour in cyberspace to make sure that offenders cannot avoid charges by escaping to more lenient jurisdictions, while thus (3) ensuring that legal certainty is safeguarded across territorial borders, both with regard to investigatory competences and with regard to what qualifies as criminal conduct, and (4) always preserving a proper level of legal protection regarding related human rights and freedoms.
The fact that the CC is international and not supranational law means that whether it has direct effect in MSs depends on whether a MS has a monist or dualist system of recognising the force of international law. In the Netherlands, as discussed above, art. 93 of the Netherlands Constitution makes this dependent on the way international law is formulated (see section 4.2.2 above). Direct effect is only at stake when the content of a treaty addresses citizens by way of granting them rights. The CC, however, does not address citizens. It addresses the MSs, requiring them to implement the content of the CC. This means that the CC lacks direct effect and must first be implemented in national law.
The content of the CC can be summed up briefly as:
Substantive Criminal Law
art. 1: definitions
art. 2-6: CIA crimes
art. 7-8: ‘traditional’ crimes
art. 9: content crime: child porn
NB see also First Additional Protocol on racism, ETS 189
art. 10: copyright violations
art. 11-13: ancillary provisions
Procedural Criminal Law
art. 14-15: scope
art. 16-21: investigation powers
art. 22: jurisdiction
art. 23-35: extradition, mutual assistance between justice authorities, provisional measures, investigative powers
art. 36-38: signature, entry into force
This clearly shows the structure of the CC, highlighting the goal of achieving a similar level of protection against cybercrime on the substantive and the procedural level. The fact that the CC lacks direct effect raises the following questions:
Can Dutch police base their investigations into cybercrime on the CC?
Can a victim of online credit card fraud sue the perpetrator based on the CC?
Can a Dutch court convict on the basis of the CC?
The answer should be clear by now: the police cannot base their investigations on legal powers attributed by the CC (only on competences attributed by national law that implements the CC); a victim of online credit card fraud cannot sue the perpetrator based on the CC (the CC does not concern private law; the police and/or the public prosecutor hold the monopoly to initiate a criminal charge); and a court cannot convict a perpetrator based on the CC (only on the basis of criminalisation enacted in national law that implements the CC).
6.2.1.1 Substantive law
Note that the CC assumes that the criminal law legality principle is in force (see above section 3.1.3): no punishment without prior and precise criminalisation, as e.g. defined in art. 7 of the ECHR, which is binding for the MSs of the CoE. By imposing legal obligations on contracting parties to criminalise specified conduct under the heading of cybercrime, the CC reasserts that criminalisation in the legal sense is a prerequisite for fighting cybercrime in constitutional democracies.
The first set of criminal offences concerns CIA-related crimes: hacking or computer trespass (illegal access), illegal interception, data interference and system interference. I will discuss these more extensively, as they are highly relevant for computer scientists.
Hacking or computer trespass must be criminalised as stipulated by art. 2:
Article 2 – Illegal access
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the access to the whole or any part of a computer system without right. A Party may require that the offence be committed by infringing security measures, with the intent of obtaining computer data or other dishonest intent, or in relation to a computer system that is connected to another computer system.
The legal effect of this provision is that contracting Parties are obligated to enact the relevant criminalisation, under the legal conditions specified.5 This means that, structured in terms of legal effect and legal conditions, Parties should enact that:
The legal effect of ‘access to the whole or any part of a computer system’ being a criminal offence, depends on the following legal conditions:
it has been committed intentionally, and
without right.
Additionally a Party may require that to qualify as a criminal offence, such access is achieved:
by infringing security measures, and/or
with the intent of obtaining computer data or other dishonest intent, and/or
in relation to a computer system that is connected to another computer system.
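For computer scientists, the conditional structure above can be read as a boolean predicate. The following Python sketch is purely didactic and entirely my own framing: the function name, the `party_requires` parameter and the condition labels are invented for illustration, and whether a condition such as ‘without right’ is satisfied is a matter of legal judgment, not computation.

```python
# Didactic sketch of the legal conditions of art. 2 CC as a predicate.
# All names are illustrative; this is not a legal decision procedure.

def illegal_access(intentional: bool,
                   without_right: bool,
                   infringed_security: bool = False,
                   dishonest_intent: bool = False,
                   interconnected_system: bool = False,
                   party_requires: tuple = ()) -> bool:
    """True if access would satisfy the conditions of art. 2 CC,
    given which optional conditions a Party has chosen to require."""
    # mandatory conditions: intent and absence of a right to access
    if not (intentional and without_right):
        return False
    extras = {
        "infringed_security": infringed_security,
        "dishonest_intent": dishonest_intent,
        "interconnected_system": interconnected_system,
    }
    # every optional condition the Party requires must also be satisfied
    return all(extras[name] for name in party_requires)

# grey hat hacking: intentional access without right, even with good intent
assert illegal_access(intentional=True, without_right=True)
# a Party requiring dishonest intent narrows the offence
assert not illegal_access(True, True, party_requires=("dishonest_intent",))
```

Note how a Party that adds optional conditions narrows the scope of the offence, which is precisely why art. 2 phrases them as permissible reservations rather than requirements.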
What does this mean for ‘ethical hacking’ (penetration testing to detect security problems)? From an ethical perspective, one can distinguish between a black hat hacker (malicious intent), a white hat hacker (good intent and permission) and a grey hat hacker (good intent but no permission). If a system is hacked intentionally without permission of the user or owner, the act qualifies as a criminal offence, irrespective of good or bad intent, unless there is another ‘right’ to access, such as a legal competence (e.g. for the police, provided the relevant conditions for the exercise of that competence apply). One could think of three ways to prevent punishment for ethical hacking (basically, grey hat hacking).
First, the public prosecutor may decide not to prosecute if they find there is no general interest in prosecuting,6 for instance because the hacker followed guidelines of responsible disclosure. Note that penetration testing will fall within the scope of this criminal offence unless one has permission or an assignment to conduct such testing. In some countries, the public prosecutor has no discretion to abstain from prosecution, due to a strict interpretation of the procedural criminal law legality principle (see above).
This brings us to the second way that punishment can be prevented, which would entail that a legal justification can be invoked, despite the fact that the hacker had no right to access the system.7 Such a defence concerns the requirement of ‘wrongfulness’ that partly constitutes a criminal offence (see above). One could, for instance, claim that a higher, legally relevant, duty overruled the duty to refrain from intentionally accessing the system without right. It will be up to the court to decide whether such a higher duty justified unlawful access. Note that once a court acknowledges such a higher duty, this would justify all similar cases of unlawful access, unless the decision is overruled by a higher court. As the reader may guess, courts will be cautious in accepting such grounds of justification, due to the consequences of such acceptance.
The third way to prevent punishment could be conviction without punishment,8 which would be a clear sign that the court does not accept the lawfulness of the access, but nevertheless finds good reason in the circumstances of the offence that was committed to abstain from punishment. Note that in jurisdictions that impose minimum sentences for such an offence, without enabling courts to convict without punishment, this is not an option.
After art. 2 on unlawful access, we have another CIA-related offence in art. 3 on interception:
Article 3 – Illegal interception
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the interception without right, made by technical means, of non-public transmissions of computer data to, from or within a computer system, including electromagnetic emissions from a computer system carrying such computer data. A Party may require that the offence be committed with dishonest intent, or in relation to a computer system that is connected to another computer system.
In terms of legal effect and legal conditions, this provision requires the following:9
The legal effect of ‘Interception of non-public transmissions of computer data to, from or within a computer system, including electromagnetic emissions from a computer system carrying such computer data’ being a criminal offence, depends on the following legal conditions:
it has been committed intentionally, and
without right, and
made by technical means,
A Party may require that to qualify as an offence such interception be committed
with dishonest intent, and/or
in relation to a computer system that is connected to another computer system.
Art. 4 CC stipulates the criminalisation of another CIA-related offence, notably that of data interference:10
1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the damaging, deletion, deterioration, alteration or suppression of computer data without right.
2 A Party may reserve the right to require that the conduct described in paragraph 1 result in serious harm.
In terms of legal effect and legal conditions, this implies that Parties enact that:
The legal effect of ‘the damaging, deletion, deterioration, alteration or suppression of computer data’ being a criminal offence, depends on the following legal conditions:
it has been committed intentionally, and
without right.
A Party may reserve the right to require that such data interference, to qualify as a criminal offence
results in serious harm.
Art. 5 then stipulates the criminalisation of the final CIA-related offence, that of system interference:11
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the serious hindering without right of the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data.
In terms of legal effect and legal conditions, this entails that Parties must legislate that:
The legal effect of ‘the serious hindering of the functioning of a computer system’ being a criminal offence, depends on the following legal conditions:
it has been committed intentionally, and
without right, and
by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data.
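The parallel structure of art. 2-5 can be summarised in data form. In the following sketch the dictionary keys and condition labels are my own shorthand for the treaty text quoted above, not statutory terms:

```python
# My own summary of art. 2-5 CC: conduct element, mandatory conditions, and
# the optional conditions a Party 'may require' or 'may reserve the right
# to require'. Labels are shorthand, not official treaty wording.

CIA_OFFENCES = {
    "art. 2 illegal access": {
        "conduct": "access to (part of) a computer system",
        "mandatory": ["intentional", "without right"],
        "optional": ["infringing security measures", "dishonest intent",
                     "interconnected system"],
    },
    "art. 3 illegal interception": {
        "conduct": "interception of non-public transmissions of computer data",
        "mandatory": ["intentional", "without right", "by technical means"],
        "optional": ["dishonest intent", "interconnected system"],
    },
    "art. 4 data interference": {
        "conduct": "damaging, deletion, deterioration, alteration or "
                   "suppression of computer data",
        "mandatory": ["intentional", "without right"],
        "optional": ["serious harm"],
    },
    "art. 5 system interference": {
        "conduct": "serious hindering of the functioning of a computer system",
        "mandatory": ["intentional", "without right"],
        "optional": [],
    },
}

# all four offences share the same two core conditions
assert all({"intentional", "without right"} <= set(o["mandatory"])
           for o in CIA_OFFENCES.values())
```

The shared core (intent plus absence of right) explains why the analyses of art. 3-5 above mirror the more extensive analysis of art. 2.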
As indicated above, the CIA-related offences are followed by ‘traditional’ crimes, notably computer-related forgery and fraud in art. 7-8, by content crime, notably child pornography in art. 9, and by copyright violations in art. 10. These can be analysed similarly to art. 2-5.
6.2.1.2 Procedural law
The second part of the CC concerns procedural law, effectively stipulating that specified investigatory powers are attributed to the police and justice authorities: expedited preservation of computer data (traffic and content), production orders, search and seizure, and interception (metadata and content data). I will provide an analysis of the production order and the legal power to conduct search and seizure and leave it to the reader to study the legal conditions for lawful interception.
As explained in section 2.2.1, legal norms can be distinguished as either primary or secondary rules. Primary rules regulate human intercourse by way of prohibitions and obligations. Secondary rules constitute competences to legislate, govern or adjudicate; more generally, they constitute the competence to act. Substantive criminal law, discussed in the previous section, can be understood as a set of secondary rules that impose punitive sanctions when specified primary norms have been violated. The first part of the CC stipulates which primary norms must be protected by way of criminalisation. The second part of the CC, regarding procedural criminal law, can be understood as a set of secondary rules that defines under what conditions ‘competent authorities’ are allowed to exercise a set of legal powers that should enable them to combat cybercrime. The second part of the CC thus stipulates which secondary norms must be instituted by the contracting Parties in the realm of cybercrime investigation.
Art. 18 requires contracting Parties to enact legal powers for their competent authorities to request computer data and subscriber information:
1. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to order:
a) a person in its territory to submit specified computer data in that person’s possession or control, which is stored in a computer system or a computer-data storage medium; and
b) a service provider offering its services in the territory of the Party to submit subscriber information relating to such services in that service provider’s possession or control.
2. The powers and procedures referred to in this Article shall be subject to Articles 14 and 15.
3. For the purpose of this Article the term ‘subscriber information’ means any information contained in the form of computer data or any other form that is held by a service provider, relating to subscribers of its services, other than traffic or content data and by which can be established:
a) the type of communication service used, the technical provisions taken thereto and the period of service;
b) the subscriber’s identity, postal or geographic address, telephone and other access number, billing and payment information, available on the basis of the service agreement or arrangement;
c) any other information on the site of the installation of communication equipment, available on the basis of the service agreement or arrangement.
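Paragraph 3 defines ‘subscriber information’ negatively: any provider-held data relating to a subscriber that is neither traffic data nor content data. A minimal sketch of that scoping rule, where the example sets are my own illustrations rather than statutory enumerations:

```python
# Illustrative sketch of the negative definition in art. 18(3) CC.
# The example categories are mine; the treaty does not enumerate them.

TRAFFIC_DATA = {"routing_info", "connection_timestamps", "ip_logs"}
CONTENT_DATA = {"message_body", "attachments"}

def is_subscriber_information(item: str, held_by_provider: bool) -> bool:
    """True if an item can fall within the scope of a subscriber-information
    production order under art. 18(1)(b) jo. 18(3)."""
    if not held_by_provider:
        return False  # the definition only covers data held by the provider
    # subscriber information is everything else: identity, address,
    # billing and payment information, type of service, etc.
    return item not in TRAFFIC_DATA and item not in CONTENT_DATA

# e.g. the identity behind an account is subscriber information,
# the content of a message is not
assert is_subscriber_information("subscriber_identity", held_by_provider=True)
assert not is_subscriber_information("message_body", held_by_provider=True)
```

In K.U. v. Finland, discussed below, it was precisely this kind of subscriber information (the identity of the holder of a dynamic IP address) that the police could not obtain for lack of a legal power to order its production.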
Here we see the legality principle at work (see above), as this stipulates that government authorities can only act if there is a legal basis, while invasive actions such as criminal investigations require a more detailed legal basis. In other words, whenever government authorities act, they must be ‘competent authorities’, meaning they have been attributed the legal powers for their actions. As discussed above, legal competences have a double function: they both constitute and limit the power they attribute. Art. 18 requires Parties to attribute specified legal powers, based on the assumption that competent authorities can only act within the confines of the specification that constitutes the power. The second paragraph confirms this by referring to art. 14 and 15, which limit the scope of the investigatory competences and stipulate that relevant safeguards must be in place. We will return to this in section 6.2.2 below.
Let me explain the relevance of a production order in the realm of cybercrime by quoting extensively from the case law of the ECtHR, which speaks for itself, nicely demonstrating that even without recourse to the CC, the ECHR implies positive obligations for the contracting Parties of the CoE to enact legal competences for the police to issue a production order. The case is that of K.U. v. Finland.12 To enable easy reading I have used some bullet points, without, however, changing the text:
7. On 15 March 1999 an unidentified person or persons placed an advertisement on an Internet dating site
in the name of the applicant,
who was 12 years old at the time,
without his knowledge.
The advertisement mentioned his age and year of birth,
gave a detailed description of his physical characteristics,
a link to the web page he had at the time,
which showed his picture, as well as his telephone number, which was accurate save for one digit.
In the advertisement, it was claimed that he was looking for an intimate relationship with a boy of his age or older
“to show him the way”.
9. The applicant’s father requested the police
to identify the person who had placed the advertisement in order to bring charges against that person.
The service provider, however,
refused to divulge the identity of the holder of the so-called dynamic Internet Protocol (IP) address in question,
regarding itself bound by the confidentiality of telecommunications as defined by law.
10. The police then asked the Helsinki District Court (käräjäoikeus, tingsrätten)
to oblige the service provider to divulge the said information pursuant to section 28 of the Criminal Investigations Act (esitutkintalaki, förundersökningslagen; Act no. 449/1987, as amended by Act no. 692/1997).
11. In a decision issued on 19 January 2001, the District Court refused
since there was no explicit legal provision authorising it to order the service provider to disclose telecommunications identification data in breach of professional secrecy.
The court noted that by virtue of Chapter 5a, section 3, of the Coercive Measures Act (…) and section 18 of the Protection of Privacy and Data Security in Telecommunications Act (…)
the police had the right to obtain telecommunications identification data in cases concerning certain offences, notwithstanding the obligation to observe secrecy.
However, malicious misrepresentation was not such an offence.
35. The applicant complained under Article 8 of the Convention that
an invasion of his private life had taken place and that
no effective remedy existed to reveal the identity of the person who had put a defamatory advertisement on the Internet in his name, contrary to Article 13 of the Convention.
Article 8 provides:
“1. Everyone has the right to respect for his private and family life, his home and his correspondence.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.”
Article 13 provides:
“Everyone whose rights and freedoms as set forth in [the] Convention are violated shall have an effective remedy before a national authority notwithstanding that the violation has been committed by persons acting in an official capacity.”
41. There is no dispute as to the applicability of Article 8:
the facts underlying the application concern a matter of “private life”,
a concept which covers the physical and moral integrity of the person (see X and Y v. the Netherlands, cited above, § 22).
Although this case is seen in domestic law terms as one of malicious misrepresentation,
the Court would prefer to highlight these particular aspects of the notion of private life,
having regard to the potential threat to the applicant’s physical and mental welfare brought about by the impugned situation and to his vulnerability in view of his young age.
42. The Court reiterates that,
although the object of Article 8 is essentially to protect the individual against arbitrary interference by the public authorities,
it does not merely compel the State to abstain from such interference:
in addition to this primarily negative undertaking,
there may be positive obligations inherent in an effective respect for private or family life (see Airey v. Ireland, 9 October 1979, § 32, Series A no. 32).
43. These obligations may involve
the adoption of measures designed to secure respect for private life even in the sphere of the relations of individuals between themselves.
There are different ways of ensuring respect for private life
and the nature of the State’s obligation will depend on the particular aspect of private life that is at issue.
While the choice of the means to secure compliance with Article 8 in the sphere of protection against acts of individuals
is, in principle, within the State’s margin of appreciation,
effective deterrence against grave acts, where fundamental values and essential aspects of private life are at stake, requires efficient criminal-law provisions (see X and Y v. the Netherlands, cited above, §§ 23-24 and 27; August v. the United Kingdom (dec.), no. 36505/02, 21 January 2003; and M.C. v. Bulgaria, no. 39272/98, § 150, ECHR 2003-XII).
49. The Court considers that practical and effective protection of the applicant required that
effective steps be taken to identify and prosecute the perpetrator, that is, the person who placed the advertisement.
In the instant case, such protection was not afforded.
An effective investigation could never be launched because of an overriding requirement of confidentiality.
Although freedom of expression and confidentiality of communications are primary considerations and users of telecommunications and Internet services must have a guarantee that their own privacy and freedom of expression will be respected, such guarantee cannot be absolute and must yield on occasion to other legitimate imperatives, such as the prevention of disorder or crime or the protection of the rights and freedoms of others.
Without prejudice to the question whether the conduct of the person who placed the offending advertisement on the Internet can attract the protection of Articles 8 and 10, having regard to its reprehensible nature, it is nonetheless the task of the legislator to provide the framework for reconciling the various claims which compete for protection in this context.
Such framework was not, however, in place at the material time, with the result that Finland’s positive obligation with respect to the applicant could not be discharged.
This deficiency was later addressed. However, the mechanisms introduced by the Exercise of Freedom of Expression in Mass Media Act (see paragraph 21 above) came too late for the applicant.
Let us now move to art. 19 CC, which requires contracting Parties to enact a power to conduct a ‘search and seizure of stored computer data’.13
1. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to search or similarly access:
a) a computer system or part of it and computer data stored therein; and
b) a computer-data storage medium in which computer data may be stored in its territory.
2. Each Party shall adopt such legislative and other measures as may be necessary to ensure that
where its authorities search or similarly access a specific computer system or part of it, pursuant to paragraph 1.a,
and have grounds to believe that the data sought is stored in another computer system or part of it in its territory,
and such data is lawfully accessible from or available to the initial system,
the authorities shall be able to expeditiously extend the search or similar accessing to the other system.
3. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to seize or similarly secure computer data accessed according to paragraphs 1 or 2. These measures shall include the power to:
a) seize or similarly secure a computer system or part of it or a computer-data storage medium;
b) make and retain a copy of those computer data;
c) maintain the integrity of the relevant stored computer data;
d) render inaccessible or remove those computer data in the accessed computer system.
4. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities
to order any person who has knowledge about the functioning of the computer system or
measures applied to protect the computer data therein
to provide, as is reasonable,
the necessary information, to enable the undertaking of the measures referred to in paragraphs 1 and 2.
5. The powers and procedures referred to in this article shall be subject to Articles 14 and 15.
As in art. 18, this competence is restricted by the safeguards required in art. 14 and 15 (see paragraph 5), thus asserting the legality principle.
Paragraph 4 includes the competence to request a password to access a computing system, or to decrypt content. The reference to the safeguards of the Rule of Law in paragraph 5 may imply that such a request cannot be directed to a suspect, as this could violate the privilege of nemo tenetur, i.e. the privilege against self-incrimination. The full privilege reads nemo tenetur se ipsum accusare, or no one is bound to incriminate themselves. The ECtHR reads this privilege into art. 6 ECHR, even though it is not explicitly articulated.14 It mainly guards against unwarranted compulsion, but it does not provide an absolute right; depending on the severity of the public interest that is at stake, the effectiveness of procedural safeguards and how the information obtained is to be used, infringements can be justified. Whether the ECtHR would consider a categorical competence to order a suspect to provide a password as a violation of art. 6 ECHR is not clear, but this will probably depend on the effective legal safeguards and proportionality requirements.
Note that the CC does not impose an obligation on contracting Parties to enact a legal power for the police to remotely hack into computing systems. Though such a power is not prohibited, enacting it is not an implementation of the CC; there is no obligation to enable remote access, except via an already accessed system, based on paragraph 2.
Another caveat concerns the limitation of access to the territory of the investigating state. First, paragraph 1.b limits the search of data storage media to those on the territory of the relevant contracting Party. Second, a search in a remote system via an already accessed system, based on paragraph 2, is restricted to cases where the competent authorities ‘have grounds to believe that the data sought is stored in another computer system or part of it in its territory’. This restriction is based on a fundamental principle of international law, which prohibits extraterritorial jurisdiction to enforce. As discussed in sections 4.1 and 4.4, the combination of internal and external sovereignty that constitutes both international and national law implies that states respect one another’s territorial integrity as part of their sovereignty. Conducting criminal law investigations on the territory of another state has therefore been outlawed since the famous Lotus case of the Permanent Court of International Justice (P.C.I.J.) in The Hague in 1927.15 In that case it was decided that such extraterritorial enforcement jurisdiction is only permitted if permission is granted by the state on whose territory the investigations take place. Such permission can be ad hoc, but it can also be based on mutual legal assistance treaties (MLATs). Art. 32 of the CC confirms the prohibition of extraterritorial enforcement jurisdiction:
Article 32 – Trans-border access to stored computer data with consent or where publicly available
A Party may, without the authorisation of another Party:
a) access publicly available (open source) stored computer data, regardless of where the data is located geographically; or
b) access or receive, through a computer system in its territory, stored computer data located in another Party, if the Party obtains the lawful and voluntary consent of the person who has the lawful authority to disclose the data to the Party through that computer system.
Art. 32 CC has given rise to heated discussions, as some contracting Parties have enacted competences to remotely hack into computing systems that may be on foreign territory. The reader can imagine that this particular issue has major implications for the force of law, the practice of international law and for the defining features of internal and external sovereignty. The jury is still out on where this will end.
Art. 20 and 21 CC require contracting Parties to provide legal powers to enable interception by the competent authorities of traffic data (20) and content data (21). Note that without such powers, the police would itself commit an offence and become punishable. The reader is invited to study both articles in detail, dissecting the cumulative and alternative legal conditions that must be fulfilled for interception to be lawful.
6.2.2 Limitations on investigative powers
As indicated above, the legality principle requires that governments act in a way that is not arbitrary, sufficiently foreseeable, proportional, and embedded in adequate safeguards. This includes respect for human rights and a proactive approach to potential risks to democracy and the Rule of Law. As mentioned in the previous section, art. 15 explicitly requires the contracting Parties to implement all relevant provisions of the CC in line with the demands of constitutional democracy.
Article 15 – Conditions and safeguards
1. Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are
subject to conditions and safeguards provided for under its domestic law,
which shall provide for the adequate protection of human rights and liberties,
including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms,
the 1966 United Nations International Covenant on Civil and Political Rights, and
other applicable international human rights instruments, and
which shall incorporate the principle of proportionality.
2. Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include
judicial or other independent supervision,
grounds justifying application, and
limitation of the scope and the duration of such power or procedure.
3. To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities and legitimate interests of third parties.
Art. 15 basically integrates the case law of the ECtHR (the highest court of the CoE, which initiated the CC) into the CC. Above, in section 5.3.5, we discussed the legal conditions that must be fulfilled to justify infringing measures of secret surveillance, notably in Weber and Saravia.16 The safeguards stipulated in such case law are highly relevant for the competences that must be attributed by the contracting Parties and, similarly, the proportionality test of art. 8 ECHR (see also art. 15 paragraph 1) must be built into the procedures that condition the application of these legal powers.
6.2.2.1 Proportionality test for police access to personal data
An interesting example of a proportionality test regarding police access to personal data retained by internet service providers (ISPs) was conducted by the CJEU in its judgment of October 2018.17 The case concerned a police request to obtain identifying information on those who interacted with a stolen smartphone during a 12-day period after the phone was stolen. The question was whether this constituted a ‘serious’ interference with the fundamental rights and freedoms of those persons. The CJEU finds in para 60:
It is therefore apparent that the data concerned by the request for access at issue in the main proceedings only enables the SIM card or cards activated with the stolen mobile telephone to be linked, during a specific period, with the identity of the owners of those SIM cards. Without those data being cross-referenced with the data pertaining to the communications with those SIM cards and the location data, those data do not make it possible to ascertain the date, time, duration and recipients of the communications made with the SIM card or cards in question, nor the locations where those communications took place or the frequency of those communications with specific people during a given period. Those data do not therefore allow precise conclusions to be drawn concerning the private lives of the persons whose data is concerned.
This was the first step in a proportionality test, weighing the infringement against the purpose it aimed to achieve. In para 61 the CJEU concludes that the request is not a ‘serious’ infringement. Completing the proportionality test, the CJEU concludes in para 62:
As stated in paragraphs 53 to 57 of this judgment, the interference that access to such data entails is therefore capable of being justified by the objective, to which the first sentence of Article 15(1) of Directive 2002/58 refers, of preventing, investigating, detecting and prosecuting ‘criminal offences’ generally, without it being necessary that those offences be defined as ‘serious’.
Directive 2002/58 is the ePrivacy Directive, which aims to protect the confidentiality of online communication. Art. 15 of said directive allows for legislative measures that restrict the protection granted in the ePrivacy Directive, for purposes such as the prevention and investigation of criminal offences. The CJEU basically states that art. 15(1) does not limit such restrictions to the prevention and investigation of ‘serious’ criminal offences. Together with the finding that the request for identifying information at stake in this case is not a serious infringement, the CJEU concludes that an infringement that is not considered serious is allowed in order to investigate criminal offences that are not considered serious. Though the CJEU is the highest court of the EU and not the highest court of the CoE, its proportionality test is relevant as it follows that of the ECtHR, due to art. 52(3) of the CFREU:
In so far as this Charter contains rights which correspond to rights guaranteed by the Convention for the Protection of Human Rights and Fundamental Freedoms, the meaning and scope of those rights shall be the same as those laid down by the said Convention. This provision shall not prevent Union law providing more extensive protection.
6.2.2.2 Proportionality test, balancing tests and the image of the scale
As a final touch, I want to briefly discuss the image of the scale that is so often invoked when proportionality comes into focus. In computer science literature that is focused on cybersecurity, especially in the subfield of data privacy that is often mistakenly understood as a subfield of security, we often encounter the idea that security and liberty are mutually exclusive, suggesting that we can’t eat our cake and have it too. This suggests that a trade-off between security measures and liberty rights is a given: more of the one supposedly results in less of the other. This is not correct. On the contrary, security measures will often enable or reinforce a user’s capability to make freely chosen and well-informed decisions about sharing personal data. Nevertheless, the opposite is equally incorrect. Some security measures will require disclosure, penetration testing or even deep packet inspection to facilitate attack monitoring, and this will necessarily infringe individual rights and freedoms, especially as such measures are often invisible or even secret.
The idea that security measures and liberty rights must be framed in terms of a trade-off is not restricted to the domain of cybersecurity. It also pervades the broader domain of policy science when it comes to national and public security, the fight against transnational terrorism and foreign intelligence targeting critical infrastructure and democratic processes. Here, security usually refers to threats to a person’s autonomy and bodily integrity, an organisation’s resilience, a state’s existence or economic welfare, based on targeted attacks. In that sense security is a subdomain of safety, which also refers to threats not based on deliberate targeting.
In the context of cybercrime law, the broader discussion of a trade-off between security and liberties plays out whenever investigatory measures infringe human rights such as privacy, freedom of expression or e.g. the privilege against self-incrimination. The CC, as we have seen in the preceding subsections, requires proportionality between the infringing measures and the objective that is meant to be protected. It is crucial to acknowledge that such proportionality is not equivalent to the trade-off that is often suggested when the image of the scale is invoked (more protection of security requires less human rights protection), without taking the opposite position that such a trade-off never occurs.
In a seminal article, written shortly after the 9/11 attacks on the World Trade Center in New York, legal philosopher Jeremy Waldron discussed six caveats for invoking the image of the scale:
Diminishing liberties does not automatically increase security (a trade-off is not given);
‘Scale’ suggests a precision that is absent, because a tertium comparationis is usually absent;
Liberties cannot be traded at will, they are preconditional for a legitimate government;
Trading liberty against security often generates a distributive effect (trading the liberty of one group to increase the security of another group);
Diminishing liberties will increase insecurity in relation to the state;
‘Scale’ has high symbolic value, while a measure may contain no effective safeguards whatsoever.
Sometimes, increased security requires infringement of e.g. privacy, but this is not necessarily the case. Some digital security measures may indeed raise privacy protections, for instance when end-to-end encryption is seen as a security measure. In the domain of police cybercrime investigations, however, the police may use such measures for internal communication, while end-to-end encryption used by consumers will not be seen as a security measure but as an obstruction of cybercrime investigation. Within the context of cybercrime, security measures concern police access to computing systems, production orders and interception. The first point made by Waldron highlights that the fact that such measures infringe privacy does not imply that they raise public security: if a then b does not imply that if b then a.
This also relates to his sixth point: security measures often promise more than they can effectively achieve. In itself this is to be expected, but when a balancing test is done, we must accept that measures that are ineffective cannot be necessary and thus cannot be proportional.
Thinking in terms of a trade-off, using the image of the scale, suggests that the trade-off between liberty and security is a matter of calculation: some amount of liberty is traded against some amount of security. Waldron’s second point is that this is clearly not the case. Though a security measure may – metaphorically – be understood in terms of costs (liberties) and benefits (security), there is no generally valid way of counting either the costs or the benefits. Security is a different ‘thing’ than liberty, while both can be understood as public goods as well as private interests. Though one could ‘rank’ costs and benefits, this does not imply they can be added up or subtracted on one and the same scale, which is exactly what the image of the scale tempts us into assuming.
This again links to the sixth point; we should not mistake a security measure for the effect it aims to achieve.
The idea of a trade-off also wrongly assumes that liberty and security are independent variables, whereas in a constitutional democracy there are many dependencies between them. As discussed in sections 2.2 and 3.3, in a constitutional democracy a government must not only: ‘(1) act with an eye to the public interest, but also (2) act within the confines of the legality principle and (3) treat citizens with equal respect and concern’. This entails that liberty is not something to trade at will against other public goods, but something that – like security – is constitutive for a legitimate government. Often, as citizens we cannot be secure in life and limb if liberties can be flouted by the state in its struggle to provide those same citizens with security. This connects with the fact that diminishing liberties will often increase insecurity in relation to the state.
Convincing people to give up some liberty to gain some security misrepresents a reality where the liberty of one group of people may be diminished to ensure increased security of another group. The security of some is then traded against the liberty of others. Depending on what kind of security measures are at stake, liberty may be redistributed, for instance, when those dependent on welfare benefits are exposed to automated decision systems that disregard their privacy, whereas others can afford to protect themselves by buying an expensive but well-protected smartphone. Data protection law may protect legal residents of the EU whereas illegal aliens may find themselves ‘naked’ in the eye of the immigration machine, seeing their privacy ‘traded’ for the security of already securely settled lawful residents.
6.3 The EU cybercrime and cybersecurity directives
In the strict sense, the Directive on attacks against information systems (2013/40/EU) is ‘the’ EU cybercrime directive. As to its aims and instrumental value it overlaps with the CC, requiring EU MSs to criminalise illegal access, attacks against information systems and computer data, and illegal interception (substantive criminal law). Unlike the CC, it does not concern the criminalisation of fraud, child pornography or copyright violations, clearly focusing on CIA-related offences. Also unlike the CC, it does not impose obligations regarding procedural criminal law. The goal of the directive is minimum harmonisation, meaning that MSs can go beyond what is required, but not below, as art. 1 states under the heading of ‘Subject matter’:
This Directive establishes minimum rules concerning the definition of criminal offences and sanctions in the area of attacks against information systems. It also aims to facilitate the prevention of such offences and to improve cooperation between judicial and other competent authorities.
Interestingly, this directive obligates MSs to impose ‘minimum maximum’ penalties for specific cybercrimes, e.g. in art. 9, para 2:
Member States shall take the necessary measures to ensure that the offences referred to in Articles 3 to 7 are punishable by a maximum term of imprisonment of at least two years, at least for cases which are not minor.
Criminal law is often considered core to internal sovereignty, meaning that states resist supranational interference with their criminal law policy. By stipulating minimum maximum punishment, EU law reserves discretion for MSs that reject minimum punishment or allow conviction without punishment (as the Netherlands does in art. 9a Netherlands Criminal Code, see above section 4.1.2).
The directive pays special attention to the criminal liability of legal persons and to issues of jurisdiction, and includes various types of cross-national collaboration within the Union (e.g. information exchange via national points of contact, and collection of relevant statistics).
Next to the ‘real’ EU cybercrime directive, the EU has also enacted a cybersecurity directive, the Directive on network and information security (NIS) (EU) 2016/1148. The subject matter here is defined in art. 1 as:
1. This Directive lays down measures with a view to achieving a high common level of security of network and information systems within the Union so as to improve the functioning of the internal market.
2. To that end, this Directive:
(a) lays down obligations for all Member States to adopt a national strategy on the security of network and information systems;
(b) creates a Cooperation Group in order to support and facilitate strategic cooperation and the exchange of information among Member States and to develop trust and confidence amongst them;
(c) creates a computer security incident response teams network (‘CSIRTs network’) in order to contribute to the development of trust and confidence between Member States and to promote swift and effective operational cooperation;
(d) establishes security and notification requirements for operators of essential services and for digital service providers;
(e) lays down obligations for Member States to designate national competent authorities, single points of contact and CSIRTs with tasks related to the security of network and information systems.
6. This Directive is without prejudice to the actions taken by Member States to safeguard their essential State functions, in particular to safeguard national security, including actions protecting information the disclosure of which Member States consider contrary to the essential interests of their security, and to maintain law and order, in particular to allow for the investigation, detection and prosecution of criminal offences.
The NIS directive is not about criminal law, which – as art. 1 para 6 demonstrates – may even be at odds with the information exchange that is at the heart of the NIS directive. This further clarifies that cybersecurity and cybercrime should not be confused. The CIA-related cybercrimes basically concern the criminalisation of attacks on cybersecurity, such as illegal access, attacks on information systems and illegal interception. In that sense the NIS directive overlaps with the CC and the cybercrime directive in its objective of identifying, preventing and deterring threats to cybersecurity.
Note that art. 2 of the NIS directive states that personal data processed pursuant to this directive falls within the scope of the data protection directive (now the GDPR), meaning it does not fall within the scope of the Police Data Protection Directive (which is focused on processing of personal data ‘by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data’). This, again, clarifies the difference between cybersecurity and cybercrime. The concept of ‘security of network and information systems’ is actually defined in art. 4(2) of the NIS directive:
‘security of network and information systems’ means the ability of network and information systems to resist, at a given level of confidence, any action that compromises the availability, authenticity, integrity or confidentiality of stored or transmitted or processed data or the related services offered by, or accessible via, those network and information systems;
By defining security in terms of resilience against ‘any action that compromises the availability, authenticity, integrity or confidentiality’, the NIS directive anchors itself in the CIA concerns that define cybersecurity.
On cybercrime law:
Brenner, Susan W. 2012. Cybercrime and the Law. Boston: Northeastern University Press.
Schwarzenegger, Christian, Finlay Young, Gian Ege, and Sarah J. Summers. 2014. The Emergence of EU Criminal Law: Cyber Crime and the Regulation of the Information Society. Oxford; Portland, OR: Hart Publishing.
Tosoni, Luca. 2018. ‘Rethinking Privacy in the Council of Europe’s Convention on Cybercrime’. Computer Law & Security Review, September. https://doi.org/10.1016/j.clsr.2018.08.004.
On production orders:
Hert, Paul de, Cihan Parlar, and Juraj Sajfert. 2018. ‘The Cybercrime Convention Committee’s 2017 Guidance Note on Production Orders: Unilateralist Transborder Access to Electronic Evidence Promoted via Soft Law’. Computer Law & Security Review 34 (2): 327–36. https://doi.org/10.1016/j.clsr.2018.01.003.
On extra-territorial jurisdiction to enforce in cyberspace:
Hildebrandt, Mireille. 2013. ‘Extraterritorial Jurisdiction to Enforce in Cyberspace? Bodin, Schmitt, Grotius in Cyberspace’. University of Toronto Law Journal 63 (2): 196–224. https://doi.org/10.3138/utlj.1119.
On the image of the scale:
Hildebrandt, Mireille. 2013. ‘Balance or Trade-off? Online Security Technologies and Fundamental Rights’. Philosophy & Technology 26 (4): 357–79.
Waldron, Jeremy. 2003. ‘Security and Liberty: The Image of Balance’. Journal of Political Philosophy 11 (2): 191–210. https://doi.org/10.1111/1467-9760.00174.