Data Protection Impact Assessment

The data protection impact assessment, or: how the General Data Protection Regulation may still come to foster ethically responsible data processing


Many policymakers and academics see informational privacy and data protection as being about the ability to control your data. Consent plays a central role in data protection law and data subjects are supposedly empowered through a number of rights, including the right to receive information and to access and rectify data about them. We now know that this does not always play out well in practice. Most people mindlessly click on “I Agree”; we can’t all be like privacy activist Max Schrems all the time. The current data protection reform tries to strengthen the control of users in a number of ways. It might help to actually empower individuals to some extent, but at least as important is the responsibility of controllers – those who collect, use and share the data – to respect the rights of individuals when they process personal data. It should not be up to individuals to ensure that their rights are not violated online. Controllers should offer such protection by default.


But what does decent rights protection look like, and who decides? The upcoming General Data Protection Regulation (GDPR) covers the collection, organisation, storage, analysis, use and dissemination of personal data and applies to both government entities and private corporations. Data protection law should bring about responsible data processing in a large variety of situations. To name a few things, the GDPR should keep tax authorities from collecting disproportionate amounts of personal data, rein in the power of Internet service providers to influence people by personalising their online experience, and prevent start-ups from leaking information which could be used for identity theft. Some argue that the GDPR is too long – I say it could be much, much longer. It is a general instrument which does not provide clear bright-line rules on how the fundamental rights of individuals should be respected.


The GDPR will contain a safety net of sorts. The European Commission, the European Parliament and the Council of the European Union have each proposed a version of the GDPR, and are currently engaged in negotiations (or trilogues) on the final text. Each of these versions makes it mandatory for data controllers to carry out a data protection impact assessment (DPIA): Article 33 requires them to assess the impact of their processing operations beforehand. They have to describe the intended processing activity, evaluate the risks which it poses, and identify measures to address these risks. This is an interesting obligation for at least two reasons. Firstly, the DPIA could take shape as requiring a broad ethical assessment. And, secondly, this ethical assessment is overseen by a regulatory agency. In the absence of strict and precise rules for the protection of fundamental rights in the digital context, the data protection impact assessment could prove valuable as a means to get controllers to respect the rights of individuals.


An assessment of risks to the rights and freedoms of individuals


The DPIA draws on the so-called privacy impact assessment (PIA). Data protection authorities, like the College bescherming persoonsgegevens, have been advocating the PIA as a method for identifying and addressing privacy risks. Some have published guidelines on how to carry out a privacy impact assessment. In these guidelines, as in the literature, the PIA is not only a compliance check: controllers are supposed not only to investigate whether the envisaged project would be in line with data protection law, but also to evaluate risks to the privacy of individuals. The data protection impact assessment, by contrast, does not explicitly refer to privacy. This has caused concern that Article 33 GDPR is a mere compliance check, limited to an assessment of whether the project would be in line with the General Data Protection Regulation.

Fortunately, this is not the case. Article 33 requires that controllers evaluate the risks to the rights and freedoms of individuals posed by the processing of information which relates to them. The Article 29 Working Party has clarified that this primarily refers to the right to privacy, but may also involve ‘other fundamental rights such as freedom of speech, freedom of thought, freedom of movement, prohibition of discrimination, right to liberty, conscience and religion’. A data protection instrument can serve these other rights, too. Since 2009, Article 8 of the EU Charter of Fundamental Rights has protected the right to the protection of personal data alongside the right to privacy in Article 7 of the Charter. Since the right to the protection of personal data is now autonomous from the right to privacy, data protection is legally distinct from privacy and can encompass other concerns as well. Article 8 of the Charter and data protection law can be seen as safeguarding the various fundamental rights of individuals in the digital context. Back in 2003, in the case of Bodil Lindqvist, the EU Court of Justice already clarified that data protection law sees to the protection of all fundamental rights at stake. The GDPR is filled with references to the plethora of ‘rights and freedoms’ which exist next to privacy.


The European Parliament is expressly concerned about discrimination, clarifying, for example, that profiling should not have discriminatory effects. Risks can arise from flawed data security practices, but the other aspects of a data processing operation should also be subject to scrutiny. A distinction can be made between ‘inherent’ risks, which arise from the nature of the processing activity – e.g. privacy is harmed if private information is collected, autonomy is impinged upon if the data is used for personalisation – and ‘management and control risks’, which concern the internal control systems which may mitigate or, conversely, exacerbate the inherent risks – e.g. a lack of adequate data security. The French data protection authority, the CNIL, focuses only on management and control risks. Its PIA is limited to certain ‘feared events’: illegitimate access to personal data, unwanted alteration of personal data, and disappearance of the data. The DPIA should not, however, be confined to the way in which data is stored and kept safe. It should regard the data processing operation in its entirety. This urges controllers to bring a wide range of ethical considerations into the design process. And if the ‘inherent’ risks are too severe to condone, the controller should pull the plug on the intended operation.


Regulatory oversight


When conducting a DPIA, controllers will need to decide when a threat is severe enough and likely enough to count as a risk, and what remedies are necessary or appropriate. For example, when is the use of profiles, which might contain protected characteristics such as race or religion, to be regarded as a risk to the right to non-discrimination? This is especially difficult in the area of fundamental rights, as these risks are not always easy to identify and to quantify. A further difficulty is that, in many cases, the only way to avoid a risk is to not engage in the activity at all. It might be sufficient to take measures to minimise or mitigate the chance that the threat will occur – but when is a risk sufficiently addressed, or, in other words, what level of residual risk is acceptable? These decisions are difficult because they are determined by the level of protection which is aimed for. The GDPR and the Article 29 Working Party provide criteria on the basis of which risks can be identified and assessed, but they do not specify what level of risk is acceptable; the controller will have to make these normative decisions during the assessment.


Initially, much of the actual norm-setting is therefore in the hands of the controller. While Article 33 does not require controllers to aim high, they can be inspired to provide for actual rights protection. They become aware of the possible impact of their processing activities and their choices should be made transparent to data protection authorities in the form of a report. If sunlight is indeed the best disinfectant, such transparency alone can help bring about ethical behaviour. Data protection authorities can also exert a little pressure. Like any regulatory agency, data protection authorities can do much more than sanction noncompliant behaviour. They also have the opportunity to inform and negotiate with their regulatees, persuading them to display ethical behaviour which conforms to the “spirit” of the law.


This process is fostered by Article 34 of the GDPR: the prior consultation. If the DPIA points to high risks, the controller must consult with the data protection authority. The data protection authority can limit or prohibit the controller’s plans if it finds that ‘the intended processing does not comply with this Regulation, in particular where risks are insufficiently identified or mitigated’. This is a very ambiguous power. What if the processing technically complies with the GDPR – e.g. the data subject has consented to the processing, which is necessary for a specified purpose – but still presents risks? It is entirely possible that a controller technically adheres to the letter of data protection law without providing adequate protection of fundamental rights.


It remains unclear whether Article 34 empowers the data protection authority to prohibit data processing which poses risks, but which meets the minimum requirements posed by the data protection principles. This gives data protection authorities the leverage to bargain for rights protection under the threat, or bluff, that the contested processing operation is so risky that it will be prohibited. The ambiguity of Article 34 thus allows norms on risk mitigation to be developed in a piecemeal fashion, outside of the intense lobbying which characterises the EU legislative process. Together with the DPIA, it provides a channel through which data protection authorities can negotiate with controllers on what measures they should take. It is a prerequisite, of course, that they are given the resources needed to make full use of this system. With limited funds, data protection authorities will not be able to give hard cases from powerful players the attention they require. Possibly more problematic is that they must also have a democratically legitimate understanding of what rights protection should look like.


All’s well that ends well


While Article 33 is a welcome addition to the GDPR, it could certainly benefit from a number of improvements. As the trilogues on the GDPR are coming to an end, the EU legislature should use its last opportunity to make it more effective and legitimate. With so few rules on how to protect fundamental rights in the digital context, the data protection impact assessment should be as strong a safety net as it can be. The final GDPR should:

Clarify that risks may also arise from the way in which data is collected, organised, analysed, used or shared. This is necessary to ensure that the DPIA is about more than data security alone, contrary to the CNIL’s guidance on the privacy impact assessment. If the DPIA is to inspire ethical data processing, it must require a full assessment of the risks to rights and freedoms posed by the entirety of the processing operation.


Create a strong duty to consult the public. This would offset the undemocratic nature of the norm-setting process, which would otherwise be in the hands of controllers and data protection authorities. To keep technocracy at bay, the input of citizens in the development of these norms is essential.


Make it mandatory for controllers to implement the measures which they identify during the DPIA. Article 33 does not oblige controllers to actually take mitigating measures. Only the Parliament has been thorough enough to propose Article 33a, which requires controllers to check whether their data processing is in compliance with their impact assessment.


Ensure that the DPIA report is always available to the data protection authorities, even if it (according to the controller) does not point to high risks. The Council version currently contains a loophole in this regard. The DPIA report only has to be provided to the data protection authority if the controller has started a prior consultation, which it only has to do if the DPIA report shows a high level of residual risk. This undercuts the power of the data protection authority to check if it agrees with the assessment.


Include a duty to publish the DPIA report. If individuals do not know what is happening, they cannot hold the controller to account. Publication will foster good data practices by enabling individuals to switch to a competitor or to start civil litigation if they see their rights neglected. The report can also function as evidence in court.


Claudia Quelle

29/11/15