By René L. P. Mahieu
The debate about how to govern personal data has intensified in recent years. The European Union's General Data Protection Regulation, which came into effect in May 2018, relies on transparency mechanisms codified through obligations for organisations and rights for citizens. While some of these rights have existed for decades, their effectiveness is rarely tested in practice. This paper reports on the exercise of the so-called right of access, which gives citizens the right to access their personal data. We study this by working with participants—citizens for whom the law is written—who collectively sent over a hundred data access requests and shared the responses with us. We analyse the replies to the access requests, as well as the participants' evaluations of them. We find that non-compliance with the law's obligations is widespread. Participants were critical of many responses, though they also reported a large variation in quality. They did not find the responses effective in providing transparency into the processing of their own personal data. We did find a way forward emerging from their responses, namely by viewing the requests as a collective endeavour rather than an individual one. Comparing the responses to similar access requests creates a context in which to judge the quality of a reply and the lawfulness of the data practices it reveals. Moreover, collective use of the right of access can help shift the power imbalance between individual citizens and organisations in favour of the citizen, which may incentivise organisations to deal with data in a more transparent way.
Our lives are increasingly intertwined with the digital realm, and with new technology, new ethical problems emerge. The academic field that addresses these problems—which we tentatively call ‘digital ethics’—can be an important intellectual resource for policy making and regulation. This is why it is important to understand how the new ethical challenges of a digital society are being met by academic research. We have undertaken a scientometric analysis to arrive at a better understanding of the nature, scope and dynamics of the field of digital ethics. Our approach in this paper shows how the field of digital ethics is distributed over various academic disciplines. By first having experts select a collection of keywords central to digital ethics, we generated a dataset of articles discussing these issues. This approach allows us to produce a scientometric visualisation of the field of digital ethics without being constrained by any preconceived definitions of academic disciplines. We found, first of all, that the number of publications pertaining to digital ethics is increasing exponentially. Furthermore, whereas one might expect digital ethics to be a species of ethics, we found that the various questions pertaining to digital ethics are predominantly being discussed in computer science, law and biomedical science. It is in these fields, more than in the independent field of ethics, that ethical discourse is being developed around concrete and often technical issues. Moreover, it appears that some important ethical values are very prominent in one field (e.g., autonomy in medical science), while being almost absent in others.
We conclude that to get a thorough understanding of, and grip on, all the hard ethical questions of a digital society, ethicists, policy makers and legal scholars will need to familiarise themselves with the concrete and practical work that is being done across a range of different scientific fields to deal with these questions.
We investigate empirically whether the introduction of the General Data Protection Regulation (GDPR) improved compliance with the data protection rights of people who are not formally protected under the GDPR. By measuring compliance with the right of access for European Union (EU) and Canadian residents, we find that this is indeed the case. We argue this is likely caused by the Brussels Effect, a mechanism whereby policy diffuses primarily through market mechanisms. We suggest that the EU's willingness to back up its rules with strong enforcement, as demonstrated with the introduction of the GDPR, was the primary driver enabling it to unilaterally affect companies' global behaviour.