Data protection by design

In order to implement data protection principles that safeguard the individual’s interests and rights, the Regulation foresees in Article 27 that controllers must “implement appropriate technical and organisational measures, such as pseudonymisation” when they plan the means for processing personal data. This is what Regulation 2018/1725 refers to as “data protection by design.” Article 27 further requires that controllers “implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed.” Such technical measures should be foreseen in the development of new information systems and in the creation of web tools.
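
By way of a hedged illustration only, the “by default” requirement is often read as meaning that each processing operation should touch no more personal data than its declared purpose needs. The short Python sketch below shows one hypothetical way to express this; the purposes, field names, and function are invented for illustration and are not drawn from the Regulation.

# Hypothetical sketch of data protection by default: each declared purpose
# is mapped to the minimum set of fields it needs, and any record entering
# that processing operation is reduced to those fields.

FIELDS_BY_PURPOSE = {
    "newsletter": {"email"},
    "shipping": {"name", "postal_address"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return only the fields necessary for the given purpose."""
    allowed = FIELDS_BY_PURPOSE.get(purpose)
    if allowed is None:
        raise ValueError(f"undeclared purpose: {purpose!r}")
    return {key: value for key, value in record.items() if key in allowed}

profile = {
    "name": "Ana",
    "email": "ana@example.org",
    "postal_address": "1 Example Street",
    "date_of_birth": "1990-01-01",
}
print(minimise(profile, "newsletter"))  # {'email': 'ana@example.org'}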

For some, Article 27 might deliver on the promises outlined by the European Data Protection Supervisor almost a decade ago, as it requires data controllers to use technical and organizational measures that implement the principles of data protection by design and by default. However, Article 27 has drawn criticism from the privacy engineering community and from some lawmakers for not offering guidance on privacy engineering in practice, or for exposing a tension between principles that seek to uphold the rights of individuals and principles, such as data minimization, that aim to eliminate or reduce the storing and sharing of data. The language of Article 27 probably suffers from several drafting attempts: it does not suggest technical and organizational measures other than pseudonymization and, in the name of technological neutrality, it avoids any mention of specific privacy engineering techniques, which only heightens the confusion over what it requires. Some solutions to these obvious tensions have been provided by the extensive guidance published by the European Union Agency for Cybersecurity (ENISA) on Privacy-Enhancing Technologies (PETs) and privacy management tools,75 which tries to align with current privacy engineering methods and practices and proposes future solutions for information specialists. Broadly, privacy technology seeks on the one hand to minimize any disclosure of personal data to controllers, relying on cryptographic protocols to ensure this outcome, and on the other hand assumes that individuals will at some point lose control over their data and have to place some trust in controllers. In the latter case, privacy engineering builds tools that help users make good decisions about data sharing while satisfying informed consent requirements (e.g., preference languages, cookie consent management).
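
As a purely illustrative sketch of the one measure Article 27 does name, pseudonymization typically replaces a direct identifier with a value that cannot be attributed to the person without additional information kept separately. The Python fragment below assumes a keyed hash (HMAC-SHA256) whose key is held outside the processing system; the key name and value are hypothetical, and a real deployment would add key management, rotation, and access controls.

import hashlib
import hmac

# The secret key would in practice live in a separate, access-controlled
# store; holding it apart from the pseudonymized records is what keeps the
# data pseudonymous rather than directly identifying.
PSEUDONYMISATION_KEY = b"example-key-kept-separately"  # hypothetical value

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier such as an e-mail."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The stored record carries the pseudonym, not the identifier itself.
event = {"user": pseudonymise("ana@example.org"), "page_visited": "/pricing"}
print(event)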

Automated decision-making and its ethical implications

Finally, we should touch upon algorithms, artificial intelligence (AI), and big data, as they are all closely linked to data protection. Harari and other scholars and cultural commentators observe that decisions once made by people are increasingly made by algorithms, and that this algorithmic governance shapes important parts of our lives: bank loans, job searches and recruiting, predictions about health, surveillance, and even politicians’ choices. As decisions based on AI technologies and machine learning become the norm, they raise critical questions about the right to challenge automated decision-making. The principles contained in the EU regulations on data protection include appropriate safeguards for individual rights, such as the right to information, the right to transparent decision-making, the ability to change the collected data, and, by extension, the possibility of challenging the assumption that information-processing decisions are always ethically sound. It is currently difficult to see how this is conceivable in an automated context.80 However, it should not be impossible,81 and the fact that data protection rules recognize that decision-making based on algorithms is not neutral or simply technical is a positive advancement. Although we usually see privacy in terms of personal control over data or data processes, individuals should not be left alone to bear the responsibility for processes that are difficult or even impossible to understand or manage.

Privacy is not just personal. It requires trusted, stronger actors working on behalf of individuals to implement mechanisms that define what is acceptable in terms of sharing personal data. We can learn a great deal from social media, as the majority of Internet users visit online social networks at least daily or almost daily.82 People are very willing to share insights and information about themselves on social media, yet they still require privacy.83 A recent Eurobarometer survey shows that more than six in ten users are concerned about not having complete control over the information they provide online.84 Respondents who feel they have only partial or no control over the information they provide online were asked how concerned they were about this: overall, 62% say they are concerned, including 16% who are very concerned. It is therefore understandable that the majority of social network users have tried to change the default privacy settings of their profile.85

Although experiences of privacy are certainly personal, privacy has to be defended as a common good. Privacy is not just a personal experience of boundaries, security, and breaches, but a collective responsibility and a fundamental ethical principle. It is evident, therefore, that individuals’ rights to control personal data and the processing of such data should be carefully addressed in the context of big data and AI. Control requires awareness of how personal data are used and real freedom of choice. These conditions, which are essential to the protection of fundamental rights, can be met through the different legal and technical solutions examined in the previous pages. Those solutions should take into account the technological context and, even more, individuals’ lack of knowledge. The complexity and opacity of big data, algorithms, and AI applications should therefore prompt decision-makers to reconsider the notion of control and to go beyond individual control: they should adopt a broader idea of control over the use of data, according to which individual control evolves into a more complex process of impact assessment86 of the risks related to the use of data.87

The guidelines on big data issued by the Council of Europe in 201788 move in this direction and focus on the ethical and social consequences of data use for the protection of individuals with regard to the automatic processing of personal data. The challenge therefore rests with all actors to embed the principles of transparency89 in all automated decision-making processes and to make clear the terms on which decisions are taken. The recent “Guidelines for the ethical use of AI” and the “Recommendations for boosting European sustainability, growth and competitiveness”90 by the European High-level Group have been described either as “an act of genius, or slow suicide.”91 European Union policymakers and AI experts remain divided about which rules are needed. On the one hand, European leaders are convinced that ethical guidelines will enable innovation and that consumers will demand “trustworthy AI” once it reaches the market; on the other, critics believe that an ethics-first approach combined with restrictive regulations will undermine European competitiveness in the global market.92 Webb points out that the future of artificial intelligence is already controlled by just nine tech titans: Google, Microsoft, Amazon, Facebook, IBM, and Apple in the United States; and Baidu, Alibaba, and Tencent in China. These companies fund the majority of research and earn the lion’s share of patents, and in doing so they gain access to personal data in ways that are not transparent. In Webb’s view, Europe’s attempt to solve the problem with strict regulations is a mistake, as AI is progressing so fast that any regulation created today will quickly become outdated. Policymakers should instead work towards setting AI on a path that defends democratic values.

 