Transparency: Its Meaning, Justifications, and Means
Transparency has long been a cornerstone value in the management of information about individuals. What are called Fair Information Practices (FIPs) were first proposed in a 1973 report5 of the then-Department of Health, Education, and Welfare. The report, “Records, Computers, and the Rights of Citizens,”6 stipulated that there should be no data collection systems whose existence was secret. Many subsequent formulations of FIPs have fleshed this out to cover other aspects of data collection: people should be able to know what data about them are being collected, who (or what) is collecting the data, how the data are being collected, and how the data are being stored, disclosed, managed, and used. Understanding what the abstract idea of transparency means in different data contexts is challenging, however, and a brief look at the justifications for transparency is useful in this regard.
Justifications for transparency may be rooted in the obligations of the entity collecting the data, the rights of individuals, or contracts between the data collector and the individual. Myriad types of entities today are involved in data collection activities: with respect to health data, these include international organizations, such as the World Health Organization; governments (i.e., public health agencies); healthcare organizations; Internet search providers; newspapers; data brokers; other commercial entities; and, most recently in the United States, the PMI—to name just a few. These data collectors may have very different rights and obligations. Governments—at least, democratic governments—are generally thought to have ethical obligations to their citizens to be open about what they are doing and about what they are learning, unless there are overriding reasons for information protection.7 These obligations may be especially strong when matters of public interest or public safety are involved, and may be overridden by exceptional countervailing interests, such as individual privacy, law enforcement or national security, or commercial secrecy. Freedom of information laws and their exceptions reflect these commitments.8 Commercial entities that assemble and market databases may assert intellectual property rights to these assets through copyright9 or trade secret law, as appears to be happening increasingly in the wake of court decisions limiting the patent rights of genetic testing companies.10 However, these intellectual property rights may be limited or overridden by public interests, such as public health or safety.11
Although understood in multiple forms, the individual right to privacy has been legally recognized for more than 100 years.12 This right has been interpreted as rights to control access to the person, as rights to protection against intrusion into secluded space, as rights to control information, and as rights to make decisions about important or intimate matters, among other conceptualizations. The right to privacy also has been distinguished from the right to confidentiality: the right to control access to and disclosure of information about oneself.13 Data collection, use, or disclosure may implicate both privacy and confidentiality as thus understood.
Many values have been asserted in support of these multiple understandings of privacy and confidentiality rights. These values include autonomy and choice, political liberty, physical security, intimacy, dignity, identity, equality, and justice. Some of these values relate directly to the individual, such as the ability to make choices about one’s life. Understanding what information is being collected can help individuals make choices about what information to share, whom to trust with that information, and whether to rely upon their expectations about what will happen to their information.
Knowing what information has been collected, who has done the collecting, and whether the information has been disclosed to others can also help individuals to be aware of, and thus hopefully protect themselves against, information disclosures, such as those that might occur through a security breach. Some of these values may also be asserted on the level of a group: information about group members or about the group itself may lead to the group being targeted for attack (even genocide), may alter conceptions of group identity, may stigmatize, or may result in discrimination against members of the group.
Transparency may also be useful for the data collector. People may be more willing to share information—thus contributing to more robust data collection—if they believe they can trust data collectors.14 Notorious examples highlight how mistrust about data collection and use can harm data collection efforts. In Texas15 and in Minnesota,16 failures to inform the public about the retention and subsequent uses of blood spots obtained in newborn screening programs resulted in public outcry and the eventual destruction of valuable public health resources and the data they contained. Arizona State University settled with the Havasupai tribe after researchers used genetic data, obtained for a study of diabetes, in later de-identified research on mental illness and migration patterns.17 Multiple studies sound the theme that consumer concerns about the privacy of their health information may generate reluctance to use patient portals, health information exchanges (HIEs), or personal health record (PHR) systems.18 Although searching for health information is a common Internet activity,19 recent data also indicate that willingness to share information depends on perceived trade-offs between risks and what is to be gained.20
Most generally, transparency refers to openness about what is being done. In contemporary statements of FIPs, this general idea of transparency has taken two importantly different forms: general publication and direct-to-consumer notice. As an example of the former, the U.S. Privacy Act requires that federal agencies publish notice in the Federal Register of the existence and character of the systems of records they maintain.21 Another example of general publication is the Federal Trade Commission’s (FTC) suggestion that data brokers—entities that collect and aggregate data for resale—develop a website register of their data collection activities for marketing purposes, so that consumers can understand what the brokers are doing, know their access and choice rights, and opt out of uses of information about them.22
A second type of effort to ensure transparency is giving direct notice to the consumer. This has taken many different forms. Early in their development, FIPs were interpreted to require direct notice to individuals about specific disclosures. For example, the U.S. Privacy Act requires that federal agencies make reasonable efforts to provide notice of disclosures made under compulsory legal process when the disclosure will become a matter of public record.23 Notification of security breaches of protected health information (PHI) is required by the Health Information Technology for Economic and Clinical Health (HITECH) Act amendments24 to the Health Insurance Portability and Accountability Act (HIPAA) for covered entities and their business associates, and for vendors of PHRs.25 Following California’s lead in 2002, most states have also enacted breach notification statutes, although the majority of these do not include health information.26
The idea of a notice of privacy practices—either published on a website for readers to consult or given in paper form to individuals—is a more recent development. As early as 1995, the European Union’s Directive 95/46 on the protection of individuals with regard to the processing of personal data required member states to enact notice standards.27 Under EU law, directives give member states flexibility in meeting minimum standards, while regulations set out requirements for all to meet. In 2016, the EU adopted the General Data Protection Regulation to replace Directive 95/46; the Regulation incorporates and strengthens the requirements of the Directive.28 Any collection of personal data requires notice, including the identity and contact details of the data controller and data protection officer, the purposes of the data collection, the legal basis for the collection, the recipients or categories of recipients of the data, and any intent of the data controller to transfer data outside of the EU.29 Similar information must be provided where personal data have not been obtained from the data subject.30 All of this information must be provided “in a concise, transparent, intelligible and easily accessible form, using clear and plain language . . . .”31 Further information “necessary to ensure fair and transparent processing” is also required, including the length of data storage, the right to request rectification or erasure of the data, whether the data will be used in profiling and any envisioned consequences of this, and whether the data subject is required to provide the data, along with the consequences of refusal.32
One major innovation of the Regulation is incorporation of the so-called “right to be forgotten,” a right to have data erased under specified circumstances, including that there are no longer overriding legitimate grounds for maintaining the data.33 A major motivation for the overhaul of EU data protection was the judgment that enforcement of standards for data transfer outside of the EU had become too lenient, especially for transfers to the United States. In July 2016, the EU and the United States finalized a Privacy Shield Framework so that data can be transferred back and forth between the two; the framework contains significantly stronger requirements and enforcement guarantees than the prior Safe Harbor arrangement.34 Among the new requirements for Privacy Shield participants are compliance with EU notice and choice requirements and transparency regarding any enforcement actions against the participant.35 The Privacy Shield Framework will likely result in increased transparency and stiffer notice requirements for companies seeking to transfer data from the EU.
Other federal and state laws also require privacy notices. Federally, the Financial Services Modernization Act of 1999 (otherwise known as Gramm-Leach-Bliley) requires financial institutions and insurance companies to send their customers conspicuous yearly notices explaining their policies with respect to information protection and disclosure.36 Without the notice and information about how to opt out, these institutions may not disclose identifiable personal information to unrelated entities.37 California state law requires a privacy notice to be included in a conspicuous manner on any commercial website collecting personally identifiable information about consumers.38
With respect to health information specifically, the HIPAA Privacy Rule requires covered entities to provide patients with a Notice of Privacy Practices (NPP). The HIPAA NPP must include prescribed language, in all capital letters, calling the reader’s attention to what the notice concerns. Prescribed information includes a description of the types of uses and disclosures that are permitted without individual authorization, a statement that other uses and disclosures may occur with authorization, and separate statements about certain uses and disclosures, such as for fundraising. The NPP must also tell individuals about their rights of access to health information, rights to an accounting of uses and disclosures, and rights to request amendments.39 The notice must also include contact information and instructions for filing complaints. It is fair to say that the HIPAA regulation is prescriptive and complex and encourages lengthy and formalistic notices.
Over the past decade or more, privacy notices and statements of privacy policies have become a standard practice across the Internet. Many of these notices also feature a notice/choice format in which consumers are invited to make particular choices. Consumers may be asked to click “I agree,” thus potentially becoming contractually bound to the contents of the notice. Or, they might be told that their data will be used in specified ways unless they opt out, and are offered a method for exercising this choice. For particularly controversial types of data use, such as marketing, consumers may be told that they must opt in to have their data used in this way and offered a “yes” button or some other mechanism of acceptance.
Understanding what these privacy notices should be like, what they should say, and what they can be expected to achieve has evolved as well. This chapter returns to these developments after reviewing widely understood challenges to successful transparency.