Ethics Lab
Ethics Lab is a center for creative ethics education and direct impact work at Georgetown University. Since its launch in 2013, the animating drive of Ethics Lab has been to use ethics as a creative tool for making progress in the face of highly complex challenges. More specifically, we offer innovative philosophy courses, develop interdisciplinary exercises to infuse ethics across the curriculum, and conduct collaborative workshops to help research teams and organizations build ethics into their work. Our focus is on emerging, high-stakes issues, including the ethics of digital technology, divided democracy, and bioethics. These are areas high in both complexity and novelty, in which ethical issues are often embedded—sometimes shrouded—in highly technical questions. Most of all, they carry an urgency that demands tangible results and not simply theoretical explication. Challenges with these features, we believe, benefit from a philosophical lens that is agile, attentive, and creative, with a commitment to helping forge new pathways for responsible progress.
To pursue this perspective, we use a methodology that blends philosophical exploration with approaches adapted from design thinking. Our team is unique: philosophers with backgrounds in theoretical and applied ethics and political philosophy, together with design experts committed to designing for value. Our approach harnesses the insights and practices of both fields to help students or research teams bring to the surface the ethical issues embedded in contemporary problems, make explicit tacit assumptions about the context and what is at stake, and envision creative avenues for ethical impact.
The focus of this essay is our collaboration with partners who are trying to make a difference in the world. We work with a variety of partners, including academic research teams, policy makers, and organizations. Our collaborations develop out of our team members’ work and expertise. As part of our research, we actively seek conversations with people—in the academy, government, industry, or non-profits—who are tussling with how to make practical progress in the areas we work on. The aim, always, is to leverage the Lab’s combined expertise in philosophical ethics and design methods to enable collaborative problem solving through a values-based lens.
A helpful illustration is an exploratory bioethics project we worked on around trauma and birth. One of our ethicists had been working with an obstetrics and gynecology (OB-GYN) bioethicist and an academic/clinical psychologist to explore how prenatal, labor, and delivery care can better meet the needs of women who come to pregnancy with a history of trauma. A significant percentage of pregnant women have experienced some traumatic event in their past, whether sexual violence, domestic violence, or violence experienced during military service. Given the highly intimate nature of prenatal care and delivery, emerging research was showing the extent to which clinical care in this setting could unintentionally re-traumatize patients: prenatal care and birth are replete with inadvertent triggers, including “encouraging” endearments such as “just relax, sweetie” (White et al., 2015). Colleagues in health care policy circles, we had heard, were starting to advocate adding “prior trauma” questions to the patient screening tools routinely administered through questionnaires or by clinicians. Given the OB-GYN’s and psychologist’s research on and experience with trauma survivors, this proposal was deeply worrisome. Encountering such questions out of the blue on a form—or worse still, being asked to share one’s experience interpersonally—would itself be a trigger! What seemed to some health care practitioners a promising solution was in fact a terrible idea.
We worked together with the full Ethics Lab team over a series of meetings to interrogate the core ethical concepts afresh and to explore new pathways for intervention. In one session, for instance, we adapted what designers call “journey mapping”—a method initially developed to optimize customer experiences and now used in a variety of contexts to understand an embodied, sequential experience. Drawing out (literally—think of a table covered with butcher paper and markers) a sample experience of a pregnant woman at a neighborhood clinic (What does she see when she enters? Where does she go next? What might she encounter at the reception desk?), we identified places where triggers might arise unexpectedly. We pulled apart different intersections of autonomy and vulnerability. The design-inspired exercise sparked a discussion that opened up unexpected pathways for developing a better intervention. We probed, for instance, the assumption that a screening tool is the right solution at all. Perhaps a video resource would be better: one anchored in women’s stories that could be viewed in private, paired with training for health care providers on what not to assume.
As another example, we have recently worked with a group of academic computer scientists who are tackling the challenging issue of how to preserve privacy in the era of big data. Existing tools for protecting the privacy of individuals by de-identification or anonymization of data, it turns out, don’t survive recent developments in the ability to aggregate large databases. One solution is to keep the databases sequestered from one another; but that forgoes enormous public good that can come from harnessing data, including, for instance, tracking drivers of social inequalities in order to advocate for better public policies. The team of computer scientists had developed an exciting new algorithm that can preserve higher degrees of privacy by intentionally introducing selected noise into the system—a mathematical invention that won the Gödel Prize in computer science (Dwork et al., 2017). Of course, noise comes at a cost: it reduces the informational utility one can get out of the datasets. How then to set the “privacy loss parameter”—a variable that encodes decisions about how to balance the trade-offs between individual privacy and the utility of the datasets? Privacy is, at the best of times, a nuanced and contested concept, and here those philosophical complexities were matched by enormous technical ones, ranging from computational methodology to the discrete policies governing access to specific databases.
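To make the noise-versus-utility trade-off concrete, here is a minimal sketch in Python of the general noise-injection idea described above, using the Laplace mechanism from the differential privacy literature associated with Dwork and colleagues. It is illustrative only, not the team’s actual algorithm: the counting query, the counts, and the epsilon values are our assumptions, with epsilon playing the role of the privacy loss parameter.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a noisy, privacy-preserving estimate of a numeric query.

    epsilon is the privacy loss parameter: smaller epsilon means
    stronger privacy but more noise (less informational utility).
    sensitivity is the most any single individual's record can change
    the true answer (1.0 for a simple counting query).
    """
    scale = sensitivity / epsilon  # noise scale grows as epsilon shrinks
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical counting query: how many records satisfy some property.
true_count = 1203
for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)
    print(f"epsilon={epsilon}: noisy count = {noisy:.1f}")
```

Running the sketch makes the trade-off tangible: at a small epsilon the reported count can stray far from the truth, protecting individuals at the cost of informational utility, while at a large epsilon the count is nearly exact and the privacy protection correspondingly weak.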
Ethics Lab ran a workshop with the researchers to consider how the formula itself should be implemented in specific real-world contexts. We first went through a series of quick, time-limited exercises designed to elicit the team’s tacit understanding of the value of privacy versus informational utility, the types of considerations that the algorithm can protect (or not), and the kinds of informational risks different stakeholders—data subjects versus data stewards, for instance—might prioritize. Given that one important component of privacy theory indexes to “reasonable expectations,” we also had the group begin mapping the flow of information their specific project would involve and annotating it visually for insights about the expectations of privacy that various agents might have. The aim was to generate a new set of considerations the mathematicians should take into account as they try to set the privacy loss parameter, and to spot value affinities that might suggest new opportunities for making ethical and normative considerations a key component of applying the mathematical formula.
In all of our collaborations, our aim is not just to identify the moral guardrails to which a project should be attentive—i.e., to identify what not to do. Our aim is also to help ethics be part of the solution space. We work to identify places where existing philosophical distinctions or concepts are relevant and where issues challenge the limits of existing theory, and we collaborate to find pathways that can make a moral difference. We call this mode of working “translational philosophy.” In the next section, we explain why.