Embedding Ethics in Neural Engineering: An Integrated Transdisciplinary Collaboration

For the past six years, we have led the ethics “thrust” (i.e., group) in a National Science Foundation (NSF) funded engineering research center focused on neural engineering (the Center for Neurotechnology or CNT). In this chapter, we describe our experiences working collaboratively with an interdisciplinary team of neuroscientists, electrical engineers, neurosurgeons, and rehabilitation clinicians to explore the ethical implications of innovative neural engineering research and development. Our story provides a case study of integrating ethics in a scientific project and lessons learned in the process. We identify team attributes such as flexibility, perseverance, creativity, reflexivity, vigilance, and humility as significant features that contributed to the effectiveness of our interdisciplinary collaboration, and share challenges we experienced that are likely to be faced by many philosophers considering such “fieldwork,” regardless of the specific focus or arena of practical research.

Getting Started

In 2011, the NSF funded a grant to establish the Center for Sensorimotor Neural Engineering (CSNE) (the Center recently changed its name to the Center for Neurotechnology or CNT)—a multi-site engineering research center, based at the University of Washington (UW), Seattle, with partner institutions at the Massachusetts Institute of Technology (MIT) and San Diego State University (SDSU), as well as educational partners at Southwestern University, Spelman College, and Morehouse College. Its initial aim focused on combining robotics with neuroscience to develop brain-computer interfaces (BCI). Because the ultimate goal involved developing design principles for neural devices that could restore or augment human sensation and movement, the principal investigators recognized the potential significance of their work for broad philosophical questions related to what it is to be human and for ethical issues related to opening up new modes of access to and interventions on people’s brains (Denning et al. 2009). As a result, they contacted the Program on Ethics at the UW Department of Philosophy to look for potential collaborators.

Given this situation, our initial foray into this philosophical fieldwork project was relatively easy. We did not have to look for partners or initial funding; they came to us. Nonetheless, the specific content of the project, and the best method by which to pursue it, were completely undefined. The initial funding—a month’s summer salary for one philosophy faculty member, and $2000 summer stipends for four graduate students—involved a very short internal grant proposal that was intentionally exploratory. Our aim was to figure out how we might best do ethics work in conjunction with neural engineers.

During that first summer, the ethics group met weekly to discuss papers exploring ethical issues with existing neural technology (for example, deep brain stimulators) (e.g., Klaming and Haselager 2013; Kraemer 2013); papers on a variety of present and future neural interventions (e.g., Clausen 2008); and papers focused on different models of ethics engagement in scientific practice (e.g., Fisher et al. 2006; Cho et al. 2008). We also attended CNT events—colloquium talks, student research groups, etc.—to try to get a better sense of what the scientists and engineers affiliated with the CNT were working on. We faced a steep learning curve and spent a fair amount of time just trying to work out what brain-computer interfaces are, given disagreement among the scientific community (Nijboer et al. 2013), and to what uses they might be put.

By mid-summer, we realized that we would need more direct input from the neuroscience researchers if we were to have any chance of successfully integrating our ethics component with the ongoing work in neuroscience and engineering. We needed to know: (a) what the main aims of the affiliated labs and projects were; (b) what the principal investigators (PIs) saw as the most significant current and likely future ethical issues arising from their work; and (c) how they thought we could best work with them to explore those issues. With permission of the Center director, we set up an informal interview project, with our graduate students conducting hour-long interviews in person, where possible, or by video conferencing. We started by asking each PI to describe their work and then asked them to tell us about ethical issues they thought might be related to it. We had developed a list of ethical issues found in the neuroethics literature, and interviewers prompted PIs to consider these issues if they had not already come up in discussion. Finally, we asked PIs how they thought ethics engagement in the Center ought to work, offering a range of possibilities from an ethics consultation model (Cho et al. 2008) to a fully embedded humanities researcher in each lab (Fisher et al. 2006).

Although we were just feeling our way that first summer, in retrospect, what we developed was a bottom-up approach to understanding the needs of our Center. Starting with interviews with the PIs gave us important scientific grounding in the area and a good sense of the range of projects housed within the Center, but it also positioned us as potential collaborators on ethical issues, rather than as ethics “police.” By reaching out to researchers early on we demonstrated our commitment to understanding what they do, and our commitment to helping to shape technology development with them. In the interviews, we treated the PIs not just as experts in their own areas (e.g., electrode design, neurosurgery, computational neuroscience, bioengineering) but also as people well-positioned to help us recognize and think through potentially troubling ethical matters. Of course, we also brought our own expertise to the exchange. We raised issues related to human identity, privacy, responsibility, and security, inviting the PIs to explore with us how these fundamental human values intersect with the kinds of work undertaken in their labs and beyond. We emphasized our philosophical training, to make clear what we could offer in the collaboration. In so doing, we also proclaimed what we were not: people who would take over all the applications to the institutional review board (IRB) for projects using human subjects, or with the expertise to help navigate through regulatory processes of device approval (e.g., with the Food and Drug Administration).

In respect of modes of ethics engagement, some themes stood out from the interview transcripts. On the one hand, most of the PIs seemed to think that an ethics consultancy service would not be successful, given that the PIs might not always be able to identify the relevant ethical issues on their own or, even if they did, they might not be motivated to make use of the consultancy service until it was “too late.” The PIs wanted a more integrated approach to ethics. On the other hand, some of the PIs found the idea of having a humanities or social science researcher in their lab all the time (modeled on the Socio-Technical Integration Research, or STIR model; Fisher et al. 2006) a bit “creepy” and, in any case, unlikely to be fundable (given the number of labs and the cost of research assistants). What they preferred was something in the middle—an ethics group that would be more integrated with the daily work of labs than a mere consultancy service but also feasible and fundable. What, exactly, such a Goldilocks approach would look like was unclear.

 