We also needed to subject our own activities, assumptions, and positioning to self-critique. Consider the tension between philosophical scholarship and service. Are the labs we chose to embed students in those that are the most interesting from a philosophical point of view, and perhaps make for the best publication opportunities, or are they the ones that are most important for the Center? For instance, issues surrounding reading intentions from neural devices may be of great philosophical interest, but end-user concerns about safe neurosurgical procedures may have greater real-world impact. Given limited resources, where should our efforts be put? We learned to respond to opportunities made available, but always with thoughtful attention to our own aims which were, and are, mixed between scholarship and service. We were able to increase our capacity over time, at least in part because the NSF site visitors were enamored with the neuroethics group (in at least one site visit report we were described as the “gem” of the Center), but even in this context, our power was relatively limited. Because we had no independent funding stream, we could only work in ways that ultimately served the goals of the Center and the funding agency. We tried to learn from other experiments in engaged ethics—for instance, the “human practices” thrust of an ERC devoted to synthetic biology (Rabinow and Bennett 2012)—to find the right balance between doing the scholarship we most valued and serving the aims of the broader technoscience project.
One constant in our efforts has been the highlighting of disability perspectives, and the importance of understanding them in designing devices intended to benefit people with disabilities. In the early years, we held a focus group with individuals with spinal cord injury (SCI), and asked them about their attitudes and concerns regarding various BCI technologies under development. One of the things we heard was that restoring walking was too often understood to be the “holy grail” of the technology by (non-disabled) technologists, while people with SCI were more interested in bowel and bladder control, sexual function, and/or sensory restoration as priorities. Additionally, one of our participants noted that low-tech assistive devices might work much more effectively and efficiently than some of the BCI devices under development (at least in the near term), and that perhaps we ought to use the money funding the BCI research to improve access to or reliability of such low-tech options. Given these comments, and our efforts to highlight them to help influence the Center’s direction, we sometimes felt tensions between what we valued and what the Center was funded to develop. Maintaining a self-critical stance allowed us to evaluate our commitments and to consider alternatives (e.g., What would the Center look like if we were not regularly raising disability rights focused concerns?), as well as to push for the early inclusion of end-user values in the design process. Because our “home” funding was not dependent on our work in the Center (i.e., a faculty position in philosophy in respect of one of the authors, a practicing neurology position in respect of the other, and teaching assistant opportunities for philosophy graduate students), we felt secure enough to be able to speak openly about our commitments and concerns.
Sufficient independence has been raised as a potential problem in ethical, legal, and social issues (ELSI) research (Klein 2010). Appropriate reflexivity may require some level of financial independence.
Vigilance was needed on several fronts. One was reputational. The mere presence of an ethics group can give the “imprimatur of morality” to socially controversial science (Cho et al. 2008). So, we needed to be careful that we were not being used as an explicit or implicit form of public relations. A second front on which vigilance was exercised concerned the role of the ethics group, specifically its relationship to regulation and compliance. Even though, at the outset, we made it clear that we would not be doing FDA compliance work, the issue was revisited frequently. And when we did engage in this activity (e.g., helping to write consent forms), it was selective and tied to specific collaborations between neural engineers and embedded ethicists (e.g., RA or postdoctoral). But the pressures to backslide came from other directions as well, such as NSF site visitors repeatedly asking us to take on the role of reducing FDA barriers to device approval.
Openness to Learning/Humility
The NSF site visitors regularly asked for advice on how to share what worked in our ethics thrust with other neural engineering centers and ERCs. One of the questions this led us to consider is what allows for good will among collaborators from such different disciplinary backgrounds, and the capacity to work together rather than at cross-purposes. While the NSF ERC structure, which requires cross-disciplinary teamwork that spans fundamental research areas, helps to scaffold the collaboration, in our view a key feature of successful partnership is openness to learning and humility. Although philosophers brought in to collaborate on a scientific project have expertise to offer—familiarity with a history of philosophical conversations on ethics, facility with the language of morality, recognition of significant features of the moral world and key distinctions within that realm, awareness of common errors in logic—that expertise ought to be deployed in ways that also acknowledge what is not known. The context of a scientific project looking for answers and progress can create significant pressure to present one’s philosophical thinking overconfidently. The ambiguities of philosophy may get inadvertently smoothed over in an effort to demonstrate expertise and play one’s part.
In addition, philosophers are sometimes prone to overconfidence in their own ideas, despite their disciplinary commitments to seeking “truth” (or something close to it). For instance, in a paper on how argumentation can help to cultivate intellectual humility, Kidd (2016, 401) suggests that a “fact that complicates easy claims about the humbling potential of argumentation is an observation about the conduct of many philosophers ... who are highly trained and experienced in argumentation ... but who, nonetheless, evince chronic over-confidence.” In this unfortunate reality—given continuing norms of “aggressively adversarial modes of intellectual engagement” (Kidd 2016, 401) within the discipline—field philosophers who aim to be successful will need to check their argumentative styles and confidence levels, and acknowledge and develop sensitivity to what they may not understand.