Lessons Learned

Reflecting on our experience as embedded ethicists thus far, we recognize several lessons that we have learned in the process of finding our way, identifying and making our contributions, and ensuring that we were “productively disruptive” in our approach (Fisher et al. 2015). Our aim was to develop our collaboration, while making it both productive and responsive to real needs (for the scientists, for end-users, for us, and for the broader neurotechnology community) and able to be constructively critical. We wanted to gain trust, but also to shake up norms of practice in ways that could foster new, ethically conscious research and design practices. Here, we set out several of the attributes of our engagement approach that we would recommend to other philosophers considering embedded or engaged philosophical work.

Flexibility

Unlike a typical philosophy career, which allows for slow, reflective, and deliberate work on a self-chosen research project, our experience with embedded ethics required speedy responses, significant adaptability, and a willingness to be flexible in the face of multiple stakeholders. ERC grants can run for a maximum of ten years (if funding is continued). In that amount of time, many features of the Center can change, whether due to site visit recommendations to de-fund or de-emphasize certain components, or to changes in faculty and staff. When we started, we had to put in significant effort to get up to speed on the basics of the science and engineering components of the Center, just to be able to understand the presentations and work done by our colleagues. That effort was usually rewarded, but as the Center’s focus shifted following a change in leadership (from robotics to engineered neural plasticity), we had to scramble to keep up.

For example, we ran an early focus group that looked at BCI-controlled exoskeletons, BCI-controlled prosthetics, and the possibility of reanimation of limbs through BCI control. Shortly afterwards, a site visit team encouraged the Center to eliminate the focus on the first two possibilities, given competing work at different institutions. A different site visit team pushed the leadership to better define the Center’s “product,” which led to a move toward “design principles for bidirectional BCI” as opposed to specific devices. In recent years, the focus has shifted to “engineered neural plasticity” as a goal, with different fundamental research groups defined and brought on board. Not all of these changes were drastic, of course, but responding to the pressures to shift in different directions, because of the recommendations of the funders, required flexibility and willingness to adapt on the fly. A project that uses the “old” Center language—e.g., a study looking at how BCI is depicted in the media, as a way to assess how prospective neural device users are likely to think about BCI when they consider entering a research study—might appear out of place within a year due to a shift from “BCI” to “neural devices” more generally (e.g., to capture the spinal stimulation work done by a PI working with human subjects).

We sometimes felt that we had identified key issues and initiated an ethics research project, only to find that the science and engineering grounding for the work had shifted. We had to be prepared to spend significant time gaining a reasonable understanding of a complicated science and technology arena, all while remaining nimble enough to shift directions when funders demanded restructuring. This could be frustrating, but it also helped to ensure that as philosophers, we were responsive to issues faced by others outside our field, and were therefore relevant, rather than set fully apart from other fields of inquiry. We value and appreciate the more abstract and less practical forms of philosophy that some of our colleagues practice, but we have also found value in this more engaged philosophical practice that is partnered with ongoing real-world affairs, with the aim of both understanding changing technological opportunities and working toward more just and ethically sensitive designs.

The need to measure and show results is a pervasive feature of science and another way in which we had to be flexible. The measurement imperative was felt acutely when we had to prepare status reports of funded projects or to present our work in posters or five-minute presentations. The issue here was not just formats less conducive to communicating concepts or theories, but the juxtaposition alongside other posters or presentations in which quantifiable data was the centerpiece. By not conforming to presentation of “data,” we risked our work not being understood and valued. So, we found ways to collect and present “data”—for example, quotes from end-user focus groups—but embedded within discussion of conceptual issues and frameworks. More broadly, we had to be flexible about how we viewed our own impact within the Center. We took the goal of our involvement as raising awareness about ethical issues and, where appropriate, motivating neural engineers or others to act. But, given the measurement imperative, it was difficult not to apply the same measurement impulse to our own work. How could we prove that our ethics efforts were “impactful”? What could be measured to show that progress had been made? Why support (or fund) what you cannot straightforwardly measure through quantitative representations? And while we generally resisted the desire to view our efforts (and their worth) purely or predominantly in terms of quantifiable measures, we did explore ways of quantifying ethical change (e.g., the SPECs project).
