In 2014, I joined with my former Notre Dame colleague, the computer engineer and roboticist Laurel Riek (now at the University of California San Diego), to co-author the first code of ethics for the Human-Robot Interaction (HRI) profession (Riek and Howard 2014). We first presented this paper at the WeRobot 2014 conference in Coral Gables, Florida. Never in my 40-year career has a paper that I have written generated such intense interest. Calls and emails from reporters poured in, and Riek and I were interviewed for a number of podcasts, radio broadcasts, blogs, and both print and online news and magazine articles (see, for example, CBC Radio 2014; Moon 2014; NBC News 2014). Ten months later we convened a day-long workshop on the same topic at the 2015 HRI annual conference in Portland, Oregon, held under the auspices of the Institute of Electrical and Electronics Engineers (IEEE), which Riek and I co-organized along with the mechanical engineer and roboticist AJung Moon (now CEO and Technology Analyst at Generation R Consulting) and the law professors Woodrow Hartzog (Northeastern University) and Ryan Calo (University of Washington). The workshop was a huge success, drawing an audience of 80, most of them robotics engineers (see Riek et al. 2015).
Weapons research and development
I have a long-running collaboration with Major General Robert Latiff (US Air Force, retired), who holds a Ph.D. in materials engineering and had a long career as a developer of weapons, surveillance, and command and control communications technologies for the Department of Defense and other government agencies. Our collaboration began with the development of a successful undergraduate course at Notre Dame on the “Ethics of Emerging Weapons Technology,” which draws mainly engineering students, and the collaboration now extends to a number of other projects. Most rewarding, perhaps, was our work together under contract with the National Academy of Sciences to build a set of teaching modules based on the 2014 Defense Advanced Research Projects Agency (DARPA)-funded, National Research Council and National Academy of Engineering report Emerging and Readily Available Technologies and National Security: A Framework for Addressing Ethical, Legal, and Societal Issues (Chameau et al. 2014). These modules are designed to be used as part of in-service training in weapons research and development labs and have been constructed in such a way that they can be led by engineers themselves.
I also co-organized with General Latiff a 2014 conference at Notre Dame, “Ahead of The Curve: Anticipating Ethical, Legal, and Societal Issues Posed by Emerging Weapons Technologies,” designed to showcase that report. Among the featured speakers was the then Deputy Director (now Director) of DARPA, Dr. Steven Walker. It was DARPA that had initiated and funded the study that led to the report, because the agency recognized the need for government agencies and private corporations involved in weapons research and development to integrate into their work a more systematic and sophisticated engagement with ethical considerations (see Howard 2014).
General Latiff’s personal story as an engineer and weapons developer is a compelling one. A few years after his retirement he found himself in conversation with other retired military officers and intelligence officials who shared his concern about the lack of sufficient ethical input in weapons and intelligence research and development. Eventually he approached his alma mater, thinking that if any major institution would care about the problem it would be Notre Dame. Latiff was referred to the Reilly Center, and that, as Rick says to Captain Renault in Casablanca, was the beginning of a beautiful friendship. The sincerity and intensity of Latiff’s worries about the ethics of new weapons technologies are well expressed in his recent book, Future War (Latiff 2017), where he articulates a number of specific concerns, such as his unhappiness with the rush to develop and deploy ever more autonomous weapons systems.
Automotive Engineering, AI, and Self-Driving Vehicles
In August 2017 I was an invited presenter at a National Science Foundation (NSF)-funded, one-day workshop at Stanford University on “Collaborative Research with Ethical, Legal and Social Implications.” The workshop was co-organized by the philosophers Shannon Vallor, from Santa Clara University, and Daniel Hicks (my former Ph.D. student), currently at UC Merced, and the Stanford mechanical and automotive engineer Christian Gerdes. The focus was on collaboration between engineers, lawyers, and technology ethicists in addressing ethical challenges with self-driving vehicles.1 My understanding is that Hicks and Gerdes first conceived the idea for this workshop when Hicks was an American Association for the Advancement of Science (AAAS) Science Policy Fellow at the National Science Foundation working on ethics and self-driving vehicles, and Gerdes was serving as the Chief Innovation Officer at the US Department of Transportation. While the workshop yielded no published record of the talks and discussions, it was nonetheless a richly rewarding experience for those who participated; it made clear that the prospects for further such collaboration between philosophers and engineers were bright, as evidenced by some of the work being done at Stanford on integrating ethics explicitly into autonomous vehicle control systems (see, for example, Thornton et al. 2017).
The above is but a sample of my collaborations with engineers. Each has been professionally rewarding and personally satisfying. It is always stimulating to have an excuse to study technical literature that is new to one. All of the above-mentioned colleagues and collaborators have welcomed my input, seeing such work with a philosopher as enhancing their own research and teaching. The relationships that we have built are based on mutual respect, shared interests, and a common commitment to making the world a better place. Along the way, I have learned some important lessons about how philosophers can collaborate with engineers.