The Ethical Context
Once thought to be the province of advanced scientific endeavors, robots have steadily moved to the forefront of technological applications. Across the world, robots are now regarded not only as a feasible alternative to or assistive device for humans, but also as increasingly present in many spheres of human life. Despite the varied representations of robots spread across media outlets, individuals seem to hold an overall positive perception of the development and use of robots. For example, in a survey conducted across European nations, most European Union citizens expressed positive views toward robots, with percentages ranging from 54% in Greece to 88% in Denmark and Sweden (European Commission 2012).
Despite this generally positive public perception, people feel less optimistic about robots in health care, specifically the care of the elderly, those with chronic health care needs, children, and those with disabilities. In fact, the European Commission (2012) found that European respondents strongly supported the use of robots in situations that would be too dangerous or too taxing for humans (e.g., manufacturing, military and security, and space exploration). However, “there is widespread agreement that robots should be banned in the care of children, the elderly or the disabled (60%) with large minorities also wanting a ban when it comes to other ‘human’ areas such as education (34%), healthcare (27%) and leisure (20%)” (p. 4). Similarly, in a national U.S. survey, the Pew Research Center (2014) found that 65% of Americans “think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health” (p. 3). Although the acceptance of technology is believed to be a largely age-dependent phenomenon, Americans across all age groups agreed on the negative implications of using robots in these areas (Pew Research Center 2014).
Although these large surveys did not explore the reasons behind this mistrust of robots in health care, such negative perceptions seem to extend to other concrete spheres of application, such as robotic-assisted surgery (RS). In a study conducted by Boys and colleagues (2016), 72% of respondents thought RS was safer, faster, more effective, and less painful than conventional surgery; yet 55% of respondents would still prefer conventional surgery over RS. The mistrust of robots in health care environments may be influenced by several factors. Perhaps two prominent and interrelated ones are how much people know about robots and their characteristics, and the ethical tensions that arise when robots are allowed into situations of particular importance and sensitivity for humans, such as health care practices. One example of these tensions appears most clearly in situations of “algorithm aversion.”
Algorithm aversion refers to the phenomenon by which humans prefer and choose a human over an algorithm, even when the algorithm is shown to consistently outperform human decision making (Dietvorst, Simmons, and Massey 2014). Several factors can give rise to this phenomenon, such as the perceived ethicality of relying on algorithms to make important decisions or the perceived inability of algorithms to learn from experience (Dawes 1979). Also significant for the implications of such aversion in health care robotics is the fact that errors that are tolerable when made by humans become intolerable when made by machines (Dietvorst, Simmons, and Massey 2014). Such errors erode public trust in robots and thus could undermine the uptake of robotic solutions in rehabilitation, such as the ones described in this book. However, as each chapter of this book has articulated, the benefits of applying robotics to rehabilitation are numerous. These important advantages for human health, together with the fact that researchers and developers are actively engaged in considering and negotiating the ethical implications of robotics, are evidence of the promising advances that are possible now and that will become possible within the next generations.
In the foreground, this section is concerned with the implications and potential of a code of values and norms that can guide the practical decision making of professionals in this field (i.e., professional ethics; Airaksinen 2003). In the background, this section is committed to the imperative reflection on “the constructed norms of internal consistency regarding what is right and what is wrong” or the ethics of robotics in health care (Martin, Bengtsson, and Droes 2010, 65). Although we acknowledge that there may be several approaches to an ethical analysis of a field like this one, this section takes a principles-based approach. We believe this approach lends itself particularly well to illuminating the current ethical considerations of this emerging and expanding field in a way that articulates both the current applications and the potential of these technologies. When considering the ethics of using robots in rehabilitation practice, at least three dimensions must be considered: the ethics of the person who develops the technology, the ethics of the rehabilitation professional when implementing robots in practice, and the ethical systems built into robots themselves (Asaro 2006). The sections that follow consider all three, sometimes focusing on one more than the others depending on the principle and its practical considerations in this field.