The Political Issues of Social Robot Design
Science studies scholar Lucy Suchman (2006, 239) draws attention to a set of inherently political concerns with social robots when she states, "For me, however, the fear is less that robotic visions will be realized (though real money will be diverted from other investments), than that the discourses and imaginaries that inspire them will retrench received conceptions both of humanness and of desirable robot potentialities, rather than challenge and hold open the space of possibilities." This notion of retrenching is vivid and should be taken literally to express the ways that lines are drawn and positions defended about what counts as proper and preferred relations between people and robots. One way these lines are drawn is through design—through the making of robots that materialize and enact particular conceptions "of humanness and of desirable robot potentialities." Suchman is not arguing against robots but rather calling attention to the need to examine assumptions within robot design and consider alternatives. If we want robots as companions, what kinds of companionship do we want to engage in with them? What models of human companionship or sociability are we drawing from and designing into these machines, and are these really the models we want to emulate, or should others be considered and designed for?
The question for design is not whether to engage in social encounters with robots but rather how to engage in social encounters with robots. Will the design of these encounters reinforce reductive and staid notions of what it means to be human? Will the design cover over anxieties brought about by such animated technology? Or will the design agonistically engage these concerns and perhaps even suggest new experiences with robots?
Consider the baby seal robot PARO again. Throughout video demonstrations, marketing materials, and research papers, it is presented as an innovative and feasible technological solution to the problem of providing therapy to those in need.7 But using an animated intelligent machine for personal, mental comfort is not a casual, everyday scenario.8 This animated intelligent machine imitates an animal that would otherwise rarely come into contact with people. One might expect that the strangeness of the situation would confuse people presented with the proposition of interacting with PARO. But the design of PARO mitigates such responses. The seal-likeness of the robot is itself a caricature, more like a child's stuffed animal or toy than an actual creature. The design of the robot (as something cute and docile, with soft fur, wiggling motions, and purring sounds) and the user's interaction with it (a tactile affair in which users hold and stroke the robot as it sits in their laps) together moderate the unusual scenario of seeking solace from a machine. Through its design, PARO, which materializes and enacts one idea of human-robot relations, is made to appear pleasing, advantageous, and relatively free of issues.
The question of how people will relate to and interact with robots, however, is an issue. Surfacing this question and exploring the issues that underlie it can be agonistic endeavors in the sense of agonism as an activity of ongoing contest between ideas through which dominant perspectives and assumptions are revealed and critiqued (Mouffe 2000a, 2005b). The design of social robots can be interpreted as a political issue—and as an activity of design with political qualities—because through shaping encounters between people and robots, expectations and norms concerning those relations are established and reinforced. These expectations and norms have lasting effects. As Suchman (2006) notes, they influence research and product development trajectories, which are enforced by allocations of funding and acceptance in both academic and market settings. The design of social robots also shapes how we understand concepts like care, which may, in turn, affect how we develop other products and services. This issue is addressed at length by another science studies scholar, Sherry Turkle, who has worked extensively with the robot PARO and raises moral and ethical questions concerning social robots that reflect perspectives on what it means to be human and the nature of the human experience. In some cases, these questions have clear political qualities and implications, such as when Turkle asks, "Do plans to provide relational robots to children and the elders make us less likely to look for other solutions for their care?" (Turkle 2006, 2).
This is a different kind of political issue and expression from what has been discussed so far in this book. The political qualities of social robot design are not as immediately obvious as the political qualities of campaign finance or the price of oil. The political qualities of social robot design concern personal relations between ourselves and others and questions about how design shapes these relations. The implications of these issues and the consequences of design lie more in the future than the present, as social robots are still mostly a class of products in development. Addressing the political issues of social robot design is important because it demonstrates how design can be preemptive in its political provocations to engage issues further upstream in the research and development process. The question is, How can design do the work of agonism in the context of social robots?