Summary

Social robots offer another opportunity for examining what it means to do design with computation. An analysis of the design of social robots through the frame of agonism provides more examples of how the distinctive qualities of computational objects might be manipulated to evoke and explore political issues. In this case, the quality is embodiment, and the political issue concerns future human-robot relations: what will be the character of these relations, and what qualities of the social are privileged or veiled by design? As research and development initiatives forge ahead with visions of social robots for partnership, companionship, and therapy, it is important to pause and consider the base assumptions of those projects. The question of how to interact with robots or other intelligent artifacts and systems is anything but settled. The possibilities for sources, kinds, and effects of embodiment are anything but exhausted. As the examples in this chapter elaborate, an agonistic approach to the design of social robots looks to embodiment as a means of offering alternatives that keep the space of design and our expectations of social robots open and available to dispute.

Commonly, design is a means to demarcate and advance certain perspectives concerning what is desirable in our present and future. The therapy robot PARO is a case in point, as it materializes beliefs concerning the role of robots in society and the ways we might engage them. But just as design can set boundaries, it can also be used agonistically to disturb those boundaries. This disturbance occurs not by the erasure of lines of difference but by introducing productively disruptive tangents. The robots Blendie, Omo, and Amy and Klara are examples of such productively disruptive tangents. They highlight anxieties with technology, offer new modes of affective and intimate interaction, and bring to the fore assumptions concerning human expression and communication with animated artifacts.

On the surface, the political issues of social robot design appear quite different from the political issues described with regard to information design in chapter 2. Whereas the political issues related to information design were obvious and direct—issues of military funding and academic research, the price of oil, networks of corporate influence—with social robots, the political issues are far less apparent. This difference is valuable because it shows the range of agonistic endeavors: adversarial design is not limited to what we commonly consider political issues, and it is certainly not limited to ideological frames of left and right, conservative and liberal. In fact, revealing and articulating the contestable aspects of situations often perceived as nonpolitical is a central goal of agonism, because the political is a pervasive condition and the contention that characterizes agonism should occur continuously and everywhere.

Addressing assumptions and expressing alternatives begins to engage what Mouffe (2000a, 2005b) refers to as the contingency of the social order—that things could always be otherwise and that every order is predicated on exclusion. Moreover, it calls attention to what Honig (1993) terms the remainder—the thing that is excluded. In this case, the "things that could be otherwise" are the characteristics of relationships between people and robots. Through the design tactic of reconfiguring the remainder, what is excluded is brought to the fore and made materially and experientially available via the robot's embodiment. The analysis of Blendie, Omo, and Amy and Klara shows that each can be interpreted as a critical expression of the remainders of social robot design, specifically anxiety and the irrational.

Furthermore, these agonistic encounters with robots are also political in that they envisage the possibility of different we/they relations—another core task of agonism (Mouffe 2005b). Whereas in political theory the we/they distinction is usually construed as a distinction between socioeconomic classes or categories of people, here it is initially construed as the relation between people and robots. But these agonistic encounters with social robots do not settle that we/they relation. The familiar distinctions between the categories of human and robot are not defended or upheld. Instead, the designs of these robots trouble, even confound, the assumptions on which those distinctions are commonly made. This troubling of categories is a base activity of adversarial design because it exposes the remainder and thereby the political issues that inhabit and transect those categories. In the case of social robot design, the political issue is not whether but how humans and robots will interact. The agonistic endeavor is not to keep the categories of human and robot separate but rather to identify and explore how the qualities of those categories might intermingle and, through design, to probe the possible relations between people and intelligent artifacts.
