Reconfiguring the Remainder: Agonistic Encounters with Social Robots

Whether they are used for personal care or welding cars, robots epitomize complex engineered systems. They weave together software and hardware; interface, interaction, and industrial design; and mechanical engineering, electrical engineering, and computer science. They employ advanced technologies, take diverse forms, work in diverse contexts, and play a role in popular and scientific histories and trajectories. Robots are, therefore, another notable category of computational objects through which to explore what it means to do design with computation.

However, the technical answer to the question "What is a robot?" is a matter of considerable debate. In computer science and engineering, the answer has disciplinary significance and marks borders between conflicting approaches to operationalizing nontrivial subjects such as perception, affect, and cognition. Much of the debate over defining robots traces to differing notions of intelligence, which is considered a fundamental property of a robot. Scientists from a classical artificial intelligence (AI) perspective generally argue that a robot requires the capacity for symbol manipulation and the possession of a symbolic model or representation of the world.1 In contrast, those scientists who endorse what is sometimes referred to as "nouvelle AI" generally counter that intelligence is not synonymous with symbol manipulation and that a robot does not need a model of the world, as "the world is its own best model. . . . The trick is to sense it appropriately and often enough" (Brooks 1990, 6). At stake in this debate are the questions of what constitutes or counts as intelligence and how to construct a computational artifact or system that can be claimed to possess some form of intelligence.

But intelligence alone does not answer the question "What is a robot?" In addition to the attribute of intelligence, physicality is commonly considered to be a fundamental property of a robot. Consider that virtual on-screen characters are referred to as agents, not robots, even though they may display qualities of intelligence, and yet a physical artifact, such as a toaster or a vacuum cleaner, that displays even rudimentary qualities of intelligence is often referred to as a robot.

In a discussion of design, these qualities can be drawn together in a simple and direct manner to answer the question "What is a robot?" When computational intelligence and the physicality of an artifact are bound together, we have a thing that can be called a robot. This binding of computational intelligence and physicality is significant because it constitutes a kind of embodiment, which structures the possible relations between people and robots. This embodiment makes robots distinctive as a category of computational objects. But the embodiment of robots is not naturally occurring; it is designed. A robot's embodiment is a consequence of how "intelligence" and "the artifact" are purposefully brought together.

There are many types of robots, including industrial robots, military robots, medical robots, service robots, and social robots. Of these, social robots present a fascinating yet awkward set of issues for contemporary design and have novel political concerns attached to them. Social robots are distinguished from other classes of robots by their modes of interaction and their purposes. They are designed to engage in communicative exchanges with people; to serve human needs beyond those of labor or common notions of work; and to operate with individuals and small groups of people in homes, in health care facilities, or out in public. Most social robots exist as not-quite products: artifacts in a liminal state between academic and industrial research labs and the consumer market. But at consumer-good expositions where corporations exhibit their near-future wares, social robots are increasingly present. Designers are exploring and experimenting with new forms and modes of interaction for social robots. These explorations and experiments evoke political issues: the ways in which we design the character of our relationships with social robots reflect and reinforce beliefs about what it means to be social and set trajectories for how we might live together with computational artifacts in an increasingly intimate manner.

As an example, consider PARO, the baby seal therapy robot—one of the few social robots that exist as a commercially available product.2 PARO is encased in antibacterial fur and designed for physical interaction with humans. It responds to touch, sound, and the presence of others through changes in its body position and by making sounds similar to the animal it imitates. It perceives aspects of the environment and adjusts its behavior accordingly—for example, sleeping when the lights are off. It is also able to learn the preferences of its users over time and move and communicate in ways that will be most pleasing and beneficial to them. Stroking PARO functions as positive behavior reinforcement, and it will register the last actions it did before the stroking occurred and later repeat them. Likewise, striking PARO functions as negative behavior reinforcement, and it will register the last actions it did before the striking occurred and later will not repeat them. Under the fur of the robot are sensors that monitor light, sound, touch, and the position and movement of objects around the robot. Data from the sensors are registered and processed, and instructions are sent to a suite of motors to move accordingly and to play sounds from a speaker embedded below the fur surface of the robot. Sensors detect environmental factors as they change nearly continuously (people move, shadows fall, sounds increase and decrease in volume), and processing and actuation are repeated over and over, resulting in an exhibition of animation and interactivity.
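The description above amounts to a simple sense-process-actuate loop with touch-based reinforcement. The following Python sketch illustrates how such a loop could be organized; the action names, sensor values, and weighting scheme are illustrative assumptions and do not describe PARO's actual control software.

```python
import random

# Illustrative sketch of a sense-process-actuate loop with simple
# touch-based reinforcement, loosely modeled on the description of PARO.
# The actions, sensor fields, and weighting scheme are assumptions made
# for illustration only.

ACTIONS = ["turn_head", "blink", "wag_tail", "cry", "sleep"]

class BehaviorModel:
    def __init__(self):
        # Each action starts with an equal selection weight.
        self.weights = {action: 1.0 for action in ACTIONS}
        self.last_action = None

    def choose_action(self, sensors):
        # Environmental rule from the description: sleep when the lights are off.
        if sensors["light"] < 0.1:
            self.last_action = "sleep"
            return "sleep"
        actions, weights = zip(*self.weights.items())
        self.last_action = random.choices(actions, weights=weights)[0]
        return self.last_action

    def reinforce(self, touch):
        # Stroking acts as positive reinforcement: the action performed just
        # before it becomes more likely. Striking acts as negative
        # reinforcement: that action becomes less likely.
        if self.last_action is None:
            return
        if touch == "stroke":
            self.weights[self.last_action] *= 1.2
        elif touch == "strike":
            self.weights[self.last_action] *= 0.8

def read_sensors():
    # Stand-in for the light, sound, touch, and position sensors described
    # in the text; here we simply return simulated values.
    return {"light": random.random(),
            "touch": random.choice([None, "stroke", "strike"])}

def actuate(action):
    # Stand-in for driving motors and playing sounds through the speaker.
    print(f"performing: {action}")

if __name__ == "__main__":
    model = BehaviorModel()
    for _ in range(10):  # in the robot, sensing and actuation repeat continuously
        sensors = read_sensors()
        action = model.choose_action(sensors)
        actuate(action)
        if sensors["touch"]:
            model.reinforce(sensors["touch"])
```

Even this toy loop makes the design point concrete: what a user experiences as animacy and responsiveness is the product of deliberate choices about which behaviors exist, how they are weighted, and which sensor events count as reward or reprimand.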

The name PARO is derived from the phrase personal robot and immediately associates PARO with the category of social robots. In fact, PARO is described by its designers as a "mental commitment robot" and is designed to elicit emotional response and attachment from users.3 For example, one scenario of use for PARO is as a surrogate in animal therapy, functioning in a manner analogous to a service or companion animal by providing cognitive and emotional support. The underlying idea is that users will interact with PARO and develop a relationship with the robot similar to the kinds of relationships developed between people and animals.4

PARO is not just a trivial gadget or an obscure technological showpiece. Substantial funding and intellectual effort have been put toward the development of this robot. Interactions with it have been studied from multiple methodological perspectives to ascertain its psychological, physical, and social effects.5 The research teams at the National Institute of Advanced Industrial Science and Technology, who designed PARO, and others have produced scholarly publications related to the robot.6 It is also available for sale from PARO Robots, Inc., a corporation developed to move PARO from the lab into the market. By most common metrics, PARO is as real and legitimate as any other new technology product.

The design of PARO provides one set of answers to the question "What might be the character of our future relationships with robots?" The embodiment of PARO, carefully crafted through the design of form, materials, behavior, and expression, structures a particular set of possible relations between people and the robot. In its marketing literature, PARO Robots U.S., Inc. asserts precisely such a connection between the design of the robot and the kinds of relationships it is intended to induce (PARO Robots U.S., Inc. 2008):

Covered in pure white synthetic fur, the built-in intelligence provides psychological, physiological, and social effects through physical interaction with human beings. PARO not only imitates animal behavior, it also responds to light, sound, temperature, touch and posture, and over time develops its own character. As a result, it becomes a "living" cherished pet that provides relaxation, entertainment, and companionship to the owner.

Through such descriptions and the designs that accompany them, the association of terms such as social, living, and companionship with PARO concomitantly defines the robot and redefines these terms in regard to the robot. That is, by labeling the robot as social, we have certain expectations of it and of our interactions with it. At the same time, the label social also takes on new meaning when computational objects are considered as entities that people are, or might be, social with.

Reflecting on PARO as a designed thing prompts questions and issues concerning how robots should be used and what role design should play in shaping our experiences with these computational objects. But PARO is not an example of adversarial design. PARO is emblematic of what would be designed against and of what adversarial design works to question, challenge, or resist. Before proceeding to specific tactics of designing agonistic encounters with social robots, it is worthwhile to examine the political issues of social robot design in more detail.

 