An Uncanny Affective Companion

As computational systems have increased in sophistication and expanded in use, the fields of interaction design, human-computer interaction, and human-robot interaction have gained ground as important endeavors. These fields focus on the study and shaping of interactions with computational systems, often to understand patterns of use and meaning-making in order to design systems that are useful, usable, and desirable. In mainstream human-robot interaction research and design, robots such as Blendie can be interpreted as provocations that question the base assumptions of such efforts. Recalling Suchman's (2006) concern about the "retrenching" of desires and imaginaries, such interventions are important because they challenge assumptions about modes of interaction with robots and thereby keep open the space of design possibilities, providing alternative themes for design.

The goal of productivity in the design and use of computational systems is an example of how themes in interaction design, human-computer interaction, and human-robot interaction develop, are materialized in systems, and are challenged and evolve. Much of the practice in these fields is geared toward improving the capacity of systems to enable people and industry to "get work done." Early in the history of the field of human-computer interaction, from the 1980s to the mid-1990s, the emphasis on productivity led to equating interaction with usability and usefulness with convenience and expediency. Productivity reigned as the predominant purpose of interaction design and human-computer interaction, against which research and practice were judged. Since the late 1990s, the singular importance of productivity has been steadily questioned through a stream of research projects and publications advancing alternative themes to drive the design and use of computational systems.16 As a result of such efforts, today, ludic, reflective, and pleasurable are common qualifiers and themes against which many systems are judged.17 This does not mean that the issue of productivity is resolved or that the contest over it is complete. From an agonistic perspective, the contest is never ended: one "needs to be always switching positions, because once any given position sediments, it produces remainders" (Honig 1993, 200). As soon as an issue appears to be settled, subsequent issues or positions emerge and need to be addressed. As the challenge to productivity has proceeded, it has developed connections with other themes. These provide further issues to be addressed and further sites of agonistic intervention: chief among them, with regard to social robots, is affect.

The theme of the robot as companion or partner (the terms are often used interchangeably in robotics discourse) is popular in robotics research and product development. Partner robots offer the potential of becoming a significant consumer market, and the robot as companion speaks to popular culture's connotations of robots, making them attractive for marketing and public relations purposes. PARO is one example of such a robot. Another example is the NEC PaPeRo. The name PaPeRo is derived from partner-type personal robot, and the robot was designed to be a research platform for personal robots for use in the home. Unlike many academic research robots, PaPeRo looks like a finished consumer product. It is made of plastic, brightly colored, and designed to look cute, with gentle bulbous curves and large eyes. Over the course of its development, several roles have been identified for the PaPeRo. One of these roles is as a so-called childcare robot that functions as a companion to children and a partner to parents in the activity of parenting.18

With the exception of PARO, few such robots are readily available as products for either individual or institutional purchase. Most companion and partner robots are currently in the research and development phase. Regardless of whether a specific robot such as PaPeRo is used in the near future, its development and suggested use are evidence of a particular vision of the world in which robots and people work intimately together in their everyday lives. Along with this intimacy comes a series of expectations and standards of interaction. The design of these robots advances and materializes that vision for ongoing research and product development. Even though robots such as PaPeRo are not in mass commercial production, they still frame what can be expected of future robot products.

In the early history of computational systems design, there was little consideration of affect, which did not fit neatly with imperatives of productivity. This perspective has shifted in recent years as discourses of pleasure and play have gained prominence. In the design of companion robots, affect often takes on particular importance because it is assumed that for a robot to be an effective companion, it must take emotion into account. The phrase affective computing was coined by Rosalind Picard, a research scientist at the Massachusetts Institute of Technology, and has become something of a catchphrase for projects ranging from emotion models for machines to sensors that read the emotions of people. As Picard (2005, 3) describes her approach to affective computing:

Affective computing includes implementing emotions, and therefore can aid the development and testing of new and old emotion theories. However, affective computing also includes many other things, such as giving a computer the ability to recognize and express emotions, developing its ability to respond intelligently to human emotions, and enabling it to regulate and utilize its emotions.

Affective computing is subject to philosophical debates similar to those concerning the nature of human and machine intelligence. These definitional debates shape how certain qualities are replicated or simulated in artificial entities. Picard attempts to sidestep this issue by emphasizing the pragmatic ends of affect rather than the ontological status of emotion. In doing so, she reveals a common position in robotics that casts emotion as a necessity for achieving rationality and as a means of improving the productivity of computational artifacts and systems. As Picard (2000, 280-281) states,

The inability of today's computers to recognize, express, and have emotions severely limits their ability to act intelligently and interact naturally with us . . . because emotional computing tends to connote computers with an undesirable reduction in rationality, we prefer the term affective computing to denote computing that relates to, arises from, or deliberately influences emotions. Affective still means emotional, but may, perhaps usefully, be confused with effective.

Affect, in the context of robots and specifically social robots, is thus treated as a way of regulating a robot's behavior, as a quality of a machine's expression toward people, and as a way of regulating a person's encounter with a machine. The general idea is that affect, in the form of emotional models, improves the decision-making capabilities of robots and, in the form of expressive gestures, persuasively shapes desired interactions between people and robots.

From an agonistic perspective, this conceptualization of affect invites examination. What, in such conceptualizations of affect, is being left out? What is the remainder of affect in the design of social robots? What alternative experiences of companionship and affect might be advanced through design?

Omo was created by Dobson in 2007 and, like Blendie, is a robot with a novel form of embodiment. Omo detects and responds to the breathing patterns of others and can deform its own structure to express breathing-like motions. Omo operates by monitoring the breathing patterns of those holding the robot through the use of pressure sensors. At times, Omo matches the breathing patterns of its companion; at other times, it offers a new pattern for its companion to match, guiding that person through a series of controlled breathing exercises.
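Dobson's published descriptions do not include Omo's control software, but the sensing-and-response loop described above can be illustrated with a minimal, hypothetical sketch: the robot either mirrors a breathing rhythm read from a pressure sensor or leads with a slower rhythm of its own. Every name, constant, and the simulated sensor below are assumptions for illustration, not Dobson's implementation.

```python
# Hypothetical sketch of an Omo-style breathing loop: mirror the sensed
# rhythm for a while, then lead with a slower rhythm for the person to match.
import math
import random

BREATH_CYCLE_S = 4.0   # assumed "calm" breathing period (seconds) the robot leads with
SAMPLE_DT = 0.1        # assumed sensor sampling interval (seconds)


def read_pressure(t: float) -> float:
    """Stand-in for the chest-pressure sensor: a noisy three-second breath cycle."""
    return math.sin(2 * math.pi * t / 3.0) + random.uniform(-0.05, 0.05)


def robot_inflation(t: float, sensed: float, leading: bool) -> float:
    """Target inflation of the robot's body, normalized to roughly [-1, 1]."""
    if leading:
        # Offer a new, slower rhythm for the companion to match.
        return math.sin(2 * math.pi * t / BREATH_CYCLE_S)
    # Otherwise mirror the companion's sensed breathing.
    return sensed


def run(duration_s: float = 12.0, phase_s: float = 6.0) -> None:
    """Alternate every phase_s seconds between mirroring and leading."""
    leading = False
    steps_per_phase = int(phase_s / SAMPLE_DT)
    for step in range(int(duration_s / SAMPLE_DT)):
        t = step * SAMPLE_DT
        if step > 0 and step % steps_per_phase == 0:
            leading = not leading  # sometimes follow, sometimes lead
        sensed = read_pressure(t)
        target = robot_inflation(t, sensed, leading)
        print(f"t={t:4.1f}s  sensed={sensed:+.2f}  inflate_to={target:+.2f}  "
              f"mode={'lead' if leading else 'mirror'}")


if __name__ == "__main__":
    run()
```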

The contrast between Omo and other social robots draws into relief some of the developing assumptions concerning the design of robots, particularly the design of robots as companions. Foremost is Omo's form: egglike, glowing, and rubbery (figure 3.3). In contrast to the common design approach to such robots, it is not cute. Its appearance does not mimic a domesticated pet, it is not fuzzy, and it does not have baby-face features, such as large eyes. In comparison to PARO, Omo appears alien. Even next to robots such as the NEC PaPeRo, which also has a squat, rotund form, Omo is distinctive in its lack of anthropomorphic or zoomorphic features: it has no eyes and no mouth. By Dobson's (2008) own account, Omo is designed to be more like an organ than an animal or a person.19

Figure 3.3

Kelly Dobson, Omo (2007b)

Furthermore, the use of breathing as the basis of embodiment (that is, as the source of sensed input and output) enhances the uncanny status of the robot. When robots are given lifelike qualities, it is usually through the common notion of form as shape and volume or through the presence of specific anthropomorphic or zoomorphic features. For example, PARO appears lifelike because its appearance imitates an animal, and one instinctively anthropomorphizes PaPeRo due to the outline and position of the eyes in its head. To ground the robot's embodiment in the activity of breathing, an activity distinctive to living entities and only to living entities, is an uncommon design decision, and it results in an uncanny experience of the robot. Omo's uncanniness, however, does not create distance from the computational object. It draws people closer, into what is potentially a much more powerful affective relationship, as it calls forth a psychologically complex form of communication and exchange through the experience of shared breathing. As Dobson (2008) describes the interaction with Omo, "as you are holding it, you will slowly change with it, much like as you hold another person you start breathing together." Even compared with the tactile interaction of PARO, breathing together with Omo appears strikingly intimate. Furthermore, whereas with PARO the design attempts to mitigate any strangeness of the encounter, with Omo this strangeness remains; it is indeed the essential quality of the encounter.

Transparency and consistency are commonly lauded as principles of design that make products accessible by way of their predictability. The design of Omo seems to run counter to these principles: Omo is not regular in its behavior and interactions with others. The robot responds in one way at times and another way at other times. At times, it mimics the breathing patterns of its human companion, and at other times, it makes abrupt and startling changes in its own breathlike movements. As a social robot, Omo presents a kind of companionship and affect different from that of PARO or PaPeRo. Its companionship is not altogether subservient, and it is designed to include irrationality as a desirable feature, not as the flaw that Picard makes it out to be in her construction of affective computing. The remainder here is the irrationality of affective experience, and the design of Omo brings this often excluded quality of affect to the fore.

 