Robotic Social Cues

Early robots were simple. They could only perform single tasks, but with amazing accuracy and repeatability. Almost immediately, robotic components found their way into industrial and assembly jobs, performing tasks that humans found dull, dirty, dangerous, or difficult (known as the Four D’s, a set of application criteria for considering robotic solutions). As sensors evolved, robots began to extend our capabilities — seeing better than the human eye, sniffing for traces of materials we can’t smell, tasting and touching materials in environments where we can’t go. Simultaneously, processors were getting smaller and faster, so when tied to sensors, machines began applying fuzzy logic to situations, making decisions about what they should do next: decisions such as “move this and move me.”

But automated systems can only be so smart. As robots enter more varied aspects of our lives, robot designers are realizing that people need clues about the way robots interpret their environments so that they can understand the robots’ intentions, capabilities, and limitations.

Designing a robotic tool is less straightforward than one might think. Baxter, the robot created by Rethink Robotics, is an assembly-line worker designed to work safely alongside human workers. Baxter is taught by his coworkers, so he learns what they want and expect. Although Baxter’s sensors can help him avoid or minimize accidental human contact, those sensors can’t always anticipate what his human coworkers might do. Baxter needed to help people stay out of his way, and therefore out of harm’s way, to aid customer adoption and increase coworker acceptance. Rethink Robotics gave Baxter a moving head and stylized human eyes for that very purpose. Baxter looks to his next target before he begins to move, giving coworkers a more human and humane clue about his intentions and a friendly warning, through body language, to get out of the way.
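
As a rough illustration of this kind of “look before you move” behavior, the sketch below shows one way it might be expressed in software. This is not Rethink Robotics’ actual control code; the robot interface, method names, and dwell time are all hypothetical.

```python
import time

# Hypothetical "look before you move" routine: orient the head and eyes
# toward the next target, pause long enough for nearby workers to notice,
# and only then begin the arm motion. The dwell time is an assumption,
# not a Rethink Robotics specification.
GAZE_WARNING_SECONDS = 1.0

def move_with_gaze_cue(robot, target_pose):
    # 1. Signal intent: turn the head and eyes toward the target first.
    robot.look_at(target_pose)
    time.sleep(GAZE_WARNING_SECONDS)  # give coworkers time to react

    # 2. Only then start the actual motion toward the target.
    robot.move_arm_to(target_pose)
```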

RPVita, on the other hand, navigates its world autonomously. A remote doctor logs in to the robot and directs it to find a particular patient. The robot accesses the hospital’s information systems, learns where the patient is, and goes to the patient on its own. This type of capability introduces new design problems that can be addressed by designed behaviors.

For instance, humans give clues (sometimes unknowingly) about their state of mind as they navigate a hospital hallway or any other setting. You know when you shouldn’t interrupt. You know when someone is rushing to an emergency. You generally know which way someone is about to go as they enter a hallway intersection. Robots typically don’t give people such clues because they haven’t been designed for social situations. By using sensors and processing the information they provide, RPVita is aware of its physical environment, not only avoiding stationary objects but also anticipating which way a rushing nurse might be headed and taking steps to avoid her. After all, technology stops adding value when it gets in our way. Here again, robots can be designed to give clues about their intentions so that we know how to get along with them. RPVita uses turn signals to indicate basic directional intent. It also has a color-coding system that indicates whether the robot is urgently rushing to a site on a doctor’s behalf, is available only for quick chats, or is available for lengthier conversations. These two intention-signaling systems replace or reinforce a third system — the doctor’s face on the display — to give people the appropriate set of social clues for interacting with RPVita.
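
A minimal sketch of how such a two-part signal (direction plus availability) might be modeled in software follows. The state names, colors, and robot methods are invented for illustration; the actual RPVita color scheme and API are not documented here.

```python
from enum import Enum

# Hypothetical availability states and signal colors; the real RPVita
# scheme may differ.
class Availability(Enum):
    URGENT = "red"         # rushing to a site on a doctor's behalf
    QUICK_CHAT = "yellow"  # only brief interruptions are welcome
    AVAILABLE = "green"    # open to longer conversations

def signal_intent(robot, heading_degrees, availability):
    """Combine the two signaling systems: direction and availability."""
    robot.set_turn_signal(heading_degrees)      # basic directional intent
    robot.set_status_light(availability.value)  # color-coded availability
```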

Designing ways for humans to read robot intentions is a challenge. Early work indicates that people read signals that imitate human behavior best, because those are the signals we’ve evolved to detect. Yet the more human-like robot behaviors become, the more we expect from them. If robots misinterpret us, or if we misinterpret them, a lot can go wrong. Communication errors in healthcare settings can be life threatening. How do we design for the highest level of human control in these situations? When do we want to relinquish control and avoid certain obligations? And finally, who becomes responsible for the robot’s actions?

 
 