Into the Future

The technological singularity, as theorized by Kurzweil and Vinge, will occur when artificial intelligence has progressed to the point of greater-than-human intelligence. Depending upon how “intelligence” is defined, this has already taken place in some arenas. Once we can super-stimulate the brains of partially robotic people, it will be difficult to predict how human experience will change.

In the near future, UX designers (working as scientists, engineers, and UX professionals) will make neuroprosthetic assistive devices smarter, easier to use, and more available. Designers will give users new capabilities. What these users do with new capabilities will be partially governed by the rules we create for acceptable and unacceptable behaviors. How are we going to get ready to make that level of decision for an individual? For a population?

UX designers from any of the backgrounds we’ve identified can take some comfort in the fact that they will have time to think through the big moral and societal questions posed in this chapter. There will be many situations in which technology selection or cost targets limit a design’s potential. But, sooner than we probably expect, designers will need to address tough questions about access, control, and capability, as well as moral issues. Lacking discipline-wide standards by which to work, designers must look to adjacent fields for inspiration and guidance.

On a tactical level, human-computer interaction (HCI) offers heuristic principles, as originally described by Jakob Nielsen in the 1990s, that can be extrapolated to human-robot interaction (HRI). In this update of Nielsen’s heuristics, we replace “system” with “robot,” and “user” with “person.”

Visibility of robot status

Keep people informed of the robot’s status, intentions, confidence, time to chat, and so on.

Match between robot and the real world

Use human terms and behaviors rather than “robot” terms and behaviors.

User control and freedom, error prevention

Provide an emergency exit, or a way of correcting the robot when needed (“undo” and “redo”).

Consistency and standards

Follow real-world conventions rather than imposing new platforms or constructs that are unfamiliar.

Recognition rather than recall

People should not have to remember information from one part of the dialogue to another.

Flexibility and efficiency of use

Make novice and expert modes of use equally available.

Aesthetic and minimalist design

Be concise, to avoid clouding relevant information with irrelevant or rarely needed information.

Help and documentation

Make reference materials easily available when needed, germane to the scenario of use or interaction.


Skills

Support and enhance the person’s skill and knowledge rather than replacing them (except in cases when replacement is an objective of the robot’s design).

Pleasurable and respectful interaction

Interactions with the robot should enhance the quality of the person’s work/life, treat the person with respect, and offer artistic and functional value.

Although robots were initially valued for their ability to accomplish dull, dirty, dangerous, or difficult jobs, they have progressed to higher-order jobs. Robots should be able to respond to all levels of Maslow’s hierarchy of needs:

Physiological needs

  • Value: doing dirty jobs we don’t want to do ourselves

Safety and security

  • Value: doing dangerous jobs

Love and belonging

  • Value: unconditionally positive dialogue and attention span
  • Expectation: social robots, providing meaningful and personal interaction

Esteem

  • Value: helping people build awareness of capability
  • Expectation: anticipating human needs and addressing them in a mature, constructive way

Self-actualization

  • Value: performing more sophisticated procedures on behalf of the doctor
  • Value: assisting the patient in making a strong recovery and getting their life back on track
  • Expectation: robot becoming self-aware enough to project its own degree of confidence in decision-making
  • Expectation: robot assessing its own role and gravity in the human-robot relationship; is it trusted, and why or why not
  • Expectation: Singularity principle: best-of-robotics and best-of-humanity working together in as yet unimagined ways

We can also reference Aristotle’s thinking on persuasion, appealing to our desire for credible, logical, and emotional interactions. If we believe robots can effectively communicate and facilitate learning, robots should be designed with a combination of these very human attributes to create personality and increase their effectiveness.

Credibility (ethos)

People have credibility based on their pedigree and past performance. Robots will need to be designed to provide ways of expressing their credentials, verbally and nonverbally. They will need to anticipate questions such as “Just how smart are you?” and “Why should I trust you?”

Logic (logos)

As robots become more autonomous, the sources they reference for their actions and recommendations might be questioned. Designers will need to consider ways of reflecting the logic tree, thought patterns, and source materials that lead to the robot’s actions or recommendations. How designers decide to interpret that material for their human audience is a design question in and of itself. Building information and communication methods that are logical, credible, and understandable by the user is a critical design task.

Emotion (pathos)

Emotional appeal is a catalyst, if not a requirement, to the adoption of robots in general. Designers must provide the right stimulus to trigger desired human responses.

Designers must shape the language, behavior, tone of voice, and every other aesthetic element that can be interpreted. Aspirational qualities of the design (such as what my robot says about me and my work) also play a significant role in adoption. Designers must ask themselves, “How does the user or organization want to be perceived as robots take on roles that humans used to fill?”

While asking and answering questions like those proposed above, wise designers won’t simply pursue the ideal robot-to-human interaction; they’ll think more systematically to define which human-to-human relationships their robot can facilitate. Imagine healthcare experiences in which the primary role of doctors, nurses, and administrators is your cognitive, emotional, and spiritual well-being. Robots designed to take care of routine and even complex tasks can clear the way for people to connect more deeply. They can monitor and interpret information in the environment and deliver that data to the human caregivers so that they can treat you more empathically. Robots will assist caregivers in their human-to-human care interactions; that is, if designers decide to address this opportunity. Robot-enhanced human empathy, connection, motivation, belonging, love — why shouldn’t this be our design goal?
