The Interaction Model
For the most part, users’ interaction with wearables is based on microinteractions. Microinteractions are defined as contained product moments that revolve around a single use case — they have one main task. Every time a user answers a call, changes a setting, replies to a message, posts a comment, or checks the weather, she engages with a microinteraction.
Microinteractions can take three forms of operation:
- Manual: initiated by the user. For example, double-tapping the Fitbit bracelet to get goal-completion status.
- Semi-automatic: the system alerts the user to take an action. For example, Lumo Back vibrates when the wearer is slouching, signaling him to straighten up. These alerts can be triggered by manual user configuration or by contextual signals (for instance, location, proximity, the presence of other people or devices, physiological metrics, and more).
- Automatic: performed by the system, as when the Nike+ FuelBand synchronizes activity data to Nike+ automatically.
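The three forms of operation above can be sketched as a small dispatch over trigger modes. This is a minimal illustration, not a real device API: the event names, mode labels, and responses are hypothetical, chosen to mirror the examples in the text (the Fitbit double-tap, the Lumo Back slouch alert, and the FuelBand background sync).

```python
from enum import Enum, auto

class TriggerMode(Enum):
    """Who initiates a microinteraction (labels follow the taxonomy above)."""
    MANUAL = auto()          # user-initiated, e.g. a double-tap for goal status
    SEMI_AUTOMATIC = auto()  # system alerts; the user decides whether to act
    AUTOMATIC = auto()       # system acts on its own, e.g. background sync

def handle_event(event: str) -> tuple:
    """Map an incoming event to its trigger mode and a device response.

    The event strings and responses are illustrative placeholders,
    not part of any actual wearable SDK.
    """
    if event == "double_tap":
        return (TriggerMode.MANUAL, "show goal-completion status")
    if event in ("incoming_call", "slouch_detected"):
        return (TriggerMode.SEMI_AUTOMATIC, "vibrate and wait for the user")
    if event == "sync_interval_elapsed":
        return (TriggerMode.AUTOMATIC, "upload activity data silently")
    # Default: surface anything unrecognized to the user rather than act on it
    return (TriggerMode.SEMI_AUTOMATIC, "notify user")
```

The point of the sketch is the split itself: only the semi-automatic branch leaves the decision with the user, which is why, as discussed next, it dominates the interaction share.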
When it comes to wearables, all three forms of operation come into play, though in different degrees depending on the wearable’s role and the context of use. Trackers, for example, rely heavily on system automation to synchronize the data they collect. Many of them also incorporate semi-automatic operation by displaying notifications to users (for example, achieving the daily goal or running out of battery). Messengers work almost solely in semi-automatic mode, alerting the user whenever an event takes place on the smartphone (for example, an incoming call or message) and leaving it to the user whether to act. Facilitators and enhancers, which support richer interactions (and usually offer a richer feature set), incorporate all three.
Still, the largest share of user interaction is generated semi-automatically, as a result of interruptions triggered by the wearable device itself or relayed on behalf of the smartphone. This semi-automatic dominance shouldn’t come as a surprise, though. First, wearables are meant to be unobtrusive, and mostly “sit there” (hopefully looking pretty), keeping out of the way when not needed. Second, most wearables focus on delivering information to users, with minimal input, if any. Third, given the wearable’s constraints in display size and, often, interaction as well, the engagement pattern is mostly quick, focused microinteractions for a specific purpose, on a need basis.
From a UX perspective, this use pattern further emphasizes the importance of “less is more”:
- The repeated short interactions, along with the limited attention span allocated to them, require special emphasis on a simple, glanceable UI and fast response times.
- Learning is a function of the time spent on a task and the time needed to complete it. Short, scattered interactions, like those that take place with wearables, make it harder to promote learning than longer, more continuous engagements do (as on the desktop, for example). This means you need to keep the core UX building blocks, mainly the navigation structure, information architecture, and interaction model, as simple and consistent as possible. Deep navigation hierarchies, or diversified interaction patterns across the product, will make it harder to use and to form habits.
- The wearable experience needs to focus on what the device does absolutely best, while always considering the broader ecosystem. It’s important to crystallize the wearable’s role in this bigger constellation of devices and to define the relationships between them. Part of designing for wearables is understanding what the wearable should handle versus what other devices should take on, together providing an optimized experience for the user.