Scrolling and Zooming
Encouraged by the immersive feel of our AR hand, we next broadened our focus to the general problem of searching through a collection of objects, some of which might not be visible within the current view. Traditional mouse, keyboard, and touch interactions rely on UI controls such as scroll bars and lists to provide this functionality. We wanted to avoid these elements, aiming instead for a more intuitive experience that would replicate real-world interaction.
This time, one of our first attempts yielded a promising approach: we imagined the hand as a small, portable video camera, able to move freely along all three axes. We mapped the virtual camera to the center of the hand, so that the perspective view of the application changes according to the movements of the user’s hand. When the user’s hand moves closer to the screen, the collection zooms in, providing a closer look at the content; when her hand moves to the side, the collection scrolls. We liked the fluid, always-on feel of this interaction, as well as its seamless integration of the depth (“Z”) dimension.
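The hand-as-camera mapping can be sketched in a few lines. This is a minimal illustration, not the prototype’s actual code: the coordinate conventions, gain constants, and depth range are all assumptions, and a real implementation would receive hand positions from a tracking SDK.

```python
# Sketch of the "hand as camera" mapping: lateral hand movement pans
# (scrolls) the view, and hand depth controls zoom. All constants and
# coordinate conventions below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Camera:
    x: float = 0.0     # horizontal pan (scroll position)
    y: float = 0.0     # vertical pan
    zoom: float = 1.0  # magnification factor

SCROLL_GAIN = 2.0  # assumed view units per normalized hand unit
ZOOM_NEAR = 0.2    # assumed hand depth (meters) giving maximum zoom
ZOOM_FAR = 0.6     # assumed hand depth (meters) giving minimum zoom

def update_camera(cam: Camera, hand_x: float, hand_y: float, hand_z: float) -> Camera:
    """Map the center of the hand to the virtual camera.

    hand_x / hand_y are assumed normalized to [-1, 1]; hand_z is the
    distance from the screen in meters.
    """
    cam.x = hand_x * SCROLL_GAIN
    cam.y = hand_y * SCROLL_GAIN
    # Clamp depth to the working range, then map nearer hand -> larger zoom.
    z = min(max(hand_z, ZOOM_NEAR), ZOOM_FAR)
    t = (ZOOM_FAR - z) / (ZOOM_FAR - ZOOM_NEAR)  # 1.0 when close, 0.0 when far
    cam.zoom = 1.0 + t * 2.0                     # zoom range [1.0, 3.0]
    return cam
```

Because the mapping runs continuously on every frame, it produces the fluid, always-on feel described above, with no discrete gesture needed to start or stop scrolling.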
We began to integrate these concepts into the wider context of a real application, building a prototype experience for browsing and selecting books from a library. This brought us naturally to our next design challenge: how should the user actually select the book she wants, and then how should she open it, read it, and turn its pages? After some experimentation, we decided to further employ our ability to anticipate the user’s next move. As the user moves her hand horizontally over the books, the books react as if her touch had slightly displaced them. When her hand stops moving, remaining stable over a specific book, the book falls slightly off the shelf, toward the user, as if she had pulled it closer with her finger (see Figure 3-3). This “falling-towards-me” animation invites the user to pick up and open the book. Then, a text balloon appears, instructing the user to perform a pinch gesture to select the book. This behavior is inspired by a classic UI mechanic: the book tipping forward is immediate, like the hover highlight of a mouse, whereas the text balloon is like a tooltip, appearing a few seconds after the action is available. In our case, however, these mechanics were translated to a 3D interface, where the interaction can be done directly with the objects themselves, rather than through the medium of a mouse (Figure 3-3).
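The two-stage dwell behavior described above can be captured with a small state tracker: the tip-forward fires almost immediately once the hand rests over one book, and the tooltip balloon follows after a longer delay. The threshold values here are illustrative assumptions, not the prototype’s tuned parameters.

```python
# Sketch of the dwell logic: hovering steadily over a book tips it
# forward right away (like a mouse hover highlight); the pinch-hint
# balloon appears only after a further delay (like a tooltip).
# Both delays are assumed values.

TIP_DELAY = 0.3      # seconds of stillness before the book tips forward
BALLOON_DELAY = 2.0  # additional seconds before the balloon appears

class DwellTracker:
    def __init__(self):
        self.current_book = None
        self.dwell_start = None

    def update(self, hovered_book, now):
        """Return (tip_forward, show_balloon) for the current frame.

        hovered_book is whichever book the hand is over (or None);
        now is a timestamp in seconds.
        """
        if hovered_book != self.current_book:
            # Hand moved to a different book (or off the shelf): restart.
            self.current_book = hovered_book
            self.dwell_start = now if hovered_book is not None else None
        if self.dwell_start is None:
            return (False, False)
        dwell = now - self.dwell_start
        return (dwell >= TIP_DELAY, dwell >= TIP_DELAY + BALLOON_DELAY)
```

Keeping the two thresholds independent preserves the layered feedback: an immediate, physical-feeling response first, and explicit instruction only for users who hesitate.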
Figure 3-3. Anticipating the user’s action by tipping the book forward (Omek Interactive Ltd. © 2013)
The techniques of browsing a shelf of books and then allowing a single book to tip forward are strongly influenced by analogous real-world interactions. However, we also discovered that there are limits to this strategy of reflecting real-world behaviors. In an early prototype, the user had to physically move her hand backward in order to take a book off the shelf, which is, of course, what she would do in the real world. However, when we tested this mechanic with users, it surprised them and disrupted the continuity of their experience. Thus, in a subsequent implementation, the book is pulled off the shelf automatically when it is grabbed (Figure 3-4). When the book has been selected, the user can do the following, as she would with a real book:
- Move it toward her to have a closer look
- Rotate her hand to turn it around and look at the back side of the cover
- Open it by grabbing the cover with one hand and then moving her hand toward the center of the screen
Figure 3-4. The book rotates off the shelf when selected (Omek Interactive Ltd. © 2013)
As we continued to expand the prototype to provide a more holistic application-level experience, additional issues arose that required attention. One worth noting is the need for “soft landings” when switching contexts. In our design, user interaction varies based on the context of the application. Sometimes, the hand moves over static objects, and sometimes the objects scroll or zoom, according to the movements of the hand.
Transitions from one context to another can catch users off guard, resulting in false positives and disoriented users. It is therefore important to implement a gradual transition (“soft landing”) at these points, in which the new interaction becomes active only after a few seconds, allowing the user to understand and adapt to the change.
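One simple way to realize a soft landing is a grace period: after a context switch, the new context ignores input until a short interval has elapsed. This is a sketch under assumed names and an assumed duration; a production version might instead ramp sensitivity up gradually rather than gating input outright.

```python
# Sketch of a "soft landing" grace period: after switching contexts
# (e.g., from browsing the shelf to reading a book), the new context
# rejects input for a short, assumed interval so the user can adapt.

import time

GRACE_PERIOD = 2.0  # assumed seconds before the new context accepts input

class InteractionContext:
    def __init__(self, name, now=None):
        self.name = name
        self.entered_at = time.monotonic() if now is None else now

    def accepts_input(self, now=None):
        """True once the grace period since entering this context has passed."""
        now = time.monotonic() if now is None else now
        return (now - self.entered_at) >= GRACE_PERIOD
```

Gating input this way trades a moment of unresponsiveness for fewer false positives at exactly the points where users are most likely to be surprised.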