Thursday, November 12, 2015

UIST2015: Wearable and Mobile Interactions


NanoStylus: Enhancing Input on Ultra-Small Displays with a Finger-Mounted Stylus

http://dl.acm.org/citation.cfm?id=2807500


Due to their limited input area, ultra-small devices, such as smartwatches, are even more prone to occlusion or the fat finger problem than their larger counterparts, such as smartphones, tablets, and tabletop displays. We present NanoStylus – a finger-mounted fine-tip stylus that enables fast and accurate pointing on a smartwatch with almost no occlusion. The NanoStylus is built from the circuitry of an active capacitive stylus and mounted within a custom 3D-printed thimble-shaped housing unit. A sensor strip is mounted on each side of the device to enable additional gestures. A user study shows that NanoStylus reduces error rate by 80% compared to traditional touch interaction, and by 45% compared to a traditional stylus. This high-precision pointing capability, coupled with the implemented gesture sensing, gives us the opportunity to explore a rich set of interactive applications on a smartwatch form factor.
-7mm, 2mm
-3D-printed several cases
-nib length > tradeoff between accuracy, occlusion, and speed
-thumb stabilization
-form factor
  -nib: 2mm
  -center of index finger
-width, length
-reduces error rate by 45% vs. a traditional stylus (80% vs. finger touch)
-NanoStylus + finger touch combined (a routing sketch follows these notes)
-type, delete, upper case, copy & paste, draw/sketch, zoom/move the canvas
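
The applications above mix NanoStylus pointing with ordinary finger touch, and my notes don't cover how the two are told apart. Below is only a sketch of one plausible route: dispatch by contact size, since the 2mm nib should leave a much smaller contact patch than a fingertip. The TouchEvent structure, the diameter_mm field, and the 4mm cutoff are my own assumptions, not the authors' implementation (an active stylus may well be identified by its signal instead).

```python
from dataclasses import dataclass

# Assumed event structure: the real system presumably gets richer data from
# the watch's capacitive digitizer; diameter_mm is a stand-in for contact size.
@dataclass
class TouchEvent:
    x: float            # touch position on the watch display (px)
    y: float
    diameter_mm: float  # estimated contact-patch diameter

# Hypothetical cutoff: a 2 mm nib vs. a ~7 mm fingertip leaves a comfortable
# gap, so anything under 4 mm is treated as the stylus.
STYLUS_MAX_DIAMETER_MM = 4.0

def route_touch(event: TouchEvent) -> str:
    """Dispatch a touch to the stylus or the finger handler by contact size."""
    if event.diameter_mm <= STYLUS_MAX_DIAMETER_MM:
        return handle_stylus(event)   # precise pointing: select, type, sketch
    return handle_finger(event)       # coarse gestures: zoom, move the canvas

def handle_stylus(event: TouchEvent) -> str:
    return f"stylus tap at ({event.x:.0f}, {event.y:.0f})"

def handle_finger(event: TouchEvent) -> str:
    return f"finger touch at ({event.x:.0f}, {event.y:.0f})"

if __name__ == "__main__":
    print(route_touch(TouchEvent(x=30, y=42, diameter_mm=2.1)))  # stylus
    print(route_touch(TouchEvent(x=80, y=10, diameter_mm=7.5)))  # finger
```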

Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements

http://dl.acm.org/citation.cfm?id=2807499



We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes and detect whether, and at which control, the user is looking. In Orbits, controls include targets that move in a circular trajectory on the face of the watch, and can be selected by following the desired one for a small amount of time. We conducted two user studies to assess the technique’s recognition and robustness, which demonstrated that Orbits is robust against false positives triggered by natural eye movements and that it offers a hands-free, high-accuracy way of interacting with smart watches using off-the-shelf devices. Finally, we developed three example interfaces built with Orbits: a music player, a notifications face plate and a missed call menu. Despite relying on moving controls – very unusual in current HCI interfaces – these were generally well received by participants in a third and final study.
-Other research: input on the strap, above the display, using the frame, etc.
-Using gaze for watch interaction
-User study
  -game, watch video
  -96%
-Pupil Pro, Callistro360
-performance > supports 2-16 targets (a correlation-based selection sketch follows these notes)
-media player, social media, contextual menu (answer phone, mail, etc.)
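
My notes don't say how Orbits decides which orbiting control the eyes are following. In the smooth-pursuit line of work it builds on, the usual trick is to correlate the gaze trajectory with each target's trajectory over a short window and pick the target the eyes track most closely. A minimal sketch of that idea follows; the Pearson correlation on both axes, the window layout, and the 0.8 threshold are my assumptions rather than the paper's exact parameters.

```python
import math
from statistics import correlation  # Pearson correlation (Python 3.10+)

def pursuit_selection(gaze_x, gaze_y, targets, threshold=0.8):
    """Return the index of the orbiting target whose trajectory best matches
    the gaze samples, or None if nothing correlates strongly enough.

    gaze_x, gaze_y: gaze coordinates over a short time window (non-constant).
    targets: list of (target_x, target_y) coordinate lists over the same window.
    """
    best_index, best_score = None, threshold
    for i, (tx, ty) in enumerate(targets):
        score = min(correlation(gaze_x, tx), correlation(gaze_y, ty))
        if score > best_score:          # require a strong match on both axes
            best_index, best_score = i, score
    return best_index

def circular_trajectory(cx, cy, radius, n_samples, phase=0.0):
    """Sample one revolution of a control orbiting the watch face."""
    xs = [cx + radius * math.cos(2 * math.pi * k / n_samples + phase) for k in range(n_samples)]
    ys = [cy + radius * math.sin(2 * math.pi * k / n_samples + phase) for k in range(n_samples)]
    return xs, ys

if __name__ == "__main__":
    # Two targets orbiting in opposite phases; the "gaze" follows target 0
    # with a small constant offset (offsets don't affect Pearson correlation).
    t0 = circular_trajectory(100, 100, 40, 60)
    t1 = circular_trajectory(100, 100, 40, 60, phase=math.pi)
    gaze = ([x + 1.5 for x in t0[0]], [y - 1.0 for y in t0[1]])
    print(pursuit_selection(gaze[0], gaze[1], [t0, t1]))  # -> 0
```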

Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities

http://dl.acm.org/citation.cfm?id=2807449


The growth of mobile and wearable technologies has often made it difficult to understand what people in our surroundings are doing with their technology. In this paper, we introduce the concept of candid interaction: techniques for providing awareness about our mobile and wearable device usage to others in the vicinity. We motivate and ground this exploration through a survey on current attitudes toward device usage during interpersonal encounters. We then explore a design space for candid interaction through seven prototypes that leverage a wide range of technological enhancements, such as Augmented Reality, shape memory muscle wire, and wearable projection. Preliminary user feedback on our prototypes highlights the trade-offs between the benefits of sharing device activity and the need to protect user privacy.
-everyone has a smartwatch or other wearables; you can't tell what they are doing
-keep interactions hidden
-deceptive interaction (in a cup, on a book, etc.)
-subtle <-> candid <-> collaborative (look up important info)
-design space - modality, granularity, representation
-prototype #1 grounding notifications
-prototype #2 abstract history
-prototype #3 semantic focus
-prototype #4 status band
-prototype #5
-prototype #6 proxemic AR (see the sketch after these notes)
-prototype #7 fog hat
-feedback > willingness to share, importance of context and moderate backchannel
-future work > context awareness, in-situ testing
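
Of the prototypes above, proxemic AR is where the design space's granularity axis gets most concrete: what a bystander's AR view reveals about your device activity can scale with how close they stand. The mapping below is purely illustrative; the distance bands, the activity labels, and the candid_view helper are made up for this sketch, not taken from the paper.

```python
# Hypothetical proximity-to-granularity mapping for a proxemic-AR style
# candid display: the closer an observer is, the finer the detail revealed.
# Bands and labels are illustrative only.
ACTIVITY_BY_GRANULARITY = {
    "coarse": "using phone: messaging",           # far away: abstract summary
    "medium": "chatting with a family member",    # mid range: some context
    "fine": "typing a reply about dinner plans",  # up close: concrete activity
}

def candid_view(distance_m: float) -> str:
    """Pick the granularity of shared activity from the observer's distance."""
    if distance_m > 3.0:
        return ACTIVITY_BY_GRANULARITY["coarse"]
    if distance_m > 1.2:
        return ACTIVITY_BY_GRANULARITY["medium"]
    return ACTIVITY_BY_GRANULARITY["fine"]

if __name__ == "__main__":
    for d in (4.0, 2.0, 0.8):
        print(f"{d} m away -> {candid_view(d)}")
```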

Sensing Tablet Grasp + Micro-mobility for Active Reading

http://dl.acm.org/citation.cfm?id=2807510


The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.

-micro-mobility
-grasp device and pass the device to the other person
-user test > presentation, cooperation, competition
-face-to-face handoff (multi-user, single-device)
-immersive reading, thumb bookmark with tip-to-flip (single-user, single-device; sketched after these notes)
-fine-grained reference + hold-to-refer feedback (single-user, multiple-device)
-side-by-side hand-off could not be recognized by the system
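
The thumb bookmark with tip-to-flip is the technique I can reconstruct most concretely from these notes: the capacitive grip sensor reports a thumb parked on the bezel (the bookmark), and the inertial sensor catches a quick tip of the tablet that flips the view back to the bookmarked page. A minimal sketch of that grip + motion fusion follows; the sensor fields, the 120°/s threshold, and the page model are my assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One fused sample: a capacitive grip summary plus an inertial reading."""
    thumb_on_bezel: bool      # derived from the capacitive grip sensor
    tilt_deg_per_s: float     # gyro rate about the tablet's long axis

@dataclass
class ReaderState:
    current_page: int
    bookmarked_page: int | None = None

# Hypothetical threshold: a deliberate "tip" is much faster than posture drift.
TIP_THRESHOLD_DEG_PER_S = 120.0

def update(state: ReaderState, frame: SensorFrame) -> ReaderState:
    """Thumb bookmark with tip-to-flip: holding a thumb on the bezel marks the
    current page; a quick tip of the tablet while holding flips back to it."""
    if frame.thumb_on_bezel and state.bookmarked_page is None:
        state.bookmarked_page = state.current_page    # thumb down: bookmark here
    elif not frame.thumb_on_bezel:
        state.bookmarked_page = None                  # thumb lifted: drop bookmark
    if (state.bookmarked_page is not None
            and abs(frame.tilt_deg_per_s) > TIP_THRESHOLD_DEG_PER_S):
        state.current_page = state.bookmarked_page    # tip-to-flip
    return state

if __name__ == "__main__":
    s = ReaderState(current_page=12)
    s = update(s, SensorFrame(thumb_on_bezel=True, tilt_deg_per_s=5))    # bookmark p.12
    s.current_page = 30                                                  # reader browses ahead
    s = update(s, SensorFrame(thumb_on_bezel=True, tilt_deg_per_s=200))  # quick tip
    print(s.current_page)  # -> 12
```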


Disclaimer: The opinions expressed here are my own, and do not reflect those of my employer. -Fumi Yamazaki
