Thursday, November 12, 2015

UIST2015: Hands and Fingers

Improving Virtual Keyboards When All Finger Positions Are Known

http://dl.acm.org/citation.cfm?id=2807491


Current virtual keyboards are known to be slower and less convenient than physical QWERTY keyboards because they simply imitate the traditional QWERTY keyboards on touchscreens. In order to improve virtual keyboards, we consider two reasonable assumptions based on the observation of skilled typists. First, the keys are already assigned to each finger for typing. Based on this assumption, we suggest restricting each finger to entering pre-allocated keys only. Second, non-touching fingers move in correlation with the touching finger because of the intrinsic structure of human hands. To verify our assumptions, we conducted two experiments with skilled typists. In the first experiment, we statistically verified the second assumption. We then suggest a novel virtual keyboard using our observations. In the second experiment, we show that our suggested keyboard outperforms existing virtual keyboards.

-Current virtual keyboards are slower and more error-prone
-Targeting done by the touched point alone > instead use correlation between fingers (ten-finger touchpoints)
-Does such correlation exist?
-How effective are the pre-allocations and correlations?
-Typing speed: normal virtual keyboard < proposed keyboard & PC keyboard
-Error rate: higher than the PC keyboard, lower than the normal virtual keyboard
-Sometimes the PC keyboard could enter the key that typists wanted to type
-Typing errors can be decreased by pre-allocation of keys (horizontal) and correlations between all fingers and keys (vertical); a sketch of the pre-allocation idea follows
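
As a concrete reading of the pre-allocation idea, here is a minimal Python sketch: each touch is resolved only against the keys assigned to the touching finger, so a touch can never produce a key belonging to another finger. The standard touch-typing allocation and all names below are my illustrative assumptions, not taken from the paper.

# A minimal sketch of the paper's first assumption: each finger may only
# enter its pre-allocated keys. The allocation (standard touch typing,
# letters only) and all names are illustrative, not from the paper.

FINGER_KEYS = {
    "L_pinky":  set("qaz"),
    "L_ring":   set("wsx"),
    "L_middle": set("edc"),
    "L_index":  set("rfvtgb"),
    "R_index":  set("yhnujm"),
    "R_middle": set("ik"),
    "R_ring":   set("ol"),
    "R_pinky":  set("p"),
}

# Nominal key centers on a unit keyboard grid (x = column, y = row).
KEY_CENTERS = {
    "q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0), "t": (4, 0),
    "y": (5, 0), "u": (6, 0), "i": (7, 0), "o": (8, 0), "p": (9, 0),
    "a": (0, 1), "s": (1, 1), "d": (2, 1), "f": (3, 1), "g": (4, 1),
    "h": (5, 1), "j": (6, 1), "k": (7, 1), "l": (8, 1),
    "z": (0, 2), "x": (1, 2), "c": (2, 2), "v": (3, 2),
    "b": (4, 2), "n": (5, 2), "m": (6, 2),
}

def resolve_key(finger, touch_xy):
    """Return the nearest key among those pre-allocated to `finger`.

    Restricting the search to the finger's own keys removes horizontal
    ambiguity: a left-index touch can never produce 'y' even if the
    touch lands closer to 'y' than to 't'.
    """
    candidates = FINGER_KEYS[finger]
    tx, ty = touch_xy
    return min(candidates,
               key=lambda k: (KEY_CENTERS[k][0] - tx) ** 2
                           + (KEY_CENTERS[k][1] - ty) ** 2)

print(resolve_key("L_index", (4.6, 0.2)))  # 't', although 'y' is nearer overall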

ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data

http://dl.acm.org/citation.cfm?id=2807504


Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback as well as the inability to accurately distinguish the tapping finger or target keys are the major challenges for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users' mid-air typing behavior, and examined fingertip kinematics during tapping, correlated movement among fingers, and the 3D distribution of tapping endpoints. Based on the findings, we proposed a probabilistic tap detection algorithm, and augmented Goodman's input correction model to account for the ambiguity in distinguishing the tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and later achieved 29.2 WPM in the last block without sacrificing accuracy.

-Detect taps as well as tapping locations > use a Bayesian method to interpret them
-Used a Leap Motion as the sensor
-Movement correlation between fingers: 89% for one tap, 56% for 5 characters > redesigned the algorithm
-"Augmented Bayesian method" (sketched after this list)
  -Steps: detect tap > estimate active finger > use the Bayesian method to interpret the input
  -Tap detection: define peak velocity for each finger > detect a tap if any finger exceeds the threshold > determine the optimal alpha > final classification accuracy
  -Language model, 3D spatial model, and finger tapping model
  -Delete the word / audio feedback
  -User study: type as fast and as accurately as possible; delete and retype errors
-Conclusion: users can perform ten-finger typing in the air without feedback, and the computer can interpret users' intended words from 3D hand/finger movement data
-Future work: support character-level input, experiment with different sensor placements, improve the Bayesian algorithm for missing taps
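
A toy Python sketch of the "detect tap > estimate finger > Bayesian interpretation" pipeline noted above. The velocity threshold, Gaussian endpoint models, and two-word vocabulary are placeholder assumptions for illustration, not ATK's actual models or parameters.

import math

# (1) Detect a tap from per-finger vertical fingertip velocity, then
# (2) interpret the tap sequence with a Bayesian combination of a
# language-model prior and a spatial likelihood over tap endpoints.
# All values below are illustrative placeholders, not ATK's.

VELOCITY_THRESHOLD = -0.8  # m/s downward; illustrative value only

def detect_tap(finger_velocities):
    """Return the finger whose downward velocity crosses the threshold, if any."""
    finger, v = min(finger_velocities.items(), key=lambda kv: kv[1])
    return finger if v < VELOCITY_THRESHOLD else None

# Per-key endpoint model: mean (x, y) and isotropic variance, standing in
# for the paper's 3D distribution of tapping endpoints.
KEY_MODEL = {"c": ((2.0, 2.0), 0.3), "a": ((0.0, 1.0), 0.3), "t": ((4.0, 0.0), 0.3)}

def spatial_log_likelihood(key, endpoint):
    (mx, my), var = KEY_MODEL[key]
    dx, dy = endpoint[0] - mx, endpoint[1] - my
    return -(dx * dx + dy * dy) / (2 * var)

VOCAB = {"cat": 0.6, "act": 0.4}  # unigram language-model prior

def decode_word(endpoints):
    """argmax over words of log P(word) + sum_i log P(endpoint_i | letter_i)."""
    scores = {
        w: math.log(p) + sum(spatial_log_likelihood(ch, pt)
                             for ch, pt in zip(w, endpoints))
        for w, p in VOCAB.items() if len(w) == len(endpoints)
    }
    return max(scores, key=scores.get)

print(detect_tap({"index": -1.2, "middle": -0.1}))        # 'index'
print(decode_word([(1.8, 1.9), (0.3, 1.2), (3.8, 0.1)]))  # 'cat': endpoints fit c-a-t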

CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a Fisheye Ring

http://dl.acm.org/citation.cfm?id=2807450


This paper presents CyclopsRing, a ring-style fisheye imaging wearable device that can be worn on hand webbings to enable whole-hand and context-aware interactions. Observing from a central position of the hand through a fisheye perspective, CyclopsRing sees not only the operating hand, but also the environmental contexts that are involved with the hand-based interactions. Since CyclopsRing is a finger-worn device, it also allows users to fully preserve skin feedback of the hands. This paper demonstrates a proof-of-concept device, reports the performance in hand-gesture recognition using the random decision forest (RDF) method, and, on top of the gesture recognizer, presents a set of interaction techniques including on-finger pinch-and-slide input, in-air pinch-and-motion input, palm-writing input, and their interactions with the environmental contexts. The experiment obtained an 84.75% recognition rate of hand-gesture input from a database of seven hand gestures collected from 15 participants. To our knowledge, CyclopsRing is the first ring-wearable device that supports whole-hand and context-aware interactions.

-Using the hand as a mouse
-Recognition pipeline
-On-finger slider
-Palm writing
-Fingernail detector (lots of false positives)
-Pen writing
-Visual feature tracking / recognition
-Average gesture recognition rate: 84.75% (see the sketch after these notes)
-Conclusion:
  -Ring wearable for whole-hand and context-aware interaction
  -Discrete input with the gesture recognizer
  -Continuous/rich input with heuristics
  -Wide-angle short-range depth sensing as future work
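
The notes mention the RDF recognizer (84.75% over seven gestures from 15 participants). Here is a rough Python sketch of that general recipe using scikit-learn, which is my tooling choice, not necessarily the authors'; the feature vectors are synthetic stand-ins for whatever image features CyclopsRing extracts from fisheye frames.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# General RDF recipe: one feature vector per frame, a random forest
# trained over labeled gestures, accuracy reported on held-out data.
# The features here are random placeholders, not real fisheye features.

rng = np.random.default_rng(0)
N_GESTURES, SAMPLES_PER_GESTURE, N_FEATURES = 7, 200, 32  # 7 gestures as in the paper

# Synthetic stand-in data: one Gaussian cluster per gesture class.
X = np.vstack([rng.normal(loc=g, scale=2.0, size=(SAMPLES_PER_GESTURE, N_FEATURES))
               for g in range(N_GESTURES)])
y = np.repeat(np.arange(N_GESTURES), SAMPLES_PER_GESTURE)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(f"held-out accuracy: {forest.score(X_test, y_test):.2%}")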

BackHand: Sensing Hand Gestures via Back of the Hand

http://dl.acm.org/citation.cfm?id=2807462


In this paper, we explore using the back of hands for sensing hand gestures, which interferes less than glove-based approaches and provides better recognition than sensing at wrists and forearms. Our prototype, BackHand, uses an array of strain gauge sensors affixed to the back of hands, and applies machine learning techniques to recognize a variety of hand gestures. We conducted a user study with 10 participants to better understand gesture recognition accuracy and the effects of sensing locations. Results showed that sensor reading patterns differ significantly across users, but are consistent for the same user. The leave-one-user-out accuracy is low at an average of 27.4%, but reaches 95.8% average accuracy for 16 popular hand gestures when personalized for each participant. The most promising location spans the 1/8~1/4 area between the metacarpophalangeal joints (MCP, the knuckles between the hand and fingers) and the head of ulna (tip of the wrist).

-Past work in three categories:
  -A: finger-based - SixthSense (2009), Digits (2012)
  -B: wrist-based - Ubicomp work, WristFlex
  -C: arm-based - EMG (high power consumption)
-Back of the hand as the sensing location
-Strain gauge sensor array
-8 rows of sensors > 16 gestures x 10 trials
-Heat map visualization
-Same-gesture heat maps across multiple people > 27.4% leave-one-user-out accuracy > personalize per user?
-Personalized accuracy rate: 95.8% (both evaluation protocols are sketched below)
-Confusion matrix
-Limitations: personalized model needed for each user, sensor durability, smart skin reusability
-Conclusion:
  -New signal source; 16 gestures at 95.8% accuracy; sensor location findings
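
To make the 27.4% vs 95.8% contrast concrete, here is a Python sketch (scikit-learn, my choice) of the two evaluation protocols: leave-one-user-out versus per-user personalized models. The data is synthetic, built so that sensor patterns differ across users but are consistent within a user, echoing the paper's finding; the classifier and all numbers are illustrative assumptions and do not reproduce the study's results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Protocol 1: leave-one-user-out (train on 9 users, test on the held-out
# user). Protocol 2: personalized (train and test within one user's data).
# "Strain gauge" readings are synthetic: a shared per-gesture pattern plus
# a large per-user offset, mimicking cross-user variation.

rng = np.random.default_rng(1)
N_USERS, N_GESTURES, TRIALS, N_SENSORS = 10, 16, 10, 8  # mirrors the study scale

gesture_patterns = rng.normal(scale=1.0, size=(N_GESTURES, N_SENSORS))
X, y, groups = [], [], []
for user in range(N_USERS):
    offset = rng.normal(scale=3.0, size=N_SENSORS)  # per-user idiosyncrasy
    for g in range(N_GESTURES):
        for _ in range(TRIALS):
            X.append(gesture_patterns[g] + offset
                     + rng.normal(scale=0.2, size=N_SENSORS))
            y.append(g)
            groups.append(user)
X, y, groups = np.array(X), np.array(y), np.array(groups)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Protocol 1: generalization to an unseen user.
louo = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-user-out accuracy: {louo.mean():.1%}")

# Protocol 2: cross-validate within each user, then average.
per_user = [cross_val_score(clf, X[groups == u], y[groups == u], cv=5).mean()
            for u in range(N_USERS)]
print(f"personalized accuracy: {np.mean(per_user):.1%}")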

Disclaimer: The opinions expressed here are my own, and do not reflect those of my employer. -Fumi Yamazaki
