Keywords
UIST2.0 Archive - 20 years of UIST

gesture

foot-based gesture

In Proceedings of UIST 2010
Sensing foot gestures from the pocket (p. 199-208)

Abstract

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task and impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.
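
To make the sensing pipeline concrete, here is a minimal sketch of window-level accelerometer features fed to an off-the-shelf classifier. The feature set, the SVM, and the function names are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch, not the paper's exact pipeline: per-gesture windows of
# 3-axis acceleration are reduced to simple statistics and classified with
# an off-the-shelf SVM. Feature set and classifier choice are assumptions.
import numpy as np
from sklearn.svm import SVC

def window_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration for one gesture."""
    mags = np.linalg.norm(window, axis=1)              # per-sample magnitude
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           [mags.mean(), mags.std(), mags.max() - mags.min()]])

def train_foot_gesture_classifier(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: gesture ids (0..9)."""
    X = np.stack([window_features(w) for w in windows])
    return SVC(kernel="rbf").fit(X, labels)

# A trained classifier then labels a new gesture window with
# clf.predict([window_features(new_window)]).
```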

freehand gesture

In Proceedings of UIST 2005
Distant freehand pointing and clicking on very large, high resolution displays (p. 33-42)

gesture

In Proceedings of UIST 1994
A mark-based interaction paradigm for free-hand drawing (p. 185-192)

In Proceedings of UIST 1995
Some design refinements and principles on the appearance and behavior of marking menus (p. 189-195)

In Proceedings of UIST 2000
SATIN: a toolkit for informal ink-based applications (p. 63-72)

In Proceedings of UIST 2001
Cursive: a novel interaction technique for controlling expressive avatar gesture (p. 151-152)

In Proceedings of UIST 2001
Conducting a realistic electronic orchestra (p. 161-162)

In Proceedings of UIST 2001
Pop through mouse button interactions (p. 195-196)

In Proceedings of UIST 2002
Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display (p. 111-120)

In Proceedings of UIST 2003
VisionWand: interaction techniques for large displays using a passive wand tracked in 3D (p. 173-182)

In Proceedings of UIST 2003
Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays (p. 193-202)

In Proceedings of UIST 2004
Collapse-to-zoom: viewing web pages on small screen devices by interactively removing irrelevant content (p. 91-94)

In Proceedings of UIST 2005
Interacting with large displays from a distance with vision-tracked multi-finger gestural input (p. 43-52)

In Proceedings of UIST 2006
CINCH: a cooperatively designed marking interface for 3D pathway selection (p. 33-42)

In Proceedings of UIST 2006
Robust computer vision-based detection of pinching for one and two-handed gesture input (p. 255-258)

In Proceedings of UIST 2007
Boomerang: suspendable drag-and-drop interactions based on a throw-and-catch metaphor (p. 187-190)

Abstract

We present the boomerang technique, which makes it possible to suspend and resume drag-and-drop operations. A throwing gesture while dragging an object suspends the operation, anytime and anywhere. A drag-and-drop interaction, enhanced with our technique, allows users to switch windows, invoke commands, and even drag other objects during a drag-and-drop operation without using the keyboard or menus. We explain how a throwing gesture can suspend drag-and-drop operations, and describe other features of our technique, including grouping, copying, and deleting dragged objects. We conclude by presenting prototype implementations and initial feedback on the proposed technique.
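
A rough idea of how a throwing gesture could suspend a drag can be sketched with a simple velocity check; the threshold, the event model, and the names below are assumptions for illustration, not Boomerang's implementation.

```python
# Hedged sketch: a stream of drag samples (x, y, t) is watched for a fast
# flick, which parks the dragged payload until a later "catch". The speed
# threshold and event handling are assumed for illustration.
import math

THROW_SPEED = 2000.0     # pixels/second; assumed threshold
suspended_items = []     # payloads parked by a throw

def on_drag_sample(prev, curr, payload):
    """prev, curr: (x, y, t) samples while dragging payload. Returns True if
    the drag should be suspended (the caller cancels the active drag)."""
    dt = curr[2] - prev[2]
    if dt <= 0:
        return False
    speed = math.hypot(curr[0] - prev[0], curr[1] - prev[1]) / dt
    if speed > THROW_SPEED:              # fast flick: interpret as a throw
        suspended_items.append(payload)
        return True
    return False

def on_catch(index=0):
    """Resume a drag-and-drop with a previously thrown payload, if any."""
    return suspended_items.pop(index) if suspended_items else None
```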

In Proceedings of UIST 2008
Lineogrammer: creating diagrams by drawing (p. 161-170)

Abstract

We present the design of Lineogrammer, a diagram-drawing system motivated by the immediacy and fluidity of pencil-drawing. We attempted for Lineogrammer to feel like a modeless diagramming "medium" in which stylus input is immediately interpreted as a command, text label or a drawing element, and drawing elements snap to or sculpt from existing elements. An inferred dual representation allows geometric diagram elements, no matter how they were entered, to be manipulated at granularities ranging from vertices to lines to shapes. We also integrate lightweight tools, based on rulers and construction lines, for controlling higher-level diagram attributes, such as symmetry and alignment. We include preliminary usability observations to help identify areas of strength and weakness with this approach.
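
The "drawing elements snap to existing elements" behavior can be illustrated with a minimal vertex-snapping sketch; the tolerance and data model are assumptions, not Lineogrammer's geometry engine.

```python
# Illustrative vertex snapping only; tolerance and data model are assumed.
import math

SNAP_RADIUS = 8.0   # pixels; assumed tolerance

def snap_point(p, existing_vertices):
    """Return the nearest existing vertex within SNAP_RADIUS, else p itself."""
    best, best_d = p, SNAP_RADIUS
    for v in existing_vertices:
        d = math.dist(p, v)
        if d < best_d:
            best, best_d = v, d
    return best

def snap_segment(p0, p1, existing_vertices):
    """Snap both endpoints of a newly drawn segment to nearby geometry."""
    return snap_point(p0, existing_vertices), snap_point(p1, existing_vertices)
```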

In Proceedings of UIST 2008
Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces (p. 205-208)

Abstract

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Several example applications were developed to demonstrate possible interactions. We conclude with a study that shows users can perform six Scratch Input gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
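
One plausible first processing step, sketched below, is to segment the microphone signal into amplitude bursts and count strokes; the actual technique keys on the high-frequency signature of fingernail scratches, and the windowing, threshold, and names here are assumptions.

```python
# Hedged sketch: count distinct amplitude bursts ("strokes") in a short
# recording. Window size, threshold, and gap are assumptions; the real
# system exploits the characteristic high-frequency sound of scratching.
import numpy as np

def count_scratch_strokes(samples, rate, threshold=0.1, min_gap=0.05):
    """samples: mono audio in [-1, 1]; rate in Hz. Returns the burst count."""
    hop = int(0.01 * rate)                             # 10 ms envelope frames
    n = len(samples) // hop
    env = np.abs(np.asarray(samples[:n * hop])).reshape(n, hop).mean(axis=1)
    strokes, last_active = 0, -1e9
    for i, level in enumerate(env):
        t = i * 0.01
        if level > threshold:
            if t - last_active > min_gap:              # new burst after a gap
                strokes += 1
            last_active = t
    return strokes

# A gesture vocabulary can then map stroke counts and timings to commands.
```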

In Proceedings of UIST 2009
Disappearing mobile devices (p. 101-110)

Abstract

In this paper, we extrapolate the evolution of mobile devices in one specific direction, namely miniaturization. While we maintain the concept of a device that people are aware of and interact with intentionally, we envision that this concept can become small enough to allow invisible integration into arbitrary surfaces or human skin, and thus truly ubiquitous use. This outcome assumed, we investigate what technology would be most likely to provide the basis for these devices, what abilities such devices can be expected to have, and whether or not devices that size can still allow for meaningful interaction. We survey candidate technologies, drill down on gesture-based interaction, and demonstrate how it can be adapted to the desired form factors. While the resulting devices offer only the bare minimum in feedback and only the most basic interactions, we demonstrate that simple applications remain possible. We complete our exploration with two studies in which we investigate the affordance of these devices more concretely, namely marking and text entry using a gesture alphabet.

In Proceedings of UIST 2009
Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices (p. 121-124)

Abstract

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high C-D gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.
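
The radial selection described in the study can be sketched as mapping the in-plane magnetic field direction to an angular sector; the field-to-angle mapping and sector count below are assumptions for illustration, not Abracadabra's calibration.

```python
# Hedged sketch: the finger-worn magnet's in-plane field components give an
# angle around the device, which indexes a radial menu sector.
import math

def radial_target(mx, my, n_targets=8):
    """mx, my: in-plane magnetometer components. Returns target index 0..n-1.
    The study suggests sectors as narrow as 16 degrees (about 22 targets)
    can still be selected reliably."""
    angle = math.degrees(math.atan2(my, mx)) % 360.0
    return int(angle * n_targets / 360.0) % n_targets
```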

In Proceedings of UIST 2009
Bonfire: a nomadic system for hybrid laptop-tabletop interaction (p. 129-138)

Abstract

We present Bonfire, a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. We describe Bonfire's architecture, and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.

In Proceedings of UIST 2009
Optically sensing tongue gestures for computer input (p. 177-180)

Abstract

Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. While researchers have explored eye-tracking and speech technologies for these patients, we believe there is potential for directly sensing explicit tongue movement for controlling computers. In this paper, we describe a novel approach of using infrared optical sensors embedded within a dental retainer to sense tongue gestures. We describe an experiment showing our system effectively discriminating between four simple gestures with over 90% accuracy. In this experiment, users were also able to play the popular game Tetris with their tongues. Finally, we present lessons learned and opportunities for future work.
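
A heavily simplified sketch of discriminating four gestures from four infrared sensors is shown below; the sensor layout, threshold, and gesture names are assumptions, not the paper's retainer design or classifier.

```python
# Hedged sketch: pick the gesture whose IR sensor shows the largest change
# from its resting baseline. Layout, threshold, and labels are assumed.
GESTURES = ["left", "right", "up", "down"]   # one per sensor (assumed)

def classify_tongue_gesture(readings, baselines, threshold=0.2):
    """readings, baselines: four IR sensor values. Returns a gesture or None."""
    deltas = [r - b for r, b in zip(readings, baselines)]
    best = max(range(len(deltas)), key=lambda i: deltas[i])
    return GESTURES[best] if deltas[best] > threshold else None
```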

In Proceedings of UIST 2010
Imaginary interfaces: spatial interaction with empty hands and without visual feedback (p. 3-12)

Abstract

Screen-less wearable devices allow for the smallest form factor and thus the maximum mobility. However, current screen-less devices only support buttons and gestures. Pointing is not supported because users have nothing to point at. We challenge the notion that spatial interaction requires a screen, however, and propose a method for bringing spatial interaction to screen-less devices.

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all visual "feedback" takes place in the user's imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space.

With three user studies we investigate the question: To what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. Our findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.
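
The L-shaped coordinate cross can be read as a per-frame basis change: the pointing hand's tracked position is expressed relative to the corner of the "L" and its two finger axes. The tracker input format below is an assumption for the sketch.

```python
# Hedged sketch: express the dominant hand's position in the frame defined
# by the non-dominant hand's L shape. Tracker data format is assumed.
import numpy as np

def to_imaginary_coords(corner, thumb_tip, finger_tip, pointer):
    """All arguments are 2D tracked positions. Returns (u, v), where (1, 0)
    lies at the thumb tip and (0, 1) at the index finger tip."""
    o = np.asarray(corner, dtype=float)
    basis = np.column_stack([np.asarray(thumb_tip, dtype=float) - o,
                             np.asarray(finger_tip, dtype=float) - o])
    return np.linalg.solve(basis, np.asarray(pointer, dtype=float) - o)
```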

In Proceedings of UIST 2010
Hands-on math: a page-based multi-touch and pen desktop for technical work and problem solving (p. 17-26)

Abstract

Students, scientists and engineers have to choose between the flexible, free-form input of pencil and paper and the computational power of Computer Algebra Systems (CAS) when solving mathematical problems. Hands-On Math is a multi-touch and pen-based system which attempts to unify these approaches by providing virtual paper that is enhanced to recognize mathematical notations as a means of providing in situ access to CAS functionality. Pages can be created and organized on a large pannable desktop, and mathematical expressions can be computed, graphed and manipulated using a set of uni- and bi-manual interactions which facilitate rapid exploration by eliminating tedious and error prone transcription tasks. Analysis of a qualitative pilot evaluation indicates the potential of our approach and highlights usability issues with the novel techniques used.
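
The in-situ CAS handoff can be illustrated with a minimal sketch that passes a recognized expression string to a symbolic engine; sympy is an assumed stand-in here, and Hands-On Math's own recognizer and CAS backend are not shown.

```python
# Hedged sketch of the recognized-notation-to-CAS handoff, using sympy as an
# assumed stand-in for the system's CAS; the handwriting recognizer is omitted.
import sympy as sp

def evaluate_recognized(expr_text, var="x"):
    """expr_text: a string produced by math recognition, e.g. 'x**2 - 4'."""
    expr = sp.sympify(expr_text)
    x = sp.Symbol(var)
    return {"simplified": sp.simplify(expr),
            "roots": sp.solve(expr, x),
            "derivative": sp.diff(expr, x)}

# evaluate_recognized("x**2 - 4") -> roots [-2, 2], derivative 2*x
```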

In Proceedings of UIST 2010
Pen + touch = new tools (p. 27-36)

Abstract

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
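
The stated division of labor can be sketched as a small input dispatcher; the frame-based event model and tool names are assumptions for illustration, not the Surface prototype's code.

```python
# Hedged sketch of "the pen writes, touch manipulates, pen + touch yields
# new tools" as a dispatcher. Event model and names are assumed.
def dispatch(pen_down, touching, held_object=None):
    """Decide how one frame of input should be interpreted."""
    if pen_down and touching and held_object is not None:
        return ("tool", held_object)   # e.g. drag-off copy, slice, staple
    if pen_down:
        return ("ink", None)           # unimodal pen: write or draw
    if touching:
        return ("manipulate", None)    # unimodal touch: select, move, zoom
    return ("idle", None)
```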

gesture recognition

In Proceedings of UIST 2003
EdgeWrite: a stylus-based text entry method designed for high accuracy and stability of motion (p. 61-70)

In Proceedings of UIST 2004
SHARK2: a large vocabulary shorthand writing system for pen-based computers (p. 43-52)

In Proceedings of UIST 2006
Camera phone based motion sensing: interaction techniques, applications and performance study (p. 101-110)

In Proceedings of UIST 2007
Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes (p. 159-168)

Abstract

Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.
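
The "about 100 lines of code" claim is easy to appreciate from a compact sketch of the core steps (resample, rotate by the indicative angle, scale, translate, then nearest template by average point distance). The golden-section rotation search of the full recognizer is omitted here for brevity.

```python
# Compact sketch of the $1 recognizer's core steps; the full algorithm also
# refines rotation with a golden-section search, omitted here for brevity.
import math

N, SIZE = 64, 250.0          # resampled point count and reference square size

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N):
    interval, d, out, pts = path_length(pts) / (n - 1), 0.0, [pts[0]], list(pts)
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)             # continue from the interpolated point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:                  # guard against rounding shortfall
        out.append(pts[-1])
    return out[:n]

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def rotate_by(pts, angle):
    (cx, cy), c, s = centroid(pts), math.cos(angle), math.sin(angle)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def normalize(stroke):
    pts = resample(stroke)
    cx, cy = centroid(pts)
    pts = rotate_by(pts, -math.atan2(pts[0][1] - cy, pts[0][0] - cx))
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    pts = [(x * SIZE / w, y * SIZE / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def recognize(stroke, templates):
    """templates: {name: normalize(example_stroke)}. Returns best match name."""
    cand = normalize(stroke)
    score = lambda t: sum(math.dist(a, b) for a, b in zip(cand, t)) / N
    return min(templates, key=lambda name: score(templates[name]))
```

In use, each gesture class contributes one or more normalized templates, and recognize returns the class whose template lies closest in average point distance.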

In Proceedings of UIST 2008
OctoPocus: a dynamic guide for learning gesture-based command sets (p. 37-46)

Abstract

We describe OctoPocus, an example of a dynamic guide that combines on-screen feedforward and feedback to help users learn, execute and remember gesture sets. OctoPocus can be applied to a wide range of single-stroke gestures and recognition algorithms and helps users progress smoothly from novice to expert performance. We provide an analysis of the design space and describe the results of two experiments that show that OctoPocus is significantly faster and improves learning of arbitrary gestures, compared to conventional Help menus. It can also be adapted to a mark-based gesture set, significantly improving input time compared to a two-level, four-item Hierarchical Marking menu.
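
The feedforward half of the guide can be sketched as follows: for each candidate gesture, score how well the stroke prefix matches the template's beginning and return the remaining path, weighted by that score. The matching metric and the assumption that templates share the stroke's start point and sampling are simplifications, not the paper's recognizer.

```python
# Hedged sketch of OctoPocus-style feedforward; assumes templates are sampled
# at the prefix's point spacing and expressed relative to the stroke's start.
import math

def feedforward(prefix, gesture_templates):
    """prefix: points drawn so far; gesture_templates: {name: full point list}.
    Returns {name: (remaining_points, weight)} for rendering the guides."""
    guides = {}
    for name, tmpl in gesture_templates.items():
        k = min(len(prefix), len(tmpl))
        err = sum(math.dist(p, q) for p, q in zip(prefix[:k], tmpl[:k])) / max(k, 1)
        weight = 1.0 / (1.0 + err)            # better match -> stronger guide
        guides[name] = (tmpl[k:], weight)     # path remaining to complete it
    return guides
```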

gesture sensing

In Proceedings of UIST 2003
PreSense: interaction techniques for finger sensing input devices (p. 203-212)

interaction with gesture

In Proceedings of UIST 2004
A gesture-based authentication scheme for untrusted public terminals (p. 157-160)

pen gesture

In Proceedings of UIST 2001
Cursive: a novel interaction technique for controlling expressive avatar gesture (p. 151-152)

unistroke gesture

In Proceedings of UIST 1998
Cirrin: a word-level unistroke keyboard for pen input (p. 213-214)