Keywords
UIST2.0 Archive - 20 years of UIST

input

3d input device

absolute input

In Proceedings of UIST 2006

HybridPointing: fluid switching between absolute and relative pointing with a direct input device (p. 211-220)

bimanual input

In Proceedings of UIST 2005

Bimanual and unimanual image alignment: an evaluation of mouse-based techniques (p. 123-131)

In Proceedings of UIST 2007

Lucid touch: a see-through mobile device (p. 269-278)

Abstract

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to perform multi-finger input.
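
As an illustration of the pseudo-transparency idea described above, the sketch below blends a camera image of the user's hands over the screen contents. It is a minimal reading of the concept, not the LucidTouch implementation; the function and parameter names are invented.

```python
import numpy as np

def pseudo_transparent_overlay(screen_rgb: np.ndarray,
                               hand_rgb: np.ndarray,
                               hand_mask: np.ndarray,
                               alpha: float = 0.4) -> np.ndarray:
    """Blend an image of the hands (seen behind the device) over the
    screen, creating the illusion that the device is semi-transparent.
    hand_mask is 1.0 where a hand is visible, 0.0 elsewhere."""
    weight = hand_mask[..., None] * alpha        # per-pixel blend weight
    blended = weight * hand_rgb + (1.0 - weight) * screen_rgb
    return blended.astype(screen_rgb.dtype)
```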

buttonless input

In Proceedings of UIST 2003

VisionWand: interaction techniques for large displays using a passive wand tracked in 3D (p. 173-182)

camera-based input

In Proceedings of UIST 1999

The VideoMouse: a camera-based multi-degree-of-freedom input device (p. 103-112)

collaborative input

In Proceedings of UIST 2001

DiamondTouch: a multi-user touch technology (p. 219-226)

finger input

In Proceedings of UIST 2008

Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces (p. 205-208)

Abstract

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Several example applications were developed to demonstrate possible interactions. We conclude with a study that shows users can perform six Scratch Input gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
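
The gesture set above is built from scratch strokes picked out of the sensor's signal. Purely as an illustrative sketch (the thresholds and pipeline here are assumptions, not the paper's), strokes can be segmented by counting runs of high short-time energy:

```python
import numpy as np

def count_scratch_strokes(audio: np.ndarray, sr: int = 8000,
                          frame_ms: int = 10,
                          threshold: float = 0.02) -> int:
    """Count scratch strokes as contiguous runs of frames whose RMS
    energy exceeds a noise threshold (illustrative values)."""
    frame = sr * frame_ms // 1000
    n = len(audio) // frame
    rms = np.sqrt((audio[:n * frame].reshape(n, frame) ** 2).mean(axis=1))
    active = rms > threshold
    # Rising edges = stroke onsets; include a stroke already in progress.
    return int((active[1:] & ~active[:-1]).sum() + int(active[0]))
```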

In Proceedings of UIST 2009

Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices (p. 121-124)

Abstract

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high C-D gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.
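
The 16-degree radial targets suggest a simple mapping from the sensed magnet position to a selection. A minimal sketch under assumed names (the actual sensing and filtering are the paper's):

```python
import math

def radial_sector(dx: float, dy: float, sector_deg: float = 16.0) -> int:
    """Map the magnet's 2-D offset around the device into one of the
    radial sectors surrounding it (~22 targets at 16 degrees each)."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return int(angle // sector_deg)

print(radial_sector(1.0, 0.0))   # 0 degrees  -> sector 0
print(radial_sector(0.0, 1.0))   # 90 degrees -> sector 5
```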

foldable input device

In Proceedings of UIST 2008

Towards more paper-like input: flexible input devices for foldable interaction styles (p. 283-286)

Abstract

This paper presents Foldable User Interfaces (FUI), a combination of a 3D GUI with windows imbued with the physics of paper, and Foldable Input Devices (FIDs). FIDs are sheets of paper that allow realistic transformations of graphical sheets in the FUI. Foldable input devices are made out of construction paper augmented with IR reflectors, and tracked by computer vision. Window sheets can be picked up and flexed with simple movements and deformations of the FID. FIDs allow a diverse lexicon of one-handed and two-handed interaction techniques, including folding, bending, flipping and stacking. We show how these can be used to ease the creation of simple 3D models, but also for tasks such as page navigation.

gestural input

In Proceedings of UIST 1992

Two-handed gesture in multi-modal natural dialog (p. 7-14)

haptic input

In Proceedings of UIST 1997

The metaDESK: models and prototypes for tangible user interfaces (p. 223-232)

input

In Proceedings of UIST 1999

The role of kinesthetic reference frames in two-handed input performance (p. 171-178)

In Proceedings of UIST 2001

Toward more sensitive mobile phones (p. 191-192)

In Proceedings of UIST 2005

Zliding: fluid zooming and sliding for high precision parameter manipulation (p. 143-152)

In Proceedings of UIST 2006

Soap: a pointing device that works in mid-air (p. 43-46)

In Proceedings of UIST 2007

Gui --- phooey!: the case for text input (p. 193-202)

Abstract

Information cannot be found if it is not recorded. Existing rich graphical application approaches interfere with user input in many ways, forcing complex interactions to enter simple information, requiring complex cognition to decide where the data should be stored, and limiting the kind of information that can be entered to what can fit into specific applications' data models. Freeform text entry suffers from none of these limitations but produces data that is hard to retrieve or visualize. We describe the design and implementation of Jourknow, a system that aims to bridge these two modalities, supporting lightweight text entry and weightless context capture that produces enough structure to support rich interactive presentation and retrieval of the arbitrary information entered.
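
One way to picture the "enough structure for retrieval" idea is lightweight pattern extraction over freeform notes. The snippet below is only an illustration of the concept; Jourknow's actual capture and data model are richer.

```python
import re

note = "lunch w/ anna tue 12:30, room: 4a, topic: demo schedule"

# Pull out key:value pairs and times so the note stays freeform text
# but can still be queried later (patterns are illustrative).
pairs = dict(re.findall(r"([a-zA-Z]+):\s*([^,]+)", note))
times = re.findall(r"\b\d{1,2}:\d{2}\b", note)
print(pairs)   # {'room': '4a', 'topic': 'demo schedule'}
print(times)   # ['12:30']
```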

In Proceedings of UIST 2009

Enabling always-available input with muscle-computer interfaces (p. 167-176)

Abstract

Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
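
A common baseline for this kind of EMG gesture classification is per-channel amplitude features fed to a standard classifier. The sketch below uses synthetic data and RMS features as a stand-in; the paper's actual feature set and classifier may differ.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def emg_features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS amplitude of one EMG window (samples x channels)."""
    return np.sqrt((window ** 2).mean(axis=0))

# Synthetic stand-in data: 40 windows of 8-channel EMG, 4 finger labels.
windows = rng.normal(size=(40, 256, 8))
labels = np.repeat(np.arange(4), 10)

X = np.array([emg_features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))   # classify two incoming windows
```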

input and interaction technology

In Proceedings of UIST 2004

SketchREAD: a multi-domain sketch recognition engine (p. 23-32)

input device

In Proceedings of UIST 1992

A testbed for characterizing dynamic response of virtual environment spatial sensors (p. 15-22)

In Proceedings of UIST 1997

Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

In Proceedings of UIST 1997

A finger-mounted, direct pointing device for mobile computing (p. 41-42)

In Proceedings of UIST 1997

The omni-directional treadmill: a locomotion device for virtual worlds (p. 213-221)

In Proceedings of UIST 1997

The metaDESK: models and prototypes for tangible user interfaces (p. 223-232)

In Proceedings of UIST 1998

Interaction and modeling techniques for desktop two-handed input (p. 49-58)

In Proceedings of UIST 1998

A user interface using fingerprint recognition: holding commands and data objects on fingers (p. 71-79)

In Proceedings of UIST 1999

The VideoMouse: a camera-based multi-degree-of-freedom input device (p. 103-112)

In Proceedings of UIST 1999

Real-world interaction using the FieldMouse (p. 113-119)

In Proceedings of UIST 2000

Sensing techniques for mobile interaction (p. 91-100)

In Proceedings of UIST 2000

ToolStone: effective use of the physical manipulation vocabularies of input devices (p. 109-117)

In Proceedings of UIST 2001

Empirical measurements of intrabody communication performance under varied physical configurations (p. 183-190)

In Proceedings of UIST 2001

Pop through mouse button interactions (p. 195-196)

In Proceedings of UIST 2003

Synchronous gestures for multiple persons and computers (p. 149-158)

In Proceedings of UIST 2003

VisionWand: interaction techniques for large displays using a passive wand tracked in 3D (p. 173-182)

In Proceedings of UIST 2003

PreSense: interaction techniques for finger sensing input devices (p. 203-212)

In Proceedings of UIST 2004

Using light emitting diode arrays as touch-sensitive input and output devices (p. 287-290)

In Proceedings of UIST 2006

Mobile interaction using paperweight metaphor (p. 111-114)

In Proceedings of UIST 2008

An exploration of pen rolling for pen-based interaction (p. 191-200)

Abstract

Current pen input mainly utilizes the position of the pen tip, and occasionally, a button press. Other possible device parameters, such as rolling the pen around its longitudinal axis, are rarely used. We explore pen rolling as a supporting input modality for pen-based interaction. Through two studies, we are able to determine 1) the parameters that separate intentional pen rolling for the purpose of interaction from incidental pen rolling caused by regular writing and drawing, and 2) the parameter range within which accurate and timely intentional pen rolling interactions can occur. Building on our experimental results, we present an exploration of the design space of rolling-based interaction techniques, which showcases three scenarios where pen rolling interactions can be useful: enhanced stimulus-response compatibility in rotation tasks [7], multi-parameter input, and simplified mode selection.
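
The paper's first study boils down to finding parameters that separate deliberate rolls from the incidental ones produced by writing. A toy classifier in that spirit (the thresholds here are invented, not the paper's measured values):

```python
def is_intentional_roll(delta_roll_deg: float, duration_ms: float) -> bool:
    """Treat a roll as intentional only if it is large enough and fast
    enough; small or slow rolls are likely incidental to writing."""
    MIN_ROLL_DEG = 10.0       # illustrative threshold
    MAX_DURATION_MS = 2000.0  # illustrative threshold
    return delta_roll_deg >= MIN_ROLL_DEG and duration_ms <= MAX_DURATION_MS
```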

In Proceedings of UIST 2009

Mouse 2.0: multi-touch meets the mouse (p. 33-42)

Abstract

In this paper we present novel input devices that combine the standard capabilities of a computer mouse with multi-touch sensing. Our goal is to enrich traditional pointer-based desktop interactions with touch and gestures. To chart the design space, we present five different multi-touch mouse implementations. Each explores a different touch sensing strategy, which leads to differing form-factors and hence interactive possibilities. In addition to the detailed description of hardware and software implementations of our prototypes, we discuss the relative strengths, limitations and affordances of these novel input devices as informed by the results of a preliminary user study.

In Proceedings of UIST 2009

Disappearing mobile devices (p. 101-110)

Abstract

In this paper, we extrapolate the evolution of mobile devices in one specific direction, namely miniaturization. While we maintain the concept of a device that people are aware of and interact with intentionally, we envision that this concept can become small enough to allow invisible integration into arbitrary surfaces or human skin, and thus truly ubiquitous use. This outcome assumed, we investigate what technology would be most likely to provide the basis for these devices, what abilities such devices can be expected to have, and whether or not devices that size can still allow for meaningful interaction. We survey candidate technologies, drill down on gesture-based interaction, and demonstrate how it can be adapted to the desired form factors. While the resulting devices offer only the bare minimum in feedback and only the most basic interactions, we demonstrate that simple applications remain possible. We complete our exploration with two studies in which we investigate the affordance of these devices more concretely, namely marking and text entry using a gesture alphabet.

In Proceedings of UIST 2010

MAI painting brush: an interactive device that realizes the feeling of real painting (p. 97-100)

Abstract

Many digital painting systems have been proposed and their quality is improving. In these systems, graphics tablets are widely used as input devices. However, because of its rigid nib and indirect manipulation, the operational feeling of a graphics tablet is different from that of a real paint brush. We solved this problem by developing the MR-based Artistic Interactive (MAI) Painting Brush, which imitates a real paint brush, and constructed a mixed reality (MR) painting system that enables direct painting on physical objects in the real world.

input handling

In Proceedings of UIST 2010

A framework for robust and flexible handling of inputs with uncertainty (p. 47-56)

Abstract

New input technologies (such as touch), recognition based input (such as pen gestures) and next-generation interactions (such as inexact interaction) all hold the promise of more natural user interfaces. However, these techniques all create inputs with some uncertainty. Unfortunately, conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience. We present a framework for handling input with uncertainty in a systematic, extensible, and easy to manipulate fashion. To illustrate this framework, we present several traditional interactors which have been extended to provide feedback about uncertain inputs and to allow for the possibility that in the end that input will be judged wrong (or end up going to a different interactor). Our six demonstrations include tiny buttons that are manipulable using touch input, a text box that can handle multiple interpretations of spoken input, a scrollbar that can respond to inexactly placed input, and buttons which are easier to click for people with motor impairments. Our framework supports all of these interactions by carrying uncertainty forward all the way through selection of possible target interactors, interpretation by interactors, generation of (uncertain) candidate actions to take, and a mediation process that decides (in a lazy fashion) which actions should become final.
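
The core idea, keeping several weighted interpretations of an input alive and committing lazily, can be sketched in a few lines. This is a simplified reading of the framework, not its API; all names are invented.

```python
from typing import Callable, Dict, List, Tuple

Action = Callable[[], None]

def dispatch(candidates: List[Tuple[str, float]],
             interactors: Dict[str, Action]) -> List[Tuple[Action, float]]:
    """Keep every (target, probability) interpretation of an uncertain
    input alive as a candidate action instead of picking one
    pixel-level winner up front."""
    return [(interactors[name], p) for name, p in candidates
            if name in interactors]

def mediate(actions: List[Tuple[Action, float]],
            commit_at: float = 0.8) -> None:
    """Lazy mediation: commit only when one interpretation clearly
    wins; until then the UI can keep giving tentative feedback."""
    act, p = max(actions, key=lambda ap: ap[1])
    if p >= commit_at:
        act()

ui = {"ok": lambda: print("OK pressed"),
      "cancel": lambda: print("Cancel pressed")}
mediate(dispatch([("ok", 0.9), ("cancel", 0.1)], ui))   # -> OK pressed
```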

input output device

In Proceedings of UIST 2002

TiltType: accelerometer-supported text entry for very small devices (p. 201-204)

input redirection

In Proceedings of UIST 2002

PointRight: experience with flexible input redirection in interactive workspaces (p. 227-234)

input technique and device

In Proceedings of UIST 2006

Camera phone based motion sensing: interaction techniques, applications and performance study (p. 101-110)

inverted input

jittery input

In Proceedings of UIST 2005

Zoom-and-pick: facilitating visual zooming and precision pointing with interactive handheld projectors (p. 73-82)

lightweight input

In Proceedings of UIST 2007

Gui --- phooey!: the case for text input (p. 193-202)

Abstract

Information cannot be found if it is not recorded. Existing rich graphical application approaches interfere with user input in many ways, forcing complex interactions to enter simple information, requiring complex cognition to decide where the data should be stored, and limiting the kind of information that can be entered to what can fit into specific applications' data models. Freeform text entry suffers from none of these limitations but produces data that is hard to retrieve or visualize. We describe the design and implementation of Jourknow, a system that aims to bridge these two modalities, supporting lightweight text entry and weightless context capture that produces enough structure to support rich interactive presentation and retrieval of the arbitrary information entered.

mid-air input

In Proceedings of UIST 2006

Soap: a pointing device that works in mid-air (p. 43-46)

mouse input

In Proceedings of UIST 2008

OctoPocus: a dynamic guide for learning gesture-based command sets (p. 37-46)

Abstract

We describe OctoPocus, an example of a dynamic guide that combines on-screen feedforward and feedback to help users learn, execute and remember gesture sets. OctoPocus can be applied to a wide range of single-stroke gestures and recognition algorithms and helps users progress smoothly from novice to expert performance. We provide an analysis of the design space and describe the results of two experiments that show that OctoPocus is significantly faster and improves learning of arbitrary gestures, compared to conventional Help menus. It can also be adapted to a mark-based gesture set, significantly improving input time compared to a two-level, four-item Hierarchical Marking menu.
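
A dynamic guide of this kind can be approximated by scoring each candidate gesture against the stroke drawn so far and fading its remaining path accordingly. The sketch below assumes gestures are pre-resampled point lists; it illustrates the feedforward idea, not the paper's exact rendering or recognizer.

```python
import math

def prefix_distance(prefix, template):
    """Mean distance between the drawn prefix and the first
    len(prefix) points of a gesture template (both pre-resampled)."""
    return sum(math.dist(a, b)
               for a, b in zip(prefix, template)) / len(prefix)

def feedforward(prefix, templates, max_dist=60.0):
    """Return (name, remaining path, opacity) for each candidate;
    opacity fades as the prefix diverges from the template."""
    guides = []
    for name, tpl in templates.items():
        opacity = max(0.0, 1.0 - prefix_distance(prefix, tpl) / max_dist)
        if opacity > 0.0:
            guides.append((name, tpl[len(prefix):], opacity))
    return guides
```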

multi degree of freedom input

In Proceedings of UIST 2008

An exploration of pen rolling for pen-based interaction (p. 191-200)

Abstract

Current pen input mainly utilizes the position of the pen tip, and occasionally, a button press. Other possible device parameters, such as rolling the pen around its longitudinal axis, are rarely used. We explore pen rolling as a supporting input modality for pen-based interaction. Through two studies, we are able to determine 1) the parameters that separate intentional pen rolling for the purpose of interaction from incidental pen rolling caused by regular writing and drawing, and 2) the parameter range within which accurate and timely intentional pen rolling interactions can occur. Building on our experimental results, we present an exploration of the design space of rolling-based interaction techniques, which showcases three scenarios where pen rolling interactions can be useful: enhanced stimulus-response compatibility in rotation tasks [7], multi-parameter input, and simplified mode selection.

multi degree-of-freedom input

In Proceedings of UIST 2003

Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays (p. 193-202)

multi-degree-of-freedom input

In Proceedings of UIST 1999

The VideoMouse: a camera-based multi-degree-of-freedom input device (p. 103-112)

multi-finger and two-handed gestural input

In Proceedings of UIST 2004

Multi-finger gestural interaction with 3d volumetric displays (p. 61-70)

multi-user input

In Proceedings of UIST 2005

DTLens: multi-user tabletop spatial data exploration (p. 119-122)

multiple function input

In Proceedings of UIST 2000

ToolStone: effective use of the physical manipulation vocabularies of input devices (p. 109-117)

multiple-degree-of-freedom input

In Proceedings of UIST 2000

ToolStone: effective use of the physical manipulation vocabularies of input devices (p. 109-117)

pen input

In Proceedings of UIST 1998

Integrating pen operations for composition by example (p. 211-212)

In Proceedings of UIST 2006

Interacting with dynamically defined information spaces using a handheld projector and a pen (p. 225-234)

In Proceedings of UIST 2008

OctoPocus: a dynamic guide for learning gesture-based command sets (p. 37-46)

Abstract

We describe OctoPocus, an example of a dynamic guide that combines on-screen feedforward and feedback to help users learn, execute and remember gesture sets. OctoPocus can be applied to a wide range of single-stroke gestures and recognition algorithms and helps users progress smoothly from novice to expert performance. We provide an analysis of the design space and describe the results of two experiments that show that OctoPocus is significantly faster and improves learning of arbitrary gestures, compared to conventional Help menus. It can also be adapted to a mark-based gesture set, significantly improving input time compared to a two-level, four-item Hierarchical Marking menu.

pen input device

In Proceedings of UIST 2006

Multi-layer interaction for digital tables (p. 269-272)

pen-based input

In Proceedings of UIST 2008

Attribute gates (p. 57-66)

Abstract

Attribute gates are a new user interface element designed to address the problem of concurrently setting attributes and moving objects between territories on a digital tabletop. Motivated by the notion of task levels in activity theory, and crossing interfaces, attribute gates allow users to operationalize multiple subtasks in one smooth movement. We present two configurations of attribute gates; (1) grid gates which spatially distribute attribute values in a regular grid, and require users to draw trajectories through the attributes; (2) polar gates which distribute attribute values on segments of concentric rings, and require users to align segments when setting attribute combinations. The layout of both configurations was optimised based on targeting and steering laws derived from Fitts' Law. A study compared the use of attribute gates with traditional contextual menus. Users of attribute gates demonstrated both increased performance and higher mutual awareness.
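
The "steering laws derived from Fitts' Law" mentioned above predict how long it takes to draw a trajectory through a constrained tunnel, which is the quantity that sizing gate segments trades off. A minimal worked form for a straight tunnel (the coefficients here are placeholders, not the paper's fitted values):

```python
def steering_time(path_len: float, tunnel_width: float,
                  a: float = 0.1, b: float = 0.2) -> float:
    """Accot-Zhai steering law for a straight, constant-width tunnel:
    T = a + b * (A / W). Wider tunnels are faster to steer through."""
    return a + b * (path_len / tunnel_width)

print(steering_time(200, 20))   # narrow gate segments: 2.1
print(steering_time(200, 40))   # wider segments:       1.1
```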

precision input

In Proceedings of UIST 2009

Ripples: utilizing per-contact visualizations to improve user interaction with touch displays (p. 3-12)

Abstract

We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.
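
The "set of visual states and transitions" can be pictured as a small per-contact state machine. The state and event names below are invented for illustration and are not the paper's actual state set.

```python
# Per-contact feedback states (illustrative, not Ripples' exact set).
TRANSITIONS = {
    ("idle", "touch_down"): "pressed",
    ("pressed", "target_accepts"): "activated",
    ("pressed", "target_ignores"): "rejected",  # visible error feedback
    ("pressed", "touch_up"): "idle",
    ("activated", "touch_up"): "idle",
    ("rejected", "touch_up"): "idle",
}

def step(state: str, event: str) -> str:
    """Advance a contact's visual state; unknown events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

print(step(step("idle", "touch_down"), "target_ignores"))  # rejected
```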

reconfigurable input device

In Proceedings of UIST 2009

A reconfigurable ferromagnetic input device (p. 51-54)

Abstract

We present a novel hardware device based on ferromagnetic sensing, capable of detecting the presence, position and deformation of any ferrous object placed on or near its surface. These objects can include ball bearings, magnets, iron filings, and soft malleable bladders filled with ferrofluid. Our technology can be used to build reconfigurable input devices -- where the physical form of the input device can be assembled using combinations of such ferrous objects. This allows users to rapidly construct new forms of input device, such as a trackball-style device based on a single large ball bearing, tangible mixers based on a collection of sliders and buttons with ferrous components, and multi-touch malleable surfaces using a ferrofluid bladder. We discuss the implementation of our technology, its strengths and limitations, and potential application scenarios.

relative input

In Proceedings of UIST 2006

HybridPointing: fluid switching between absolute and relative pointing with a direct input device (p. 211-220)

spatial input

speech and pen input

In Proceedings of UIST 2000

Multimodal system processing in mobile environments (p. 21-30)

stylus input

In Proceedings of UIST 2004

The radial scroll tool: scrolling support for stylus- or touch-based document navigation (p. 53-56)

text input

In Proceedings of UIST 2000

The metropolis keyboard - an exploration of quantitative techniques for virtual keyboard design (p. 119-128)

In Proceedings of UIST 2003

EdgeWrite: a stylus-based text entry method designed for high accuracy and stability of motion (p. 61-70)

In Proceedings of UIST 2004

SHARK2: a large vocabulary shorthand writing system for pen-based computers (p. 43-52)

tilt input

In Proceedings of UIST 2003

TiltText: using tilt for text input to mobile phones (p. 81-90)

touch input

In Proceedings of UIST 2009

Contact area interaction with sliding widgets (p. 13-22)

Abstract

We show how to design touchscreen widgets that respond to a finger's contact area. In standard touchscreen systems a finger often appears to touch several screen objects, but the system responds as though only a single pixel is touched. In contact area interaction all objects under the finger respond to the touch. Users activate control widgets by sliding a movable element, as though flipping a switch. These Sliding Widgets resolve selection ambiguity and provide designers with a rich vocabulary of self-disclosing interaction mechanisms. We showcase the design of several types of Sliding Widgets, and report study results showing that the simplest of these widgets, the Sliding Button, performs on-par with medium-sized pushbuttons and offers greater accuracy for small-sized buttons.
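
Contact area interaction changes both hit testing (everything under the finger responds) and activation (a slide, not a tap, commits). A compact sketch of both halves, with invented names and the contact simplified to a circle:

```python
from dataclasses import dataclass

@dataclass
class SlidingButton:
    x: float; y: float; w: float; h: float
    travel: float = 30.0   # slide distance (px) needed to activate

    def under_contact(self, cx: float, cy: float, r: float) -> bool:
        """True if the finger's contact circle overlaps this widget,
        so all overlapped widgets respond, not one pixel target."""
        nx = min(max(cx, self.x), self.x + self.w)  # nearest rect point
        ny = min(max(cy, self.y), self.y + self.h)
        return (cx - nx) ** 2 + (cy - ny) ** 2 <= r * r

    def activates(self, slide_dx: float) -> bool:
        """Fire only after the movable element is slid through its full
        travel, which resolves which overlapped widget was intended."""
        return abs(slide_dx) >= self.travel
```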

In Proceedings of UIST 2010

Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop (p. 77-86)

Abstract

Efficiently entering text on interactive surfaces, such as touch-based tabletops, is an important concern. One novel solution is shape writing - the user strokes through all the letters in the word on a virtual keyboard without lifting his or her finger. While this technique can be used with any keyboard layout, the layout does impact the expected performance. In this paper, I investigate the influence of keyboard layout on expert text-entry performance for stroke-based text entry. Based on empirical data, I create a model of stroking through a series of points based on Fitts's law. I then use that model to evaluate various keyboard layouts for both tapping and stroking input. While the stroke-based technique seems promising by itself (i.e., there is a predicted gain of 17.3% for a Qwerty layout), significant additional gains can be made by using a more-suitable keyboard layout (e.g., the OPTI II layout is predicted to be 29.5% faster than Qwerty).
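
The layout comparison rests on a Fitts-style cost summed over the key-to-key segments of a stroked word. The sketch below shows that shape of model with placeholder coefficients and an unstaggered Qwerty grid; the paper fits its own stroking model to empirical data.

```python
import math

A, B = 0.05, 0.12   # placeholder Fitts coefficients (s, s/bit)
KEY_W = 1.0         # key width in key units

QWERTY = {c: (x, y)
          for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
          for x, c in enumerate(row)}

def stroke_time(word: str) -> float:
    """Predicted time to stroke through a word: a + b*log2(d/w + 1)
    summed over successive letter-to-letter segments."""
    return sum(A + B * math.log2(math.dist(QWERTY[p], QWERTY[q]) / KEY_W + 1)
               for p, q in zip(word, word[1:]))

print(f"{stroke_time('hello'):.2f} s")   # a cost to compare across layouts
```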

two-handed input

In Proceedings of UIST 1998

Interaction and modeling techniques for desktop two-handed input (p. 49-58)

In Proceedings of UIST 1999

The role of kinesthetic reference frames in two-handed input performance (p. 171-178)

In Proceedings of UIST 2000

ToolStone: effective use of the physical manipulation vocabularies of input devices (p. 109-117)

In Proceedings of UIST 2000

The architecture and implementation of CPN2000, a post-WIMP graphical application (p. 181-190)

In Proceedings of UIST 2004

Tangible NURBS-curve manipulation techniques using graspable handles on a large display (p. 81-90)