UIST2.0 Archive - 20 years of UIST
Keywords

direct

direct data editing

In Proceedings of UIST 2006
From information visualization to direct manipulation: extending a generic visualization framework for the interactive editing of large datasets (p. 67-76)

direct display

direct manipulating

In Proceedings of UIST 1996
Adding a collaborative agent to graphical user interfaces (p. 21-30)

direct manipulation

In Proceedings of UIST 1992
Adding rule-based reasoning to a demonstrational interface builder (p. 89-97)

In Proceedings of UIST 1992
Declarative programming of graphical interfaces by visual examples (p. 107-116)

In Proceedings of UIST 1992
Graphical styles for building interfaces by demonstration (p. 117-124)

In Proceedings of UIST 1993
Converting an existing user interface to use constraints (p. 207-215)

In Proceedings of UIST 1994
Reconnaissance support for juggling multiple processing options (p. 27-28)

In Proceedings of UIST 1994
An architecture for an extensible 3D interface toolkit (p. 59-67)

In Proceedings of UIST 1994
Extending a graphical toolkit for two-handed interaction (p. 195-204)

In Proceedings of UIST 1995
Animating direct manipulation interfaces (p. 3-12)

In Proceedings of UIST 1995
Directness and liveness in the morphic user interface construction environment (p. 21-28)

In Proceedings of UIST 1995
SDM: selective dynamic manipulation of visualizations (p. 61-70)

In Proceedings of UIST 1996
A new direct manipulation technique for aligning objects in drawing programs (p. 157-164)

In Proceedings of UIST 1997
Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

In Proceedings of UIST 1999
Integrated manipulation: context-aware manipulation of 2D diagrams (p. 159-160)

In Proceedings of UIST 2001
Voice as sound: using non-verbal voice input for interactive control (p. 155-156)

In Proceedings of UIST 2002
Dynamic approximation of complex graphical constraints by linear constraints (p. 191-200)

In Proceedings of UIST 2005
Informal prototyping of continuous graphical interactions by demonstration (p. 221-230)

In Proceedings of UIST 2006
From information visualization to direct manipulation: extending a generic visualization framework for the interactive editing of large datasets (p. 67-76)

In Proceedings of UIST 2007
Bubble clusters: an interface for manipulating spatial aggregation of graphical objects (p. 173-182)

Abstract

Spatial layout is frequently used for managing loosely organized information, such as desktop icons and digital ink. To help users organize this type of information efficiently, we propose an interface for manipulating spatial aggregations of objects. The aggregated objects are automatically recognized as a group, and the group structure is visualized as a two-dimensional bubble surface that surrounds the objects. Users can drag, copy, or delete a group by operating on the bubble. Furthermore, to help users pick out individual objects in a dense aggregation, the system spreads the objects apart on request to avoid overlap. This paper describes the design of this interface and its implementation. We tested our technique in icon grouping and ink relocation tasks and observed improvements in user performance.
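
A minimal sketch of how such bubble grouping can be computed, assuming an inverse-square influence field per object; the RADIUS and THRESHOLD constants are invented for illustration, and this is the general idea rather than the authors' implementation:

```python
THRESHOLD = 1.0   # field value defining the bubble boundary (invented)
RADIUS = 40.0     # per-object influence radius in pixels (invented)

def field_at(p, centers):
    """Summed inverse-square influence of all object centers at point p."""
    return sum(RADIUS**2 / ((p[0] - c[0])**2 + (p[1] - c[1])**2 + 1e-6)
               for c in centers)

def connected(a, b, centers, samples=16):
    """True if the field stays above THRESHOLD along the segment a-b."""
    for i in range(samples + 1):
        t = i / samples
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        if field_at(p, centers) < THRESHOLD:
            return False
    return True

def group(centers):
    """Union-find over pairwise bubble connectivity."""
    parent = list(range(len(centers)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if connected(centers[i], centers[j], centers):
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(centers))]

print(group([(0, 0), (30, 0), (300, 300)]))  # -> [1, 1, 2]: first two merge
```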

In Proceedings of UIST 2008
Video object annotation, navigation, and composition (p. 3-12)

Abstract

We explore the use of tracked 2D object motion to enable novel approaches to interacting with video. These include moving annotations, video navigation by direct manipulation of objects, and image composition from multiple video frames. Features in the video are automatically tracked and grouped in an off-line preprocess that enables later interactive manipulation. Examples of annotations include speech and thought balloons, video graffiti, path arrows, video hyperlinks, and schematic storyboards. We also demonstrate a direct-manipulation interface for random frame access using spatial constraints, and a drag-and-drop interface for assembling still images from videos. Taken together, our tools can be employed in a variety of applications including film and video editing, visual tagging, and authoring rich media such as hyperlinked video.
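
A sketch of the navigation-by-dragging idea, under the assumption that a per-frame track of the object's center is already available; the hypothetical frame_for_pointer helper simply seeks to the frame whose tracked position best matches the pointer, and is not the authors' code:

```python
def frame_for_pointer(track, pointer):
    """track: list of (x, y) object centers per frame; returns best frame."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(range(len(track)), key=lambda f: dist2(track[f], pointer))

# An object moving left to right across five frames:
track = [(10, 50), (30, 50), (50, 50), (70, 50), (90, 50)]
print(frame_for_pointer(track, (65, 48)))  # -> 3: seek to the fourth frame
```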

In Proceedings of UIST 2008
Extending 2D object arrangement with pressure-sensitive layering cues (p. 87-90)

Abstract

We demonstrate a pressure-sensitive depth sorting technique that extends standard two-dimensional (2D) manipulation techniques, particularly those used with multi-touch or multi-point controls. We combine this layering operation with a page-folding metaphor for more fluid interaction in applications requiring 2D sorting and layout.
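
One way to read the core idea in code, with an assumed normalized pressure threshold deciding whether a press sends the touched object to the back or the front; this is a simplified sketch, not the paper's layering algorithm:

```python
PRESSURE_PUSH = 0.7  # normalized pressure that triggers send-to-back (invented)

def relayer(order, obj, pressure):
    """order: bottom-to-top list of object ids; returns a new order."""
    order = [o for o in order if o != obj]
    if pressure >= PRESSURE_PUSH:
        order.insert(0, obj)   # hard press: send the object to the back
    else:
        order.append(obj)      # light touch: bring the object to the front
    return order

print(relayer(["a", "b", "c"], "b", 0.9))  # -> ['b', 'a', 'c']
```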

In Proceedings of UIST 2009
PhotoelasticTouch: transparent rubbery tangible interface using an LCD and photoelasticity (p. 43-50)

Abstract

PhotoelasticTouch is a novel tabletop system designed to intuitively facilitate touch-based interaction via real objects made from transparent elastic material. The system utilizes vision-based recognition techniques and the photoelastic properties of the transparent rubber to recognize deformed regions of the elastic material. Our system works with elastic materials of a wide variety of shapes and does not require any explicit visual markers. Compared to traditional interactive surfaces, our 2.5-dimensional interface enables direct touch interaction and soft tactile feedback. In this paper we present our force-sensing technique using photoelasticity and describe the implementation of our prototype system. We also present three practical applications of PhotoelasticTouch: a force-sensitive touch panel, a tangible face application, and a paint application.
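
A rough sketch of the sensing principle, assuming a camera viewing the rubber through crossed polarizers so that stressed regions brighten; the baseline and gain values are invented for illustration and do not come from the paper:

```python
def estimate_force(region, baseline=0.05, gain=1.0):
    """region: 2D brightness values in [0, 1] seen through crossed polarizers."""
    excess = sum(max(0.0, v - baseline) for row in region for v in row)
    return gain * excess

print(estimate_force([[0.02, 0.40], [0.35, 0.03]]))  # -> ~0.65
```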

In Proceedings of UIST 2009
A screen-space formulation for 2D and 3D direct manipulation (p. 69-78)

Abstract

Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method that fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just as RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions that take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.
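
For reference, the classic closed-form two-point solve for 2D RST, which the paper generalizes to 3D with three or more points; this is the standard formulation rather than the authors' screen-space solver. Complex numbers encode rotation and scale in one factor, so z' = m*z + t maps both old touch points exactly onto the new ones:

```python
import cmath

def rst_from_two_touches(a1, a2, b1, b2):
    """a1, a2: old touch points; b1, b2: new. Returns (scale, angle, tx, ty)."""
    za1, za2 = complex(*a1), complex(*a2)
    zb1, zb2 = complex(*b1), complex(*b2)
    m = (zb2 - zb1) / (za2 - za1)   # rotation and scale as one complex factor
    t = zb1 - m * za1               # translation that pins the first touch
    return abs(m), cmath.phase(m), t.real, t.imag

# Two fingers: the pair doubles in length and turns 90 degrees.
print(rst_from_two_touches((0, 0), (1, 0), (0, 0), (0, 2)))
# -> (2.0, 1.5707963..., 0.0, 0.0): scale 2, rotate pi/2, no translation
```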

In Proceedings of UIST 2010
UIMarks: quick graphical interaction with specific targets (p. 173-182)

Abstract

This paper reports on the design and evaluation of UIMarks, a system that lets users specify on-screen targets and associated actions by means of a graphical marking language. UIMarks supplements traditional pointing by providing an alternative mode in which users can quickly activate these marks. Associated actions can range from basic pointing facilitation to complex sequences possibly involving user interaction: one can leave a mark on a palette to make it more reachable, but the mark can also be configured to wait for a click and then automatically move the pointer back to its original location, for example. The system has been implemented on two different platforms, Metisse and OS X. We compared it to traditional pointing on a set of elementary and composite tasks in an abstract setting. Although pure pointing was not improved, the programmable automation supported by the system proved very effective.
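
A toy sketch of what a mark record and its activation might look like, using invented names (Mark, activate, the action strings) rather than the actual UIMarks marking language:

```python
from dataclasses import dataclass, field

@dataclass
class Mark:
    target: tuple                                 # (x, y) screen position
    actions: list = field(default_factory=list)   # action names, run in order

def activate(mark, dispatch):
    """Run the mark's actions through a host-provided dispatch table."""
    for name in mark.actions:
        dispatch[name](mark)

# The palette example from the abstract: jump, wait for a click, jump back.
palette_mark = Mark((1200, 80),
                    ["move_pointer", "wait_for_click", "return_pointer"])
dispatch = {n: (lambda m, n=n: print(n, "at", m.target))
            for n in palette_mark.actions}
activate(palette_mark, dispatch)
```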

In Proceedings of UIST 2010
Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input (p. 209-218)

Abstract

Tabletop applications cannot display more than a few dozen on-screen objects. The reason is their limited size: tables cannot become larger than arm's length without giving up direct touch. We propose creating direct touch surfaces that are orders of magnitude larger. We approach this challenge by integrating high-resolution multitouch input into a back-projected floor. At the same time, we maintain the purpose and interaction concepts of tabletop computers, namely direct manipulation.

We base our hardware design on frustrated total internal reflection. Its ability to sense per-pixel pressure allows the floor to locate and analyze users' soles. We demonstrate how this allows the floor to recognize foot postures and identify users. These two functions form the basis of our system. They allow the floor to ignore users unless they interact explicitly, identify and track users based on their shoes, enable high-precision interaction, invoke menus, track heads, and allow users to control high-degree-of-freedom interactions using their feet. While we base our designs on a series of simple user studies, the primary contribution of this paper is in the engineering domain.
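
As a minimal illustration of the first processing step such a floor needs, here is a sketch that segments a per-pixel pressure image into contact blobs by flood fill; the threshold and the pipeline itself are assumptions, not the Multitoe implementation:

```python
def find_soles(pressure, threshold=0.2):
    """pressure: 2D list of floats in [0, 1]; returns (centroid, total) blobs."""
    h, w = len(pressure), len(pressure[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if pressure[y][x] >= threshold and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:                      # flood fill one blob
                    py, px = stack.pop()
                    pixels.append((py, px))
                    for ny, nx in ((py+1, px), (py-1, px), (py, px+1), (py, px-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and pressure[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                total = sum(pressure[py][px] for py, px in pixels)
                cy = sum(py for py, _ in pixels) / len(pixels)
                cx = sum(px for _, px in pixels) / len(pixels)
                blobs.append(((cy, cx), total))
    return blobs

img = [[0.0, 0.0, 0.0, 0.0],
       [0.0, 0.6, 0.5, 0.0],
       [0.0, 0.7, 0.0, 0.4]]
print(find_soles(img))  # one 3-pixel contact patch plus a separate light tap
```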

direct touch

In Proceedings of UIST 2007
Lucid touch: a see-through mobile device (p. 269-278)

Abstract

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements they wish to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results indicating that many users found touching the back preferable to touching the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.
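
The pseudo-transparency effect reduces, at its simplest, to an alpha blend of a segmented hand image over the UI; the sketch below uses an invented HAND_ALPHA and per-pixel mask, and is only an illustration of the idea:

```python
HAND_ALPHA = 0.35  # hand overlay opacity, an invented value

def composite(ui_px, hand_px, hand_mask):
    """Blend one RGB pixel; hand_mask is 1.0 where a hand was segmented."""
    a = HAND_ALPHA * hand_mask
    return tuple(round((1 - a) * u + a * h) for u, h in zip(ui_px, hand_px))

# A skin-toned hand pixel ghosted over a white button:
print(composite((255, 255, 255), (90, 60, 50), 1.0))  # -> (197, 187, 183)
```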

direct touch interaction