Keywords
UIST2.0 Archive - 20 years of UIST

interface

10 foot user interface

In Proceedings of UIST 2006

Soap: a pointing device that works in mid-air (p. 43-46)

3d user interface

In Proceedings of UIST 1994

3D widgets for exploratory scientific visualization (p. 69-70)

In Proceedings of UIST 1995

The virtual tricorder: a uniform interface for virtual reality (p. 39-40)

In Proceedings of UIST 1996

The go-go interaction technique: non-linear mapping for direct manipulation in VR (p. 79-80)

In Proceedings of UIST 1998

Data mountain: using spatial memory for document management (p. 153-162)

adaptable user interface

In Proceedings of UIST 2006

User interface façades: towards fully adaptable user interfaces (p. 309-318)

adaptive interface

In Proceedings of UIST 1997

CyberDesk: a framework for providing self-integrating ubiquitous software services (p. 75-76)

In Proceedings of UIST 2010

Designing adaptive feedback for improving data entry accuracy (p. 239-248)

Abstract

Data quality is critical for many information-intensive applications. One of the best opportunities to improve data quality is during entry. Usher provides a theoretical, data-driven foundation for improving data quality during entry. Based on prior data, Usher learns a probabilistic model of the dependencies between form questions and values. Using this information, Usher maximizes information gain. By asking the most unpredictable questions first, Usher is better able to predict answers for the remaining questions. In this paper, we use Usher's predictive ability to design a number of intelligent user interface adaptations that improve data entry accuracy and efficiency. Based on an underlying cognitive model of data entry, we apply these modifications before, during and after committing an answer. We evaluated these mechanisms with professional data entry clerks working with real patient data from six clinics in rural Uganda. The results show that our adaptations have the potential to reduce error (by up to 78%), with limited effect on entry time (varying between -14% and +6%). We believe this approach has wide applicability for improving the quality and availability of data, which is increasingly important for decision-making and resource allocation.
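
As a rough illustration of the ordering idea in this abstract ("asking the most unpredictable questions first"), the sketch below ranks form questions by the entropy of their past answers. It is a minimal stand-in under stated assumptions, not Usher itself: the function names and sample data are hypothetical, and the real system recomputes predictions from a learned probabilistic model as answers are committed.

```python
import math
from collections import Counter

def entropy(answers):
    """Shannon entropy (bits) of previously observed answers."""
    counts = Counter(answers)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def order_questions(prior_data):
    """Ask the most unpredictable (highest-entropy) questions first.

    prior_data: dict mapping question -> list of past answers. A real
    system would recompute *conditional* entropy from its learned
    probabilistic model after each committed answer; this greedy
    marginal version only illustrates the ordering principle.
    """
    return sorted(prior_data, key=lambda q: entropy(prior_data[q]), reverse=True)

# Hypothetical prior data: 'region' is highly predictable, 'age' is not.
prior = {"region": ["north"] * 95 + ["south"] * 5,
         "age": [str(a) for a in range(20, 70)]}
print(order_questions(prior))   # -> ['age', 'region']
```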

adaptive user interface

In Proceedings of UIST 1994

Evolutionary learning of graph layout constraints from examples (p. 103-108)

aggregate user interface

asynchronous user interface

In Proceedings of UIST 2002

User interfaces when and where they are needed: an infrastructure for recombinant computing (p. 171-180)

attentive user interface

In Proceedings of UIST 2005

ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis (p. 53-61)

In Proceedings of UIST 2005

eyeLook: using attention to facilitate mobile media consumption (p. 103-106)

audio interface

In Proceedings of UIST 2009

User guided audio selection from complex sound mixtures (p. 89-92)

Abstract

In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds which is either a waveform, or some variation of a time/frequency transform. Although with these representations a user might be able to visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). This interface uses audio guidance from a user in order to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections which would have been infeasible otherwise.
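
A minimal sketch of the guidance idea, not the paper's algorithm: the user's vocalized guide becomes a soft time-frequency mask applied to the mixture's spectrogram. It assumes SciPy's STFT, crude length alignment, and a hypothetical `floor` parameter to avoid total silence; the published system's identification model is considerably more sophisticated.

```python
import numpy as np
from scipy.signal import stft, istft

def select_by_guide(mixture, guide, fs=16000, floor=0.05):
    """Emphasize the part of a mixture that resembles a user's guide
    sound (e.g. a hummed or vocalized imitation of the target)."""
    _, _, M = stft(mixture, fs=fs)
    _, _, G = stft(guide, fs=fs)
    n = min(M.shape[1], G.shape[1])               # crude length alignment
    mask = np.abs(G[:, :n]) / (np.abs(G).max() + 1e-9)
    _, y = istft(M[:, :n] * np.maximum(mask, floor), fs=fs)
    return y
```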

audio user interface

auditory interface

In Proceedings of UIST 1994

An architecture for transforming graphical interfaces (p. 39-47)

In Proceedings of UIST 1994

ENO: synthesizing structured sound spaces (p. 49-57)

auditory user interface

In Proceedings of UIST 1998

Audio hallway: a virtual acoustic environment for browsing (p. 163-170)

automatic interface generation

blowable user interface

In Proceedings of UIST 2007

Blui: low-cost localized blowable user interfaces (p. 217-220)

Abstract

We describe a unique form of hands-free interaction that can be implemented on most commodity computing platforms. Our approach supports blowing at a laptop or computer screen to directly control certain interactive applications. Localization estimates are produced in real-time to determine where on the screen the person is blowing. Our approach relies solely on a single microphone, such as those already embedded in a standard laptop or one placed near a computer monitor, which makes our approach very cost-effective and easy-to-deploy. We show example interaction techniques that leverage this approach.

brain-computer interface

In Proceedings of UIST 2006

Using a low-cost electroencephalograph for task classification in HCI research (p. 81-90)

In Proceedings of UIST 2009

Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines (p. 157-166)

Abstract

Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interfere with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.

collaborative interface

In Proceedings of UIST 1997

An interactive constraint-based system for drawing graphs (p. 97-104)

command line interface

In Proceedings of UIST 1997

A shared command line in a virtual space: the working man's MOO (p. 73-74)

conversational interface

In Proceedings of UIST 2003

TalkBack: a conversational answering machine (p. 41-50)

correction interface

In Proceedings of UIST 2006

CueTIP: a mixed-initiative interface for correcting handwriting errors (p. 323-332)

crossing based interface

In Proceedings of UIST 2004

CrossY: a crossing-based drawing application (p. 3-12)

crossing interface

In Proceedings of UIST 2008

Attribute gates (p. 57-66)

Abstract

Attribute gates are a new user interface element designed to address the problem of concurrently setting attributes and moving objects between territories on a digital tabletop. Motivated by the notion of task levels in activity theory, and crossing interfaces, attribute gates allow users to operationalize multiple subtasks in one smooth movement. We present two configurations of attribute gates; (1) grid gates which spatially distribute attribute values in a regular grid, and require users to draw trajectories through the attributes; (2) polar gates which distribute attribute values on segments of concentric rings, and require users to align segments when setting attribute combinations. The layout of both configurations was optimised based on targeting and steering laws derived from Fitts' Law. A study compared the use of attribute gates with traditional contextual menus. Users of attribute gates demonstrated both increased performance and higher mutual awareness.

crossing-based interface

In Proceedings of UIST 2004

Combining crossing-based and paper-based interaction paradigms for dragging and dropping between overlapping windows (p. 193-196)

database user interface

In Proceedings of UIST 2006

From information visualization to direct manipulation: extending a generic visualization framework for the interactive editing of large datasets (p. 67-76)

demonstrational interface

In Proceedings of UIST 1992

Some virtues and limitations of action inferring interfaces (p. 79-88)

In Proceedings of UIST 1992

Graphical styles for building interfaces by demonstration (p. 117-124)

In Proceedings of UIST 1996

Simplifying macro definition in programming by demonstration (p. 173-182)

distributed interface

In Proceedings of UIST 1996

XXL: a dual approach for building user interfaces (p. 99-108)

document interface

In Proceedings of UIST 1999

Using properties for uniform interaction in the Presto document system (p. 55-64)

dynamic interface

In Proceedings of UIST 1993

Animation support in a user interface toolkit: flexible, robust, and reusable abstractions (p. 57-67)

ethical and sustainable interface

In Proceedings of UIST 2010

Connected environments (p. 183-184)

Abstract

Can new interfaces contribute to social and environmental improvement? For all the care, wit and brilliance that UIST innovations can contribute, can they actually make things better - better in the sense of public good - not merely lead to easier to use or more efficient consumer goods? This talk will explore the impact of interface technology on society and the environment, and examine engineered systems that invite participation, document change over time, and suggest alternative courses of action that are ethical and sustainable, drawing on examples from a diverse series of experimental designs and site-specific work Natalie has created throughout her career.

exertion interface

In Proceedings of UIST 2010

Jogging over a distance between Europe and Australia (p. 189-198)

Abstract

Exertion activities, such as jogging, require users to invest intense physical effort and are associated with physical and social health benefits. Despite the benefits, our understanding of exertion activities is limited, especially when it comes to social experiences. In order to begin understanding how to design for technologically augmented social exertion experiences, we present "Jogging over a Distance", a system in which spatialized audio based on heart rate allowed runners as far apart as Europe and Australia to run together. Our analysis revealed how certain aspects of the design facilitated a social experience, and consequently we describe a framework for designing augmented exertion activities. We make recommendations as to how designers could use this framework to aid the development of future social systems that aim to utilize the benefits of exertion.

extensible interface

In Proceedings of UIST 1993

Animating user interfaces using animation servers (p. 69-79)

fluid user interface

In Proceedings of UIST 1998

A negotiation architecture for fluid documents (p. 123-132)

future of user interface

In Proceedings of UIST 2008

Interactive viscosity (p. 1-2)

Abstract

When the Macintosh first made graphical user interfaces popular, the notion of each person having their own computer was novel. Today's technology landscape is characterized by multiple computers per person, many with far more capacity than that original Mac. The world of input devices, display devices and interactive techniques is far richer than in those Macintosh days. Despite all of this diversity in possible interactions, very few of these integrate well with each other. The monolithic, isolated user interface architecture that characterized the Macintosh still dominates a great deal of today's personal computing. This talk will explore possible ways to change that architecture so that information, interaction and communication flow more smoothly among our devices and those of our associates.

gestural interface

In Proceedings of UIST 2001

A suggestive interface for 3D drawing (p. 173-181)

In Proceedings of UIST 2009

EverybodyLovesSketch: 3D sketching for a broader audience (p. 59-68)

Abstract

We present EverybodyLovesSketch, a gesture-based 3D curve sketching system for rapid ideation and visualization of 3D forms, aimed at a broad audience. We first analyze traditional perspective drawing in professional practice. We then design a system built upon the paradigm of ILoveSketch, a 3D curve drawing system for design professionals. The new system incorporates many interaction aspects of perspective drawing with judicious automation to enable novices with no perspective training to proficiently create 3D curve sketches. EverybodyLovesSketch supports a number of novel interactions: tick-based sketch plane selection, single view definition of arbitrary extrusion vectors, multiple extruded surface sketching, copy-and-project of 3D curves, freeform surface sketching, and an interactive perspective grid. Finally, we present a study involving 49 high school students (with no formal artistic training) who each learned and used the system over 11 days, which provides detailed insights into the popularity, power and usability of the various techniques, and shows our system to be easily learnt and effectively used, with broad appeal.

gestural interface user interface design

graphical interface

In Proceedings of UIST 1995

Animating direct manipulation interfaces (p. 3-12)

graphical user interface

In Proceedings of UIST 1992

Declarative programming of graphical interfaces by visual examples (p. 107-116)

In Proceedings of UIST 1992

TelePICTIVE: computer-supported collaborative GUI design for designers with diverse expertise (p. 151-160)

In Proceedings of UIST 1992

Progress in building user interface toolkits: the world according to XIT (p. 181-190)

In Proceedings of UIST 1993

Pacers: time-elastic objects (p. 35-43)

In Proceedings of UIST 1993

Window real objects: a distributed shared memory for distributed implementation of GUI applications (p. 237-247)

In Proceedings of UIST 1994

Reconnaissance support for juggling multiple processing options (p. 27-28)

In Proceedings of UIST 1994

Interactive generation of graphical user interfaces by multiple visual examples (p. 85-94)

In Proceedings of UIST 1995

The continuous zoom: a constrained fisheye technique for viewing and navigating large information spaces (p. 207-215)

In Proceedings of UIST 1997

Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

In Proceedings of UIST 1998

An insidious Haptic invasion: adding force feedback to the X desktop (p. 59-64)

graphics user interface (gui)

In Proceedings of UIST 2005

Predictive interaction using the delphian desktop (p. 133-141)

graspable user interface

In Proceedings of UIST 2004

Tangible NURBS-curve manipulation techniques using graspable handles on a large display (p. 81-90)

haptic interface

In Proceedings of UIST 2007

Robust, low-cost, non-intrusive sensing and recognition of seated postures (p. 149-158)

Abstract

In this paper, we present a methodology for recognizing seated postures using data from pressure sensors installed on a chair. Information about seated postures could be used to help avoid adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. Our system design displays accurate near-real-time classification performance on data from subjects on which the posture recognition system was not trained by using a set of carefully designed, subject-invariant signal features. By using a near-optimal sensor placement strategy, we keep the number of required sensors low, thereby reducing cost and computational complexity. We evaluated the performance of our technology using a series of empirical methods including (1) cross-validation (classification accuracy of 87% for ten postures using data from 31 sensors), and (2) a physical deployment of our system (78% classification accuracy using data from 19 sensors).

In Proceedings of UIST 2010

Gilded gait: reshaping the urban experience with augmented footsteps (p. 185-188)

Abstract

In this paper we describe Gilded Gait, a system that changes the perceived physical texture of the ground, as felt through the soles of users' feet. Ground texture, in spite of its potential as an effective channel of peripheral information display, has so far been paid little attention in HCI research. The system is designed as a pair of insoles with embedded actuators, and utilizes vibrotactile feedback to simulate the perceptions of a range of different ground textures. The discreet, low-key nature of the interface makes it particularly suited for outdoor use, and its capacity to alter how people experience the built environment may open new possibilities in urban design.

haptic user interface

In Proceedings of UIST 1998

An insidious Haptic invasion: adding force feedback to the X desktop (p. 59-64)

human-computer interface

In Proceedings of UIST 1995

Social activity indicators: interface components for CSCW systems (p. 159-168)

hybrid paper electronic interface

In Proceedings of UIST 1999

Linking and messaging from real paper in the Paper PDA (p. 179-186)

informal interface

In Proceedings of UIST 2001
Article Picture

The designers' outpost: a tangible interface for collaborative web site design (p. 1-10)

In Proceedings of UIST 2007

SketchWizard: Wizard of Oz prototyping of pen-based user interfaces (p. 119-128)

Abstract

SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.

informal user interface

In Proceedings of UIST 2000

Suede: a Wizard of Oz prototyping tool for speech user interfaces (p. 1-10)

In Proceedings of UIST 2004

Topiary: a tool for prototyping location-enhanced applications (p. 217-226)

intelligent multimedia interface

In Proceedings of UIST 2004

An optimization-based approach to dynamic data content selection in intelligent multimedia interfaces (p. 227-236)

intelligent user interface

In Proceedings of UIST 2004

Citrine: providing intelligent copy-and-paste (p. 185-188)

interactive user interface

In Proceedings of UIST 1994

Pad++: a zooming graphical interface for exploring alternate interface physics (p. 17-26)

interface

In Proceedings of UIST 1996

The Lego interface toolkit (p. 97-98)

In Proceedings of UIST 2006

Procedural haptic texture (p. 179-186)

In Proceedings of UIST 2007

Blui: low-cost localized blowable user interfaces (p. 217-220)

Abstract

We describe a unique form of hands-free interaction that can be implemented on most commodity computing platforms. Our approach supports blowing at a laptop or computer screen to directly control certain interactive applications. Localization estimates are produced in real-time to determine where on the screen the person is blowing. Our approach relies solely on a single microphone, such as those already embedded in a standard laptop or one placed near a computer monitor, which makes our approach very cost-effective and easy-to-deploy. We show example interaction techniques that leverage this approach.

interface builder

In Proceedings of UIST 1992

Adding rule-based reasoning to a demonstrational interface builder (p. 89-97)

In Proceedings of UIST 1996

XXL: a dual approach for building user interfaces (p. 99-108)

In Proceedings of UIST 2005

Citrus: a language and toolkit for simplifying the creation of structured editors for code and data (p. 3-12)

interface component

In Proceedings of UIST 2000

Providing visually rich resizable images for user interface components (p. 227-235)

interface design

interface design issue

In Proceedings of UIST 1995

3-dimensional pliable surfaces: for the effective presentation of visual information (p. 217-226)

In Proceedings of UIST 2001

A framework for unifying presentation space (p. 61-70)

interface feedback

In Proceedings of UIST 2000

Illusions of infinity: feedback for infinite worlds (p. 237-238)

interface metaphor

In Proceedings of UIST 1993

Stretching the rubber sheet: a metaphor for viewing large layouts on small screens (p. 81-91)

In Proceedings of UIST 1994

Translucent patches---dissolving windows (p. 121-130)

In Proceedings of UIST 1995

3-dimensional pliable surfaces: for the effective presentation of visual information (p. 217-226)

In Proceedings of UIST 2001

A framework for unifying presentation space (p. 61-70)

mark-based user interface

In Proceedings of UIST 2007

SketchWizard: Wizard of Oz prototyping of pen-based user interfaces (p. 119-128)

Abstract

SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.

marking interface

In Proceedings of UIST 2006

CINCH: a cooperatively designed marking interface for 3D pathway selection (p. 33-42)

mobile device and interface

In Proceedings of UIST 2002

Ambient touch: designing tactile interfaces for handheld devices (p. 51-60)

mobile interface

In Proceedings of UIST 2001

Empirical measurements of intrabody communication performance under varied physical configurations (p. 183-190)

mobile interface design

In Proceedings of UIST 2000

Multimodal system processing in mobile environments (p. 21-30)

model-based user interface

model-based user interface design

In Proceedings of UIST 1993

Model-based user interface design by example and by interview (p. 129-137)

multi-computer user interface

In Proceedings of UIST 1997

Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

In Proceedings of UIST 1998

A user interface using fingerprint recognition: holding commands and data objects on fingers (p. 71-79)

multi-touch interface

In Proceedings of UIST 2006

A direct texture placement and editing interface (p. 23-32)

multi-user interface

In Proceedings of UIST 1993

Window real objects: a distributed shared memory for distributed implementation of GUI applications (p. 237-247)

In Proceedings of UIST 1996

A mechanism for supporting client migration in a shared window system (p. 11-20)

In Proceedings of UIST 2003

Synchronous gestures for multiple persons and computers (p. 149-158)

In Proceedings of UIST 2003

Dynamo: a public interactive surface supporting the cooperative sharing and exchange of media (p. 159-168)

multimedia interface

In Proceedings of UIST 2003

Rapid serial visual presentation techniques for consumer digital video devices (p. 115-124)

multimedia user interface

In Proceedings of UIST 1992

Programming time in multimedia user interfaces (p. 125-134)

multimodal interface

In Proceedings of UIST 1992

Some virtues and limitations of action inferring interfaces (p. 79-88)

In Proceedings of UIST 1994

An architecture for transforming graphical interfaces (p. 39-47)

In Proceedings of UIST 1994

ENO: synthesizing structured sound spaces (p. 49-57)

In Proceedings of UIST 1999

Multimodal agent interface based on dynamical dialogue model: MAICO: multimodal agent interface for communication (p. 69-70)

multimodal user interface

In Proceedings of UIST 1998

A user interface using fingerprint recognition: holding commands and data objects on fingers (p. 71-79)

multiscale interface

In Proceedings of UIST 1994

Pad++: a zooming graphical interface for exploring alternate interface physics (p. 17-26)

In Proceedings of UIST 1998

Constant density visualizations of non-uniform distributions of data (p. 19-28)

In Proceedings of UIST 2000

Illusions of infinity: feedback for infinite worlds (p. 237-238)

multiscale zoomable interface

muscle-computer interface

In Proceedings of UIST 2009

Enabling always-available input with muscle-computer interfaces (p. 167-176)

Abstract

Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
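
To make the pipeline concrete, here is a minimal sketch of gesture classification from windowed forearm EMG, assuming per-channel RMS features and a nearest-centroid classifier. Both choices are illustrative assumptions; the paper uses a richer, carefully designed feature set and classifier.

```python
import numpy as np

def emg_features(window):
    """Per-channel RMS over a short window of multi-channel forearm EMG.
    window: array of shape (samples, channels). RMS is a common EMG
    feature; the paper's feature set is richer than this."""
    return np.sqrt(np.mean(window.astype(float) ** 2, axis=0))

def classify_gesture(window, centroids):
    """Nearest-centroid stand-in for the paper's classifier.
    centroids: dict mapping gesture name -> mean training feature vector."""
    f = emg_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

# Hypothetical usage with two gestures over 8 EMG channels:
# centroids = {"pinch": mean_pinch_features, "squeeze": mean_squeeze_features}
# label = classify_gesture(latest_window, centroids)
```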

music-playback interface

In Proceedings of UIST 2003

SmartMusicKIOSK: music listening station with chorus-search function (p. 31-40)

natural language interface

In Proceedings of UIST 2010

A conversational interface to web automation (p. 229-238)

Abstract

This paper presents CoCo, a system that automates web tasks on a user's behalf through an interactive conversational interface. Given a short command such as "get road conditions for highway 88," CoCo synthesizes a plan to accomplish the task, executes it on the web, extracts an informative response, and returns the result to the user as a snippet of text. A novel aspect of our approach is that we leverage a repository of previously recorded web scripts and the user's personal web browsing history to determine how to complete each requested task. This paper describes the design and implementation of our system, along with the results of a brief user study that evaluates how likely users are to understand what CoCo does for them.

nested interface

In Proceedings of UIST 1999

Nested user interface components (p. 11-18)

non-visual interface

In Proceedings of UIST 2010

VizWiz: nearly real-time answers to visual questions (p. 333-342)

Abstract

The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real-time - asking multiple people on the web. To support answering questions quickly, we introduce a general approach for intelligently recruiting human workers in advance called quikTurkit so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.

object-oriented user interface toolkit

In Proceedings of UIST 1993

Animation support in a user interface toolkit: flexible, robust, and reusable abstractions (p. 57-67)

organic user interface

In Proceedings of UIST 2008

Towards more paper-like input: flexible input devices for foldable interaction styles (p. 283-286)

Abstract

This paper presents Foldable User Interfaces (FUI), a combination of a 3D GUI with windows imbued with the physics of paper, and Foldable Input Devices (FIDs). FIDs are sheets of paper that allow realistic transformations of graphical sheets in the FUI. Foldable input devices are made out of construction paper augmented with IR reflectors, and tracked by computer vision. Window sheets can be picked up and flexed with simple movements and deformations of the FID. FIDs allow a diverse lexicon of one-handed and two-handed interaction techniques, including folding, bending, flipping and stacking. We show how these can be used to ease the creation of simple 3D models, but also for tasks such as page navigation.

orientation aware interface

In Proceedings of UIST 2009

Detecting and leveraging finger orientation for interaction with direct-touch surfaces (p. 23-32)

Abstract

Current interactions on direct-touch interactive surfaces are often modeled based on properties of the input channel that are common in traditional graphical user interfaces (GUI) such as x-y coordinate information. Leveraging additional information available on the surfaces could potentially result in richer and novel interactions. In this paper we specifically explore the role of finger orientation. This property is typically ignored in touch-based interactions partly because of the ambiguity in determining it solely from the contact shape. We present a simple algorithm that unambiguously detects the directed finger orientation vector in real-time from contact information only, by considering the dynamics of the finger landing process. Results of an experimental evaluation show that our algorithm is stable and accurate. We then demonstrate how finger orientation can be leveraged to enable novel interactions and to infer higher-level information such as hand occlusion or user position. We present a set of orientation-aware interaction techniques and widgets for direct-touch surfaces.
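
The sketch below illustrates the general approach under stated assumptions: the undirected contact axis comes from second-order image moments, and the 180-degree ambiguity is resolved using centroid drift during the finger-landing process mentioned in the abstract. The helper signature and details are hypothetical, not the published algorithm.

```python
import numpy as np

def finger_orientation(contact, early_centroid, late_centroid):
    """Directed finger orientation (degrees) from one touch contact.

    contact: 2D intensity image of the contact area. The major axis of
    the contact ellipse gives an undirected orientation; the centroid
    drift between an early and a late frame of the landing picks the
    direction (fingertip lands first, the pad settles behind it)."""
    ys, xs = np.nonzero(contact)
    w = contact[ys, xs].astype(float)
    mx, my = np.average(xs, weights=w), np.average(ys, weights=w)
    cov = np.cov(np.vstack([xs - mx, ys - my]), aweights=w)
    axis = np.linalg.eigh(cov)[1][:, -1]      # major axis, undirected
    drift = np.subtract(late_centroid, early_centroid)
    if np.dot(axis, drift) < 0:               # align with landing drift
        axis = -axis
    return float(np.degrees(np.arctan2(axis[1], axis[0])))
```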

painting interface

In Proceedings of UIST 2005

Mediating photo collage authoring (p. 183-186)

paper based user interface

In Proceedings of UIST 2003

Paper augmented digital documents (p. 51-60)

paper interface

In Proceedings of UIST 2008

Towards more paper-like input: flexible input devices for foldable interaction styles (p. 283-286)

Abstract

This paper presents Foldable User Interfaces (FUI), a combination of a 3D GUI with windows imbued with the physics of paper, and Foldable Input Devices (FIDs). FIDs are sheets of paper that allow realistic transformations of graphical sheets in the FUI. Foldable input devices are made out of construction paper augmented with IR reflectors, and tracked by computer vision. Window sheets can be picked up and flexed with simple movements and deformations of the FID. FIDs allow a diverse lexicon of one-handed and two-handed interaction techniques, including folding, bending, flipping and stacking. We show how these can be used to ease the creation of simple 3D models, but also for tasks such as page navigation.

paper-based interface

In Proceedings of UIST 2006

Pen-top feedback for paper-based interfaces (p. 201-210)

pen based interface

In Proceedings of UIST 1994

Translucent patches---dissolving windows (p. 121-130)

pen based user interface

pen interface

In Proceedings of UIST 1997

Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

In Proceedings of UIST 2000

Dual touch: a two-handed interface for pen-based PDAs (p. 211-212)

In Proceedings of UIST 2006

Mobile interaction using paperweight metaphor (p. 111-114)

In Proceedings of UIST 2006

Pen-top feedback for paper-based interfaces (p. 201-210)

pen user interface

In Proceedings of UIST 2004

A remote control interface for large displays (p. 127-136)

pen-based interface

In Proceedings of UIST 2003

Fluid interaction techniques for the control and annotation of digital video (p. 105-114)

In Proceedings of UIST 2005

Zliding: fluid zooming and sliding for high precision parameter manipulation (p. 143-152)

In Proceedings of UIST 2006

CINCH: a cooperatively designed marking interface for 3D pathway selection (p. 33-42)

pen-based user interface

In Proceedings of UIST 2005

Informal prototyping of continuous graphical interactions by demonstration (p. 221-230)

In Proceedings of UIST 2007

SketchWizard: Wizard of Oz prototyping of pen-based user interfaces (p. 119-128)

Abstract

SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.

perspective-aware interface

In Proceedings of UIST 2007

E-conic: a perspective-aware interface for multi-display environments (p. 279-288)

Abstract

Multi-display environments compose displays that can be at different locations from and different angles to the user; as a result, it can become very difficult to manage windows, read text, and manipulate objects. We investigate the idea of perspective as a way to solve these problems in multi-display environments. We first identify basic display and control factors that are affected by perspective, such as visibility, fracture, and sharing. We then present the design and implementation of E-conic, a multi-display multi-user environment that uses location data about displays and users to dynamically correct perspective. We carried out a controlled experiment to test the benefits of perspective correction in basic interaction tasks like targeting, steering, aligning, pattern-matching and reading. Our results show that perspective correction significantly and substantially improves user performance in all these tasks.

physical interface

In Proceedings of UIST 2005

DT controls: adding identity to physical interfaces (p. 245-252)

physical user interface

In Proceedings of UIST 1998

Informative things: how to attach information to the real world (p. 81-88)

In Proceedings of UIST 2000

ToolStone: effective use of the physical manipulation vocabularies of input devices (p. 109-117)

post-wimp interface

In Proceedings of UIST 2000

The architecture and implementation of CPN2000, a post-WIMP graphical application (p. 181-190)

previewable user interface

In Proceedings of UIST 2003

PreSense: interaction techniques for finger sensing input devices (p. 203-212)

rapid prototyping of physical interface

In Proceedings of UIST 2006

Rapid construction of functioning physical interfaces from cardboard, thumbtacks, tin foil and masking tape (p. 289-298)

real-world interface

In Proceedings of UIST 1999

Real-world interaction using the FieldMouse (p. 113-119)

robotic user interface

In Proceedings of UIST 2005

Physical embodiments for mobile communication agents (p. 231-240)

small screen interface

In Proceedings of UIST 1996

Tilting operations for small screen interfaces (p. 167-168)

smooth interface

In Proceedings of UIST 1995

Multiple-view approach for smooth information retrieval (p. 199-206)

spatially-aware interface

In Proceedings of UIST 2005

Sensing and visualizing spatial relations of mobile devices (p. 93-102)

speech interface

In Proceedings of UIST 2000

Cross-modal interaction using XWeb (p. 191-200)

speech interface for children

In Proceedings of UIST 1995

Demonstration of a reading coach that listens (p. 77-78)

speech user interface

In Proceedings of UIST 1993

SpeechSkimmer: interactively skimming recorded speech (p. 187-196)

In Proceedings of UIST 1995

A tool to support speech and non-speech audio feedback generation in audio interfaces (p. 171-179)

In Proceedings of UIST 2000

Suede: a Wizard of Oz prototyping tool for speech user interfaces (p. 1-10)

In Proceedings of UIST 2004

Augmenting conversations using dual-purpose speech (p. 237-246)

spoken language interface

In Proceedings of UIST 2002

Query-by-critique: spoken language access to large lists (p. 131-140)

stylus interface

In Proceedings of UIST 1997

Pick-and-drop: a direct manipulation technique for multiple computer environments (p. 31-39)

subjunctive interface

In Proceedings of UIST 2006

RecipeSheet: creating, combining and controlling information processors (p. 145-154)

tabletop interface

In Proceedings of UIST 2009

PhotoelasticTouch: transparent rubbery tangible interface using an LCD and photoelasticity (p. 43-50)

Abstract

PhotoelasticTouch is a novel tabletop system designed to intuitively facilitate touch-based interaction via real objects made from transparent elastic material. The system utilizes vision-based recognition techniques and the photoelastic properties of the transparent rubber to recognize deformed regions of the elastic material. Our system works with elastic materials over a wide variety of shapes and does not require any explicit visual markers. Compared to traditional interactive surfaces, our 2.5 dimensional interface system enables direct touch interaction and soft tactile feedback. In this paper we present our force sensing technique using photoelasticity and describe the implementation of our prototype system. We also present three practical applications of PhotoelasticTouch, a force-sensitive touch panel, a tangible face application, and a paint application.

tangible interface

In Proceedings of UIST 1999

The information percolator: ambient information display in a decorative object (p. 141-148)

In Proceedings of UIST 2001

The designers' outpost: a tangible interface for collaborative web site design (p. 1-10)

tangible user interface

In Proceedings of UIST 1997

The metaDESK: models and prototypes for tangible user interfaces (p. 223-232)

In Proceedings of UIST 1999

Implementing phicons: combining computer vision with infrared technology for interactive physical icons (p. 67-68)

In Proceedings of UIST 1999

Building virtual structures with physical blocks (p. 71-72)

In Proceedings of UIST 2002

The actuated workbench: computer-controlled actuation in tabletop tangible interfaces (p. 181-190)

In Proceedings of UIST 2010

Madgets: actuating widgets on interactive tabletops (p. 293-302)

Abstract

We present a system for the actuation of tangible magnetic widgets (Madgets) on interactive tabletops. Our system combines electromagnetic actuation with fiber optic tracking to move and operate physical controls. The presented mechanism supports actuating complex tangibles that consist of multiple parts. A grid of optical fibers transmits marker positions past our actuation hardware to cameras below the table. We introduce a visual tracking algorithm that is able to detect objects and touches from the strongly sub-sampled video input of that grid. Six sample Madgets illustrate the capabilities of our approach, ranging from tangential movement and height actuation to inductive power transfer. Madgets combine the benefits of passive, untethered, and translucent tangibles with the ability to actuate them with multiple degrees of freedom.

tangible user interface (tui)

timeline interface

In Proceedings of UIST 2003

Classroom BRIDGE: using collaborative public and desktop timelines to support activity awareness (p. 21-30)

tongue-computer interface

In Proceedings of UIST 2009

Optically sensing tongue gestures for computer input (p. 177-180)

Abstract

Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. While researchers have explored eye-tracking and speech technologies for these patients, we believe there is potential for directly sensing explicit tongue movement for controlling computers. In this paper, we describe a novel approach of using infrared optical sensors embedded within a dental retainer to sense tongue gestures. We describe an experiment showing our system effectively discriminating between four simple gestures with over 90% accuracy. In this experiment, users were also able to play the popular game Tetris with their tongues. Finally, we present lessons learned and opportunities for future work.

transparent user interface

tv interface

In Proceedings of UIST 2003

Rapid serial visual presentation techniques for consumer digital video devices (p. 115-124)

two handed interface

two-handed interface

In Proceedings of UIST 2000

Dual touch: a two-handed interface for pen-based PDAs (p. 211-212)

universal speech interface (usi)

user interface

In Proceedings of UIST 1992

MediaMosaic---a multimedia editing environment (p. 135-141)

In Proceedings of UIST 1993

User interfaces for symbolic computation: a case study (p. 1-10)

In Proceedings of UIST 1993

Animation: from cartoons to the user interface (p. 45-55)

In Proceedings of UIST 1994

Putting people first: specifying proper names in speech interfaces (p. 29-37)

In Proceedings of UIST 1995

Visual interfaces for solids modeling (p. 51-60)

In Proceedings of UIST 1995

Social activity indicators: interface components for CSCW systems (p. 159-168)

In Proceedings of UIST 1997

Elastic Windows: a hierarchical multi-window World-Wide Web browser (p. 169-177)

In Proceedings of UIST 1998

Scratchpad: mechanisms for better navigation in directed Web searching (p. 1-8)

In Proceedings of UIST 1998

Don't click, paint! Using toggle maps to manipulate sets of toggle switches (p. 65-66)

In Proceedings of UIST 2000

The AHI: an audio and haptic interface for contact interactions (p. 149-158)

In Proceedings of UIST 2002

Clothing manipulation (p. 91-100)

In Proceedings of UIST 2006

Phosphor: explaining transitions in the user interface using afterglow effects (p. 169-178)

In Proceedings of UIST 2007

Continuum: designing timelines for hierarchies, relationships and scale (p. 101-110)

Abstract

Temporal events, while often discrete, also have interesting relationships within and across times: larger events are often collections of smaller more discrete events (battles within wars; artists' works within a form); events at one point also have correlations with events at other points (a play written in one period is related to its performance over a period of time). Most temporal visualisations, however, only represent discrete data points or single data types along a single timeline: this event started here and ended there; this work was published at this time; this tag was popular for this period. In order to represent richer, faceted attributes of temporal events, we present Continuum. Continuum enables hierarchical relationships in temporal data to be represented and explored; it enables relationships between events across periods to be expressed, and in particular it enables user-determined control over the level of detail of any facet of interest so that the person using the system can determine a focus point, no matter the level of zoom over the temporal space. We present the factors motivating our approach, our evaluation and implementation of this new visualisation which makes it easy for anyone to apply this interface to rich, large-scale datasets with temporal data.

In Proceedings of UIST 2007

Rethinking the progress bar (p. 115-118)

Abstract

Progress bars are prevalent in modern user interfaces. Typically, a linear function is employed such that the progress of the bar is directly proportional to how much work has been completed. However, numerous factors cause progress bars to proceed at non-linear rates. Additionally, humans perceive time in a non-linear way. This paper explores the impact of various progress bar behaviors on user perception of process duration. The results are used to suggest several design considerations that can make progress bars appear faster and ultimately improve users' computing experience.
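
As a toy illustration of the non-linear behaviors the paper studies, the sketch below maps actual completion to displayed completion with a power function; the exponent is an arbitrary illustrative assumption, not a value from the paper.

```python
def displayed_progress(actual, power=1.8):
    """Map actual completion (0..1) to the fraction the bar displays.

    A power exponent > 1 holds the bar back early and accelerates it
    near the end; behaviors that speed up late tend to be perceived
    as faster overall."""
    return actual ** power

for x in (0.25, 0.5, 0.75, 1.0):
    print(f"actual {x:.2f} -> shown {displayed_progress(x):.2f}")
```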

In Proceedings of UIST 2007

Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes (p. 159-168)

Abstract

Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.
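
Since the abstract describes a recognizer implementable in about 100 lines, a compact sketch of the $1-style pipeline may help: resample the stroke, normalize translation and scale, then score templates by mean point-to-point distance. The rotation search of the published recognizer is omitted here, so this is a simplification rather than the full algorithm.

```python
import math

def resample(points, n=64):
    """Resample a stroke to n roughly equidistant points (step 1 of $1)."""
    pts = list(points)
    interval = sum(math.dist(a, b) for a, b in zip(pts, pts[1:])) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and acc + seg >= interval:
            t = (interval - acc) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += seg
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin, then scale to a unit box."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = (max(x for x, _ in pts) - min(x for x, _ in pts)) or 1.0
    h = (max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
    return [(x / w, y / h) for x, y in pts]

def recognize(stroke, templates):
    """Pick the template with the smallest mean point-to-point distance.
    templates maps name -> normalize(resample(raw_points)).
    (The published $1 also searches for a best rotation; omitted here.)"""
    c = normalize(resample(stroke))
    score = lambda t: sum(math.dist(a, b) for a, b in zip(c, t)) / len(c)
    return min(templates, key=lambda name: score(templates[name]))
```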

In Proceedings of UIST 2008

Tapping and rubbing: exploring new dimensions of tactile feedback with voice coil motors (p. 181-190)

Abstract

Tactile feedback allows devices to communicate with users when visual and auditory feedback are inappropriate. Unfortunately, current vibrotactile feedback is abstract and not related to the content of the message. This often clashes with the nature of the message, for example, when sending a comforting message.

We propose addressing this by extending the repertoire of haptic notifications. By moving an actuator perpendicular to the user's skin, our prototype device can tap the user. Moving the actuator parallel to the user's skin induces rubbing. Unlike traditional vibrotactile feedback, tapping and rubbing convey a distinct emotional message, similar to those induced by human-human touch.

To enable these techniques we built a device we call soundTouch. It translates audio wave files into lateral motion using a voice coil motor found in computer hard drives. SoundTouch can produce motion from below 1Hz to above 10kHz with high precision and fidelity.

We present the results of two exploratory studies. We found that participants were able to distinguish a range of taps and rubs. Our findings also indicate that tapping and rubbing are perceived as being similar to touch interactions exchanged by humans.

user interface animation

In Proceedings of UIST 1993

Animating user interfaces using animation servers (p. 69-79)

user interface appearance

In Proceedings of UIST 1997

Supporting dynamic downloadable appearances in an extensible user interface toolkit (p. 159-168)

In Proceedings of UIST 2000

Providing visually rich resizable images for user interface components (p. 227-235)

user interface builder

In Proceedings of UIST 1992

Graphical styles for building interfaces by demonstration (p. 117-124)

In Proceedings of UIST 1994

Blending structured graphics and layout (p. 167-173)

user interface component

In Proceedings of UIST 2008

Attribute gates (p. 57-66)

Abstract

Attribute gates are a new user interface element designed to address the problem of concurrently setting attributes and moving objects between territories on a digital tabletop. Motivated by the notion of task levels in activity theory, and crossing interfaces, attribute gates allow users to operationalize multiple subtasks in one smooth movement. We present two configurations of attribute gates; (1) grid gates which spatially distribute attribute values in a regular grid, and require users to draw trajectories through the attributes; (2) polar gates which distribute attribute values on segments of concentric rings, and require users to align segments when setting attribute combinations. The layout of both configurations was optimised based on targeting and steering laws derived from Fitts' Law. A study compared the use of attribute gates with traditional contextual menus. Users of attribute gates demonstrated both increased performance and higher mutual awareness.

user interface construction

In Proceedings of UIST 1995

Directness and liveness in the morphic user interface construction environment (p. 21-28)

user interface design

In Proceedings of UIST 1993

Concept clustering in a query interface to an image database (p. 11-21)

In Proceedings of UIST 1995

An experimental evaluation of transparent user interface tools and information content (p. 81-90)

In Proceedings of UIST 1995

Some design refinements and principles on the appearance and behavior of marking menus (p. 189-195)

In Proceedings of UIST 2000

Dynamic space management for user interfaces (p. 239-248)

In Proceedings of UIST 2001

A suggestive interface for 3D drawing (p. 173-181)

user interface design issue

user interface development environment

In Proceedings of UIST 1996

Easily adding animations to interfaces using constraints (p. 119-128)

user interface development tool

In Proceedings of UIST 1992

Progress in building user interface toolkits: the world according to XIT (p. 181-190)

user interface framework

In Proceedings of UIST 1995

Directness and liveness in the morphic user interface construction environment (p. 21-28)

user interface implementation

In Proceedings of UIST 1994

Skyblue: a multi-way local propagation constraint solver for user interface construction (p. 137-146)

user interface layout

user interface management system

In Proceedings of UIST 1992

Frameworks for interactive, extensible, information-intensive applications (p. 33-41)

In Proceedings of UIST 1992

Graphical styles for building interfaces by demonstration (p. 117-124)

In Proceedings of UIST 1992

Using taps to separate the user interface from the application code (p. 191-198)

In Proceedings of UIST 1993

Model-based user interface design by example and by interview (p. 129-137)

In Proceedings of UIST 1993

The Rendezvous constraint maintenance system (p. 225-234)

In Proceedings of UIST 1993

A framework for shared applications with a replicated architecture (p. 249-257)

user interface management system (uims)

In Proceedings of UIST 2000

Jazz: an extensible zoomable user interface graphics toolkit in Java (p. 171-180)

user interface metaphor

In Proceedings of UIST 1996

The go-go interaction technique: non-linear mapping for direct manipulation in VR (p. 79-80)

user interface software

In Proceedings of UIST 1996

XXL: a dual approach for building user interfaces (p. 99-108)

In Proceedings of UIST 1996

Inductive groups (p. 193-199)

user interface software and technology

user interface specification language

In Proceedings of UIST 1995

Grizzly Bear: a demonstrational learning tool for a user interface specification language (p. 75-76)

user interface system evaluation

In Proceedings of UIST 2007

Evaluating user interface systems research (p. 251-258)

Abstract

The development of user interface systems has languished with the stability of desktop computing. Future systems, however, that are off-the-desktop, nomadic or physical in nature will involve new devices and new software systems for creating interactive applications. Simple usability testing is not adequate for evaluating complex systems. The problems with evaluating systems work are explored and a set of criteria for evaluating new UI systems work is presented.

user interface tool kit

In Proceedings of UIST 1997

Supporting dynamic downloadable appearances in an extensible user interface toolkit (p. 159-168)

user interface toolkit

In Proceedings of UIST 1992

Frameworks for interactive, extensible, information-intensive applications (p. 33-41)

In Proceedings of UIST 1992

Progress in building user interface toolkits: the world according to XIT (p. 181-190)

In Proceedings of UIST 1993

Converting an existing user interface to use constraints (p. 207-215)

In Proceedings of UIST 1994

An architecture for an extensible 3D interface toolkit (p. 59-67)

In Proceedings of UIST 1994

Blending structured graphics and layout (p. 167-173)

In Proceedings of UIST 1995

Animating direct manipulation interfaces (p. 3-12)

In Proceedings of UIST 1997

Systematic output modification in a 2D user interface toolkit (p. 151-158)

In Proceedings of UIST 1997

Debugging lenses: a new class of transparent tools for user interface debugging (p. 179-187)

In Proceedings of UIST 2000

The architecture and implementation of CPN2000, a post-WIMP graphical application (p. 181-190)

virtual device interface

visual interface

In Proceedings of UIST 1995

Learning from TV programs: application of TV presentation to a videoconferencing system (p. 147-154)

wall interface

In Proceedings of UIST 1997

HoloWall: designing a finger, hand, body, and object sensitive wall (p. 209-210)

web search interface

In Proceedings of UIST 2007

SearchTogether: an interface for collaborative web search (p. 3-12)

Abstract

Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.

In Proceedings of UIST 2007

Assieme: finding and leveraging implicit references in a web search interface for programmers (p. 13-22)

Abstract

Programmers regularly use search as part of the development process, attempting to identify an appropriate API for a problem, seeking more information about an API, and seeking samples that show how to use an API. However, neither general-purpose search engines nor existing code search engines currently fit their needs, in large part because the information programmers need is distributed across many pages. We present Assieme, a Web search interface that effectively supports common programming search tasks by combining information from Web-accessible Java Archive (JAR) files, API documentation, and pages that include explanatory text and sample code. Assieme uses a novel approach to finding and resolving implicit references to Java packages, types, and members within sample code on the Web. In a study of programmers performing searches related to common programming tasks, we show that programmers obtain better solutions, using fewer queries, in the same amount of time spent using a general Web search interface.

web-based interface

In Proceedings of UIST 1997

Supporting dynamic downloadable appearances in an extensible user interface toolkit (p. 159-168)

web-portal user interface

In Proceedings of UIST 2005

A1: end-user programming for web-based system administration (p. 211-220)

wireless interface

In Proceedings of UIST 1996

The VIEP system: interacting with collaborative multimedia (p. 59-66)

zoomable user interface

In Proceedings of UIST 2003

Automatic thumbnail cropping and its effectiveness (p. 95-104)

zoomable user interface (zui)

In Proceedings of UIST 2000

Jazz: an extensible zoomable user interface graphics toolkit in Java (p. 171-180)

In Proceedings of UIST 2001

PhotoMesa: a zoomable image browser using quantum treemaps and bubblemaps (p. 71-80)

zooming interface

In Proceedings of UIST 1994

Pad++: a zooming graphical interface for exploring alternate interface physics (p. 17-26)

zooming user interface

In Proceedings of UIST 1999

Nested user interface components (p. 11-18)