Keywords
UIST2.0 Archive - 20 years of UIST

context

context

In Proceedings of UIST 2008

Backward highlighting: enhancing faceted search (p. 235-238)

Abstract

Directional faceted browsers, such as the popular column browser in iTunes, let a person pick an instance from any column-facet to start a search for music. As expected, any columns to the right of the selection are filtered. Because this filtering runs strictly left to right, however, the columns to the left of the selection provide no information about the possible associations with the selected item. In iTunes, this means that selecting an album in the rightmost Album column returns no information about either the Artists (immediately left) or the Genres (leftmost) associated with the chosen album.

Backward Highlighting (BH) is our solution to this problem, which allows users to see and utilize, during search, associations in columns to the left of a selection in a directional column browser like iTunes. Unlike other possible solutions, this technique allows such browsers to keep direction in their filtering, and so provides users with the best of both directional and non-directional styles. As well as describing BH in detail, this paper presents the results of a formative user study, showing benefits for both information discovery and subsequent retention in memory.
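The directional filtering and backward highlighting described in the abstract can be sketched in a few lines. The track data, column names, and function below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of an iTunes-style (genre, artist, album) column
# browser. Selecting an album normally only filters columns to its
# right; backward highlighting additionally reports which values in the
# columns to the LEFT are associated with the selection.

def backward_highlight(tracks, selected_album):
    """Return the left-column values associated with the selected album."""
    matching = [t for t in tracks if t["album"] == selected_album]
    return {
        "artists": sorted({t["artist"] for t in matching}),  # immediate left
        "genres": sorted({t["genre"] for t in matching}),    # leftmost
    }

tracks = [
    {"genre": "Rock", "artist": "Band A", "album": "Debut"},
    {"genre": "Pop",  "artist": "Band B", "album": "Debut"},
    {"genre": "Rock", "artist": "Band A", "album": "Live"},
]

print(backward_highlight(tracks, "Debut"))
# {'artists': ['Band A', 'Band B'], 'genres': ['Pop', 'Rock']}
```

The point of the sketch is that the left-to-right filter state is untouched; the associations are computed and shown as highlights, preserving the directional filtering style.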

In Proceedings of UIST 2010

Mixture model based label association techniques for web accessibility (p. 67-76)

Abstract

An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance from the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements have no textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.
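The mixture-model-plus-EM idea in the abstract can be reduced to a toy example: fit a two-component 1-D Gaussian mixture over a single feature (label-to-field distance), then score candidate labels under the component that models correct labels. The paper's FMM uses several features and distributions; everything below (feature choice, synthetic distances, function names) is an assumption for illustration:

```python
import math

def gauss(x, mu, var):
    """Gaussian density, used in both the E-step and candidate scoring."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(xs, iters=100):
    """Fit a two-component 1-D Gaussian mixture by Expectation-Maximization."""
    mu, var, pi = [min(xs), max(xs)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in xs:
            p = [pi[k] * gauss(x, mu[k], var[k]) for k in (0, 1)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters to maximize expected log-likelihood
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
            pi[k] = nk / len(xs)
    return mu, var, pi

# Synthetic training distances: correct labels sit close to their form
# fields, spurious candidates sit far away.
distances = [0.8, 1.0, 1.2, 1.1, 9.5, 10.0, 10.5]
mu, var, pi = em_two_gaussians(distances)
near = 0 if mu[0] < mu[1] else 1  # component modeling correct labels

def best_label(candidates):
    """Pick the (label, distance) pair most likely to be the true label."""
    return max(candidates, key=lambda c: gauss(c[1], mu[near], var[near]))

print(best_label([("Email", 1.05), ("Checkout", 9.8)])[0])  # Email
```

With several features, the same E-step/M-step structure applies; the density just becomes a product over per-feature distributions.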

context awareness

In Proceedings of UIST 2003

Synchronous gestures for multiple persons and computers (p. 149-158)

In Proceedings of UIST 2007

Gui --- phooey!: the case for text input (p. 193-202)

Abstract

Information cannot be found if it is not recorded. Existing rich graphical application approaches interfere with user input in many ways, forcing complex interactions to enter simple information, requiring complex cognition to decide where the data should be stored, and limiting the kind of information that can be entered to what can fit into specific applications' data models. Freeform text entry suffers from none of these limitations but produces data that is hard to retrieve or visualize. We describe the design and implementation of Jourknow, a system that aims to bridge these two modalities, supporting lightweight text entry and weightless context capture that produces enough structure to support rich interactive presentation and retrieval of the arbitrary information entered.
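The bridge between freeform entry and structured retrieval can be illustrated with a heavily simplified sketch: notes are stored as plain text, and lightweight patterns (here, hypothetical "key:value" tokens) are lifted into structure for querying. Jourknow's actual capture and presentation pipeline is far richer; the token syntax and function names below are assumptions:

```python
import re

# Hypothetical reduction of freeform entry with lightweight structure:
# a note is always stored verbatim, and any "key:value" tokens found in
# it are indexed so structured retrieval remains possible.
NOTE_TOKEN = re.compile(r"(\w+):(\S+)")

def index_note(text):
    """Store the raw note plus any key:value structure found in it."""
    return {"text": text, "fields": dict(NOTE_TOKEN.findall(text))}

def retrieve(notes, **query):
    """Structured retrieval over otherwise freeform notes."""
    return [n["text"] for n in notes
            if all(n["fields"].get(k) == v for k, v in query.items())]

notes = [index_note("call dentist due:friday project:health"),
         index_note("draft slides project:uist"),
         index_note("just a plain thought")]

print(retrieve(notes, project="uist"))  # ['draft slides project:uist']
```

The design point is that entry cost stays at plain-text level: a note with no recognizable tokens is still captured and searchable, while any structure present is exploited for retrieval.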

detail+context

focus and context

In Proceedings of UIST 2006

Interactive environment-aware display bubbles (p. 245-254)

focus plus context

In Proceedings of UIST 2005

Automatic image retargeting with fisheye-view warping (p. 153-162)

focus plus context screen

In Proceedings of UIST 2001

Focus plus context screens: combining display technology with visualization techniques (p. 31-40)

focus+context

In Proceedings of UIST 1994

Laying out and visualizing large trees using a hyperbolic space (p. 13-14)

In Proceedings of UIST 1996

FOCUS: the interactive table for product comparison and selection (p. 41-50)

In Proceedings of UIST 1998

A negotiation architecture for fluid documents (p. 123-132)

interaction context

In Proceedings of UIST 1999

PeopleGarden: creating data portraits for users (p. 37-44)