Keywords
UIST2.0 Archive - 20 years of UIST

mobile

handheld and mobile

mobile

In Proceedings of UIST 2001

Join and capture: a model for nomadic interaction (p. 131-140)

In Proceedings of UIST 2007

RubberEdge: reducing clutching by combining position and rate control with elastic feedback (p. 129-138)

Abstract

Position control devices enable precise selection, but significant clutching degrades performance. Clutching can be reduced with high control-display gain or pointer acceleration, but there are human and device limits. Elastic rate control eliminates clutching completely, but can make precise selection difficult. We show that hybrid position-rate control can outperform position control by 20% when there is significant clutching, even when using pointer acceleration. Unlike previous work, our RubberEdge technique eliminates trajectory and velocity discontinuities. We derive predictive models for position control with clutching and hybrid control, and present a prototype RubberEdge position-rate control device including initial user feedback.
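
As a rough illustration of the hybrid mapping described above, the sketch below uses position control inside an elastic boundary and switches to rate control once the finger crosses it. The names (edge_radius, cd_gain, rate_gain) and the simple linear rate term are assumptions for illustration, not the authors' derived model.

```python
import math

def cursor_delta(hand_pos, prev_hand_pos, edge_radius, cd_gain, rate_gain, dt):
    """Hybrid position-rate mapping sketch: positions are relative to the pad centre."""
    dist = math.hypot(hand_pos[0], hand_pos[1])
    if dist <= edge_radius:
        # Inside the elastic edge: ordinary position control with control-display gain.
        dx = (hand_pos[0] - prev_hand_pos[0]) * cd_gain
        dy = (hand_pos[1] - prev_hand_pos[1]) * cd_gain
    else:
        # Beyond the edge: penetration depth drives cursor velocity,
        # so long movements continue without clutching.
        penetration = dist - edge_radius
        dx = rate_gain * penetration * (hand_pos[0] / dist) * dt
        dy = rate_gain * penetration * (hand_pos[1] / dist) * dt
    return dx, dy
```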

In Proceedings of UIST 2008

Foldable interactive displays (p. 287-290)

Abstract

Modern computer displays tend to be fixed in size, rigid, and rectilinear, rendering them insensitive to the visual area demands of an application or the desires of the user. Foldable displays offer the ability to reshape and resize the interactive surface at our convenience and even permit us to carry a very large display surface in a small volume. In this paper, we implement four interactive foldable display designs using image projection with low-cost tracking and explore display behaviors using orientation sensitivity.

In Proceedings of UIST 2010

Imaginary interfaces: spatial interaction with empty hands and without visual feedback (p. 3-12)

Abstract

Screen-less wearable devices allow for the smallest form factor and thus the maximum mobility. However, current screen-less devices only support buttons and gestures. Pointing is not supported because users have nothing to point at. We challenge the notion that spatial interaction requires a screen and propose a method for bringing spatial interaction to screen-less devices.

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all visual "feedback" takes place in the user's imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space.

With three user studies we investigate the question: To what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. Our findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.
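
To make the interaction model concrete, here is a minimal sketch of how a tracked dominant-hand fingertip could be expressed in the coordinate frame defined by the non-dominant hand's L gesture. The tracker interface and the choice to normalize by finger length are assumptions, not details from the paper.

```python
import numpy as np

def imaginary_coords(corner, thumb_tip, index_tip, pointer):
    """Express the pointing fingertip in the imaginary L-shaped frame.

    corner:    2-D position where the non-dominant thumb and index finger meet
    thumb_tip: tip of the thumb (defines one axis)
    index_tip: tip of the index finger (defines the other axis)
    pointer:   dominant-hand fingertip position
    Returns (u, v), where 1.0 means "at the fingertip" along that axis.
    """
    corner = np.asarray(corner, dtype=float)
    x_axis = np.asarray(thumb_tip, dtype=float) - corner
    y_axis = np.asarray(index_tip, dtype=float) - corner
    p = np.asarray(pointer, dtype=float) - corner
    u = float(np.dot(p, x_axis) / np.dot(x_axis, x_axis))
    v = float(np.dot(p, y_axis) / np.dot(y_axis, y_axis))
    return u, v
```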

mobile computer

In Proceedings of UIST 2000

Dual touch: a two-handed interface for pen-based PDAs (p. 211-212)

In Proceedings of UIST 2003

Tactile interfaces for small touch screens (p. 217-220)

mobile computing

In Proceedings of UIST 2000

The metropolis keyboard - an exploration of quantitative techniques for virtual keyboard design (p. 119-128)

In Proceedings of UIST 2002

An annotated situation-awareness aid for augmented reality (p. 213-216)

In Proceedings of UIST 2004

Augmenting conversations using dual-purpose speech (p. 237-246)

In Proceedings of UIST 2005

Sensing and visualizing spatial relations of mobile devices (p. 93-102)

In Proceedings of UIST 2005

eyeLook: using attention to facilitate mobile media consumption (p. 103-106)

In Proceedings of UIST 2009

Virtual shelves: interactions with orientation aware devices (p. 125-128)

Abstract

Triggering shortcuts or actions on a mobile device often requires a long sequence of key presses. Because the functions of buttons are highly dependent on the current application's context, users are required to look at the display during interaction, even in many mobile situations when eyes-free interactions may be preferable. We present Virtual Shelves, a technique to trigger programmable shortcuts that leverages the user's spatial awareness and kinesthetic memory. With Virtual Shelves, the user triggers shortcuts by orienting a spatially-aware mobile device within the circular hemisphere in front of her. This space is segmented into definable and selectable regions along the phi and theta planes. We show that users can accurately point to 7 regions on the theta and 4 regions on the phi plane using only their kinesthetic memory. Building upon these results, we then evaluate a proof-of-concept prototype of the Virtual Shelves using a Nokia N93. The results show that Virtual Shelves is faster than the N93's native interface for common mobile phone tasks.
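
A minimal sketch of the segmentation idea: a device orientation (theta, phi) is binned into one of 7 x 4 regions. The angular ranges and uniform bin sizes are assumptions; the paper's exact segmentation may differ.

```python
def shelf_region(theta_deg, phi_deg,
                 theta_bins=7, phi_bins=4,
                 theta_range=(-90.0, 90.0), phi_range=(0.0, 90.0)):
    """Map a spatially-aware device's orientation to a Virtual Shelves region."""
    def to_bin(value, lo, hi, n):
        t = (value - lo) / (hi - lo)            # normalize into [0, 1)
        return min(n - 1, max(0, int(t * n)))   # clamp to a valid bin index
    t_lo, t_hi = theta_range
    p_lo, p_hi = phi_range
    return to_bin(theta_deg, t_lo, t_hi, theta_bins), to_bin(phi_deg, p_lo, p_hi, phi_bins)
```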

In Proceedings of UIST 2010

Gesture search: a tool for fast mobile data access (p. 87-96)

Abstract

Modern mobile phones can store a large amount of data, such as contacts, applications and music. However, it is difficult to access specific data items via existing mobile user interfaces. In this paper, we present Gesture Search, a tool that allows a user to quickly access various data items on a mobile phone by drawing gestures on its touch screen. Gesture Search contributes a unique way of combining gesture-based interaction and search for fast mobile data access. It also demonstrates a novel approach for coupling gestures with standard GUI interaction. A real world deployment with mobile phone users showed that Gesture Search enabled fast, easy access to mobile data in their day-to-day lives. Gesture Search has been released to the public and is currently in use by hundreds of thousands of mobile users. It was rated positively by users, with a mean of 4.5 out of 5 for over 5000 ratings.
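
As a toy illustration of coupling gesture recognition with search (not the released implementation), suppose the recognizer returns a set of plausible letters per drawn stroke; items whose names are consistent with every stroke remain in the result list.

```python
def filter_items(items, letter_hypotheses):
    """Keep items whose names start with some interpretation of the strokes.

    items:             list of strings (contact names, app names, song titles)
    letter_hypotheses: one set of candidate characters per gesture drawn so far
    """
    def consistent(name):
        name = name.lower()
        return (len(name) >= len(letter_hypotheses) and
                all(name[i] in {c.lower() for c in cands}
                    for i, cands in enumerate(letter_hypotheses)))
    return [item for item in items if consistent(item)]

# e.g. strokes read as 'a' or 'o', then 'n' or 'm':
# filter_items(["Anne", "Omni Radio", "Maps"], [{"a", "o"}, {"n", "m"}]) -> ["Anne", "Omni Radio"]
```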

mobile device

In Proceedings of UIST 2000

Sensing techniques for mobile interaction (p. 91-100)

In Proceedings of UIST 2000

The metropolis keyboard - an exploration of quantitative techniques for virtual keyboard design (p. 119-128)

In Proceedings of UIST 2001

Toward more sensitive mobile phones (p. 191-192)

In Proceedings of UIST 2002

TiltType: accelerometer-supported text entry for very small devices (p. 201-204)

In Proceedings of UIST 2006

Camera phone based motion sensing: interaction techniques, applications and performance study (p. 101-110)

In Proceedings of UIST 2006

Mobile interaction using paperweight metaphor (p. 111-114)

In Proceedings of UIST 2008

Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces (p. 205-208)

Abstract

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Several example applications were developed to demonstrate possible interactions. We conclude with a study that shows users can perform six Scratch Input gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
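
The sketch below shows one crude way to segment scratch strokes from the sensor's amplitude envelope; the threshold and gap values are invented, and the actual Scratch Input recognizer classifies richer gesture shapes.

```python
def count_strokes(samples, sample_rate, threshold=0.2, min_gap_s=0.08):
    """Count bursts of scratch energy in a mono signal normalized to [-1, 1]."""
    min_gap = int(min_gap_s * sample_rate)
    strokes, last_onset, in_burst = 0, -min_gap, False
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            if not in_burst and i - last_onset >= min_gap:
                strokes += 1          # a new stroke begins
                last_onset = i
            in_burst = True
        else:
            in_burst = False
    return strokes
```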

In Proceedings of UIST 2008

Lightweight material detection for placement-aware mobile computing (p. 279-282)

Abstract

Numerous methods have been proposed that allow mobile devices to determine where they are located (e.g., home or office) and in some cases, predict what activity the user is currently engaged in (e.g., walking, sitting, or driving). While useful, this sensing currently only tells part of a much richer story. To allow devices to act most appropriately to the situation they are in, it would also be very helpful to know about their placement - for example whether they are sitting on a desk, hidden in a drawer, placed in a pocket, or held in one's hand - as different device behaviors may be called for in each of these situations. In this paper, we describe a simple, small, and inexpensive multispectral optical sensor for identifying materials in proximity to a device. This information can be used in concert with, for example, location information to estimate that the device is "sitting on the desk at home" or "in the pocket at work". This paper discusses several potential uses of this technology, as well as results from a two-part study, which indicates that this technique can detect placement at 94.4% accuracy with real-world placement sets.
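
A nearest-neighbour sketch of how such multispectral readings might be turned into a placement label; the label set and distance metric are assumptions rather than the classifier used in the paper.

```python
def classify_placement(reading, reference_profiles):
    """Pick the stored material profile closest to the current reading.

    reading:            reflectance values, one per illumination wavelength
    reference_profiles: e.g. {"desk": [...], "pocket": [...], "drawer": [...]}
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference_profiles, key=lambda label: sq_dist(reading, reference_profiles[label]))
```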

In Proceedings of UIST 2009

TapSongs: tapping rhythm-based passwords on a single binary sensor (p. 93-96)

Abstract

TapSongs are presented, which enable user authentication on a single "binary" sensor (e.g., button) by matching the rhythm of tap down/up events to a jingle timing model created by the user. We describe our matching algorithm, which employs absolute match criteria and learns from successful logins. We also present a study of 10 subjects showing that after they created their own TapSong models from 12 examples (< 2 minutes), their subsequent login attempts were 83.2% successful. Furthermore, aural and visual eavesdropping of the experimenter's logins resulted in only 10.7% successful imposter logins by subjects. Even when subjects heard the target jingles played by a synthesized piano, they were only 19.4% successful logging in as imposters. These results are attributable to subtle but reliable individual differences in people's tapping, which are supported by prior findings in music psychology.
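
A simplified sketch of rhythm matching in the spirit of TapSongs: tap timestamps are reduced to duration-normalized inter-tap intervals and accepted only if every interval falls within an absolute tolerance of the stored model. The tolerance value, and the omission of tap-up events and model updating, are simplifications.

```python
def tapsong_login(tap_times, model_intervals, tolerance=0.25):
    """Accept a login if the tapped rhythm matches the user's jingle model.

    tap_times:       tap-down timestamps in seconds
    model_intervals: stored mean inter-tap intervals, normalized to sum to 1.0
    """
    if len(tap_times) != len(model_intervals) + 1:
        return False                       # wrong number of taps
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    total = sum(intervals)
    if total <= 0:
        return False
    normalized = [iv / total for iv in intervals]
    return all(abs(obs - ref) <= tolerance * ref
               for obs, ref in zip(normalized, model_intervals))
```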

In Proceedings of UIST 2009

Disappearing mobile devices (p. 101-110)

Abstract

In this paper, we extrapolate the evolution of mobile devices in one specific direction, namely miniaturization. While we maintain the concept of a device that people are aware of and interact with intentionally, we envision that this concept can become small enough to allow invisible integration into arbitrary surfaces or human skin, and thus truly ubiquitous use. This outcome assumed, we investigate what technology would be most likely to provide the basis for these devices, what abilities such devices can be expected to have, and whether or not devices that size can still allow for meaningful interaction. We survey candidate technologies, drill down on gesture-based interaction, and demonstrate how it can be adapted to the desired form factors. While the resulting devices offer only the bare minimum in feedback and only the most basic interactions, we demonstrate that simple applications remain possible. We complete our exploration with two studies in which we investigate the affordance of these devices more concretely, namely marking and text entry using a gesture alphabet.

In Proceedings of UIST 2009

SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices (p. 111-120)

Abstract

One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.
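
To illustrate how a flowing pattern could be produced with several motors, the sketch below pulses a row of actuators in sequence; the motor interface (start/stop methods) is assumed for illustration.

```python
import time

def play_flow(motors, duration_s=0.6):
    """Pulse vibration motors one after another to suggest directional flow.

    motors: actuator objects ordered e.g. right-to-left on the device's back,
            each exposing start() and stop().
    """
    step = duration_s / len(motors)
    for motor in motors:
        motor.start()
        time.sleep(step)   # let this motor vibrate briefly
        motor.stop()
```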

In Proceedings of UIST 2009

Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices (p. 121-124)

Abstract

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high C-D gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.
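
A sketch of how the magnet's bearing around the device could be turned into a radial menu selection; the baseline subtraction and the uniform sector layout are assumptions.

```python
import math

def radial_selection(mag_x, mag_y, sector_deg=16.0):
    """Map in-plane magnetometer readings (Earth field already subtracted)
    to the index of a radial target of the given angular size."""
    bearing = math.degrees(math.atan2(mag_y, mag_x)) % 360.0
    return int(bearing // sector_deg)
```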

In Proceedings of UIST 2010

Sensing foot gestures from the pocket (p. 199-208)

Abstract

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task and impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.
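
As a sketch of the recognition pipeline, one window of accelerometer samples can be summarized with simple per-axis statistics and handed to an off-the-shelf classifier; the particular features here are generic placeholders, not necessarily those used by the authors.

```python
import statistics

def window_features(ax, ay, az):
    """Summarize one window of 3-axis accelerometer samples as a feature vector."""
    features = []
    for axis in (ax, ay, az):
        features += [statistics.mean(axis), statistics.pstdev(axis),
                     min(axis), max(axis)]
    return features  # feed to e.g. a nearest-neighbour or SVM classifier
```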

mobile device and interface

In Proceedings of UIST 2002

Ambient touch: designing tactile interfaces for handheld devices (p. 51-60)

mobile device interaction

In Proceedings of UIST 2008

SideSight: multi-"touch" interaction around small devices (p. 201-204)

Abstract

Interacting with mobile devices using touch can lead to fingers occluding valuable screen real estate. For the smallest devices, the idea of using a touch-enabled display is almost wholly impractical. In this paper we investigate sensing user touch around small screens like these. We describe a prototype device with infra-red (IR) proximity sensors embedded along each side and capable of detecting the presence and position of fingers in the adjacent regions. When this device is rested on a flat surface, such as a table or desk, the user can carry out single and multi-touch gestures using the space around the device. This gives a larger input space than would otherwise be possible which may be used in conjunction with or instead of on-display touch input. Following a detailed description of our prototype, we discuss some of the interactions it affords.
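
A minimal sketch of turning one edge's IR proximity readings into a fingertip estimate via a weighted centroid; the sensor layout and units are assumed, and the prototype's actual signal processing may differ.

```python
def edge_finger_position(readings, sensor_spacing_mm, presence_threshold=10):
    """Estimate where a finger hovers along one side of the device.

    readings: reflected-IR intensities from the sensors along that edge, in order.
    Returns a distance in mm from the first sensor, or None if no finger is near.
    """
    total = sum(readings)
    if total < presence_threshold:
        return None
    centroid = sum(i * r for i, r in enumerate(readings)) / total
    return centroid * sensor_spacing_mm
```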

mobile interaction

In Proceedings of UIST 2000

Sensing techniques for mobile interaction (p. 91-100)

mobile interface

In Proceedings of UIST 2001

Empirical measurements of intrabody communication performance under varied physical configurations (p. 183-190)

mobile interface design

In Proceedings of UIST 2000

Multimodal system processing in mobile environments (p. 21-30)

mobile phone

In Proceedings of UIST 2001

LetterWise: prefix-based disambiguation for mobile text input (p. 111-120)

In Proceedings of UIST 2003

TiltText: using tilt for text input to mobile phones (p. 81-90)

In Proceedings of UIST 2004

A gesture-based authentication scheme for untrusted public terminals (p. 157-160)

In Proceedings of UIST 2006

Camera phone based motion sensing: interaction techniques, applications and performance study (p. 101-110)

In Proceedings of UIST 2008

Browsing large HTML tables on small screens (p. 259-268)

Abstract

We propose new interaction techniques that support better browsing of large HTML tables on small screen devices, such as mobile phones. We propose three modes for browsing tables: normal mode, record mode, and cell mode. Normal mode renders tables in the ordinary way, but provides various useful functions for browsing large tables, such as hiding unnecessary rows and columns. Record mode regards each row (or column) as the basic information unit and displays it in a record-like format with column (or row) headers, while cell mode regards each cell as the basic unit and displays each cell together with its corresponding row and column headers. For these table presentations, we need to identify row and column headers that explain the meaning of rows and columns. To provide users with both row and column headers even when the tables have attributes for only one of them, we introduce the concept of keys and develop a method of automatically discovering attributes and keys in tables. Another issue in these presentations is how to handle composite cells spanning multiple rows or columns. We determine the semantics of such composite cells and render them in appropriate ways in accordance with their semantics.
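
Record mode can be illustrated in a few lines: a single row is re-rendered as header/value pairs so it fits a narrow screen. The formatting choices below are only illustrative.

```python
def record_mode(column_headers, row_cells):
    """Render one table row as a compact record for a small screen."""
    return "\n".join(f"{header}: {cell}"
                     for header, cell in zip(column_headers, row_cells))

# record_mode(["Team", "Wins", "Losses"], ["Tigers", "12", "3"])
# -> "Team: Tigers\nWins: 12\nLosses: 3"
```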

In Proceedings of UIST 2010

PhoneTouch: a technique for direct phone interaction on surfaces (p. 13-16)

Abstract

PhoneTouch is a novel technique for integration of mobile phones and interactive surfaces. The technique enables use of phones to select targets on the surface by direct touch, facilitating for instance pick&drop-style transfer of objects between phone and surface. The technique is based on separate detection of phone touch events by the surface, which determines location of the touch, and by the phone, which contributes device identity. The device-level observations are merged based on correlation in time. We describe a proof-of-concept implementation of the technique, using vision for touch detection on the surface (including discrimination of finger versus phone touch) and acceleration features for detection by the phone.
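
The device-level merging step can be sketched as pairing surface touch events and phone-detected bumps whose timestamps fall within a small window; the 50 ms window and the tuple formats are assumptions.

```python
def pair_events(surface_touches, phone_bumps, window_s=0.05):
    """Attribute surface touches to phones by correlating timestamps.

    surface_touches: (timestamp, x, y) from the interactive surface
    phone_bumps:     (timestamp, phone_id) from each phone's accelerometer
    Returns (phone_id, x, y) tuples for touches identified as phone contacts.
    """
    matches = []
    for t_surface, x, y in surface_touches:
        for t_phone, phone_id in phone_bumps:
            if abs(t_surface - t_phone) <= window_s:
                matches.append((phone_id, x, y))
                break
    return matches
```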

In Proceedings of UIST 2010

Jogging over a distance between Europe and Australia (p. 189-198)

Abstract

Exertion activities, such as jogging, require users to invest intense physical effort and are associated with physical and social health benefits. Despite the benefits, our understanding of exertion activities is limited, especially when it comes to social experiences. In order to begin understanding how to design for technologically augmented social exertion experiences, we present "Jogging over a Distance", a system in which spatialized audio based on heart rate allowed runners as far apart as Europe and Australia to run together. Our analysis revealed how certain aspects of the design facilitated a social experience, and consequently we describe a framework for designing augmented exertion activities. We make recommendations as to how designers could use this framework to aid the development of future social systems that aim to utilize the benefits of exertion.
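
One way to read the heart-rate-driven spatialization is sketched below: each runner's effort relative to their target heart rate decides whether the partner's voice is rendered ahead of or behind them. The formula and scaling constant are assumptions for illustration.

```python
def partner_audio_offset(my_hr, partner_hr, my_target_hr, partner_target_hr,
                         metres_per_unit=5.0):
    """Positive result: render the partner's audio ahead; negative: behind."""
    effort_gap = (partner_hr / partner_target_hr) - (my_hr / my_target_hr)
    return effort_gap * metres_per_unit
```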

mobile search

In Proceedings of UIST 2008

Search Vox: leveraging multimodal refinement and partial knowledge for mobile voice search (p. 141-150)

Abstract

Internet usage on mobile devices continues to grow as users seek anytime, anywhere access to information. Because users frequently search for businesses, directory assistance has been the focus of many voice search applications utilizing speech as the primary input modality. Unfortunately, mobile settings often contain noise which degrades performance. As such, we present Search Vox, a mobile search interface that not only facilitates touch and text refinement whenever speech fails, but also allows users to assist the recognizer via text hints. Search Vox can also take advantage of any partial knowledge users may have about the business listing by letting them express their uncertainty in an intuitive way using verbal wildcards. In simulation experiments conducted on real voice search data, leveraging multimodal refinement resulted in a 28% relative reduction in error rate. Providing text hints along with the spoken utterance resulted in even greater relative reduction, with dramatic gains in recovery for each additional character.
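
Verbal wildcards can be illustrated with simple pattern matching: the user's partial knowledge, with '*' standing in for the uncertain words, is matched against the directory. This is only a toy text-side illustration of the idea, not the speech system itself.

```python
import re

def wildcard_listings(partial, listings):
    """Return directory entries consistent with a partially known name.

    partial: e.g. "* savings bank" when the user forgot the first word(s)
    """
    pattern = re.compile(
        "^" + ".*".join(re.escape(part) for part in partial.split("*")) + "$",
        re.IGNORECASE)
    return [name for name in listings if pattern.match(name)]
```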

mobile terminal

In Proceedings of UIST 2004

C-blink: a hue-difference-based light signal marker for large screen interaction via any mobile terminal (p. 147-156)

mobile web

In Proceedings of UIST 2008

Highlight: a system for creating and deploying mobile web applications (p. 249-258)

Abstract

We present a new server-side architecture that enables rapid prototyping and deployment of mobile web applications created from existing web sites. Key to this architecture is a remote control metaphor in which the mobile device controls a fully functional browser that is embedded within a proxy server. Content is clipped from the proxy browser, transformed if necessary, and then sent to the mobile device as a typical web page. Users' interactions with that content on the mobile device control the next steps of the proxy browser. We have found this approach to work well for creating mobile sites from a variety of existing sites, including those that use dynamic HTML and AJAX technologies. We have conducted a small user study to evaluate our model and API with experienced web programmers.
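
The remote-control metaphor can be sketched in a few lines: the handset's request drives a browser embedded in the proxy, and only a clipped fragment of the resulting page travels back. The class and method names below are stand-ins, not the system's actual API.

```python
class ProxyBrowser:
    """Stand-in for the fully functional browser embedded in the proxy server."""
    def perform(self, action):
        pass                                   # e.g. click a button on the desktop site

    def clip(self, selector):
        return "<div>clipped results</div>"    # fragment to send to the phone

def handle_phone_request(browser, action, selector):
    """One remote-control step: the phone's tap drives the proxy browser,
    then the clipped content is returned as a small mobile page."""
    browser.perform(action)
    return browser.clip(selector)
```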