People of ACM - Yuta Sugiura

November 15, 2022

How did you initially become interested in human-computer interaction and, specifically, in user interfaces?

Human-computer interaction (HCI) is a multidisciplinary field of study. Unlike many other technologies, HCI addresses visible, tangible problems that people face and explores how technology can serve people better. This fascinates me: I can think of myself as a designer with the power to use technology to build interesting systems that solve the real-life problems people encounter.

Since the birth of the mouse and keyboard, people have grown accustomed to interacting with machine systems through user interfaces. As the bridge between machines and humans, the user interface holds great significance for the entire field of human-computer interaction, and research that advances user interface technology enables more people to enjoy the benefits of technological progress.

Your work with Cuddly User Interface (Cuddly UI) focuses on finding new ways for computers to understand human physical gestures. How will this area of research shape smart homes in the future? What are some important technical challenges that need to be overcome in this area?

The Cuddly UI focuses on applying soft interface technology. I proposed combining the soft objects around the user (e.g., sofas and pillows) with sensor technology to build a friendlier, more ubiquitous interaction interface. Soft objects also improve the user's acceptance of the interface, and the soft materials themselves can act as a display: for example, a robotic device can change the direction of the fibers on a carpet to show graphics.

When it comes to smart homes, current systems still rely on explicit interaction (e.g., remote controls or voice commands). Cuddly UIs can bring more implicit interactions to the user, offering a more convenient, ubiquitous, and low-cost way to interact with a computer system. For example, we can imagine remote communication through a Cuddly UI: one person manipulates a soft object (e.g., a toy or cushion) in their home, and the other person receives the object's movement. In addition, because it is common and natural for people to touch soft objects, a Cuddly UI can unobtrusively record users' mood states. A person may pound the sofa when angry or hug a pillow tightly while watching a horror movie. An intelligent Cuddly UI can record these behaviors, analyze the user's emotional state, and change its morphology (e.g., its hardness) as feedback.

Interaction with the Cuddly UI still depends heavily on sensor technology and machine learning. Under the basic development paradigm, system design requires data collection followed by classifier training, which leaves users little room for customization. The next step, therefore, is to reduce the system's development cost and increase user customizability, so that each user can design interactions around their own habits. The sketch below illustrates this data-collection-plus-classifier paradigm.
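As a rough illustration only, not a description of Sugiura's actual systems, the following minimal sketch shows the paradigm he describes: collecting windows of pressure-sensor readings from a soft object and training a gesture classifier on hand-crafted features. The sensor count, window size, gesture labels, and features are all assumptions made for the example.

```python
# Minimal sketch of the "data collection + classifier training" paradigm.
# The sensor layout (8 pressure sensors), window size, gesture labels, and
# features are illustrative assumptions, not details from the interview.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one window of pressure samples, shape (n_samples, n_sensors)."""
    return np.concatenate([
        window.mean(axis=0),                      # average pressure per sensor
        window.std(axis=0),                       # variability (patting vs. squeezing)
        window.max(axis=0) - window.min(axis=0),  # pressure range per sensor
    ])

# Placeholder training data: 300 recorded windows of 50 samples x 8 sensors.
rng = np.random.default_rng(0)
windows = rng.random((300, 50, 8))
labels = rng.integers(0, 3, size=300)  # e.g., 0 = hug, 1 = squeeze, 2 = pat

X = np.stack([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Lowering the development cost Sugiura mentions would mean letting an end user record a handful of their own gesture examples and retraining a pipeline like this, rather than shipping a fixed classifier.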

Will you give us an example of a recent project you were involved with in the medical engineering field? What were the goals of this project?

Most recently, we have been developing a screening system for orthopedic diseases based on mobile devices. For example, we developed a simple game for smartphones and tablets; as the user plays, the system analyzes their hand-movement data to determine whether they may have carpal tunnel syndrome (CTS), which helps doctors diagnose it. This direction combines the perspective of human-computer interaction with medicine: we want to use the devices already around the user to screen for diseases, achieving medically assisted diagnosis in a nonintrusive, low-cost, and natural way. A sketch of the general idea follows.
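The interview does not detail the game or its features, so the following is only a plausible sketch of the general idea: deriving simple hand-motion features from touchscreen events logged during play, which could then feed a screening classifier like the one sketched earlier. The event format and both features are hypothetical.

```python
# Hypothetical sketch: hand-motion features from touchscreen game events.
# TouchEvent and both features are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    t: float  # timestamp in seconds
    x: float  # screen x-coordinate in pixels
    y: float  # screen y-coordinate in pixels

def motion_features(events: list[TouchEvent]) -> dict[str, float]:
    """Compute average drag speed and speed 'jerkiness' across a gesture."""
    speeds = []
    for a, b in zip(events, events[1:]):
        dt = b.t - a.t
        if dt > 0:
            dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
            speeds.append(dist / dt)
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    jerk = (sum(abs(s2 - s1) for s1, s2 in zip(speeds, speeds[1:]))
            / max(len(speeds) - 1, 1))
    return {"mean_speed": mean_speed, "speed_jerk": jerk}

# Example: a slow, slightly unsteady drag across the screen.
trace = [TouchEvent(0.00, 10, 10), TouchEvent(0.05, 14, 12),
         TouchEvent(0.10, 21, 15), TouchEvent(0.15, 25, 16)]
print(motion_features(trace))
```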

Ultimately, we want to use more ubiquitous, convenient, and unconscious methods to sense users' daily behavior and analyze their disease risk without requiring any deliberate effort on their part. For example, a smartphone can accumulate data on a user's daily walking, which we can analyze for signs of an arthritic condition; with smart tablets and smart pens, we can analyze a user's writing for signs of hand disease. This helps users detect possible diseases early, before they progress rapidly.

What is another example of a UI or human-computer interaction research area (or application) where we will see important advances in the near future?

Human-computer interaction and user interface technologies are moving toward more pervasive and convenient applications. For interaction technologies, there is a move toward wireless gesture recognition using signals such as millimeter-wave radar and ultrasound. Unlike voice interaction or video-based detection, wireless signal recognition offers better privacy protection and does not require additional equipment on the user. We also look forward to applying VR/AR technology in users' daily lives to help more people with disabilities or special needs understand the world better. Human-computer interaction is a very broad topic; there are many technological advances we can expect, and HCI researchers like me are actively exploring more applications.

Yuta Sugiura is an Associate Professor at Keio University in Yokohama, Japan. His research interests include human-computer interaction, ubiquitous computing, and medical informatics. He has served as a program committee member for various international conferences, including ACM UIST, TEI, and SIGGRAPH ASIA E-Tech.

Sugiura was the recipient of the Information Processing Society of Japan/ACM (IPSJ/ACM) Award for Early Career Contributions to Global Research. He was recognized for his pioneering work in user interfaces and ubiquitous computing and for his recent focus on medical-engineering collaboration.