People of ACM - Guoliang Xing
October 7, 2025
You work in a wide range of areas—from improving healthcare to autonomous driving and cyber-physical systems. Is there a common thread that runs through your various interests?
The common thread across my work is embedded AI systems. While AI has already enabled a wide array of applications in the cloud and on personal computers, my research focuses on bringing these capabilities to edge devices—those worn on the body, embedded in vehicles, or placed in home environments. These devices operate closer to the user with richer contextual awareness. By embedding AI directly into them, we can create real-time, user-centered systems that open new possibilities for interaction, autonomy, and care.
In one of your most cited recent papers, “ClusterFL: A Similarity-Aware Federated Learning System for Human Activity Recognition,” you and your co-authors explored how sensor data can be collected and modeled to recognize human activity:
- What is a practical application of this technology?
We are actively applying this technology to healthcare and daily wellness monitoring. For example, the core algorithm of ClusterFL was integrated into the ADMarker system, which has been deployed in more than 120 homes in Hong Kong. Leveraging ClusterFL’s federated learning algorithm, the system can monitor elderly individuals with Alzheimer’s disease in a privacy-preserving way. Building on this research, our lab also spun out a startup that delivers products for real-time safety alerts and mental health monitoring using federated learning.
- What is the difference between centralized and federated learning?
The key difference is privacy protection. In centralized learning, user data is collected and uploaded to a central server for training. Federated learning, by contrast, trains models directly on client devices; raw data stays local, and only model updates are shared and aggregated. This decentralized approach is especially important for applications like healthcare, where privacy is paramount.
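To make the contrast concrete, here is a minimal sketch of one federated averaging (FedAvg) round in Python. The linear model, synthetic clients, and sample-size weighting are illustrative assumptions, not the algorithm behind any particular deployed system.

```python
# Minimal FedAvg sketch: clients train locally; the server sees only
# model updates, never raw data. Model and data are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's on-device training; (X, y) never leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server aggregates client updates, weighted by local dataset size."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                          # three simulated devices
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without pooling any raw data
```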
- What was a key insight of this paper?
A core insight is that exploiting inter-user data similarity can reduce the computation and communication overhead of training while preserving model accuracy. Balancing training costs with performance has long been a challenge for federated learning in applications such as healthcare, which typically involve large and diverse user populations.
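The following simplified illustration shows the general idea of similarity-aware aggregation: group clients whose model updates point in similar directions and aggregate within each group, so dissimilar users do not drag down each other's models. It is a deliberately stripped-down sketch, not the ClusterFL algorithm itself; the greedy grouping rule and cosine threshold are assumptions made for brevity.

```python
# Sketch of similarity-aware aggregation: cluster client updates by
# cosine similarity, then average within each cluster.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def cluster_by_similarity(updates, threshold=0.8):
    """Greedily group client updates whose cosine similarity is high."""
    clusters = []
    for i, u in enumerate(updates):
        for c in clusters:
            if cosine(u, updates[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
           np.array([-0.2, 1.0])]           # two similar users, one outlier
for c in cluster_by_similarity(updates):
    merged = np.mean([updates[i] for i in c], axis=0)
    print(c, merged)  # each cluster gets its own aggregated model
```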
One of the initiatives of your AIoT Lab is the Roadside Infrastructure–Assisted Autonomous Driving project. What are some key hurdles we need to overcome for automated vehicles to become pervasive?
One key hurdle is achieving efficient collaboration and coordination between vehicles and surrounding infrastructure, such as sensors mounted on lampposts. Current driver-assistance systems work well in predictable environments like highways or low-traffic streets, where tasks such as lane following or lane changing are relatively straightforward.
However, in complex, real-world settings—such as temporary road construction or locally specific visual elements like Chinese New Year lanterns that may confuse vehicle perception systems—onboard models can struggle. In such cases, infrastructure-assisted driving becomes essential. By sharing context-sensitive information from the roadside, infrastructure can guide vehicle decisions, enhance perception, and improve safety.
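A hedged sketch of what such perception sharing might look like is below: roadside detections are merged with onboard ones, filling in objects the vehicle cannot see. The message format, shared map frame, and confidence rule are illustrative assumptions, not a real V2X protocol or the lab's deployed system.

```python
# Sketch: fusing roadside and onboard detections into one view.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "construction_zone", "pedestrian"
    x: float            # position in an assumed shared map frame (meters)
    y: float
    confidence: float

def fuse(onboard, roadside, radius=2.0):
    """Keep onboard detections; add roadside ones the vehicle missed,
    and raise confidence where both sources agree."""
    fused = list(onboard)
    for r in roadside:
        match = next((d for d in fused
                      if d.label == r.label
                      and abs(d.x - r.x) < radius
                      and abs(d.y - r.y) < radius), None)
        if match:
            match.confidence = max(match.confidence, r.confidence)
        else:
            fused.append(r)   # object visible only from the roadside
    return fused

onboard = [Detection("vehicle", 10.0, 0.0, 0.9)]
roadside = [Detection("construction_zone", 30.0, 1.0, 0.95),
            Detection("vehicle", 10.5, 0.2, 0.7)]
for d in fuse(onboard, roadside):
    print(d)   # construction zone appears despite being unseen onboard
```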
Beyond technical issues, there are challenges in public trust, regulatory frameworks, and legal liability. Recent advances—such as autonomous taxis, deployment of smart roadside infrastructure, and more flexible regulatory policies—are steadily paving the way for broader adoption.
Your team recently received grants from the Alzheimer’s Drug Discovery Foundation to develop digital biomarkers for Alzheimer’s disease. What are the overarching goals of this project?
Alzheimer’s disease (AD) is the most common form of dementia, characterized by gradual memory loss, cognitive decline, and behavioral changes. It is a progressive and currently incurable neurodegenerative disorder that affects millions worldwide. Early detection is vital to improving quality of life and enabling more effective care strategies.
Our project aims to develop new digital biomarkers—AI-powered indicators derived from passive sensing in daily environments—to detect early signs of Alzheimer’s and related conditions. By using data from various sensors on personal devices and in the home, we hope to build tools that support early screening outside clinical settings.
Ultimately, we want to enable low-cost, proactive assessments that reduce healthcare burdens and offer families timely support. In the longer term, we envision systems that provide regular reports on daily routines, independence levels, and behavioral changes—helping individuals and caregivers understand evolving health conditions more holistically.
What is another research direction in your field where we are poised to make significant progress soon?
A promising direction is integrating large language models (LLMs) into real-world sensing applications. LLMs have rapidly evolved from language-focused tools to general-purpose problem solvers, but they still largely rely on textual or visual inputs.
The next leap is equipping these models with real-time sensory input from edge devices—microphones, wearables, cameras, and environmental sensors. This would allow them to reason not just from static data but through continuous, multimodal interactions with the world. If successful, such systems could become highly personalized AI assistants capable of understanding context, behavior, and emotion in far more nuanced ways than we currently imagine.
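As a rough illustration of the idea, the sketch below compresses a window of wearable accelerometer readings into text and hands it to a language model. Here query_llm is a hypothetical placeholder, and the windowing and summarization choices are assumptions for illustration only.

```python
# Sketch: turning a continuous sensor stream into textual context
# for a language model. All names and parameters are illustrative.
import statistics

def summarize_window(samples):
    """Compress raw accelerometer magnitudes into a compact text summary."""
    return (f"mean={statistics.mean(samples):.2f}g, "
            f"peak={max(samples):.2f}g over {len(samples)} samples")

def query_llm(prompt):
    """Hypothetical stand-in for an on-device or cloud LLM call."""
    return f"[model response to: {prompt[:60]}...]"

window = [1.01, 1.03, 2.40, 1.00, 0.98]   # a brief burst of motion
prompt = (f"Wearable accelerometer, last 5 s: {summarize_window(window)}. "
          "Describe the likely activity and flag any anomaly.")
print(query_llm(prompt))
```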
You have had successful experiences in entrepreneurship. How has technology transfer shaped your journey from research to market? Can you give an example?
Entrepreneurship is a dynamic process that bridges the gap between academic research and real-world impact. My journey has taught me that the transition from lab innovation to market-ready products requires more than just technical expertise—it demands a deep understanding of user needs, market dynamics, and iterative problem-solving. This experience has also provided valuable exposure to real-world challenges, which, in turn, has reshaped my research focus to address significant practical issues.
Our startup, ThingX Technologies—born from years of research on sensor systems, LLM systems, and AI for healthcare—is an example of this journey. Its flagship product, the Nuna smart pendant, is the world’s first emotion-tracking wearable, delivering personalized emotional insights in real time. However, bringing Nuna to market required significant adaptation. We optimized the power consumption of our on-device LLMs and customized multimodal sensor systems for seamless wearable integration. Privacy was a key priority, and we designed a secure, on-device architecture to address concerns about emotional data security. Collaborating with designers and market analysts helped us balance technical sophistication with user-friendly features. It’s been rewarding to see cutting-edge research evolve into a product that impacts lives.
Guoliang Xing is a Professor and Director of the AIoT Lab at the Chinese University of Hong Kong (CUHK). His research spans embedded AI, AI for health, autonomous driving, and cyber-physical systems. Highlights of his work include leading the development and field deployment of large-scale systems for roadside infrastructure–assisted autonomous driving, early clinical diagnosis and treatment of Alzheimer’s disease, and real-time volcano monitoring. His work has received seven Best Paper Awards, five Best Demo/Poster/Artifact Awards, and seven Best Paper Finalist distinctions at top-tier international conferences.
Xing serves as an Associate Editor for ACM Transactions on Computing for Healthcare and ACM Transactions on Sensor Networks, and has served as general or program co-chair for multiple ACM conferences. He was recently named an ACM Fellow for contributions to embedded AI and mobile computing systems.