People of ACM - Xing Xie

June 11, 2024

How did you initially become interested in data mining and AI?

I became interested in data mining approximately 15 years ago, when we started a project aimed at uncovering user interests from vast spatial datasets. Our objective was to leverage this information to recommend restaurants, travel routes, and social connections. Data mining emerged as a pivotal technology in this endeavor, enabling us to extract meaningful patterns from the abundance of data at our disposal. The impact of this research grew substantially with the rapid proliferation of mobile devices, sensors, and social networks. While our focus wasn't exclusively on AI at the time, many of the machine learning algorithms we developed in these contexts laid the foundation for our later exploration of responsible AI practices. In particular, our work intersected with privacy research, fairness considerations, and the pursuit of explainable AI, fostering a deeper appreciation for the ethical dimensions of AI deployment.

Along with your co-authors, you received the ACM SIGSPATIAL 2019 10-Year Impact Award for the paper “Map-Matching for Low-Sampling-Rate GPS Trajectories.” What was the key challenge you and your co-authors were trying to address with this paper?

In this paper, we introduced a method for map-matching GPS points to specific locations on a map with explicit semantics, thereby enhancing the accuracy of user positioning by leveraging the characteristics of human movement trajectories. This work addressed a fundamental challenge at the intersection of computer science and spatial information science, driven by both data and practical applications. Our interdisciplinary research in this area began as early as 2009, making us pioneers in the field, which was a significant factor in our recognition for the award.
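The general idea behind map-matching can be sketched as a Viterbi-style decoding problem: each GPS point is "emitted" by some candidate road segment, and the most plausible segment sequence trades off how close each point lies to a segment against how reasonable the route between consecutive segments is. The sketch below is a minimal illustration of this generic formulation, not the algorithm from the paper; the Gaussian noise model, the parameters `sigma` and `beta`, and the input data shapes are all assumptions made for this example.

```python
import math

def emission_logprob(dist_m, sigma=20.0):
    # Gaussian GPS-noise model (assumed): nearer segments are more likely.
    return -0.5 * (dist_m / sigma) ** 2

def transition_logprob(straight_m, route_m, beta=5.0):
    # Penalize detours: route distance should be close to the
    # straight-line distance between consecutive GPS points.
    return -abs(straight_m - route_m) / beta

def map_match(candidates, transitions):
    """Viterbi decoding over per-point candidate segments.

    candidates: list (one per GPS point) of {segment_id: dist_to_segment_m}
    transitions: {(seg_a, seg_b): (straight_line_m, route_m)}
    Returns the most likely sequence of segment ids.
    """
    # Initialize with the first point's emission scores.
    scores = {s: emission_logprob(d) for s, d in candidates[0].items()}
    back = []
    for cand in candidates[1:]:
        new_scores, pointers = {}, {}
        for s, d in cand.items():
            best_prev, best = None, -math.inf
            for p, ps in scores.items():
                straight_m, route_m = transitions[(p, s)]
                score = ps + transition_logprob(straight_m, route_m)
                if score > best:
                    best_prev, best = p, score
            new_scores[s] = best + emission_logprob(d)
            pointers[s] = best_prev
        scores, back = new_scores, back + [pointers]
    # Backtrack from the best final segment.
    path = [max(scores, key=scores.get)]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return path[::-1]
```

Low sampling rates make the transition term the hard part: with minutes between points, many routes connect consecutive candidates, which is precisely the gap the award-winning paper tackled.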

What is the most important way in which AI is transforming recommendation systems?

Since late 2022, the emergence of large language models (LLMs) has significantly reshaped the realm of recommendation systems. While conversational and explainable recommendation systems have been discussed for years, it's only recently that these concepts have become truly viable in practice. Recommendation systems essentially function as predictors of user behavior, extrapolating from their past actions. The primary challenge lies in dealing with sparse signals and a lack of world models. LLMs have made notable strides in addressing the latter, offering promising solutions to mitigate the former challenge. Another pivotal transformation is the shift towards an interaction-centric approach. Looking ahead, LLM-based recommendation systems are poised to operate more akin to personal assistants, engaging in dialogue to better understand user preferences and provide increasingly accurate recommendations. This evolution signifies a departure from traditional static recommendation models towards dynamic, conversational systems tailored to individual needs.
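As a toy illustration of this interaction-centric shift, a conversational recommender can be framed as assembling the user's history and dialogue turns into a prompt for an LLM, rather than scoring items with a static model. The sketch below only builds such a prompt; the function name, prompt format, and the idea of ranking candidates in a single call are assumptions for illustration, not a description of any deployed system.

```python
def build_rec_prompt(history, candidates, dialogue=()):
    """Assemble a hypothetical LLM prompt for conversational recommendation.

    Real systems layer retrieval, safety guardrails, and structured
    output parsing on top of a core prompt like this one.
    """
    lines = [
        "You are a recommendation assistant.",
        "The user previously enjoyed: " + ", ".join(history) + ".",
    ]
    # Dialogue turns let the system refine its picture of the user's
    # current intent, not just their historical behavior.
    for turn in dialogue:
        lines.append("User: " + turn)
    lines.append(
        "Rank these candidates for the user, best first, and briefly "
        "explain each choice: " + ", ".join(candidates) + "."
    )
    return "\n".join(lines)
```

The dialogue turns and the request for explanations are what distinguish this framing from a traditional static recommender, which only consumes the history.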

As the leader of Microsoft Research's Societal AI initiative, could you outline the primary research areas and challenges within this interdisciplinary field, focusing on the impact of AI on society?

Certainly. Our efforts in responsible AI have long encompassed areas such as privacy preservation, explainable machine learning, and model robustness. However, the advent of large language models has introduced a host of new challenges, prompting the realization that addressing these issues requires a socio-technical approach. Take fairness, debiasing, and language detoxification, for instance; while we've made strides in these domains, the context of LLMs necessitates indirect methods like fine-tuning or employing prompts to shape model behavior. While LLMs offer unparalleled sophistication in understanding complex requirements, a critical challenge now lies in accurately defining and communicating our values to AI, necessitating close collaboration with social science researchers.

Similarly, the landscape of AI evaluation has evolved. Previously, assessments were based on predefined tasks and benchmarks, but as AI becomes more versatile, a pressing question arises: what aspects should we evaluate, and how can we ensure the robustness of these evaluations? We're actively collaborating with psychology researchers to explore these new evaluation paradigms.

Beyond these focal points, we're also delving into AI safety, copyright considerations, and examining its broader impact on areas like research and education. Our overarching belief is that fostering closer collaboration between computer scientists and social scientists is essential to addressing these multifaceted challenges effectively.


Xing Xie is a Partner Research Manager at Microsoft Research Asia. His research interests include data mining, social computing, and responsible AI.

Xie serves on the editorial boards of several publications including ACM Transactions on Recommender Systems (TORS), ACM Transactions on Social Computing (TSC), and ACM Transactions on Intelligent Systems and Technology (TIST), among others. His work has been recognized with several awards including the ACM SIGKDD 2022 Test of Time Award and the ACM SIGKDD China 2021 Test of Time Award. He is a Fellow of the China Computing Federation and was recently named an ACM Fellow for contributions to spatial data mining and recommendation systems.