People of ACM - Elizabeth Churchill

June 4, 2015

How have your international exposure and travel influenced your interest in the ways that social technologies and social media are created, consumed, adopted and adapted in different cultures and social settings?

Thinking internationally is the best way to realize that technology isn't always the solution. And you, as a designer or developer, don't always know the right solution. Travel helps you understand another point of view, which is very helpful in software and hardware design and, in fact, an essential skill for practicing user-centered design and development.

That said, thinking abstractly and from the social science perspective, it is clear that human beings are naturally social animals. We know this intuitively of course, but increasingly, research indicates that our brains are wired for social connection and communication. So it is no surprise that social technologies and social media are part of the fabric of everyday life for those who have access to them.

But how those technologies fit into people's lives can vary enormously, and the diversity of people's practices can get lost in the abstract numbers that are so seductive: x thousand transactions, y million users, z billion clicks, and so on. Traveling forces you to be open to learning about the details, the nuances that get lost in the aggregates. It forces you to challenge your stereotypes and examine your sometimes ill-founded assumptions. Traveling around the world, living in different countries, moving between social groups and being observant reveals a rich pattern of differences as well as similarities in how social technologies are used and understood, and how people conduct themselves in those moments of connection. These nuances and differences are often elided in the broad-strokes data that are reported, but this is where opportunities for new innovations lie. Good examples of differences include value systems around data privacy, which are reflected in people's tolerance of and resistance to online data gathering, and differences in activism and in laws and policies around the world. These in turn affect what services are available and how they are implemented and used.

While I was at Yahoo!, my group investigated differences in technology and social media use across different cultures and subcultures. Over several years we developed a set of methods that strive to reveal the "meaning in the mechanisms and measures": that is, combining qualitative investigations into people's use of everyday technologies through field observations with log analysis of user activities, interviews and surveys, and the deployment and evaluation of mock-ups, carefully instrumented prototypes and experimental interfaces.

We conducted network analyses to see how social media content, like videos, spreads virally, but also how it moves between subgroups and different subcultures. We'd look at patterns of sharing, exchanging and conversation to expose regularities, similarities and differences. We developed methods for what we called "experience mining" from use data. Our focus was to complement business metrics with measures that really reflect people's experiences and what was meaningful to them. Our work reinforced for me the need to always be expanding one's horizons and looking to expose oneself to, and develop empathy for, other perspectives.
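As a rough illustration of that kind of network analysis, here is a minimal sketch in Python. The share log, the networkx community-detection step and all of the names are hypothetical stand-ins, not the instrumentation or pipeline the group actually used; the point is simply how sharing ties can suggest whether a piece of content stays inside a subgroup or jumps between subgroups.

```python
# Illustrative sketch only: a toy "share log" and a simple community analysis,
# not the actual methods used at Yahoo!.
import networkx as nx
from networkx.algorithms import community

# Hypothetical share log: (sender, receiver, content_id)
shares = [
    ("ana", "bo", "video_1"), ("bo", "cy", "video_1"),
    ("cy", "dee", "video_1"), ("eve", "fay", "video_1"),
    ("fay", "ana", "video_2"), ("dee", "eve", "video_2"),
]

# Build a sharing graph weighted by how often two people exchange content.
G = nx.Graph()
for sender, receiver, _content in shares:
    if G.has_edge(sender, receiver):
        G[sender][receiver]["weight"] += 1
    else:
        G.add_edge(sender, receiver, weight=1)

# Detect subgroups (communities) from the sharing structure.
groups = list(community.greedy_modularity_communities(G, weight="weight"))
membership = {person: i for i, grp in enumerate(groups) for person in grp}

# For each piece of content, count how many shares crossed subgroup boundaries,
# a rough signal of whether it spread within a subculture or jumped between them.
for content_id in {c for _, _, c in shares}:
    hops = [(s, r) for s, r, c in shares if c == content_id]
    crossing = sum(membership[s] != membership[r] for s, r in hops)
    print(f"{content_id}: {crossing}/{len(hops)} shares crossed subgroup boundaries")
```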

My view is that whether you travel in person or travel in your perspectives, through experimental data collection and exploratory analyses in the small and in the large, being confronted with people who think and act in ways that differ from what you would do is key to being a good technology designer/developer.

There is another reason for actually traveling to an unfamiliar place, though. An international trip highlights the ways in which infrastructures, from physical landscapes to institutional and governmental policies and laws, affect what technologies are readily available and how they are used. You may have the best social media platform in the world, but if the infrastructures create hard and soft barriers to adoption, you are not going to have an audience.

This is one of the reasons I was active in creating an Adjunct Chair for Policy on the SIGCHI Executive Committee, a role that Jonathan Lazar has held for the last several years. He recently released a co-authored document on the roles that ACM HCI scholars are taking, and can more actively pursue, in addressing standards and policy issues that affect technology design.

Finally, it is important to realize that not only does technology change, but what is culturally acceptable also changes. For example, online dating sites, once taboo, now comprise a huge worldwide industry, challenging assumptions around romance and relationships.

What are the advantages for effective communication and connection technologies of the intelligent user interfaces known as embodied conversational agents—sometimes represented graphically—that you pursue in your research?

One of the dreams of human computer interaction has been to create interfaces that are "naturalistic," meaning easy to use, intelligible and adaptive. The work on embodied interface agents (and, by extension, from 2D/2.5D graphical agents to humanoid robots) has been driven by the idea that multi-modal communication, involving not only speech but also deictic gestures like pointing and eye gaze, and paralinguistic cues like "beat" gestures, would make interaction with a computational agent more "natural." These features aim to make interaction not only easier but also more enjoyable and emotionally satisfying.

My work in this area was a while back, from the late 1990s to the early 2000s. Although we now have better technologies for implementing our ideas, much of that early research from a social science viewpoint still stands, and the challenges remain the same. In creating embodied agents, just as when creating disembodied agents like the ones we speak to (HAL being the canonical example), our aim is to signal the limits of the agents' capabilities, knowing when the Turing test will fail and how to recover when it does, and determining how to degrade gracefully.

Research has shown that people quickly come to rely on and even to personify agents they interact with, whether they are disembodied or not. And personification means relying on agents and making assumptions about the extent of their capabilities. As technology designers, we must be aware of that reliance and prevent frustration and disappointment by signaling capabilities. That is one reason why simple cues, like making the agent an animal or a robot and thus signaling a scope of capability different from an adult human's, invite the "user" to be aware that there may be a limit to the agent's powers.

The second challenge is how to signal when the agent's powers exceed the human users' expectations. If the agent is evidently too powerful and may not have the users' interests at heart, there is anxiety. Designing agents of any kind starts with communication capabilities and a socially motivated commitment to transparency of process, but really at the center are psychological issues that amount to reliability, anxiety and trust. That is why so much work on agent systems has focused on what has been called "the uncanny."

Simple models of dialogue assume information exchange, while sophisticated models emphasize the tactics needed to develop mutual understanding and to create, through clarification, the possibility of agreement as well as the possibility of disagreement. Transparency and a commitment to developing shared perspectives are at the root of empathy, and the presence of a mutually shared and enacted empathic connection builds trust. It reduces uncertainty and anxiety.

In our work, we investigated principles of empathic connection, appropriate prediction and effective repair alongside the technical aspects. These principles extend to all technology design where the technology is a collaborator, an agent, a confederate, or a delegate. "Assistant" technologies, whether embodied or not, need to know when to take initiative and when not to, and that means being reliable, consistent and predictable, and building a predictive model of their user(s).

Given your assertion that personalization is central to the discipline of human-computer interaction, why is it important to put the "person" back into personalization, as you have stated?

The consistent bedrock of all the research areas I have studied from a technological perspective is a focus on social engagement between people, and between people and technologies, whether those technologies demonstrate smarts or not. One of the mantras of today's world is that technologies can learn about us, can be personalized or personalize themselves to us. If you ask most people what they think of when they hear the phrase "personalized for you," it conjures up a high-end concierge service.

The collaborative filtering algorithms that underlie personalization take us a long way down that path. But there is so much left to do in that "last mile," from personalizing in general terms for my cohort or segment to personalizing for me. So, at eBay, with Atish Das Sarma, I drove a research agenda we called Putting the Person into Personalization, addressing not just the algorithms but also the interface and interaction aspects of personalization. We looked at two aspects of personalization that I think are critical for that last mile (a minimal collaborative filtering sketch follows the list below):

  1. Outcome personalization, where we worked on creating new interface and interaction experiences that playfully engage people in refining recommended content. Our experimental prototypes triangulated data from profiles, transaction histories and models of the user based on the more traditional collaborative filtering algorithms.

  2. Process personalization, where we built models of users' preferences for interaction, looking at when, on what device, and how information should be presented to them. We explored the possibility of filtering content by time, location and device, based on expressed and inferred preferences.
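To make that collaborative filtering baseline concrete, here is a minimal user-based sketch in Python. The ratings matrix and the naive nearest-neighbour scorer are made up for illustration and are not the algorithms eBay used; the "last mile" work described above sits on top of a baseline like this, letting people refine the output (outcome personalization) and controlling when, where and how it is surfaced (process personalization).

```python
# Minimal user-based collaborative filtering sketch (illustrative only).
# Rows of the ratings matrix are users, columns are items; 0 means unrated.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 0, 0],   # user 1
    [0, 1, 5, 4],   # user 2
    [1, 0, 4, 5],   # user 3
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors, guarding against zero norms."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user, k=2):
    """Score unseen items for `user` from the k most similar other users."""
    sims = np.array([cosine_sim(ratings[user], ratings[v])
                     for v in range(len(ratings))])
    sims[user] = -1.0                                # exclude the user themselves
    neighbours = np.argsort(sims)[-k:]               # k nearest neighbours
    scores = sims[neighbours] @ ratings[neighbours]  # similarity-weighted sum
    scores[ratings[user] > 0] = -np.inf              # only recommend unseen items
    return int(np.argmax(scores))

print("Recommend item", recommend(0), "to user 0")
```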

As we move into a world of more and more sentient and semi-sentient devices, the so-called Internet of Things, there are new paradigms for what we can do in terms of process and presentation personalization.

As an applied social scientist, interactive technology designer and social communications researcher with a background in psychology, artificial intelligence and cognitive science, what advice would you give to young people considering careers in computing?

Ask hard questions, invite others to help you solve them and learn to love those moments when you are wrong as much as, if not more than, when you are right. Look at the questions you pose from different angles.

Don't be dismissive of disciplines outside your comfort zone. Respect epistemological stances that differ from your own. Watch people who are not like you interact with the world. Learn from them. Do not assume your way of doing things is the right way. And don't assume it isn't either. Play, tinker, try all the courses, tools and platforms that are increasingly available online.

And finally, share your knowledge. Make it accessible to others. If you don't like talking about what you know, show people with the things you build, including the things that don't work out. Share your process as well as your products. Learn how to teach well. In teaching others, you will learn more about topics and about yourself than you could possibly imagine. 

 

Elizabeth Churchill, a Director of User Experience at Google, is an applied social scientist working in the areas of human computer interaction, computer mediated communication, mobile/ubiquitous computing, and social media. Prior to her current position, her roles included Director of Human Computer Interaction at eBay Research Labs in San Jose, California, and Principal Research Scientist and Research Manager at Yahoo! in Santa Clara, California.

A psychologist by training, Churchill holds a BS in Experimental Psychology and an MS in Knowledge Based Systems from the University of Sussex, and a PhD in Cognitive Science from the University of Cambridge. She has helped to create online collaboration tools, mobile applications, and public media installations that promote collaboration and communication. Her current work focuses on the design of developer tools for device ecosystems.

Churchill has more than 50 patents granted or pending, and over 100 publications in theoretical and applied psychology, cognitive science, human-computer interaction, mobile and ubiquitous computing, computer mediated communication and social media. She has co-edited the books Embodied Conversational Agents, Collaborative Virtual Environments, Inhabited Information Spaces, Public and Situated Displays, and Agent Supported Cooperative Work. Her co-authored text, Foundations for Designing User Centered Systems, was published in 2014. She has been a regular columnist for ACM's interactions magazine since 2008.

An ACM Distinguished Scientist, Churchill serves as Executive Vice President of SIGCHI, ACM's Special Interest Group on Computer-Human Interaction. She is a Distinguished Visiting Scholar at Stanford University's Media X lab.