People of ACM - Jeanna Matthews

February 7, 2023

How did you initially become interested in operating systems?

I like demystifying black boxes, and operating systems were one of the first big mysteries in computing where it felt incredibly satisfying to understand how they worked top to bottom.

Your work seems to go well beyond operating systems these days. Can you tell us a bit about that evolution?

When I first began in computing, it felt to me somewhat like a fundamental good—automating tasks that were drudgery, supplying the right information at the right time, connecting people, and amplifying more voices. Initially, I was mostly concerned with making computing systems faster and more efficient, but over time I began to see patterns of computing also being used for surveillance, manipulation, disinformation, and even oppression. I stopped wanting to just make the system faster and more efficient. I didn’t want to just put my foot on the gas pedal when I had fundamental questions about the way the car was being driven. That has led me increasingly to work in security, privacy, algorithmic accountability, and transparency. For example, I have been working on identifying and reducing bias in natural language processing and increasing accountability in criminal justice software.

In the recent paper, “Political Polarization and Platform Migration: A Study of Parler and Twitter Usage by United States of America Congress Members,” you and your co-authors explored the growing dissatisfaction with platform governance decisions at major social media platforms such as Twitter. What was a key insight of this paper? In broad terms, is there an overarching consideration society and/or the computing field can keep in mind to allow for free speech while having civil discourse on these platforms?

Growing dissatisfaction with platform governance decisions has led to efforts to found new platforms and motivate users to shift. The movement from Twitter to Parler in 2020 was one of the most substantial examples of this, and it ended up playing a notable role in the January 6, 2021, attack on the United States Capitol building. We happened to be studying this shift exactly during the leadup to those events and their aftermath, so we had a fascinating set of data. Many people have recognized that in today’s media landscape, competing political groups don’t just have different opinions, they often have different facts. Shifts like this show us that it goes much further than that. They also have different criteria for what a fact even is (e.g., different rules for fact checking or content moderation). They increasingly don’t even use the same public square for discourse (e.g., moving to different platforms). Civil discourse is increasingly difficult when we spend time in these polarized online environments that share so very little with each other. The shift to Parler is one example of how that happens.

You were a co-lead author of the global ACM Technology Policy Council’s (TPC) recent Statement on Principles for Responsible Algorithmic Systems. What was the impetus for producing this statement?

Algorithmic systems are increasingly used to make critical decisions about the lives of individuals in areas such as hiring, housing, credit, criminal justice, and the allocation of public resources. In 2017, we released a “Statement on Algorithmic Transparency and Accountability” with seven principles that has been widely cited. In this 2022 statement, we continue and expand on that. To some of the original principles of explanation, access and redress, auditability, and more, we have added explicit statements about legitimacy, competency, minimizing harm, and limiting environmental impacts. The 2017 statement has been used countless times as a starting point in conversations with legislators and policy makers around the world to help shape technology policy. We have also seen working engineers point to the statement when advocating within their workplace for investments in responsible system design and implementation. I can already see this 2022 statement being used in a similar and powerful way.

The development of fair and transparent algorithms has been a rapidly growing research area, as witnessed by the program offerings at ACM conferences such as FAccT and AIES. Where do you envision (or hope) this research area is heading?

Even though I have been discouraged by increasingly seeing negative uses of computing, I still also see the same positive potential that drew me to computing in the first place. Achieving the positives without the negatives is going to require investments in public policy, law, regulation, professional codes of conduct, independent auditing, and more. Computing innovations, often powered by surveillance data, are often portrayed as “being good for everyone” when there is substantial evidence that the costs and benefits do not fall evenly throughout society and may disproportionately impact already marginalized communities. The research being published in venues such as FAccT and AIES is clearly documenting this evidence and exploring possible solutions. That work is desperately needed if we are going to live in a world where computing respects the rights of individuals and contributes to healthy societies. Computing will not be automatically good for everyone unless we insist on it, and building responsible algorithmic systems is going to take investment and oversight in a world where cutting costs and avoiding oversight is too often the norm.

Jeanna N. Matthews is a Professor of Computer Science at Clarkson University and an affiliate of the non-profit research organization Data & Society. Her current work focuses on securing societal decision-making processes and supporting the rights of individuals in a world of automation. She has published research in a broad range of systems topics from virtualization and cloud computing to social media security and distributed file systems. Matthews’ books include Running Xen: A Hands-On Guide to the Art of Virtualization and Computer Networking: Internet Protocols in Action.

For ACM, she is a SIG Governing Board Member of the ACM Council, former Chair of the ACM Special Interest Group Governing Board, and former Chair of the ACM Special Interest Group on Operating Systems (SIGOPS). Presently, she is a member of ACM Council and Co-Chair of both the ACM US Technology Policy Committee’s (USTPC) Subcommittee on Artificial Intelligence and Algorithmic Accountability, and the global Technology Policy Council's Working Group on the same topic.