People of ACM - Manish Raghavan

October 4, 2022

To some, a computer scientist teaching at a business-focused school may seem unusual. Will you tell us about your role at MIT Sloan and why you chose this position?

My role at MIT Sloan is part of a larger trend in which computer science is increasingly important throughout a variety of academic fields. My shared appointment between Sloan and EECS is through MIT’s Schwarzman College of Computing, which established several such joint positions in order to further interdisciplinary work between computer science and other fields. To me, this was an exciting opportunity to broaden the scope of my work and make it more appealing to business audiences. To that end, I’ll be developing new classes for both computer science and MBA students that help them understand computing applied to real-world domains.

There is a small but growing body of research on algorithmic decision making. What new ground did you want your dissertation to cover? How is your dissertation organized?

With my dissertation, I wanted to rigorously work through the consequences of deploying algorithms, especially predictive algorithms, into the world. Because this is such a broad area of inquiry, my dissertation incorporates ideas and techniques from computer science, economics, and legal scholarship. It’s loosely organized according to those topics, though as with any interdisciplinary work, there isn’t necessarily a clean separation between them.

In the 2019 paper “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices,” you and your co-authors discuss how algorithmic de-biasing techniques interface with, and create challenges for, antidiscrimination law. What challenges do de-biasing techniques pose for antidiscrimination law?

A common interpretation of existing antidiscrimination law in employment is that it triggers heightened scrutiny when a selection mechanism introduces significant disparities in representation across demographic groups. As written, the law is intended to protect against employers who, nefariously or not, use selection criteria that discriminate against marginalized groups without any job-related justification. One heuristic for detecting such disparities, known as the 4/5 rule, flags a selection process when any group’s selection rate falls below four-fifths of the highest group’s rate; it has long been used as an initial filter to look for evidence of discrimination in past decisions. However, as employers have begun to introduce algorithmic decision-making into the hiring process, the 4/5 rule has become a target for optimization. Specifically, employers may test their algorithms before deploying them to see whether they adhere to the 4/5 rule. If not, they can modify these algorithms until they do.
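As a concrete illustration, the 4/5 rule amounts to a simple ratio test on group selection rates. The following is a minimal sketch (function names and example numbers are hypothetical, not taken from the paper):

```python
def adverse_impact_ratio(selected, total):
    """Ratio of the lowest group selection rate to the highest.

    `selected` and `total` map each group label to the number of
    candidates selected and the number who applied, respectively.
    """
    rates = {group: selected[group] / total[group] for group in total}
    return min(rates.values()) / max(rates.values())


def passes_four_fifths_rule(selected, total, threshold=0.8):
    """True if no group's selection rate falls below 4/5 of the highest."""
    return adverse_impact_ratio(selected, total) >= threshold


# Example: group A has 50 of 100 candidates selected (rate 0.5),
# group B has 30 of 100 selected (rate 0.3). The ratio 0.3/0.5 = 0.6
# falls below the 0.8 threshold, so the process fails the heuristic.
print(passes_four_fifths_rule({"A": 50, "B": 30}, {"A": 100, "B": 100}))
```

Because the test reduces to a single inequality, an employer can tune an algorithm until the check passes, which is precisely the optimization behavior described above.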

In spirit, discrimination law is not meant to be automated. Algorithmic techniques that seek to automatically pass particular statistical tests fail to grapple with the larger implications of the law. While proponents of these new algorithmic techniques argue that they can significantly improve upon human decision-making and increase representation of marginalized groups, their focus on particular quantitative notions of discrimination draws attention away from other important factors. Do these tools work well for everyone? How do they perform when they encounter a candidate who looks very different from any past examples? While existing law struggles to contend with these questions, particularly as they apply to algorithms, they are crucial to identifying and preventing discrimination going forward.

Are you hopeful that computational tools for detecting and/or mitigating bias in algorithms could be effective in the right circumstances?

Computational tools offer us new opportunities to confront biased and discriminatory decision-making, and determining how to leverage these opportunities in a responsible way is an important direction for ongoing research. I’m somewhat optimistic about this—we know from decades of research that humans exhibit all sorts of biases in our decision-making. Algorithms may provide us with new tools to understand and control these biases. But at the same time, there are risks to algorithmic decision-making that we have yet to fully reckon with. Without a deep understanding of the contexts in which algorithms are deployed, we will fail to account for their far-ranging consequences. If we proceed with caution and pay close attention to the insights and expertise from the social sciences, I’m hopeful that we can contribute to improving decision-making in practice.

What’s another line of research you are presently working on that you are particularly excited about?

I’ve been interested for a while in how algorithms shape our online experiences across a variety of information platforms. These algorithms have a complex relationship with our behavior—they learn from what we do, but they also influence the choices we make in the future. I’ve been working with various collaborators to understand this human-algorithmic interaction and how it leads to some of the negative consequences of information platforms that we’re seeing today. For example, we often make choices that we regret, and we spend more time consuming online content than we would like. Could we take this into account when designing algorithms that learn from our past choices? My hope is that we can design algorithms and guardrails around them that enhance our agency, let us make better choices, and ultimately make us happier.

Manish Raghavan is an Assistant Professor at the MIT Sloan School of Management. Prior to his current position, he was a postdoctoral fellow at the Harvard Center for Research on Computation and Society (CRCS). His research interests lie in the application of computational techniques to domains of social concern, including online platforms, algorithmic fairness, and behavioral economics. He is particularly interested in the use of algorithmic tools in the hiring pipeline.

Raghavan received the 2021 ACM Doctoral Dissertation Award for his dissertation “The Societal Impacts of Algorithmic Decision-Making,” which he completed for his PhD in computer science at Cornell University.