People of ACM - Janet Haven

December 5, 2023

How did your career path lead you to research and advocacy at the intersection of public policy and technology?

I graduated from college in the mid-1990s into two extraordinary moments of global change. One was political, the purported "end of history"—the fall of the Soviet Union and the collapse of its sphere of influence, and the supposed ascendance of liberal democracy around the world as a natural and inevitable political state. The second was the arrival of the commercial internet, in the form of the World Wide Web, and the early days of the "information society." My path married those two moments. I moved to Central Europe, wanting to experience the expected political changes in real time, and I started my career in the boom of late-90s tech start-ups there, curious to understand how the region's newfound freedoms might enable innovative technology communities making use of the web.

Fast-forward five years, and I was living in Budapest, Hungary. Even then, in the early 2000s, it was becoming apparent that liberal democracy was not an inevitability in that region (or elsewhere), and that the hype/boom/bust cycle of the tech start-up world was just beginning. I became increasingly interested in how these two seemingly disparate fields—political and social change efforts and the rapid development of data-centric technologies—were growing intimately connected. In 2002 I was lucky to have the opportunity to join the Open Society Foundations Information Program, then based in Budapest, and I spent the next 14 years in philanthropy. I focused on supporting a nascent and rapidly evolving international field of human rights and good governance advocates, researchers, and non-profit organizations that were working, initially, to use technology to support their efforts to strengthen democratic practices, democratize highly controlled information and media environments, and defend the rule of law in national and international settings. Later, as the harms of data-centric technologies became more visible alongside the benefits, my focus shifted to the idea that technology was not only a tool for governance, but a societal product that itself needed to be governed. I recognized that governance would be key to ensuring that innovation drove towards social benefit, particularly for the people and communities that were historically most vulnerable and most likely to be impacted.

Data & Society was founded and publicly launched by danah boyd in 2014 to study the social and cultural implications of data-centric technologies. When I joined the organization in 2016, I was interested in how to connect the research—rigorous social science focused on the impact of technology on society—to unfolding policy debates. Policy to govern technology is rarely informed by the experiences of people who live with technology and are impacted by it. Often, people are entirely absent, or the imagined benefits of technology to humanity as a whole—without specific context—are taken as fact. Our work at Data & Society argues that policymaking should be informed and shaped by grounded, socio-technical research—research that values human experience, challenges the hype that drives "technosolutionism," and prioritizes the social, cultural, and political impacts of technology alongside pure technical advancement.

Recently, a group of technology leaders issued a widely circulated letter calling for a pause in the development of AI technologies. Along with your Data & Society colleague Jenna Burrell, you authored an op-ed arguing that urging more self-regulation from industry is not the right approach. What was the key point of your article?

While it is absolutely critical that industry voluntarily adopt approaches to advancing responsible and trustworthy AI, we've learned from the past that self-regulation is not sufficient for protecting fundamental rights, nor for ensuring that meaningful accountability mechanisms are in place should harm occur. Without government playing an enforcement role, industry has strong incentives to externalize product risk onto individuals and society. What government can do is define what harms our society will not allow and make it more expensive (through regulatory action) to cause harm than to prevent it. This is not only true in the AI field. We look to regulation to safeguard our food, our transportation systems and devices, our energy sources, our homes, our health, and our environment. The idea that the technology industry's products are an exception to this basic social contract says to me that the government is abdicating its responsibility to protect the public interest and our fundamental rights.

Two often discussed challenges for AI regulation are: 1) The difficulty of government regulatory efforts (enacting laws and adopting policies) to keep up with the pace of technological innovation; and 2) Since technology is pervasive around the world, regulatory policies must be adopted internationally. Given these challenges, what gives you hope that we can effectively regulate these technologies in the near future?

First, "effectively regulate" is a very, very contested term. I might define it quite differently than the CEO of an AI company would, which in turn might be different from how a person experiencing discrimination in obtaining a mortgage or access to public benefits because of an algorithmic system would. So, there isn't much agreement on that. But it's been fascinating to see members of Congress point to the lack of social media regulation as a failure that we should be learning from; it appears to be a rare point of agreement when it comes to what not to do.

In terms of what I think it means to effectively regulate: at the very baseline, it means to protect a set of fundamental American rights in the context of algorithmic systems, and to foster AI innovation that benefits the public interest rather than solely private actors. In working toward that, what gives me hope is that we're not starting from scratch. Sorelle Friedler and I wrote a piece recently in The Hill arguing that a great deal of thought and design work has already gone into developing approaches to AI governance that could achieve those goals, and we're lucky that Congress can draw on it directly as it moves towards legislation. These include policy documents like the "Blueprint for an AI Bill of Rights" from the White House Office of Science and Technology Policy, NIST's "AI Risk Management Framework," and the deep body of interdisciplinary scholarship that has articulated both the harms of AI systems and potential solutions to advance a human-centric, public interest AI innovation environment. I also think it should be heartening that we've seen the EU take major steps, through the EU AI Act, to think through and articulate an approach to protecting European citizens while preserving innovation.

What worries me, though, are two key issues. First, I think technical and tech industry actors are overrepresented in the policymaking conversations happening right now on the federal level. While I think technical expertise and an understanding of the competitive environment are critical inputs to governance, we need an interdisciplinary and participatory process to create robust regulation that prioritizes understanding the societal impacts of AI and mitigating harm to individuals and groups.

Second, regulation is a tool, but many of the societal impacts of AI systems will fall outside of regulatory reach. I worry that in the focus on regulation—and particularly through the lens of existential risk—we are missing the opportunity for a broader societal conversation about automation, the societal bias towards AI adoption, and its role in our future. Meredith Broussard, in her 2018 book Artificial Unintelligence, writes about "technochauvinism"—the faulty belief that technology is always the solution to a given societal problem. As a society, we're fixated on the question of "what can we automate?" while failing to ask a much more critical one: "what should we automate?" Right now, the fact that something can be automated is widely seen as reason enough to do it—even at the risk of harming people and society at large. That shouldn't be the case.

What is an important issue Data & Society is working on that you think hasn't received enough attention?

I'll mention three issues. First, it's critical that we understand AI R&D—particularly federally funded AI R&D—as a sociotechnical research field rather than a purely technical one. Sociotechnical research studies technologies in context—social, political, cultural—and recognizes that to successfully deploy a new technology, it must integrate with human workflows and infrastructures that are often invisible. Approaching technology through a sociotechnical lens means never assuming that the impact of a technology can be predicted from its technical properties alone. The approach also asks whether a given technology is appropriate to the problem (or not) and how it might work alongside non-technical solutions. Our research methodologies at Data & Society are sociotechnical, and we believe that sociotechnical approaches should be much more widely integrated into AI R&D. I was very pleased that the first-year report to the White House from the National AI Advisory Committee, on which I sit, included an extensive recommendation about advancing government commitments to sociotechnical research.

Second, and relatedly, we're focused on the issue of participation—who is at the table in the design and governance of AI and algorithmic decision-making systems. At the end of September 2023, we released a policy brief by Data & Society affiliate and University of Baltimore legal scholar Michele Gilman called "Democratizing AI: Principles for Meaningful Public Participation," which not only lays out an argument for why the participation of impacted groups in AI governance is critical, but also shows how to create meaningful pathways for that participation. We recently held a webinar on this new policy brief with Gilman; Harini Suresh, Assistant Professor of Computer Science at Brown University; and Richard Wingfield, Director of Technology and Human Rights at BSR, a business network and consultancy focused on global sustainability. We want to bring participation into AI governance conversations, and through our research and projects like the Algorithmic Impact Methods Lab continue to build strong methodologies for participatory governance.

Third, our Labor Futures program at Data & Society investigates the impacts of data-centric technologies and AI on work and workers, particularly those in low-income or precarious work environments. A great deal of attention has focused on concerns that workers will be replaced by automation and the subsequent need for retraining programs. Our work instead focuses on the ways in which automated technologies and AI can enhance or erode worker power, protections, and job quality, and the role that workers can play in ensuring better outcomes. Our Labor Futures team, led by Aiha Nguyen, has released research on issues like algorithmic management and the expansion of workplace surveillance practices. Together with our policy team, the team has submitted public comments on work, workers, and automation to the federal government. We've been excited to see our work moving policy in favor of workers.

For example, in November 2022 the National Labor Relations Board's general counsel released a memo calling for greater protections for workers against the harm of algorithmic management, citing our work on this as foundational. I'd like to see more of the public discussion on the impacts of AI on workers broaden beyond "the robots will take our jobs," and include robust attention to what integration of AI systems means for workers, how to ensure worker participation in the design and deployment of AI in the workplace, and the kinds of protections we need to have in place to ensure that algorithmic systems and AI are truly beneficial for everyone, including workers.

Janet Haven is the Executive Director of Data & Society, a non-profit organization with a mission to advance public understanding of the social implications of data-centric technologies, automation, and AI. She is also a member of the (US) National Artificial Intelligence Advisory Committee. Haven started her career in technology startups in Central Europe and lived in the region for more than 15 years, deepening her understanding of the ways the internet and algorithmic technologies impact societies outside the US.

For ACM, she serves as a member of the ACM US Technology Policy Committee (ACM USTPC). ACM USTPC currently comprises more than 170 members and serves as the focal point for ACM's interaction with all branches of the US government, the computing community, and the public on policy matters related to information technology.