ACM Conference Showcases Research on Fairness, Accountability and Transparency in Algorithmic Systems

Interdisciplinary Conference Opens June 21 in Hybrid Format

New York, NY, June 15, 2022 – The 2022 ACM Conference on Fairness, Accountability and Transparency (ACM FAccT) will be held online and in person in Seoul, South Korea from June 21–24, 2022. Although it is only in its fifth year, FAccT has rapidly become the premier venue for leading-edge research on the fairness, accountability, and transparency of socio-technical systems. The work presented at FAccT represents the cutting edge of practical applications of AI algorithms in hiring, policing, standardized testing, and privacy protection, among many other areas that affect daily life. Because the FAccT community is multidisciplinary in its makeup and approach, the presenting researchers hail from a broad range of academic backgrounds, including computer science, statistics, the social sciences, philosophy, and law.

“To accommodate the growth of research in our field, we will be presenting twice as many papers as last year,” explained FAccT Program Co-Chair Kristian Lum, Sr. Staff Responsible Machine Learning Researcher, Twitter. “We expect that about half of our registrants will attend the conference in Seoul and half will participate online. But almost all of the work presented at the conference will be available online for everyone to access.”

Algorithmic systems are being adopted in a growing number of contexts, fueled by big data. These systems filter, sort, score, recommend, personalize, and otherwise shape human experience, increasingly making or informing decisions with major impact on access to, e.g., credit, insurance, healthcare, parole, social security, and immigration. Although these systems may bring myriad benefits, they also contain inherent risks, such as codifying and entrenching biases, reducing accountability, and hindering due process. They also increase the information asymmetry between individuals whose data feed into these systems and big players capable of inferring potentially relevant information.

“Almost every day, we read headlines about how machine learning algorithms can impact people’s job prospects or home loan applications, or how mistakes in facial recognition software might lead to a false arrest,” added ACM FAccT Program Co-Chair Michael Kearns, Professor, University of Pennsylvania. “But these examples are only the tip of the iceberg. Machine learning algorithms are being deployed in more and more aspects of our daily lives and are, at the same time, becoming increasingly technically sophisticated. For this year’s FAccT conference, we have put together a broad program of papers, workshops, and tutorials to explore the profound impact these algorithms have now, and how our community might develop technical approaches and ethical frameworks to ensure that these technologies serve people in a socially responsible and transparent way.”

ACM FAccT 2022 HIGHLIGHTS

Keynote Lectures [Partial List]
For a complete list of FAccT 2022 keynotes, visit https://facctconference.org/2022/keynotes.html.

André Brock, Associate Professor of Media Studies, Georgia Institute of Technology

Brock writes on Western technoculture and Black cybercultures. His scholarship examines race in social media, video games, blogs, and other digital media. Brock’s book, Distributed Blackness: African American Cybercultures, examines how Black everyday lives are mediated by networked technologies.

Pascale Fung, Professor, The Hong Kong University of Science and Technology (HKUST)

Fung is an elected Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) for her “significant contributions to the field of conversational AI and to the development of ethical AI principles and algorithms,” and an elected Fellow of the Association for Computational Linguistics (ACL) for her “significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans.”

Vanessa Bain, Grassroots Organizer

Bain has been organizing Instacart shoppers at the grassroots level for over five years and has led several national walkouts, boycotts, and direct actions over labor grievances, worker classification, and workplace safety in the grocery gig economy. In January 2020, Bain cofounded Gig Workers’ Collective, a group that fosters worker-led organizing and advocacy in the gig economy.

Dan Calacci, Doctoral Candidate, MIT Media Lab

Calacci is a doctoral candidate at the MIT Media Lab studying how data stewardship and analysis can impact community governance. His current work investigates how data rights and labor rights intersect and explores how to build real-world tools that help workers build power by leveraging the data they generate at work. He and his colleagues have exhibited digital art that addresses themes such as urban inequality and digital surveillance in galleries around the globe.

Algorithmic Governance of the Public Sphere (Panel Discussion)
Daphne Keller, former Director of Intermediary Liability at Stanford's Center for Internet and Society and former Associate General Counsel for Google; Tarleton Gillespie, Senior Principal Researcher, Microsoft Research, and Associate Professor, Cornell University; Jon Kleinberg, Professor, Cornell University

Media companies have always curated the public sphere of the political community where they operate. They shape the information environment in which the community deliberates about collective action—deciding what is included or excluded, and what is amplified or reduced. This keynote panel brings together scholars from communication studies, philosophy, law, and computer science to better understand the nature of algorithmic governance of online speech and to propose regulatory and technological paths forward.

Accepted Papers [Partial List]
For a complete list of research papers and posters that will be presented at the FAccT Conference, visit https://facctconference.org/2022/acceptedpapers.html.

“Auditing for Gerrymandering by Identifying Disenfranchised Individuals”
Jerry Lin, Duke University; Carolyn Chen, Duke University; Marc Chmielewski, Duke University; Samia Zaman, Duke University; and Brandon Fain, Duke University

Gerrymandering is the practice of drawing congressional districts to advantage or disadvantage particular electoral outcomes or population groups. The authors study the problem of computationally auditing a voting district for evidence of gerrymandering.

“BCIs and Human Rights: Brave New Rights for a Brave New World”
Marietjie Botes, University of KwaZulu-Natal (South Africa)

Digital health applications include a wide range of wearable, implantable, injectable, and ingestible digital medical devices. One of the most compelling developments in digital medical devices is the brain-computer interface (BCI), which connects a person’s brain to a computer or to another device outside the human body. Botes examines issues such as mental privacy, the right to identity or self, agency or free will, fair access to cognitive augmentation, and protection against algorithmic bias and discrimination.

“Bias in Automated Speaker Recognition”
Wiebke Toussaint, Technical University of Delft; Aaron Yi Ding, Technical University of Delft (The Netherlands)

Automated speaker recognition technologies are deployed on billions of smart devices to support services such as call centers. Despite their wide-scale deployment and the known sources of bias found in face recognition and natural language processing, bias in automated speaker recognition has not been systematically studied. The authors show that bias exists at every development stage in the well-known VoxCeleb Speaker Recognition Challenge, including data generation, model building, and implementation. Most affected are female speakers and non-US nationalities.

“Flipping the Script on Criminal Justice Risk Assessments”
Mikaela Meyer, Carnegie Mellon University; Aaron Horowitz, American Civil Liberties Union (ACLU); Erica Marshall, Idaho Justice Project; and Kristian Lum, University of Pennsylvania

In the criminal justice system, algorithmic risk assessment instruments are used to predict the risk a defendant poses to society; examples include the risk of recidivism or the risk of failing to appear at future court dates. The authors develop a risk assessment instrument that “flips the script.” Using data about US federal sentencing decisions, the authors build a risk assessment instrument that predicts the likelihood an individual will receive an especially lengthy sentence given factors that should be legally irrelevant to the sentencing decision.

“News from Generative Artificial Intelligence is Believed Less”
Chiara Longoni, Boston University; Andrey Fradkin, Boston University; Luca Cian, University of Virginia; and Gordon Pennycook, University of Regina

Artificial Intelligence (AI) can generate text virtually indistinguishable from text written by humans. A key question, then, is whether people believe news generated by AI as much as news generated by humans. AI is viewed as lacking human motives and emotions, suggesting that people might view news written by AI as more accurate. The authors present findings which show that people were more likely to incorrectly rate news written by AI (vs. a human) as inaccurate when it was actually true, and more likely to correctly rate it as inaccurate when it was indeed false.

“Tech Worker Organizing”
William Boag, MIT; Harini Suresh, MIT; Bianca Lepe, MIT; and Catherine D'Ignazio, MIT

The authors argue that through collective action, the employees of powerful tech companies can act as a countervailing force against strong corporate impulses to grow or make a profit to the detriment of other values. In this work, the authors ground these efforts in existing scholarship on social movements and labor organizing. Further, they examine an archive chronicling many recent collective actions and contextualize those actions as part of a broader movement, with methods for how workers can build and use power.

Accepted Tutorials
ACM FAccT tutorials aim to introduce technical, policy, regulatory, ethical, or social science aspects of FAccT issues to a broad audience. For a complete list of tutorials to be presented at the FAccT Conference, visit https://facctconference.org/2022/acceptedtuts.html.

CRAFT Sessions
CRAFT sessions, which stand for Critiquing and Rethinking Accountability, Fairness, and Transparency, explicitly push the boundaries of FAccT as a field of study and a community of practice. These sessions bring together academics of all disciplines and people representing different communities of practice (including journalism, advocacy, activism, organizing, education, art, spirituality, and public authorities) in the spirit of reflection and engagement. For a complete list of CRAFT sessions that will be presented at the FAccT Conference, visit https://facctconference.org/2022/acceptedcraft.html.

About ACM

ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Contact:
Jim Ormond
ACM
212-626-0505
[email protected]
