People of ACM - Catherine Flick

August 8, 2023

How did you initially become interested in ethics and social responsibility in computing?

Back in the early 2000s, I was pursuing a Bachelor of Science with a major in Computer Science while also working part-time as an IT support technician for a major Australian recording company. While working there, I discovered how terrible their security was (to be fair, lots of security was very poor back then) and that I, a lowly, poorly paid support worker, had access to extremely sensitive information about the famous artists they were representing. It was then that I realized I needed a strong moral code to work in that position, and that experience opened up my interest in ethics and social responsibility.

From there I pursued a major in History and Philosophy of Science and became interested in bioethics—which was increasingly popular at the time due to the Visible Human Project and the race to map the human genome. I started asking whether the principles used in bioethics (such as informed consent) were applicable to technology. These interests culminated in my PhD project on End User License Agreements and informed consent. I discovered along the way that there was a significant movement in the UK and US looking at these issues and that ethical governance of technologies was on the EU’s radar, so I was able to hook into a large network of interesting and enthusiastic researchers looking to make technology more socially responsible.

In your most recent paper, “The Many Faces of Monetisation: Understanding the Diversity and Extremity of Player Spending in Mobile Games via Massive-Scale Transactional Analysis,” you and your co-authors analyzed the spending patterns of mobile game players. What was a key finding of this research?

This paper is interesting for a couple of reasons. For starters, it was the first study working with massive-scale data lakes of video game player behavior. We found that an industry-academic partnership to analyze these massive datasets can have a tangible impact on the field. For example, our paper was used as a case study of good practice in data sharing in the UK government’s new Video Games Research Framework. A more concrete finding from the paper itself is that the stories game developers have told themselves about monetization are not quite right—there aren’t just “minnows and whales” but more nuanced groups that spend money differently depending on the type of game. This can potentially lead to a better understanding of the monetization designs that might exacerbate problematic spending, and to evidence-based policy around video game monetization.

What have you learned from being a member of ACM COPE and working to update ACM’s Code of Ethics and Professional Conduct?

Working to update the ACM Code of Ethics and Professional Conduct has been one of the highlights of my career to date. It was really interesting to get at the heart of what it is to be a computing professional, and to get feedback from a variety of different stakeholders including ACM members, ethics experts, and computing professionals more generally as to what the “conscience of the profession” might look like.

One of the more interesting parts of the challenge was to develop wording that was concise (as we’d been told the Code was too long) but also captured the essence of the principle we were articulating. It was much harder than I expected! We also had to completely rethink certain aspects where the debates had moved on significantly, such as the clause on intellectual property. We reformulated it so that it didn’t rely on a principle of blindly obeying the law, as it had in the 1992 Code, and instead made a more nuanced argument based on the public good. More generally, it’s been very interesting to see how the Code has been adopted around the world, and especially satisfying to see the interest in our efforts—in particular from new members who have joined due to the Code.

In a 2018 article in ACM Inroads, “The Continual Evolution of Interest in Computing Ethics,” you and your co-authors noted that “since 1988-89, the Department of Computer Science at Stanford University has offered CS201 (Computers, Ethics, and Social Responsibility) as part of its undergraduate curriculum.” Should universities mandate that Computer Science majors take ethics and social responsibility courses as part of their core curriculum requirements?

As an academic, of course I agree! But I’m also a little biased. Of course everyone should be taking ethics! But in all honesty, it is a great course to take to help you stop and reflect on your practice—"How do I make things responsibly? How are they used? How might they be misused?" Taking space for reflection is an important part of being a computing professional, and giving students the skills to be able to do that effectively is really what ethics and social responsibility courses are about. We also hope to encourage students to think about what they end up doing when they go into the industry after their degrees. We want them to contemplate scenarios such as how they might be able to challenge an unethical practice, and how they can ensure ethical practice is embedded in their work from the beginning.

What is an area of computing ethics that you think hasn’t received enough attention?

I think there’s a lot happening at the moment to expose the startup/venture capital and Silicon Valley bubbles, in terms of both tech investment and culture, and the ethical implications of these. How can we ensure that venture capital is put into socially responsible technologies and not just the most potentially profitable ones, goals which are often at odds with each other? How do we ensure that this sector is benefiting everyone and not just people who are in those bubbles (e.g., the exploitation of low-paid workers in developing countries used to train AI models)? How do we increase the diversity of startups and accessibility to those not from traditional startup backgrounds, so that more diverse voices are part of the innovation process? All of these things have significant ethical and social impact dimensions but are largely hidden behind lots of money. But there’s a bit of a backlash happening now, especially after the crypto hype (and now AI hype), given the socially negative impacts these technologies have had. I’d like to be out of a job one day because there won’t be any more negative ethical/social impacts of technology, but sadly I don’t think I will be.


Catherine Flick is a Reader in Computing and Social Responsibility at De Montfort University in Leicester, England. Her research interests have included ethics and video games, responsible research and innovation in technology, anonymous technologies, trusted computing, and informed consent in information technology (IT).

Flick is a member of the Digital Observatory Research Cluster (DORC) and Co-Investigator with the EPSRC Centre for Doctoral Training in Intelligent Games and Game Intelligence (IGGI). She is also the Vice Chair and Code Outreach Coordinator for the ACM Committee on Professional Ethics (COPE). ACM COPE developed and continues to promulgate ACM’s Code of Ethics and Professional Conduct. The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way.