People of ACM - Francesca Rossi
January 15, 2019
What are your key responsibilities as AI Ethics Global Leader at IBM Research?
IBM is committed to developing and deploying AI services and products that augment human capabilities to improve people’s wellbeing and enterprise productivity. To achieve this vision, AI needs to be trusted by those who adopt it, use it, or are affected by it. This means AI that is fair and explainable, and AI producers that are transparent about the design choices made in developing AI systems. As the IBM AI Ethics Global Leader, I help the company along all these dimensions: by leading research initiatives on value alignment and engineering AI ethics; by working with all the company divisions (research, legal, communications, policy) to coordinate the various AI ethics activities into a coherent approach; and by joining several external partnerships (such as those with the World Economic Forum, the Partnership on AI, the IEEE, and the United Nations) and government advisory boards (such as the European Commission High Level Expert Group on AI).
Will you tell us a little about constraint reasoning and how recent developments in this field are impacting the wider AI landscape?
The power of constraint reasoning and optimization lies in the ability to focus on the declarative formulation of a problem, leaving the task of finding the best way to solve it to a constraint solver. This is why researchers have worked as much on solving techniques as on modeling methodologies. Recently, besides the core activities of improving the solvers’ performance, generality, and flexibility, and of expanding the application scenarios, the field has also adopted techniques from knowledge representation and preferences to make constraints more flexible, methods from multiagent systems to support portfolio approaches, and machine learning algorithms to learn constraints and solution methods.
Constraints and optimization are very general and ubiquitous concepts that occur in many other areas of AI, and they are fundamental ingredients in any intelligent system. Therefore I hope to see much more cross-fertilization between constraint reasoning and other AI areas in the future, to develop AI systems that can seamlessly mix problem learning and solving.
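The declarative style described above can be illustrated with a small, self-contained sketch: the classic map-coloring problem is stated purely as variables, domains, and constraints, and a generic backtracking routine does the search. This is an illustrative toy solver written for this article, not any particular production constraint system; real solvers add propagation, heuristics, and learning on top of this basic scheme.

```python
def solve(variables, domains, constraints, assignment=None):
    """Generic backtracking search over a declaratively stated CSP.

    The solver knows nothing about the problem; it only checks the
    constraint functions supplied by the model.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)  # every variable assigned: a solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Keep this value only if no constraint is violated so far.
        if all(check(assignment) for check in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
    del assignment[var]  # undo and backtrack
    return None


def different(a, b):
    """Constraint: variables a and b must take different values.

    Returns True on partial assignments where either variable is
    still unassigned, so the solver can check it incrementally.
    """
    def check(asg):
        return a not in asg or b not in asg or asg[a] != asg[b]
    return check


# Declarative model: color the Australian states so that no two
# adjacent states share a color (a standard textbook CSP).
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"),
            ("Q", "NSW"), ("NSW", "V")]
constraints = [different(a, b) for a, b in adjacent]

solution = solve(variables, domains, constraints)
```

The point is the separation of concerns: changing the map, the number of colors, or the constraints requires editing only the model, while the solver stays untouched.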
In a recent interview, you maintained that, if AI is not trusted by the public, it will not be as widely adopted as it could be. How can the computing community play a constructive role in building public trust in AI?
Indeed, I strongly believe that if AI is not trusted it will not be widely adopted, and this would not allow people to get all the potential benefits of this powerful technology. As with people, trust is built over time and is based on many properties of the technology and of those who produce it. In terms of trust in the technology, the computing community can find technical solutions to the concerns currently being raised, such as the need for fairness, value alignment, and explainability. The computing community can also contribute to trust-building by engaging with all the other AI stakeholders in predicting, identifying, and tackling future concerns.
Fortunately, many AI researchers are now working to define and address these challenges. New scientific conferences focus on just these issues, such as the AAAI/ACM Conference on AI, Ethics, and Society (AIES); AI developers are collaborating and sharing knowledge to better understand how to produce trustworthy AI, such as via the open source AI Fairness 360 toolkit by IBM Research; and the first concrete solutions are being embedded in deployed services, platforms, and solutions, such as the AI OpenScale environment by IBM.
How does the AIES conference fill an important niche in advancing and disseminating research on the relationship between AI and ethics?
Scientific conferences are fundamental to advancing the state of research in a field. They do this through an open call for papers and a peer-review process that selects the highest-quality work for presentation and discussion at the conference. This works very well in computer science and artificial intelligence, and it allows these fields to move fast and the best results to be quickly shared and built upon.
However, this approach is usually mono-disciplinary. The field of AI ethics, by contrast, requires multidisciplinary work, involving AI researchers as well as experts from the social sciences, such as sociologists, psychologists, philosophers, and economists. There was no scientific conference allowing all these experts to get together, present their work to each other, and learn from the other disciplines. The AIES conference is a response to this need.
Both AAAI and ACM immediately supported this idea and helped organize the inaugural conference in 2018, which was co-located with AAAI and included more than 250 participants, five invited talks from the different disciplines, and 61 presentations. In 2019 the AIES conference will again be co-located with AAAI and it looks like it will greatly surpass the success of the first gathering.
Francesca Rossi is the IBM AI Ethics Global Leader and a Distinguished Research Staff Member at the IBM T.J. Watson Research Center. Previously she was a Professor of Computer Science at the University of Padova, Italy. Her research focuses on artificial intelligence, specifically constraint reasoning, preferences, multi-agent systems, computational social choice and collective decision making. She is also interested in ethical issues surrounding the development and behavior of AI systems. She has authored more than 190 publications, including co-authoring the book A Short Introduction to Preferences: Between AI and Social Choice, and co-editing the Handbook of Constraint Programming.
Rossi has served as the President of the International Joint Conference on Artificial Intelligence (IJCAI) and is currently Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR). She is also on the steering committee of the AAAI/ACM Conference on AI, Ethics, and Society (AIES).