People of ACM - Monika Henzinger

August 10, 2017

At SIGIR 2017, you will be receiving the Test of Time Award for the paper you co-authored with Krishna Bharat in 1998 titled Improved Algorithms for Topic Distillation in a Hyperlinked Environment. In the paper, you propose algorithms that initiate web search by topic rather than the precise wording of the given query. Can you tell us a little about the insights in that paper and how that research has influenced web search today?

This paper was one of the first to combine hyperlink analysis with text-based analysis in web search. In many cases, users of web search engines do not necessarily want result pages that match their exact query terms, but instead want the best-quality results for the topic of the search query. Hyperlink analysis is often used to determine the best-quality results for a given topic, because it inspects the hyperlink structure of the web to identify the most relevant search results. The most famous example of hyperlink analysis is the PageRank algorithm used by Google. At the time, Jon Kleinberg at IBM Research in Almaden had developed a different hyperlink-based approach, called HITS. That algorithm worked very well for some queries, but suffered from topic drift for others, i.e., it returned search results on topics different from the query's. We considerably reduced topic drift by combining HITS with text-based analysis.
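The mutual-reinforcement idea behind HITS can be sketched in a few lines. This is a minimal, hypothetical illustration of the basic power iteration, not the paper's algorithm: Bharat and Henzinger's refinement additionally uses text-based relevance weights on nodes and edges, which are omitted here.

```python
# Minimal sketch of the HITS power iteration on a tiny hypothetical graph.
# Hub and authority scores are mutually reinforcing: a good hub points to
# good authorities, and a good authority is pointed to by good hubs.

def hits(edges, nodes, iterations=50):
    """edges: list of (src, dst) hyperlinks; returns (hub, authority) dicts."""
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority score: sum of hub scores of pages linking to the node.
        auth = {n: sum(hub[s] for s, d in edges if d == n) for n in nodes}
        # Hub score: sum of authority scores of pages the node links to.
        hub = {n: sum(auth[d] for s, d in edges if s == n) for n in nodes}
        # Normalize so the scores stay bounded across iterations.
        na = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {n: v / na for n, v in auth.items()}
        hub = {n: v / nh for n, v in hub.items()}
    return hub, auth
```

Because every edge counts equally here, a tightly linked off-topic cluster can dominate the scores; weighting edges by textual similarity to the query topic is one way to counteract that drift.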

Computer verification, the process of proving or disproving that a system behaves as intended, becomes more challenging as computer systems become more complex. Why do graph algorithms offer a promising approach to computer verification?

Graphs and extensions of graphs such as Markov decision processes and game graphs are used to model all possible states a computer system can be in. Basically, each state of the system becomes a node in the graph, and edges in the graph correspond to possible state transitions. One node is usually the start state of the system. When you analyze the properties of infinite paths starting at the start node, you can determine which properties your system has, for example, whether certain “dangerous” states are visited at all, or whether they are visited only finitely often. The analysis of these properties of infinite paths can be done using a suitable algorithm on the corresponding graph. Thus graph algorithms allow you to check certain system properties. And this is not just wishful thinking. Theoretical work on model checking has led, for example, to industrial-strength verification tools that are used to check hardware circuits and to create plans, schedules and control programs that run in embedded systems.
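The simplest instance of this reduction is a safety property: "a dangerous state is never reached" becomes plain graph reachability from the start state. The sketch below uses a hypothetical transition system; richer properties, such as a state being visited only finitely often on infinite paths, need cycle analysis rather than reachability alone.

```python
# Hedged sketch: checking a safety property reduces to reachability in the
# state-transition graph. The transition system here is hypothetical.

def reachable(transitions, start):
    """transitions: dict mapping a state to its successor states.
    Returns the set of all states reachable from start (depth-first search)."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(transitions.get(s, []))
    return seen

def is_safe(transitions, start, dangerous):
    """The system satisfies the safety property iff no dangerous state
    is reachable from the start state."""
    return reachable(transitions, start).isdisjoint(dangerous)
```

Model checkers built on this idea operate on far larger, often implicitly represented state graphs, but the underlying question is still a graph problem.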

You will serve as the program chair for the 50th Anniversary of the ACM Symposium on the Theory of Computing (STOC 2018). Why is this an exciting time to be working in theoretical computer science?

I think this is an exciting time to do any kind of computer science, considering the huge impact that the field has on people’s lives. The goal of theoretical computer science is to understand and prove various properties of algorithms and to develop new ones. As computer science is applied in more and more areas, the need for theoretical computer science naturally grows.

However, I also want my theoretical work to have a more direct impact, and that’s why I started an algorithms engineering group in my research lab that experimentally evaluates graph algorithms (for example, algorithms for finding the minimum cut in a graph) on real-life data sets. Research in algorithms engineering is very valuable, as it investigates which techniques that work well in theory actually perform well in practice. The publications in algorithms engineering conferences are often read by algorithms engineers working in industry. Thus, research in algorithms engineering is a valuable way to transfer new algorithmic ideas into practice. On the other hand, it also provides a useful feedback loop showing which techniques do not work; the intuition of algorithm designers can be quite wrong there. It is, for example, not always the case that a simple, theoretically elegant algorithm performs well in practice. One concrete example is the contraction-based (global) minimum cut algorithm by Karger and Stein, which is very simple but on real-life data sets is often beaten by less elegant algorithms such as those developed by Hao and Orlin and by Nagamochi, Ono, and Ibaraki. Thus, we need algorithms engineering to experimentally evaluate algorithms.
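To illustrate the elegance being discussed, here is a hedged sketch of randomized edge contraction, the core step shared by Karger's algorithm and the Karger-Stein refinement (which adds recursion and repetition to boost the success probability; that machinery is omitted here). The graph and parameters are illustrative.

```python
import random

# Sketch of randomized edge contraction for the global minimum cut.
# One contraction pass finds the minimum cut only with some probability,
# so the pass is repeated and the best (smallest) cut found is kept.

def contract_once(edges, n, rng):
    """edges: list of (u, v) pairs of a connected multigraph on nodes 0..n-1.
    Contract random edges until two super-nodes remain; return the number
    of edges crossing between them (a candidate cut value)."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = n
    while remaining > 2:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:          # ignore edges inside an existing super-node
            parent[ru] = rv
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n, trials=200, seed=0):
    """Repeat the contraction pass and return the smallest cut observed."""
    rng = random.Random(seed)
    return min(contract_once(edges, n, rng) for _ in range(trials))
```

The simplicity is striking, which is exactly why its mixed performance on real-life data sets, compared with the flow-based and node-ordering algorithms mentioned above, is such a useful data point for algorithms engineering.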

What advice would you offer a younger colleague just starting out in the field?

Work on important problems and stay focused. There are simply too many interesting problems one can work on, and thus it is crucial to restrict one's attention to the most important problems that one can contribute to.

Monika Henzinger is a Professor of Computer Science at the University of Vienna, where she heads the Theory and Applications of Algorithms research group. Earlier she held academic positions at Cornell University and École polytechnique fédérale de Lausanne (EPFL). She has served on the technical staff of the Systems Research Center at Digital Equipment Corporation and was Director of Research at Google. Her core research interests include combinatorial algorithms and data structures and their applications.

Henzinger’s honors include a European Young Investigator Award, an NSF CAREER Award, and a Top 25 Women on the Web Award. She is a Fellow of ACM and of the European Association for Theoretical Computer Science (EATCS). She is a member of the Academia Europaea, of the German Academy of Sciences Leopoldina, and of the Austrian Academy of Sciences. She received an honorary doctorate from TU Dortmund University. Henzinger was recently awarded a €2.5 million European Research Council Advanced Grant to conduct fundamental research. She has published more than 100 scientific articles and is the co-inventor of more than 80 patents.