People of ACM - Kurt Mehlhorn

February 22, 2024

The focus of your PhD from Cornell University in 1974 was computational complexity. What made you decide to focus on computational complexity, and how was the field different then?

Computer Science was still in its infancy in the early 70s. The department at Cornell University had a weekly colloquium. I went to every talk and was able to follow most of them to the very end. This is very different today: CS has become a broad field. I still go to our weekly colloquium, but I am glad if I understand the first half of the talks. Once the part for the specialists starts, I am usually lost.

Why did I choose computational complexity? At the time, the theory group at Cornell was extremely strong: Juris Hartmanis, John Hopcroft, Bob Constable, and Bob Tarjan. And I had a strong interest in the mathematical side of CS. I spent a year reading about different aspects of theoretical CS (semantics of programming languages, algorithms, formal languages, computational complexity) until I made up my mind. I wanted to do a thesis in computational complexity and asked Bob Constable to advise me. However, I have to admit that after completing my thesis, I switched to algorithms and data structures.

In 1988 you and Stefan Näher started the Library of Efficient Data Types and Algorithms (LEDA) project. What challenges were you trying to overcome, and how did LEDA address them?

It was always my goal that my research would not only produce beautiful mathematical results but would also be eminently practical. However, when I asked former students whether they had implemented any of the algorithms that I taught them, the answer was usually a polite no. They would say something to the effect of, “Yes, the problems do come up, but my boss never gave me the time to implement an advanced solution.” It always had to be the simplest solution. I concluded from this feedback that writing books and papers and teaching algorithms is not enough. It is also necessary to provide the knowledge of the field in the form of an easy-to-use, efficient, and correct software library. This is what LEDA was meant to be, and this is what it became. I expected that we would need about one and a half years for the project. It took more than a decade.

You are known as a key figure in the development of algorithm engineering. What is algorithm engineering, and in what important ways has it shaped computer science?

Algorithm engineering treats programs as first-class citizens. I want to mention two key lessons that we learned in the LEDA project.

Lesson 1: It is difficult to turn a correct algorithm into a correct program. Many of our first implementations were incorrect. We concluded that one should not write programs that give bare yes/no answers to complex questions; rather, all answers should be justified. We therefore made all our implementations certifying: they compute not only the output they are supposed to compute but also a succinct witness for the correctness of that output. The witness can then be checked by an independent checker program. The concept of certifying algorithms proved extremely powerful: the implementations in LEDA are bug-free.
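
To illustrate the idea, here is a minimal sketch in Python of a standard textbook example of a certifying algorithm: testing whether a graph is bipartite. This is my own illustration, not LEDA code. The answer "yes" is witnessed by a proper 2-coloring, the answer "no" by an odd cycle, and a separate checker accepts the answer only after verifying the witness, without trusting the solver.

    from collections import deque

    def certify_bipartite(n, edges):
        # Certifying bipartiteness test on vertices 0..n-1.
        # Returns ("yes", coloring) with a proper 2-coloring, or
        # ("no", cycle) with a list of vertices forming an odd cycle.
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        color, parent = [-1] * n, [-1] * n

        def root_path(u):                  # u, parent(u), ..., BFS root
            path = [u]
            while parent[path[-1]] != -1:
                path.append(parent[path[-1]])
            return path

        for s in range(n):
            if color[s] != -1:
                continue
            color[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if color[v] == -1:
                        color[v], parent[v] = 1 - color[u], u
                        queue.append(v)
                    elif color[v] == color[u]:
                        # An edge inside one color class: splicing the two
                        # BFS-tree paths at their lowest common vertex
                        # yields an odd cycle, since equal colors mean
                        # equal depth parity.
                        pu, pv = root_path(u), root_path(v)
                        on_pu = set(pu)
                        k = next(i for i, x in enumerate(pv) if x in on_pu)
                        return ("no", pu[:pu.index(pv[k]) + 1] + pv[:k][::-1])
        return ("yes", color)

    def check_witness(edges, answer, witness):
        # Accept the answer only if the witness actually proves it.
        if answer == "yes":
            return all(witness[u] != witness[v] for u, v in edges)
        edge_set = {frozenset(e) for e in edges}
        m = len(witness)
        return m % 2 == 1 and all(
            frozenset((witness[i], witness[(i + 1) % m])) in edge_set
            for i in range(m))

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # contains triangle 0-1-2
    answer, witness = certify_bipartite(4, edges)
    print(answer, witness, check_witness(edges, answer, witness))

The point of the design is that the checker is far simpler than the solver: to trust the answer, one only needs to trust the few lines that verify the witness.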

Lesson 2: The correct implementation of geometric algorithms is highly non-trivial. These algorithms are usually designed for a machine that can compute with real numbers in the sense of mathematics. But the floating-point numbers of our computers are only a crude approximation of the real numbers. None of our first implementations of geometric algorithms were correct. We and others created the scientific basis for geometric computing in work that spanned more than a decade. I even started to work in computer algebra, in particular on algorithms for isolating the roots of polynomials. Our root isolation algorithm is now part of Maple, and the geometric algorithms are part of LEDA and CGAL.
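
A tiny Python experiment illustrates the failure mode (this is my own toy example, not LEDA or CGAL code). The classic 2D orientation predicate decides whether a point c lies left of, right of, or on the line through a and b. The three points below are not collinear, but floating-point evaluation loses the one-ulp offset in the coordinate subtractions and wrongly reports "collinear"; exact rational arithmetic recovers the true sign.

    from fractions import Fraction

    def orient2d(ax, ay, bx, by, cx, cy):
        # Sign of the cross product (b - a) x (c - a):
        # > 0 left turn, < 0 right turn, 0 collinear.
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

    # Point a sits exactly one unit in the last place to the right of
    # (0.5, 0.5), so the true orientation is a (tiny) right turn.
    a = (0.5 + 2.0 ** -53, 0.5)
    b = (12.0, 12.0)
    c = (24.0, 24.0)

    # Floating point: the offset is rounded away in the subtractions.
    print(orient2d(*a, *b, *c))                      # 0.0, i.e. "collinear"

    # Exact arithmetic (Fraction converts each double without error):
    exact = orient2d(*(Fraction(t) for t in a + b + c))
    print(exact < 0)                                 # True: a right turn

Predicates like this one sit at the heart of geometric algorithms, and a single wrong sign can make an implementation crash or produce nonsense, which is why exact geometric computation became such a central theme.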

What is an interesting problem you are working on now in your area of research?

At the moment, I work mainly on fair division problems. Imagine a set of indivisible goods, such as a house, a car, and a toothbrush, which are to be allocated to a set of agents in a fair manner. Each agent has their own valuation for sets of goods. What do I mean by fair? A strong notion would be envy-freeness: each agent likes the bundle allocated to them at least as much as the bundle allocated to any other agent. This is too much to ask for. Imagine two agents and a single good that both like. One has to allocate the good to one of the agents, and the other will be envious.

A natural relaxation is envy-freeness up to any item: although an agent might prefer the bundle allocated to another agent over their own, this no longer holds after the removal of any single good from the other bundle. Does such an allocation always exist? We do not know. Together with Bhaskar Ray Chaudhury and Jugal Garg, I was able to show that the answer is YES for three agents and any number of goods. For four or more agents, the question is open.
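
In the literature this notion is known as EFX (envy-freeness up to any item). For additive valuations the condition is easy to state in code; the following is a small hypothetical checker in Python (the function name and data layout are my own), applied to the two-agent, one-good example above, which is not envy-free but is EFX.

    def is_efx(valuations, allocation):
        # valuations[i][g]: agent i's (additive) value for good g.
        # allocation[i]:    the list of goods given to agent i.
        # EFX: for all agents i, j and every good g in j's bundle,
        #      v_i(i's bundle) >= v_i(j's bundle minus g).
        n = len(allocation)
        for i in range(n):
            own = sum(valuations[i][g] for g in allocation[i])
            for j in range(n):
                if i == j:
                    continue
                other = sum(valuations[i][g] for g in allocation[j])
                if any(own < other - valuations[i][g] for g in allocation[j]):
                    return False
        return True

    # One good that both agents value at 10: giving it to agent 0 leaves
    # agent 1 envious, so the allocation is not envy-free. It is EFX,
    # though, because removing the single good removes all envy.
    valuations = [{0: 10}, {0: 10}]
    print(is_efx(valuations, [[0], []]))             # True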

Towards which specific areas would you like to see the German government—or the European Union more broadly—direct funds for research in the next 10 to 20 years?

I will answer a related but different question: what mechanism should be used to allocate such funds?

Europe has excellent funding models for basic research. For example, the Max Planck Society in Germany and the CNRS in France operate excellent institutes for basic research across a wide spectrum of fields. The European Research Council funds individual researchers very successfully, with substantial grants and few strings attached. The sole criterion for funding is scientific excellence.

The same boldness does not exist for the development of new products. Why don't we hold a competition among start-ups, select a dozen, and give each of them 20 million for a period of five years? A first step was taken by the EU with the creation of the European Innovation Council, which funds many start-ups that came out of European Research Council projects.


Kurt Mehlhorn was a director of the Max Planck Institute for Informatics for thirty years, from 1990 to 2020. He has authored six books and more than 300 publications in areas including algorithms, computational geometry, computer algebra, and theoretical computer science. Among his many broad and impactful contributions, Mehlhorn is known for his important work in the development of algorithm engineering.

His honors include the Gottfried Wilhelm Leibniz Prize, considered the most important research award in Germany. He received the 2010 ACM Paris Kanellakis Theory and Practice Award for contributions to algorithm engineering by creating the LEDA library for algorithmic problem solving, and he was named an ACM Fellow for important contributions to complexity theory and to the design, analysis, and practice of combinatorial and geometric algorithms.

Mehlhorn played an important role in the establishment of several computer science research centers in Germany, including the Max Planck Institute for Informatics and the Schloss Dagstuhl – Leibniz Center for Informatics.