People of ACM - Andrew Odlyzko

August 1, 2019

You have worked in diverse areas. Is there a common thread that connects the kinds of problems you enjoy working on?

The main common thread is the delight in finding new ways of looking at problems, ideally ones that lead to contrarian conclusions and recommendations. One instance was the discovery in the late 1990s that data networks were very lightly utilized on average. This was not a technical result but an empirical observation, and it was well known to network engineers. Yet for strange, though not atypical, reasons, this information simply did not spread beyond the ranks of the operators. The research, management, and policy communities were working under the assumption that networks, and the internet in particular, were hopelessly congested. Hence much of the effort in networking was seriously misdirected, as I noted in a 1999 paper. But, as so often happens with contrarian views, few people could be bothered to take the finding seriously, and that led to huge waste, in particular in the fiber network buildout that played a big role in the internet bubble and crash.

If you were a graduate student today, what is an exciting research avenue in computational complexity you would be drawn to?

One very intriguing topic is quantum computing. There is extensive activity in the field, with substantial progress on the physical front in terms of the ability to build larger devices. But there has been disappointingly little progress on the algorithmic side. Suppose we do succeed in building a large quantum computer. What could we do with it? There is a lot of hype in the popular press about how it would revolutionize the world by solving all sorts of problems. But aside from quantum simulations and breaking public-key cryptosystems, we do not have all that many applications that could take advantage of the novel features of this technology. I strongly suspect the current effort is unbalanced, with far more thinking and resources devoted to hardware than to novel algorithms for quantum computers. So this would be an exciting area to get into.

And of course there are so many other areas that are interesting. Just among the ones I used to be very active in, there is the complexity of integer factorization and discrete logarithms, which are important for cryptography. A lot of progress is being made there, and much more seems achievable. It would be attractive both from an intellectual point of view and for the impact it might have on practice.
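To give a concrete flavor of the kind of computation whose complexity is at issue, here is a minimal sketch of Pollard's rho factoring method in Python. It is an illustration added by the editor, not something from the interview or from Odlyzko's papers, and the function name and toy composite are made up for the example.

    import math
    import random

    def pollard_rho(n):
        """Return a nontrivial factor of an odd composite n (retries on failure)."""
        if n % 2 == 0:
            return 2
        while True:
            x = random.randrange(2, n)
            y = x
            c = random.randrange(1, n)
            d = 1
            while d == 1:
                x = (x * x + c) % n        # tortoise takes one step
                y = (y * y + c) % n        # hare takes two steps
                y = (y * y + c) % n
                d = math.gcd(abs(x - y), n)
            if d != n:                     # d == n means this attempt failed; retry
                return d

    # Toy example: 8051 = 83 * 97
    n = 8051
    f = pollard_rho(n)
    print(n, "=", f, "*", n // f)

Heuristically this method needs on the order of n^(1/4) steps, while the best general-purpose algorithms known, such as the number field sieve, are subexponential but still far from polynomial time; whether that gap can be narrowed is exactly the kind of complexity question at stake.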

In the current issue of Ubiquity, you authored the article “Cybersecurity is not very important,” which challenges conventional wisdom. Will you briefly summarize your argument?

The basic observation is that in spite of several decades of dire predictions of imminent doom from the lack of cybersecurity, we have been living quite well and even prospering. How did this happen, even though none of the fundamental security re-engineering that has been regarded as critically needed has taken place? My answer is that we have learned to live with insecure systems, and have managed to limit the resulting damage to an acceptable level.

Our information systems are certainly insecure. But they are also buggy, and we suffer more from the ordinary, non-malicious defects of our systems than we do from malicious attacks. So managers have in effect been saying that there is no call for shifting resources to security when there are more pressing problems with our computers and networks. And when security defects do bite, a range of tools can be, and is, deployed to lessen the damage. A simple one is just enforcing standard good practices, such as regular patching, regular backups, and proper access policies. Organizations can then go on to other techniques, such as two-factor authentication, greater compartmentalization, and so on. A whole range of dangers (such as ransomware) can be either eliminated or at least mitigated by using regular and secure backups. Those can be implemented in the background, without significant impact on regular users, and backups are one area where the tasks are simple enough that verifiably secure systems can be built and deployed.
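As a rough illustration of how simple the mechanics of such backups can be, the following Python sketch produces timestamped archives that a scheduler can run in the background. It is a hypothetical example added by the editor, not a tool discussed in the interview; the paths and the function name are invented for illustration.

    import shutil
    import time
    from pathlib import Path

    def snapshot(source, archive_dir):
        """Write a timestamped compressed archive of `source` into `archive_dir`."""
        dest = Path(archive_dir)
        dest.mkdir(parents=True, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        # shutil.make_archive returns the path of the archive it created
        return shutil.make_archive(str(dest / ("backup-" + stamp)), "gztar", source)

    # Hypothetical paths; a real deployment would run this from cron or a
    # systemd timer, keep the archives on storage the source machine cannot
    # overwrite (which is what blunts ransomware), and test restores regularly.
    print(snapshot("/home/alice/documents", "/mnt/backups"))

The point is only that the low-impact, background character described above is easy to achieve; the protection comes from keeping the copies out of reach of the systems being backed up.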

The general conclusion is that we have learned how to live with insecure information systems, just as we have learned to live with the lack of absolute security in the physical world. In fact, we have often resorted to methods from the physical world to help improve the security of cyberspace.

My article does not argue that we can neglect security. Technology is changing, threats are growing, and our dependence on information technologies is increasing. So we need to continue shifting resources into security. But we should be realistic about it. There is no serious crisis, and there are not likely to be any drastic changes to the incremental approach we have been pursuing. It is worthwhile to think about technological breakthroughs in security, but expectations should be modest, because even if they prove feasible, they will have to compete with a range of tools that are already available. User convenience is certain to continue dominating security in deployment decisions.

In a recent article, you noted that a study of the British railway industry in the early 19th century illustrates that it is difficult to persuade the public or policymakers when a dangerous financial bubble is in progress and bound to fail. Is there an overarching lesson in the history of financial bubbles that we are presently failing to heed?

I very strongly suspect there is. There is much effort devoted to macro studies, investigations of the interconnectedness of systems, capital ratios, and the like. But there is not enough attention to the fundamentals and the sometimes glaring problems (such as those seen in the real estate bubble a bit more than a decade ago) that cause crashes.

Why is this such a common and serious problem?

In a single word, groupthink. We are basically a herd species. This is a weakness, but it is also a strength, in that it inspires the trust and cooperation that are needed for society at large to function, and for scientists and engineers to work together and rely on strangers. But, as has been noted in studies of intelligence failures as well as many other areas, it does lead groups to settle on dogmatic views and resist contrary evidence.

Like many others, I was drawn to science and technology because they offered the promise of clean, logical insights into the workings of our world, avoiding the messy human elements of life. And that promise was satisfied, as far as the basic problems are concerned. But the human elements cannot be neglected, and, in fact, human irrationality is a key element in technological progress. Promoters, such as Steve Jobs with his "reality distortion field," appear essential in inspiring people to aim for big goals. Even when those ambitions are not fulfilled, society is enriched by the more modest accomplishments, and the various byproducts of those efforts.

Andrew Odlyzko is a mathematician and professor at the University of Minnesota (UMN), where he has also served as the head of UMN's Digital Technology Center and Minnesota Supercomputing Institute. Before that, he had a long career in research and research management at Bell Labs and AT&T Labs. He has authored more than 150 technical papers in computational complexity, cryptography, combinatorics, probability, and related fields. He is perhaps best known for his work on the Riemann zeta function, which has fostered many improved algorithms. Recently he has worked in areas including communication networks, e-commerce, and financial manias and panics.

Odlyzko is a Fellow of the International Association for Cryptologic Research and the American Mathematical Society. Since 2013, he has served as an Associate Editor of the online ACM magazine Ubiquity.