People of ACM - Jack Dongarra

November 19, 2013

Jack Dongarra is a University Distinguished Professor in the University of Tennessee's Department of Electrical Engineering and Computer Science. He is also a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, and an adjunct professor in the Computer Science Department at Rice University. Dongarra holds the Turing Fellowship in the schools of Computer Science and Mathematics at the University of Manchester. He is the founding director of the Innovative Computing Laboratory in UT's Computer Science Department, which conducts research across many areas of high-performance computing.

A graduate of Chicago State University with a BS degree in Mathematics, Dongarra received an MS degree in Computer Science from the Illinois Institute of Technology and was awarded a PhD degree in Applied Mathematics from the University of New Mexico. His research includes the development, testing, and documentation of high-quality mathematical software, and he has contributed to the design and implementation of many open-source software packages and systems, including LINPACK, LAPACK, and Netlib.

The recipient of the 2013 ACM/IEEE Ken Kennedy Award, he has also won the IEEE IPDPS Charles Babbage Award, the IEEE Medal of Excellence in Scalable Computing, and the SIAM Activity Group on Supercomputing's Career Prize. He is a Fellow of ACM, IEEE, AAAS, and SIAM, and a member of the National Academy of Engineering.

As an innovator who has contributed to the steep growth of high performance computing, what scientific challenges do you think this technology has been most successful in illuminating?

High performance computing enables simulation (that is, the numerical computations used to understand and predict the behavior of scientifically or technologically important systems) and thereby accelerates the pace of innovation. Simulation enables better and more rapid product design.

Simulation has already allowed Cummins to build better diesel engines faster and less expensively, Goodyear to design safer tires much more quickly, Boeing to build more fuel-efficient aircraft, and Procter & Gamble to create better materials for home products. Simulation also accelerates the progress of technologies from laboratory to application.

High performance computing is on a path to Exascale computing, 10¹⁸ operations per second. This will provide capability benefits to a broad range of industries, including energy, pharmaceuticals, aircraft, automobiles, and entertainment.

More powerful computing capability will allow these diverse industries to more quickly engineer superior new products that could improve a nation's competitiveness. In addition, there are considerable flow-down benefits that will result from meeting both the hardware and software high performance computing challenges. These would include enhancements to smaller computer systems and many types of consumer electronics, from smartphones to cameras.

How does your new supercomputer benchmark differ from your pioneering Top500 standard, and how will this new rating system drive future computer system design and implementation?

The High Performance Linpack (HPL) benchmark, which is used in the Top500 ranking of the fastest computers, is the most widely recognized and discussed metric for ranking high performance computing systems. When HPL gained prominence as a performance metric in the early 1990s, there was a strong correlation between its predictions of system rankings and the rankings that full-scale applications would realize. Computer system vendors pursued designs that would increase HPL performance, which would in turn improve overall application performance.
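For concreteness, here is a minimal single-node sketch of what HPL measures, not the actual distributed benchmark: solve a dense linear system Ax = b by LU factorization with partial pivoting and report performance using HPL's operation count of 2/3·n³ + 2·n². Python with NumPy is used purely for illustration; the problem size and residual check are simplified assumptions.

```python
# Minimal HPL-style measurement sketch (illustrative, single node only).
import time
import numpy as np

n = 2000                          # problem size; real HPL runs use far larger n
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)         # LU with partial pivoting (LAPACK underneath)
elapsed = time.perf_counter() - t0

# HPL's standard flop count for solving a dense n x n system.
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"n={n}: {flops / elapsed / 1e9:.2f} GFLOP/s")

# A simple residual check; the real benchmark uses a specific scaled residual.
residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
print(f"residual: {residual:.2e}")
```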

Today HPL remains tremendously valuable as a measure of historical trends, and as a stress test, especially for leadership-class systems that are pushing the boundaries of current technology. Furthermore, HPL provides the high performance computing (HPC) community with a valuable outreach tool, understandable to the outside world. Anyone with an appreciation for computing is impressed by the tremendous increases in performance that HPC systems have attained over the past several decades.

At the same time, HPL rankings of computer systems are no longer so strongly correlated with real application performance, especially for the broad set of HPC applications governed by differential equations, which tend to have much stronger needs for high bandwidth and low latency, and tend to access data using irregular patterns. In fact, we have reached a point where designing a system for good HPL performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system.

We expect the gap between HPL predictions and real application performance to increase in the future. In fact, the fast track to a computer system with the potential to run HPL in the Exascale range is a design that may be very unattractive for our real applications.

Without some intervention, future architectures targeted toward good HPL performance will not be a good match for our applications. As a result, we seek a new metric that will have a stronger correlation to our application base and will therefore drive system designers in directions that will enhance application performance for a broader set of HPC applications.
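The new metric proposed at the time became the HPCG (High Performance Conjugate Gradients) benchmark, which stresses the sparse, memory-bandwidth-bound computations typical of differential equation codes. As a rough illustration of that workload class, and emphatically not the benchmark itself, the sketch below assembles a sparse 2D Poisson operator and solves it with the conjugate gradient method; the grid size and solver settings are arbitrary assumptions.

```python
# Sketch of a sparse, bandwidth-bound workload: CG on a 2D Poisson problem.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

m = 200                                       # grid points per side (assumed)
main = 2.0 * np.ones(m)
off = -np.ones(m - 1)
T = sp.diags([off, main, off], [-1, 0, 1])    # 1D Laplacian stencil
I = sp.identity(m)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # 2D Poisson operator, m*m unknowns

b = np.ones(m * m)
# Each CG iteration is dominated by a sparse matrix-vector product, whose
# irregular memory access makes it bandwidth-bound rather than compute-bound.
x, info = cg(A, b, maxiter=1000)
print("converged" if info == 0 else f"stopped early, info={info}")
```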

As a Turing Fellow at the University of Manchester, are there connections between Alan Turing's contributions to the first computers and your high performance computing innovations?

Alan Turing was highly influential in the development of computer science, giving a formalization of the concepts of "algorithm" and "computation." In 1936 Turing wrote a paper describing a machine that could carry out any computation that can be performed by an algorithm, laying important groundwork for the idea of the computer.

Turing is widely considered to be the father of computer science and artificial intelligence. His influence is so pervasive that it has infused much of modern thinking about how computers work.

I'm highly honored to be named as the first Turing Fellow at the University of Manchester. The position is associated with the Mathematics and Computer Science Schools at the University. Alan Turing's work is felt in all areas of high performance computing research.

As a leader in high performance computing, what advice would you give to young people considering careers in high performance computing?

I advise students to learn the fundamentals, invest in a solid base of mathematics, and learn to write well. Explore as many things as you can and try to find a project you have a deep passion for.

In high performance computing we think of tackling projects through co-design. Co-design is a holistic design process where integrated teams of hardware architects, system software developers, domain scientists, computer scientists, and applied mathematicians work together to collaboratively develop compatible software and hardware solutions.

It is an opportunity not only for the software and application side to reason about how to leverage emerging architectures and technology, but also for the hardware developers and vendors to better understand the needs of scientific computing. The ability and opportunity to work with other smart, creative people, often with expertise quite different from your own, is an important characteristic of many computing projects and is very rewarding.