People of ACM - Nicholas Higham

December 21, 2021

Your 2002 book Accuracy and Stability of Numerical Algorithms explored the behavior of numerical algorithms in finite precision arithmetic. What are the current challenges in this area?

The key question that my book addresses is, "how can we understand the behavior of an algorithm such as Gaussian elimination or the fast Fourier transform when the arithmetic operations are subject to rounding errors?" Many examples are known of methods that are attractive in theory but prove to be unsuitable in practice because of numerical instability.

In recent years, this question has taken on a new angle with the advent of low-precision floating-point arithmetics such as IEEE half precision and bfloat16. The provision of fast implementations in graphics processing units (GPUs) has made these arithmetics attractive for a wide range of computations, notably deep learning. But with only three or four digits of precision, even a simple summation of a few thousand numbers can potentially produce an answer with no correct digits. Low-precision computations are more likely to succeed with the use of block algorithms (which reduce error constants), mixed-precision matrix multiply-accumulate units available in hardware, and iterative refinement to boost accuracy. Furthermore, new insight is gained by looking at the average case growth of rounding errors rather than the worst case, with the use of probabilistic rounding error analysis.
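The loss of accuracy in low-precision summation, and the benefit of a blocked algorithm, can be seen in a few lines of NumPy. This is an illustrative sketch (not from the interview): it sums 10,000 copies of 1e-4 in IEEE half precision, first by a plain running sum and then by pairwise (blocked) summation.

```python
import numpy as np

x = np.full(10000, 1e-4, dtype=np.float16)  # true sum is approximately 1.0

# Recursive summation in half precision: once each addend is smaller than
# half an ulp of the partial sum, additions have no effect and the sum
# stagnates well short of 1.0.
s = np.float16(0.0)
for xi in x:
    s = np.float16(float(s) + float(xi))

# Pairwise (blocked) summation combines partial sums of similar magnitude,
# which keeps the error constant proportional to log2(n) rather than n.
def pairwise_sum(v):
    if v.size == 1:
        return v[0]
    mid = v.size // 2
    return np.float16(pairwise_sum(v[:mid]) + pairwise_sum(v[mid:]))

print(float(s))                # stagnates far below 1.0
print(float(pairwise_sum(x)))  # close to 1.0
```

With only about three decimal digits of precision in half precision, the plain running sum freezes once the partial sum reaches a few tenths, while the blocked version recovers nearly the full sum.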

Some of your recent work has a probabilistic flavor. What role does probability play in applied mathematics?

In the 1950s, researchers such as George Forsythe (the founder of the Stanford University Computer Science department) considered the idea of randomized rounding, in which one rounds the result of an elementary operation up or down randomly instead of to the nearest floating-point number (the default for IEEE standard floating-point arithmetic). This idea has attracted a lot of interest recently under the label of "stochastic rounding," where the probabilities of rounding up or down are chosen proportional to the distances to the next and previous floating-point numbers. One reason for the interest is that stochastic rounding can avoid the phenomenon known as stagnation, whereby many small updates to a variable are lost even though collectively the updates should cause a change. A particular application in which stagnation arises is deep learning, when parameters of neural networks are updated.
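The rounding rule described above can be sketched in a few lines. This is an illustrative implementation (not from the interview) of stochastic rounding to half precision for positive values, using bit manipulation to find the two adjacent float16 neighbors; it then repeats the small-update scenario to show how round-to-nearest stagnates while stochastic rounding does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x):
    """Round positive float x to float16, rounding up with probability
    proportional to the distance from the next-lower float16 neighbor."""
    lo = np.float16(x)
    if float(lo) > x:  # round-to-nearest went up; step back to the lower neighbor
        lo = (lo.view(np.uint16) - np.uint16(1)).view(np.float16)
    if float(lo) == x:  # x is exactly representable
        return lo
    hi = (lo.view(np.uint16) + np.uint16(1)).view(np.float16)
    p = (x - float(lo)) / (float(hi) - float(lo))
    return hi if rng.random() < p else lo

# Apply 10,000 updates of 1e-4 (true total: about 1.0) under both rules.
s_rn = np.float16(0.0)  # round to nearest
s_sr = np.float16(0.0)  # stochastic rounding
for _ in range(10000):
    s_rn = np.float16(float(s_rn) + 1e-4)       # stagnates once updates < half ulp
    s_sr = stochastic_round(float(s_sr) + 1e-4)  # unbiased: keeps growing on average

print(float(s_rn), float(s_sr))
```

Because each stochastic rounding is unbiased, the expected value of the running sum tracks the true sum, whereas round-to-nearest freezes permanently once the increment falls below half an ulp of the partial sum.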

My collaborators and I have recently proven that stochastic rounding avoids stagnation and has some desirable statistical properties.

Why is the development of exposition and communication skills essential for mathematicians and computer scientists?

With the huge growth in scientific publications over the years, it has become much harder to attract readers to our work because any paper we write will usually be one of many on the topic. It is therefore essential that our papers have informative titles and clear abstracts and conclusions, and ideally are accessible to people outside our particular specialty. With today's often entirely electronic workflows, the writing process can be very quick, but students need to realize that a good paper takes time to write and should go through multiple revisions. My Handbook of Writing for the Mathematical Sciences covers most aspects of writing and publishing, and I regularly give lectures to students on how to write, in which I include lots of real-life examples and anecdotes.

What's an exciting avenue of work you are currently pursuing?

I have just finished a book titled How to be Creative: A Practical Guide for the Mathematical Sciences, co-authored with creativity expert Dennis Sherwood and to be published by SIAM in 2022. The book is a guide to generating great ideas. The principles are general, but the examples are mathematically oriented, so the target audience is anyone working in the mathematical sciences. We hope that the book will be particularly helpful to students and early-career researchers, as well as to anyone who wishes to train others in creativity.

Nicholas Higham is a Royal Society Research Professor and Richardson Professor of Applied Mathematics at the University of Manchester. His research focus is the development of algorithms (primarily in numerical linear algebra) and analysis of their accuracy and stability. He has authored more than 180 publications on topics in numerical analysis, numerical linear algebra, and mathematical software. His books include Handbook of Writing for the Mathematical Sciences, MATLAB Guide, and The Princeton Companion to Applied Mathematics, of which he is the editor.

Higham was President of the Society for Industrial and Applied Mathematics (SIAM) from 2017 to 2018 and is Editor-in-Chief of the SIAM book series “Fundamentals of Algorithms.”

His awards include the 2019 Naylor Prize and Lectureship from the London Mathematical Society, the 2021 George Pólya Prize for Mathematical Exposition from SIAM, and the 2022 Hans Schneider Prize in Linear Algebra. Higham was named an ACM Fellow for contributions to numerical linear algebra, numerical stability analysis and communication of mathematics. He is also a Fellow of the Royal Society and of SIAM, and is a member of Academia Europaea.