People of ACM - Tim Davis
January 29, 2019
How did you initially become interested in sparse matrix problems?
I wasn't expecting to work on mathematical software as a graduate student at the University of Illinois at Urbana-Champaign (UIUC). My interest was focused on computer architecture. I was in the Architecture group at the Center for Supercomputing Research and Development (CSRD), working with Ed Davidson, who had me creating hardware for data prefetching. I was having a hard time coming up with a pure hardware solution, so I thought I'd look at hardware and software together. Today we'd call that co-design.
I picked Gaussian elimination. "That will be quick," I thought. Not! I got started on matrices in 1985 and I'm not done with Gaussian elimination yet. I started working on hardware prefetching strategies that would speed up parallel Gaussian elimination: getting the data from memory to the processor before it was needed. My ideas turned out to be a better fit for sparse Gaussian elimination. That's what got me into sparse matrices.
In an abstract for GraphBLAS, a project you're currently working on, you indicated that the project has the potential to create a transformative shift in how graph algorithms are expressed. Will you explain this?
GraphBLAS is a community effort, spanning industry, academia, and government labs, to design a library that implements graph algorithms in terms of sparse linear algebra over semirings. There are lots of great graph libraries out there that don't exploit the linear algebraic abstraction, but most of what they do can be viewed as matrix operations on adjacency matrices. GraphBLAS makes the connection to linear algebra explicit.
GraphBLAS gives the graph algorithm developer the power and abstraction of linear algebra, with all its advantages (associative and distributive laws, AB=(B'A')', and so on). One level of breadth-first search, for example, is a single sparse matrix-times-sparse vector multiply, with no loss of asymptotic efficiency. Google's entire PageRank computation can be done with a handful of iterations around a single call to GraphBLAS, using what I call the "PageRank semiring."
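To illustrate the idea (using SciPy rather than the GraphBLAS C API itself), one level of breadth-first search really is a single sparse matrix-vector product; the toy graph, start node, and variable names below are invented for the example:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy undirected graph (invented for the example): edges 0-1, 0-2, 1-3, 2-3, 3-4.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows = [i for i, j in edges] + [j for i, j in edges]
cols = [j for i, j in edges] + [i for i, j in edges]
A = csr_matrix((np.ones(len(rows), dtype=np.int64), (rows, cols)), shape=(n, n))

visited = np.zeros(n, dtype=bool)
frontier = np.zeros(n, dtype=bool)
frontier[0] = True  # start the search at node 0
levels = []
while frontier.any():
    levels.append(np.flatnonzero(frontier).tolist())
    visited |= frontier
    # One BFS level = one sparse matrix-vector multiply: nodes reachable
    # from the current frontier, masked to exclude already-visited nodes.
    frontier = ((A @ frontier.astype(np.int64)) > 0) & ~visited
print(levels)  # [[0], [1, 2], [3], [4]]
```

In GraphBLAS proper, the `& ~visited` step is expressed as a complemented mask on the multiply itself, which is part of how the library avoids any loss of asymptotic efficiency.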
Constructing an independent set of nodes via Luby's algorithm takes two different semirings on the same adjacency matrix: the max-second semiring to find, for each node, the largest score among its neighbors, and the "and-or" semiring to eliminate neighbors of the selected nodes. The main iteration of Luby's method, which adds a set of new nodes to the independent set, takes 12 lines of code, nine of which are calls to GraphBLAS. Each of the seven GraphBLAS calls that take more than constant time can eventually be done in parallel, inside GraphBLAS. GraphBLAS has opaque data structures and supports lazy evaluation. This allows me to implement an efficient method for incrementally updating a graph (or matrix, as you prefer), where the updates can be queued up and applied in a batch later. This supports asymptotically fast algorithms for dynamic graphs.
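The interview doesn't reproduce Davis's 12-line GraphBLAS code, but the two semiring steps he describes can be mimicked in a dense NumPy sketch; the function name, graph, and random seed here are invented for illustration, and the dense operations stand in for the sparse semiring multiplies:

```python
import numpy as np

def luby_mis(A, rng):
    """Luby-style maximal independent set on a dense boolean adjacency matrix."""
    n = A.shape[0]
    in_set = np.zeros(n, dtype=bool)
    candidates = np.ones(n, dtype=bool)
    while candidates.any():
        # Give every remaining candidate a random score (0 elsewhere).
        score = np.where(candidates, rng.random(n), 0.0)
        # Analogue of the "max-second" step: for each node, the largest
        # score among its neighbors.
        neighbor_max = (A * score).max(axis=1)
        # A candidate joins the set if its score beats all its neighbors'.
        new = candidates & (score > neighbor_max)
        in_set |= new
        # Analogue of the "and-or" step: drop the new members and
        # everything adjacent to them from the candidate pool.
        candidates &= ~(new | A[:, new].any(axis=1))
    return in_set

# Demo on a 6-node path graph 0-1-2-3-4-5 (invented for illustration).
A = np.zeros((6, 6), dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = True
mis = luby_mis(A, np.random.default_rng(42))
```

Nodes picked in the same round can't be adjacent (an adjacent pair can't both beat each other's score), and once a node joins, its neighbors leave the candidate pool, so the result is independent and maximal.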
Writing graph algorithms in the language of sparse linear algebra over semirings, instead of nodes and edges, is what has the potential for transforming how we reason about and create graph algorithms. The vision of the GraphBLAS.org community is to make that a reality.
Will you tell us how you began creating images from computer code and music?
I love writing beautiful code. The most beautiful code is bug-free code, and everyone hates it when their favorite app crashes from a bug. But a challenge has been: how can the beauty of code be shared with others, especially those who don't write or understand code?
One day Mark Robbins, from a London design agency (Neighbour UK), learned about my sparse matrix collection with its images and asked me, "Can artistic images of music be created in a way similar to your matrix images?" Mark was asking about drawing an entire piece of music as a single image.
Music, like matrices, has structure. It has repeated melodies and rhythms, just like the regular and repeated structure in a matrix. I thought to myself, "What if I converted music into a matrix?" I realized that with Fourier transforms, I could extract the regular structure from music. With graph theory, I could connect the music to itself and reveal its repeated structures, by encoding it as a sparse matrix and drawing it as a graph. A week later, I sent Mark Robbins an image that he used as the theme artwork for the LEAF 2013 music festival. The artwork appeared on billboards all over London.
The images I create represent an entire piece of music. The colors represent the various frequencies on the piano keyboard, with blue as low notes, green and yellow in the middle, and orange and red as high notes. The nodes of the graph are points in time, and each edge of the graph is a note that appears in the music at a specific point in time.
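Davis doesn't spell out his exact pipeline, but the core idea he describes, Fourier transforms to expose spectral structure and a graph connecting moments of the music that resemble each other, can be sketched roughly as follows. The frame size, hop, similarity measure, and all names are assumptions made for this toy example:

```python
import numpy as np

def music_similarity_graph(signal, frame=256, hop=128, top_k=3):
    """Toy sketch: a graph whose nodes are moments in a piece of audio.

    Each node is one time frame; edges connect frames whose magnitude
    spectra (via the FFT) are most alike, exposing repeated structure.
    """
    starts = range(0, len(signal) - frame + 1, hop)
    frames = np.stack([signal[s:s + frame] for s in starts])
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    unit = spectra / (np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T              # cosine similarity between frames
    np.fill_diagonal(sim, -1.0)      # never link a frame to itself
    n = sim.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    nearest = np.argsort(sim, axis=1)[:, -top_k:]
    for i in range(n):               # keep each frame's top_k best matches
        adj[i, nearest[i]] = True
    return adj | adj.T               # symmetrize into an undirected graph

# Demo: a pure 440 Hz tone sampled at 8 kHz (invented example input).
sig = np.sin(2 * np.pi * 440 * np.arange(2048) / 8000.0)
G = music_similarity_graph(sig)
```

The resulting sparse boolean adjacency matrix could then be drawn with a force-directed graph layout, in the spirit of the artwork described above.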
I can look at one of my images and tell the genre of the music it comes from, so somehow I'm capturing the essence of the music in my art. It's a way I can express the artistic beauty of code, by creating mathematical code solely for the sake of art. So now I can show anyone: code is beautiful.
Timothy A. Davis is a Professor in the Computer Science and Engineering Department at Texas A&M University. Davis creates algorithms and software for solving large sparse matrix problems that arise in a vast range of real-world technological and social applications. His recent algorithms rely on graphics processing units (GPUs) to accelerate solutions to sparse linear systems.
His publications include the book Direct Methods for Sparse Linear Systems, and many published and peer-reviewed algorithms. Davis also creates algorithmic artwork that translates music into visual art via Fourier transforms, graph algorithms, and force-directed graph visualization. His most recent artistic work is a rendition of Donald Knuth’s Fantasia Apocalyptica. Davis was named an ACM Fellow (2014) for contributions to sparse matrix algorithms and software.