People of ACM - Willem Visser

October 30, 2018

What is the most significant way that software testing has changed since you entered the field?

I work on the academic side of testing, where I’d say the most significant changes have been in automated test-case generation and bug finding. Specifically, the use of symbolic execution was not on the horizon at all when I started, whereas now it is actually starting to become common (even in industry) as a technology employed to “discover” inputs that reveal interesting parts of a program’s behavior. Another promising field is search-based techniques, where clever heuristics are used to search for inputs that trigger behaviors of interest. “Fuzzing,” which can be classified as a search-based technique, has also become very popular. I believe the combination of fuzzing and symbolic execution is where the next big breakthroughs are going to come from.
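To make the idea concrete, the sketch below is a hypothetical example, not drawn from any particular tool: a symbolic executor treats the inputs as symbols rather than concrete values, collects the constraints along each path through the code, and asks a constraint solver for concrete inputs that drive execution down each path.

```java
public class SymbolicExample {
    // A symbolic executor does not run this method with concrete numbers;
    // it tracks constraints on x and y and asks a solver for values that
    // satisfy each path condition.
    static int classify(int x, int y) {
        if (x > 0) {              // path constraint: x > 0
            if (x * 2 == y) {     // path constraint: x > 0 && 2*x == y
                return 1;         // a solver yields e.g. x = 1, y = 2
            }
        }
        return 0;                 // reached with e.g. x = 0, y = 0
    }
}
```

Each satisfiable path condition then becomes one generated test input, which is exactly the sense in which symbolic execution “discovers” inputs covering interesting behaviors.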

In industry I think there have been two major advances since I started in this field: automated unit test frameworks were established for most popular programming languages, and continuous integration (CI) became a must-have for any medium or large project. Making testing “easy” to do allowed more developers to make testing part of their development routine (automated frameworks like JUnit made this possible), and running those tests during the build (as CI does, after every commit) simply made the process seamless and effective.
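For readers who have not used such a framework, a typical unit test is just an annotated method that the framework discovers and runs automatically. The snippet below uses JUnit 4 style, with a hypothetical Calculator class standing in for the code under test.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Calculator is a hypothetical class under test, included only to make
// the example self-contained.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    @Test
    public void additionWorks() {
        // The framework finds and runs this method automatically;
        // in a CI setup, a failure here blocks the build.
        assertEquals(4, Calculator.add(2, 2));
    }
}
```

Because the framework runs every annotated test with no extra effort, a CI server can execute the whole suite on each commit, which is what makes the combination of the two advances so effective.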

In the 2004 paper you co-authored with Corina S. Păsăreanu and Sarfraz Khurshid, “Test Input Generation with Java PathFinder,” which recently received the ISSTA Retrospective Impact Paper Award, you explored innovations that showed promise for automating software testing. Since the publication of the paper, what recent innovations have improved the software testing infrastructure?

First, our 2004 paper was all about symbolic execution for test generation, which I think really panned out as a worthwhile technology, and this was probably one of the main reasons the paper got the award. Second, I truly believe the two technologies mentioned above (unit test frameworks and continuous integration) are making a substantial difference in the effectiveness of testing. Testing is no longer an afterthought; it is a first-class citizen of the development process. If a unit test fails, you will not commit, and if the build fails during CI, you fix it quickly.

From a slightly more research-oriented point of view, fuzzing has shown itself to be a very powerful approach to finding errors. Fuzzing that uses search heuristics based on feedback from running the code (as is done in American Fuzzy Lop, or AFL, for example) is especially good at unearthing serious issues in code that is in daily use. The one weakness of fuzzing is that it typically has little visibility into the code structure and can thus get stuck when progress depends on one very particular input. Luckily, symbolic execution is particularly good at finding exactly these inputs and, unsurprisingly, a combination of the two techniques is now a lively research area. I think there is much more to come.
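The classic illustration of this complementarity is a branch guarded by a “magic” constant; the snippet below is a hypothetical example, not taken from any real codebase.

```java
public class MagicBranch {
    // A coverage-guided fuzzer mutating bytes at random has roughly a
    // 1-in-2^32 chance of hitting this branch, so it tends to get stuck here.
    // A symbolic executor instead records the branch condition
    // x == 0x2AD5C0DE and asks a solver for the one value that satisfies it.
    static void process(int x) {
        if (x == 0x2AD5C0DE) {
            throw new IllegalStateException("rare path reached"); // hidden bug
        }
    }
}
```

Hybrid approaches exploit exactly this division of labor: when the fuzzer stalls on such a branch, the condition is handed to a constraint solver, and the solved input is fed back into the fuzzer’s queue.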

What is an exciting area of software development and/or testing that you would like to see industry and/or the research community direct more resources toward?

This one is easy to answer, but hard to do: we need techniques to allow us to test machine learning systems. Machine learning algorithms are used everywhere lately, but hardly any of the established testing theories apply directly to such systems. For example, recently it was shown that small perturbations in an image (invisible to a human) can cause a neural net to misclassify it. How do we automatically find such an issue? When do we have adequate testing for a neural net? How do we certify software based on machine learning algorithms, such as we see in self-driving cars? A few people have started looking at these issues, but a lot more needs to be done.
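One way to see why the established theories fall short is to phrase robustness as a testable property: the predicted label should not change under any perturbation smaller than some bound. The sketch below is a hedged illustration of that property only; Classifier and the sampling loop are hypothetical stand-ins, not a real library API, and naive random sampling like this almost never finds real adversarial examples, which is precisely what makes the problem hard.

```java
import java.util.Random;

public class RobustnessCheck {
    // Hypothetical stand-in for a trained model.
    interface Classifier {
        int label(double[] image);
    }

    // Samples random perturbations of size at most eps and reports whether
    // any of them changes the predicted label for the given image.
    static boolean robustAt(Classifier net, double[] image, double eps, int trials) {
        Random rng = new Random(42);
        int expected = net.label(image);
        for (int i = 0; i < trials; i++) {
            double[] nudged = image.clone();
            for (int j = 0; j < nudged.length; j++) {
                nudged[j] += (rng.nextDouble() * 2 - 1) * eps;  // |delta| <= eps
            }
            if (net.label(nudged) != expected) {
                return false;  // found a misclassifying perturbation
            }
        }
        return true;  // no counterexample found; not a proof of robustness
    }
}
```

Even defining when such a check counts as “adequate testing” is open, which is the certification question raised above.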

You spent the earlier part of your career working in practice settings before joining the faculty at Stellenbosch University. How is software engineering research different in university vs. practice settings?

In the testing and program analysis fields, the two are converging. It used to be the case that researchers spent time on approaches that require PhD-level users (such as formal methods for proving program correctness), whereas nowadays the focus is shifting to simpler, but automated, techniques with the practical applicability required in most industrial settings.

Willem Visser is a Professor in the Division of Computer Science at Stellenbosch University in South Africa. His core research interests center on finding bugs in software, including testing, program analysis and model checking. Earlier in his career, Visser worked at the NASA Ames Research Center, where he was the Area Lead for the Reliable Software Engineering Group.

Visser was a Member-at-Large of the Executive Committee of ACM’s Special Interest Group on Software Engineering (SIGSOFT), and served as the Program Co-Chair for the International Conference on Software Engineering (ICSE 2016). An ACM Distinguished Member, he received the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2018) Retrospective Impact Paper Award; the Best Paper Award at the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2017); and the IEEE/ACM International Conference on Automated Software Engineering (ASE) Most Influential Paper Award in 2014.