People of ACM - Carl Landwehr
May 6, 2025
You began working in computer security when you moved to Washington, D.C. in 1976. How has the field developed since you started your career?
Research in “cybersecurity” (a term unknown in 1976, when the field was called “computer security” and later “information security”) was primarily of concern to the military and oriented toward preventing unauthorized disclosure of classified information. The notions of “security kernel,” “reference monitor,” and “trusted computing base” were developed in that era. The goal was to enable a single computer to give users with differing clearances access to information at different classification levels without compromising sensitive information, so that even a Trojan horse program run by an uncleared user could not gain access to sensitive data.
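A minimal sketch, in Python, of the kind of mandatory access control checks a reference monitor enforces to meet that goal; the labels, ordering, and function names here are illustrative only, not a description of any particular system:

```python
# Simplified lattice of classification levels (illustrative labels only).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    # "No read up": a subject may read only objects at or below its clearance.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    # "No write down": a subject may write only objects at or above its level,
    # so a Trojan horse running with SECRET access cannot copy data into an
    # UNCLASSIFIED file that an uncleared user could read.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert not may_read("UNCLASSIFIED", "SECRET")   # uncleared read is denied
assert not may_write("SECRET", "UNCLASSIFIED")  # the leak path is denied
```

Because both rules are enforced by the system rather than left to individual programs, even malicious code inherits the restrictions of the user on whose behalf it runs.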
As large mainframes were replaced by networks of minicomputers and PCs, sharing a single CPU among many users became less important; securing distributed systems and networks seemed more critical. Neglect of security within PCs facilitated the development, and sometimes rapid global spread, of worms and viruses in the late 20th century, and security concerns shifted to dealing with those threats. Funding for cybersecurity research outside the Defense Department really began in the early 2000s with the initiation of the first Trusted Computing program at the National Science Foundation. There was commercial-sector involvement in cybersecurity in the late 20th century as well, since there was cybercrime even then, but businesses were typically more concerned with access control, audit functions, and continuity of service than with protecting sensitive information.
Your focus within the broader cybersecurity field has been on “trustworthy computing.” Will you explain what is meant by this term?
We trust many things, meaning we rely on them to behave properly, but not all of the things we trust are in fact trustworthy. For computing to be trustworthy, there must be justifiable confidence that a computation will produce its intended results, even in the face of malicious behavior. Trustworthy computing therefore focuses on the design and implementation of computing systems that provide justifiable assurance of behaving as intended, even when attacked. A closely related but broader term is dependable computing, which has been elaborated in a number of books and papers.
When you joined the National Science Foundation in the early 2000s, how were the computer security programs organized?
Prior to my arrival there, NSF had supported some research on cryptographic algorithms through its mathematics program, but support for computer security had come mostly from defense research agencies and, somewhat sporadically, from DARPA. I arrived at NSF in the fortunate position of being able to solicit, review, and fund proposals for the new Trusted Computing program. Subsequently, I was able to work with NSF management as the program grew into “Cyber Trust” and eventually “Secure and Trustworthy Cyberspace,” which continues today as SaTC 2.0. As the scope of the program broadened, more NSF directorates participated; SaTC 2.0 includes participation from four of them. This broadening was not difficult to justify, because the roles that mathematics, economics, and organizational and human behavior play in securing (or subverting) systems are so evident, at least to me.
What are the goals of the influential undergraduate course you developed, “Cybersecurity for Future Presidents”?
My goal was to develop a course for students with leadership ambitions that would enable them to make good decisions regarding cybersecurity and public policy later in their careers. After teaching the course once or twice, I realized that what I was really trying to do was teach them how to ask the right questions of their future advisors and give them the tools to understand and evaluate the answers they would receive. Richard Muller’s book Physics for Future Presidents inspired my efforts.
In terms of government funding for cybersecurity research, what is an example of one area where you would like to see more resources allocated in the coming years?
I continue to believe that the most effective way to improve the security of our cyberinfrastructure is to stop building vulnerabilities into it. Being able to specify rigorously what we want a system to do is a first step; the next is to demonstrate mathematically that our implementations conform to those specifications. This goal is now within our grasp for some critical systems. Recent advances in machine learning and AI can also be exploited to keep such proofs up to date even as systems evolve. I would like to see more investment in the tools and methods that can make software engineering into the true engineering discipline that society needs.
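To make “conform to a specification” concrete, here is a deliberately tiny sketch in the Lean proof assistant; the specification, implementation, and names are illustrative only, chosen for brevity rather than drawn from any real system:

```lean
-- A toy specification: a conforming `max` function must dominate both inputs
-- and must return one of them.
def MaxSpec (f : Nat → Nat → Nat) : Prop :=
  ∀ a b : Nat, a ≤ f a b ∧ b ≤ f a b ∧ (f a b = a ∨ f a b = b)

-- A candidate implementation.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- A machine-checked proof that the implementation conforms to the specification.
theorem myMax_conforms : MaxSpec myMax := by
  intro a b
  unfold myMax
  split <;> omega
```

Scaling this style of reasoning from toy functions to operating system kernels, compilers, and network stacks, and keeping the proofs in step with evolving code, is where better tools, including AI assistance, are needed.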
Since retiring from the National Science Foundation in 2011, Carl E. Landwehr has served in many positions, most recently as Visiting Professor at the University of Michigan, Ann Arbor, and lead research scientist at the Cyber Security and Privacy Research Institute, George Washington University. His interests include cybersecurity and trustworthy computing. Landwehr is recognized for developing and leading cybersecurity research programs at many institutions, including the National Science Foundation (NSF).
Among his honors, Landwehr received ACM SIGSAC’s Outstanding Contribution Award, was inducted into the National Cybersecurity Hall of Fame, and was named an IEEE Fellow. He was recently selected as the recipient of the Computing Research Association’s Distinguished Service Award.