People of ACM - John Martinis

May 16, 2017

Quantum computing is still mysterious to many people and brief descriptions are bound to be inadequate. Given these constraints, can you give us a rough sketch of how “quantum weirdness,” such as the ability of a particle to occupy two states at once, holds promise for immensely increasing the performance of computers?

In quantum mechanics, a physical system can be in a superposition of two different quantum states, which can be labeled as 0 and 1. Nature thus allows information to be stored in a quantum bit (qubit) both as a 0 and a 1 at the same time, allowing a kind of quantum “parallel computing” of both states at the same time. This amount of parallelism doubles with each added qubit, so the computational power scales exponentially with qubit number. At 50 qubits, the system is performing a parallel calculation over a system size of 2 to the 50th power, or about 10^15. This number is roughly the size of memory in modern supercomputers. At 300 qubits, the parallel computing size 2^300 is about equal to the number of atoms in the universe, something we clearly cannot ever duplicate with classical computers.
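The exponential scaling described above is easy to check with a quick calculation; a minimal sketch (the atom-count comparison is the rough order-of-magnitude figure quoted above, not an exact value):

```python
# A classical description of an n-qubit state needs 2^n amplitudes,
# so the "parallel computing size" doubles with each added qubit.
def state_space_size(n_qubits: int) -> int:
    return 2 ** n_qubits

print(state_space_size(50))             # 1125899906842624, roughly 10^15
print(len(str(state_space_size(300))))  # 91 decimal digits, i.e. ~10^90
```

At 50 qubits the state space already matches the memory scale of modern supercomputers, and at 300 qubits it is a 91-digit number, comparable in scale to rough estimates of the number of atoms in the observable universe.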

Later this year, you and your team at Google plan to race a chip you are developing, composed of a grid of 49 qubits, against one of the world’s fastest conventional supercomputers in a confined experiment. How confident are you that the new chip will be able to outperform any conventional supercomputer and why?

Since moving the project to Google about three years ago, we have been working on our scale-up infrastructure to dramatically increase the number of qubits, while at the same time improving qubit coherence. Both are needed for this experiment, and in fact we chose this goal as a way to focus on the two most important issues for building a quantum computer.

This year we started to put all of this together, and preliminary results make the team feel confident that this is a good “stretch goal” for 2017. At a recent physics conference, we showed data on a 9-qubit device that demonstrated the basic physics of this experiment. We found that the inner workings are as expected theoretically, and we were able to explicitly demonstrate quantum parallelism at a scale of about 3000. Most importantly, we found that the error rate was constant, about 1% per qubit, as we tested the algorithm from 3 to 9 qubits. If this error rate stays constant out to 45 qubits, which of course has to be shown with experiments, then we should be able to do this supremacy experiment. We are currently testing a 22-qubit chip, and the full device is now in design.
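As a rough illustration of why a constant per-qubit error rate is the key figure, here is a minimal sketch assuming errors simply compound multiplicatively across qubits (a deliberate simplification; real benchmarking of these devices is more involved):

```python
# If each qubit contributes roughly a 1% error and errors compound
# multiplicatively, the overall success probability for an n-qubit
# run is approximately (1 - p)^n.
def rough_fidelity(n_qubits: int, error_per_qubit: float = 0.01) -> float:
    return (1.0 - error_per_qubit) ** n_qubits

print(round(rough_fidelity(9), 2))   # ~0.91 at 9 qubits
print(round(rough_fidelity(45), 2))  # ~0.64 at 45 qubits
```

Under this simple model, a constant 1% per-qubit error still leaves a usable overall success probability at 45 qubits, which is why holding the error rate flat while scaling up is the crux of the experiment.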

While the success of your experiment would mark an important stepping stone, a lot more work is needed to make the technology scalable and programmable to the point where you would have a fully realized quantum computer. Of the many challenges going forward, what do you think will be the biggest technical hurdle to ushering in the era of quantum computing?

This is a challenging problem, since we need improvements in both hardware and software. In hardware, we have to scale up the number of qubits, while at the same time decreasing the qubit error rate. Doing both at the same time is fundamentally hard. In software, we would like to develop algorithms that can find useful solutions without demanding quantum error correction, so that they can be run on near-term (but post-supremacy) quantum computers. Our plan is to meet in the middle, making hardware so that we and others can discover such “heuristic” algorithms.

How far away are we from the introduction of fully developed quantum computers? What might be some particularly exciting applications of quantum computing to artificial intelligence and machine learning?

For me this is a hard question, since our team is tasked with building such a quantum computer! Our goal is to demonstrate a powerful quantum computer in the next year or so, and then to demonstrate some useful algorithms in the next few years. Since problems in quantum chemistry and quantum materials map naturally to a quantum computer, this is the most likely first application. If quantum computers can be used to solve optimization problems, then there should be many uses for them in machine learning, such as learning with smaller datasets or disregarding mislabeled data. As the technology and ideas are now moving so rapidly, we are simply keeping an open mind as to where all of this research will lead us.

John M. Martinis holds the Worster Chair of Experimental Physics at the University of California, Santa Barbara. As a Research Scientist at Google’s Quantum AI Lab, he oversees a team of 25 engineers and physicists who are working to build the first useful quantum computer. The lab is particularly interested in applying quantum computing to artificial intelligence and machine learning.

His honors include receiving the Fritz London Memorial Prize for advances in the field of low-temperature physics, and the American Association for the Advancement of Science’s Breakthrough of the Year Award. At ACM’s upcoming Celebration of 50 Years of the Turing Award, Martinis will participate in a panel titled “Quantum Computing: Far Away? Around the Corner? Or Maybe Both at the Same Time?”.