People of ACM - Shyam Gollakota

July 22, 2021

You have done groundbreaking work in a number of areas within wireless communications and sensing. You have also noted that your work is interdisciplinary and has involved collaborators such as computer scientists, mechanical engineers, biologists and physicians. Do you think there is a common thread that connects the challenges you take on in your lab?

The research in my group is driven by our curiosity and zeal to push the boundaries of what we can do with technology. In practice, this means we do not artificially constrain ourselves to the disciplinary boundaries typically drawn by modern universities. We go wherever the problems take us.

As an example, in a recent project we looked at the problem of low-power wireless vision. It turns out that this is a fundamental problem even in nature: in insects like flies, the visual system has low resolution yet can account for up to 13% of the body mass. Further, vision represents a substantial energy cost; this has led many insects to evolve high visual acuity in only small regions and to instead move their visual system independently of their body through head or eye motion. Some insects use this added degree of freedom to infer depth or motion information, to orient their gaze in a direction independent of their movement, or to maintain focus on moving objects (e.g., prey or potential mates).

Inspired by this, we designed a fully wireless, power-autonomous, mechanically steerable vision system that imitates head motion in a form factor small enough to mount on the back of a live beetle or a similarly sized terrestrial robot. Our camera system streams “first-person” video wirelessly and can be mechanically steered at low power to capture panoramic images. The whole programmable system is so small and lightweight that we could mount it on freely walking live beetles to understand their behavior in the wild. We also built the world’s smallest terrestrial robot that is power-autonomous and supports wireless vision.

This required us to work across multiple disciplines including embedded systems, wireless networking, low-power mechanical actuation, and robotics, as well as biology, since attaching these sensors to the insects really requires getting advice from biologists!

One of your most high-profile contributions is ambient backscatter, in which you and your colleagues developed a technique to harness energy from the environment so that mobile devices can compute and communicate without needing a battery. What was a key insight that made your technique successful, and how do you see ambient backscatter advancing in the future?

Embedding cheap connectivity into billions of everyday objects has been a long-standing vision, and we are seeing the first steps of it in today’s internet of things (IoT). The challenge is that as these IoT devices become smaller and more numerous, powering them becomes more difficult: wires are often not feasible, and batteries add weight, bulk and cost, and require recharging or replacement that is impractical once we deploy these devices at large scales. The core problem is that generating radio signals is a very power-expensive operation; the radio is typically the biggest power-consuming component in these IoT devices.

The idea behind ambient backscatter is that, instead of generating your own radio signals, devices can talk to each other by backscattering ambient signals already in the environment, like TV, FM radio, wi-fi and cellular signals. These signals are ubiquitous, and we can reflect them to enable communication between battery-free devices at very low power. The intuition is similar to using a mirror to communicate by reflecting sunlight. Since we do not generate signals of our own, we can achieve communication at 100 to 1,000 times lower power than conventional radios.
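To make the mirror analogy concrete, here is a minimal simulation of the core principle (my illustrative sketch, not the group’s actual implementation): a tag toggles its antenna between reflecting and absorbing states to amplitude-modulate an ambient carrier, and a receiver recovers the bits with envelope detection and per-bit averaging. All signal parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1_000_000          # receiver sample rate (Hz), illustrative
fc = 100_000            # stand-in for an ambient TV/FM carrier (Hz)
bit_rate = 1_000        # ~1 kbps, matching the 2013 prototype's data rate
samples_per_bit = fs // bit_rate

bits = rng.integers(0, 2, size=32)                  # payload to backscatter
t = np.arange(len(bits) * samples_per_bit) / fs
ambient = np.cos(2 * np.pi * fc * t)                # the ambient RF signal

# Tag: reflect (1) or mostly absorb (0) the ambient signal per bit period.
reflect_low = 0.3                                   # residual reflection in "absorb" state
switch = np.repeat(np.where(bits == 1, 1.0, reflect_low), samples_per_bit)
received = switch * ambient + 0.05 * rng.standard_normal(len(t))  # plus noise

# Receiver: envelope detection, then average over each bit period.
envelope = np.abs(received)
per_bit = envelope.reshape(len(bits), samples_per_bit).mean(axis=1)
threshold = (per_bit.min() + per_bit.max()) / 2
decoded = (per_bit > threshold).astype(int)

print("decoded bits match:", np.array_equal(decoded, bits))
```

In a real system the “carrier” is an uncontrolled broadcast signal rather than a clean tone, which is what makes the actual decoding problem far harder than this sketch suggests.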

When we introduced this concept in 2013, we achieved a range of 1-2 meters and a data rate of 1 kbps by reflecting TV signals. Since then, we have shown that we can backscatter a range of sources, including the FM radio and wi-fi signals that are even more ubiquitous. We have also shown that backscatter can let something like a contact lens communicate with your smartphone at data rates of 1-11 Mbps. Further, with long-range backscatter we increased the range to hundreds of meters, which would enable more reliable operation for ultra-low-power sensors.

Backscatter technology is now being commercialized by companies including Jeeva Wireless for IoT applications like inventory tracking. It has also found uses in interesting niche applications including medical implants and underwater sensing.

A recent project of the Networks and Mobile Systems Lab has been attaching extremely lightweight sensors and cameras to insects. One outcome of this work may be the ability to drop mobile sensors from the air into the environment. What are some potential future applications of this work?

While backscatter communication gives us a tool to reduce power consumption and even to build battery-free devices, I think large-scale sensor deployment is the next big challenge we will have to address as a research community. Deploying hundreds of sensors in a large smart farm or smart city is an expensive and labor-intensive process. There are also scenarios in which the act of deployment could be dangerous. For example, deploying sensors to monitor the volcano Mount St. Helens required a helicopter and highly trained professionals working in dangerous conditions to place them. The same is true for deploying sensors in the context of the forest fires that are increasingly common in the western United States.

To address this problem, we recently presented the first system that can airdrop wireless sensors from small drones and even from the backs of live insects (e.g., moths). Airdropping wireless sensors is difficult because the sensor has to survive the impact of being released in midair. Here we again took inspiration from nature: small insects like ants can fall from tall buildings and survive because of their tiny mass and size.
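A back-of-the-envelope scaling argument (my sketch, not taken from the paper) makes this intuition precise:

```latex
% For a body of characteristic length L, mass scales as m ~ L^3 and
% cross-sectional area as A ~ L^2. Balancing gravity against air drag
% gives the terminal velocity
\[
  v_t = \sqrt{\frac{2mg}{\rho\, C_d A}} \;\propto\; \sqrt{L},
\]
% so the kinetic energy the body must absorb at impact scales as
\[
  E = \tfrac{1}{2} m v_t^2 \;\propto\; L^4,
\]
% while the load-bearing cross-section grows only as L^2. The impact
% energy per unit of structural strength therefore scales as L^2:
% an ant, or a 98 mg sensor, hits the ground far more gently relative
% to its strength than a larger device would.
```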

Inspired by this, we designed insect-scale wireless sensors that come fully integrated with an onboard power supply and a lightweight mechanical actuator to detach from the aerial platform. Our programmable sensors weigh only 98 mg and are estimated to run for 1.3-2.5 years while transmitting temperature data from hundreds of meters away. We demonstrated attachment to a small drone and a moth, and showed that our insect-scale sensors suffered no damage on impact with a tile floor from a height of 22 m. I think this is just the first step, and there are a lot of exciting problems here. For example, can we deploy more sophisticated sensors like cameras? Can we get them to disperse in the wind without having the drone fly to each location?

Along with your students, you have also worked on using smartphones and smart speakers to detect medical conditions. What is the enabling technology for these tools?

The key enabler here is transforming smartphones and smart speakers into contactless active sonar systems. We transmit inaudible sound signals from a speaker; these signals reflect off the human body, and the reflections arrive back at the microphones. When a person breathes, the minute motion changes these reflections, and we design computational methods to extract the breathing signal from those changes. Since microphones and speakers are built into almost all the devices we own, including smartphones, laptops, smart speakers (e.g., Alexa) and earbuds, this technology can transform all of them into contactless physiological sensors, similar to the Star Trek tricorder.
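As a rough illustration of the signal processing involved (a minimal sketch under simplifying assumptions, not the commercialized pipeline), the following treats the device as a continuous-wave sonar: the echo’s phase encodes the round-trip distance to the chest, so breathing appears as a slow phase modulation whose dominant frequency is the breathing rate. The tone frequency, distances and rates below are illustrative.

```python
import numpy as np

fs = 48_000            # audio sample rate (Hz)
f_tone = 18_000        # inaudible carrier (Hz)
c = 343.0              # speed of sound (m/s)
dur = 32.0             # seconds of "recording"
t = np.arange(int(fs * dur)) / fs

# Simulated chest: 0.5 m away, moving +/- 5 mm at 0.25 Hz (15 breaths/min).
f_breath = 0.25
d = 0.5 + 0.005 * np.sin(2 * np.pi * f_breath * t)

# Echo delayed by the round trip, plus measurement noise.
rx = np.cos(2 * np.pi * f_tone * (t - 2 * d / c))
rx += 0.1 * np.random.default_rng(1).standard_normal(len(t))

# Coherent I/Q demodulation against the transmitted tone, then low-pass
# by block averaging to isolate the slow phase drift caused by breathing.
i = rx * np.cos(2 * np.pi * f_tone * t)
q = rx * -np.sin(2 * np.pi * f_tone * t)
block = fs // 100                                   # 10 ms blocks -> 100 Hz
n = (len(t) // block) * block
i_lp = i[:n].reshape(-1, block).mean(axis=1)
q_lp = q[:n].reshape(-1, block).mean(axis=1)
phase = np.unwrap(np.arctan2(q_lp, i_lp))           # tracks 2*pi*f_tone*2d/c

# Breathing rate = dominant frequency of the phase signal.
phase -= phase.mean()
spec = np.abs(np.fft.rfft(phase))
freqs = np.fft.rfftfreq(len(phase), d=block / fs)
rate_hz = freqs[1:][np.argmax(spec[1:])]            # skip the DC bin
print(f"estimated breathing rate: {rate_hz * 60:.1f} breaths/min")
```

A real pipeline would also have to contend with multipath reflections and motion from sources other than breathing; the sketch above only captures the core phase-tracking idea.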

We showed that we can use this approach to contactlessly track breathing, perform sleep tracking and detect sleep apnea using just a smartphone. This technology has been commercialized and has already been used to track more than 30 million hours of sleep in the wild. We recently showed that we can adapt this technology to detect opioid overdoses and even ear infections. We have also generalized this to smart speakers, showing that a Google Home device, for example, can be used to track breathing in infants, detect cardiac arrests and monitor heart arrhythmias.

Why is this an exciting time to be working in networks and mobile systems?

What is considered a networking or mobile systems problem is evolving rapidly as new technologies and applications keep emerging. Phones, speakers, earbuds and similar devices can enable many applications in health, and can be transformed into accessible medical diagnostic tools with significant impact at the scale of billions of humans. Couple this with tiny battery-free sensors and computers that are integrated with biology and living organisms, and we are talking about creating the internet of biological and bio-inspired things. This is a great time to be on the ground floor and shape the future of such promising and pioneering systems over the next decade.

Shyam Gollakota is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, where he leads the Networks and Mobile Systems Lab. His research interests include wireless networks, ambient backscatter, wi-fi sensing, battery-free phones, computing for insects, and computing tools that democratize medical diagnostics and care.

Gollakota’s numerous honors include the ACM Doctoral Dissertation Award (2012); ACM SIGCOMM Best Paper Awards (2008, 2011, 2013 and 2016); the ACM MobiCom Best Paper Award (2013); the ACM SenSys Best Paper Award (2018); an ACM IMWUT Distinguished Paper Award (2018); the ACM SIGMOBILE RockStar Award (2017); an NSF CAREER Award (2015); an Alfred P. Sloan Research Fellowship (2015); and being named a Forbes 30 Under 30 All-Star Alumnus (2017). He received the 2020 ACM Grace Murray Hopper Award for contributions to the use of wireless signals in creating novel applications, including battery-free communications, health monitoring, gesture recognition, and bio-based wireless sensing.