People of ACM - David Atienza Alonso

September 19, 2023

While a core focus of your research has been embedded systems, you have branched out into a number of areas. Is there a common thread that runs through the varied projects you work on?

Yes, the common thread of my research work (whether it is for low-power embedded devices or high-end computing servers) has always been the concept of co-design. This means that I have always investigated how to get the most out of deployed architectures, methodologies, and optimization techniques by combining optimizations across hardware and software boundaries. Indeed, my projects always explore how the system can be improved by iteratively optimizing hardware and software aspects in a closed-loop fashion.

The paper you co-authored, “3D-ICE: Fast Compact Transient Thermal Modeling for 3D ICs With Inter-Tier Liquid Cooling,” was recognized with a 10-Year Retrospective Most Influential Paper Award. Why is thermal management an important aspect of computer system design? What has been an especially significant innovation in this area over the past 10 years?

Performance requirements keep growing across the computing continuum from wearables to supercomputers, and for several decades computing performance has relied on silicon technology scaling. Technology scaling, known as Moore’s law, has improved performance by increasing the number of transistors and complexity of computer designs in the same surface area while delivering energy efficiency gains proportional to density scaling, thus keeping power at the same or similar levels as previous technology generations.

However, digital circuits and systems are increasingly power-bound, and thermal constraints are becoming much stronger as Moore’s law slows down and fails. In fact, the latest computing systems (both high-performance and battery-operated) require power and thermal management to operate. Power creates heat, and heat affects power consumption, so accurate thermal modeling with configurable granularity and simulation time overhead is a must for evaluating different cooling techniques and for system co-design. Providing a theoretically grounded way to perform transient thermal modeling of 2D and 3D computing systems manufactured with nano-scale technologies, which enables exploring different power and thermal management techniques, is what we proposed in our 3D-ICE paper in 2010 (and the open-source tool released to the computer architecture and engineering community has been greatly appreciated, with more than 2,000 official users registered today on our mailing list for v3.1).
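
To give a flavor of what transient thermal modeling involves, here is a minimal single-node lumped-RC sketch in Python. It is for intuition only: 3D-ICE solves a full 3D grid of thermal cells with inter-tier liquid cooling, and the resistance, capacitance, and power values below are illustrative assumptions.

```python
# Minimal single-node lumped-RC transient thermal sketch (illustration only;
# 3D-ICE itself solves a full 3D grid of thermal cells with inter-tier
# liquid cooling, not this toy model).
# dT/dt = (P(t) - (T - T_amb) / R_th) / C_th

def simulate_temperature(power_trace, r_th=0.5, c_th=2.0, t_amb=45.0, dt=0.01):
    """Forward-Euler integration of a one-node RC thermal model.

    power_trace : chip power samples in watts, one per time step
    r_th        : thermal resistance to ambient (K/W)  -- assumed value
    c_th        : thermal capacitance (J/K)            -- assumed value
    t_amb       : ambient temperature (degrees C)
    dt          : time step (s)
    """
    temperature = t_amb
    trace = []
    for power in power_trace:
        # Heat flows in from dissipated power and out to ambient through R_th.
        d_temp = (power - (temperature - t_amb) / r_th) / c_th
        temperature += d_temp * dt
        trace.append(temperature)
    return trace

if __name__ == "__main__":
    # 10 s at 20 W followed by 10 s at 5 W: a heating and cool-down transient.
    powers = [20.0] * 1000 + [5.0] * 1000
    temps = simulate_temperature(powers)
    print(f"Peak temperature: {max(temps):.1f} C")
```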

Since then, a very significant innovation in this area over the past 10 years has been the inclusion of machine learning (e.g., multi-armed bandits or multi-agent techniques) to develop thermal management schemes that learn from the thermal modeling data collected with 3D-ICE and figure out the best thermal operating point for each type of workload executed on a specific computer design with a certain manufacturing technology.
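
As an illustration of the bandit idea, the following sketch uses an epsilon-greedy multi-armed bandit to pick among voltage/frequency operating points. The operating points, reward definition, and thresholds are hypothetical assumptions, not the specific schemes developed by Atienza’s group.

```python
import random

# Hypothetical sketch: an epsilon-greedy multi-armed bandit that learns which
# voltage/frequency operating point gives the best performance-per-watt while
# respecting a temperature cap. Operating points, reward, and the threshold
# are illustrative assumptions.
OPERATING_POINTS = ["0.6V/0.8GHz", "0.8V/1.5GHz", "1.0V/2.4GHz"]
TEMP_LIMIT = 85.0  # degrees C

def reward(perf, power, temp):
    # Penalize thermal violations, otherwise reward energy efficiency.
    return -1.0 if temp > TEMP_LIMIT else perf / power

class EpsilonGreedyBandit:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        # Exploit: pick the arm with the best mean reward so far.
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, r):
        self.counts[arm] += 1
        self.values[arm] += (r - self.values[arm]) / self.counts[arm]
```

In a real deployment, the reward would come from measured performance counters and on-chip temperature sensors (or from 3D-ICE simulations) after running the workload for an interval at the selected operating point.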

Computing technologies, including AI, are energy intensive. What innovations do you see on the horizon to make computing more sustainable?

There are many different innovations that need to be considered at the same time (at both hardware and software levels) to improve the sustainability of computing, not only during execution but also during the fabrication of computing systems.

First of all, we need to understand that sustainability is not only about the energy consumed to execute a certain AI algorithm but also about the creation of the computing fabric that is used (e.g., GPUs, memories, etc.). Therefore, we need to develop new system integration schemes that reduce fabrication costs, which calls for new computing system designs exploiting the latest heterogeneous 2.5D and 3D IC packaging solutions.

Second, considering the execution of AI or other workloads, new computing innovations are required to minimize communication energy and performance overhead by combining storage (memory) and computing (logic), which is broadly known as the new computing-in-memory concept.

Third, general-purpose computing systems have a large energy and cooling overhead compared to the effective computing efficiency of specialized hardware. Therefore, we need to develop EDA methodologies and more effective high-level synthesis tools to generate accelerator-based architectures from high-level AI/ML/DL descriptions. These newly synthesized architectures can selectively activate or deactivate the different computing blocks according to what the target AI system requires at any moment in time. This holistic approach, with strong synergies across all the abstraction layers of the computing design process, will minimize energy for operation and cooling at the same time to truly target sustainable computing systems.
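
A minimal sketch of the selective-activation idea, assuming a hypothetical accelerator with three blocks and made-up power numbers (this is not a description of any real synthesized design):

```python
# Hypothetical sketch: per-layer power gating of accelerator blocks.
# Block names and power numbers are illustrative assumptions only.
IDLE_POWER_MW = {"conv_engine": 1.0, "attention_engine": 1.5, "dma": 0.3}
ACTIVE_POWER_MW = {"conv_engine": 120.0, "attention_engine": 180.0, "dma": 25.0}

def layer_power(required_blocks):
    """Return the total power when every unneeded block is power-gated."""
    total = 0.0
    for block in ACTIVE_POWER_MW:
        if block in required_blocks:
            total += ACTIVE_POWER_MW[block]
        else:
            total += IDLE_POWER_MW[block]  # gated block: only leakage remains
    return total

if __name__ == "__main__":
    # A convolution layer leaves the attention engine gated, and vice versa.
    print(layer_power({"conv_engine", "dma"}))       # conv layer
    print(layer_power({"attention_engine", "dma"}))  # attention layer
```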

As part of the ACM Distinguished Speakers series, you give a talk titled “Biologically-Inspired IoT Systems for Federated Learning-Based Healthcare.” Will you tell us a little about how this new design approach can improve the next generation of edge AI computing systems?

The fact is that quite a few of the innovations I mentioned in the previous question to make computing systems more energy efficient have already been implemented, in different ways, in biological systems after many millions of years of evolution. So, this talk is about combining all these new concepts, taking inspiration from how biological computing systems operate, to effectively co-design the next generation of edge AI computing systems.

The critical element of this new design approach is the combination of multiple types of ultra-low-power (but imprecise) specialized computing platforms that execute multiple types of ensembles of neural networks to improve the robustness of the final outputs at the system level while minimizing memory and computation resources.
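
The robustness gain from ensembling imprecise models can be illustrated with a toy Python sketch, where noisy classifiers stand in for quantized neural networks running on different ultra-low-power platforms; the error rates and voting scheme are assumptions for illustration only.

```python
import random

# Illustrative-only sketch: combining several small, imprecise classifiers by
# majority vote yields a more robust system-level decision. The "classifiers"
# are stand-ins for quantized neural networks on different platforms; the
# error rates are assumed values.

def noisy_classifier(true_label, error_rate, n_classes=4):
    """A stand-in for one imprecise edge model: wrong with some probability."""
    if random.random() < error_rate:
        return random.choice([c for c in range(n_classes) if c != true_label])
    return true_label

def ensemble_predict(true_label, error_rates):
    votes = [noisy_classifier(true_label, e) for e in error_rates]
    return max(set(votes), key=votes.count)  # majority vote

if __name__ == "__main__":
    random.seed(0)
    trials = 10_000
    single = sum(noisy_classifier(0, 0.2) == 0 for _ in range(trials)) / trials
    ens = sum(ensemble_predict(0, [0.2] * 5) == 0 for _ in range(trials)) / trials
    print(f"single model accuracy ~{single:.2f}, 5-model ensemble ~{ens:.2f}")
```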

Moreover, these specialized computing platforms can help each other with a computing task by exploiting federated learning and sharing their network model coefficients, which preserves the privacy of the computation performed on each individual platform and does not require sending data between edge AI devices. This is very similar to how the human nervous system detects a certain event that creates different effects in the body, and it is much more energy efficient than transferring all the data and computing in a single edge AI device.
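
A minimal sketch of this coefficient-sharing step, in the spirit of federated averaging (FedAvg): each device contributes only its locally trained weights, never raw sensor data. The weight vectors, sample counts, and plain-Python representation are illustrative assumptions; a real system would use an ML framework and secure aggregation.

```python
# Minimal federated-averaging sketch: the aggregator combines per-device model
# coefficients, weighted by how much local data each device trained on.
# No raw data ever leaves a device.

def federated_average(local_weights, num_samples):
    """Weighted average of per-device coefficient vectors.

    local_weights : list of weight vectors, one per edge device
    num_samples   : number of local training samples behind each vector
    """
    total = sum(num_samples)
    n_params = len(local_weights[0])
    global_weights = [0.0] * n_params
    for weights, n in zip(local_weights, num_samples):
        for i, w in enumerate(weights):
            global_weights[i] += w * (n / total)
    return global_weights

if __name__ == "__main__":
    # Three wearable nodes trained locally on different amounts of data.
    device_weights = [[0.2, -0.5, 1.0], [0.3, -0.4, 0.9], [0.1, -0.6, 1.2]]
    device_samples = [500, 1500, 1000]
    print(federated_average(device_weights, device_samples))
```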

David Atienza Alonso is a Professor of Electrical and Computer Engineering, Head of the Embedded Systems Laboratory (ESL), and Scientific Director of the EcoCloud Sustainable Computing Center at the Swiss Federal Institute of Technology in Lausanne (EPFL). His research interests include system-level design methodologies for high-performance multi-processor system-on-chip (MPSoC) and low-power Internet-of-Things (IoT) systems, as well as ultra-low power edge AI architectures for wireless body sensor nodes and smart embedded systems.

Atienza has co-authored more than 400 papers and two books, and holds 14 licensed patents. He has also received several recognitions and awards, among them the ACM/IEEE International Conference on Computer-Aided Design (ICCAD) 10-Year Retrospective Most Influential Paper Award in 2020 for his work on thermal modeling of 2D/3D MPSoC designs. Atienza was named an ACM Fellow for contributions to the design of high-performance integrated systems and ultra-low power edge circuits and architectures, and has been an IEEE Fellow since 2016.