People of ACM - Erol Gelenbe

January 30, 2018

How did your invention of the G-Network model overcome long-standing challenges in computer system and network evaluation and analysis?

Queueing networks with a product-form solution are the standard theoretical tool used to analyze and predict computer system and data network performance. Developed from the 1960s to the early 1980s, these mathematical models have a separable solution, so that their predictions can be computed even for the very large systems that are part of today’s internet. However, these models did not include the state-based control schemes that are used to dynamically improve system performance.

G-Networks fill this gap by incorporating control schemes, such as dynamic workload reassignment, traffic rerouting, and admission control, which are required when systems are congested or unequally loaded. I was also able to derive the separable product-form solution for such models, which allows for the modeling and analysis of arbitrarily large systems. The Random Neural Network (RNN) is actually a special case that is useful for fast machine learning, with deep, reinforcement, and gradient descent learning algorithms.
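For readers who want the formal statement, the following is the standard form of the G-Network traffic equations (the notation here is mine, not from the interview). Each queue $i$ serves “positive” customers, which add work, while “negative” customers remove it; writing $\Lambda_i$ and $\lambda_i$ for the exogenous positive and negative arrival rates, $r_i$ for the service rate, and $p^+_{ji}$, $p^-_{ji}$ for the routing probabilities, the steady-state utilizations $q_i$ solve

\[
q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i}, \qquad
\lambda^+_i = \Lambda_i + \sum_j q_j r_j p^+_{ji}, \qquad
\lambda^-_i = \lambda_i + \sum_j q_j r_j p^-_{ji},
\]

and, whenever every $q_i < 1$, the stationary distribution separates into a product over the individual queues:

\[
P(k_1, \dots, k_n) = \prod_{i=1}^{n} (1 - q_i)\, q_i^{k_i}.
\]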

The G-Network’s product-form solution also allows one to calculate, in advance, the decisions that need to be taken in order to optimize the performance of the system being considered, as the sketch below illustrates. New problems, such as simultaneously optimizing quality of service and energy consumption in distributed computer systems and networks, can be solved in this manner. G-Networks are also useful for representing intermittent renewable energy systems as “Energy Packet Networks.”
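As a concrete illustration of such advance calculations, here is a minimal Python sketch that solves the traffic equations above by fixed-point iteration and compares the congestion produced by two candidate routing choices. All function names and parameter values are mine, chosen purely for illustration:

    import numpy as np

    # Fixed-point iteration for the G-Network traffic equations:
    #   q_i = lambda_plus_i / (r_i + lambda_minus_i)
    def g_network_q(Lambda, lam, r, P_plus, P_minus, iters=1000, tol=1e-12):
        q = np.zeros_like(r)
        for _ in range(iters):
            lam_plus = Lambda + (q * r) @ P_plus   # positive customers routed internally
            lam_minus = lam + (q * r) @ P_minus    # negative customers routed internally
            q_new = lam_plus / (r + lam_minus)
            if np.max(np.abs(q_new - q)) < tol:
                break
            q = q_new
        return q_new

    # Two queues with unit service rates; compare routing 80% vs. 50% of
    # queue 1's output onward to queue 2.
    r = np.array([1.0, 1.0])
    Lambda = np.array([0.5, 0.1])   # exogenous positive (workload) arrivals
    lam = np.array([0.0, 0.05])     # exogenous negative (control) arrivals
    for split in (0.8, 0.5):
        P_plus = np.array([[0.0, split], [0.0, 0.0]])
        q = g_network_q(Lambda, lam, r, P_plus, np.zeros((2, 2)))
        print(split, q)  # prefer the routing that yields the lower utilizations

Because the solution is in product form, the utilizations q are all one needs in order to rank such alternatives; no simulation of the full state space is required.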

What do you see as the most pressing current challenges in the field of analysis and evaluation of computer networks?

System and network security, including for the Internet of Things, is a novel and key problem for computer system analysts. Ensuring a system’s cybersecurity is very similar to optimizing its performance: it combines measurement, modeling and system control. I discussed this point recently in my keynote lecture at the annual conference of the European academies of engineering (Euro-CASE 2017).

Another very important issue concerns energy consumption. Electricity consumption is one of the most important cost items in operating and manufacturing analog or digital systems. Computers and the internet are currently estimated to consume as much electrical energy as the whole of Germany and Japan together, and it is thought that ICT has a greater CO2 impact than air travel. Thus we need to develop performance analysis and optimization methods that address energy consumption as well as quality of service.

It has recently been estimated that the Bitcoin network uses more electricity in a given year than each of some 150 countries, and its consumption is growing rapidly every day. What areas of research hold promise in reducing the amount of energy required for computing?

So far, our “normal” currencies and funds transfers have used very little energy. A transfer of a large amount of money between any two banks requires very little data, and is scalable in the sense that transferring $100 or $100,000 essentially uses the same amount of data.

On the other hand, cryptocurrencies, because of the anonymous nature of the holdings, require a detailed history regarding the creation (the “mining” of bitcoins) and usage of each specific currency unit. I am not sure that we currently have a rigorous analysis of the amount of energy that is used by cryptocurrencies, but it is reported that Bitcoin operations now use as much electrical energy each year as a small country such as Denmark. If the usage of such currencies becomes common, the amount of energy used with current technologies may become colossal. Thus a key technical issue for cryptocurrency markets is the ability to process their transactions in a scalable and energy-efficient manner. This is an open problem that is starting to be addressed by researchers in the computer science and network performance evaluation communities.

One of your recent research interests has been deep learning in neural networks. What are some exciting avenues of exploration in this area that you have been working on?

Although deep learning has produced numerous success stories in various applications, we do not yet understand well how these successes are actually achieved. Many deep learning algorithms combine multiple computational stages that alternate between randomization, to “cast a wide net,” and optimization, to “focus down to the important characteristics or parameters of the network that is being used.” Of course, each of these steps is based on the data that is being used for learning. These steps are also computationally intensive, and have only become practical thanks to the powerful processors that are now widely available.
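A toy Python sketch of this two-step pattern may help. It is only an illustration of the randomize-then-optimize idea (a random hidden layer whose readout is then fitted by least squares), not a description of any specific system discussed in the interview:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))      # 200 training samples, 10 features
    y = np.sin(X.sum(axis=1))           # an arbitrary target to learn

    # Randomization step: a fixed, randomly connected hidden layer "casts a wide net".
    W = rng.normal(size=(10, 500))
    H = np.tanh(X @ W)

    # Optimization step: focus down by fitting only the readout weights,
    # here via regularized least squares on the training data.
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(500), H.T @ y)
    print(np.mean((H @ beta - y) ** 2))  # training error after the optimization step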

In order to gain better insight into these computational steps, and also to reduce the amount of computer time needed for deep learning, I have been investigating an approach that exploits the mathematical properties of the random neural network (RNN). The RNN is a recurrent network model that I developed in order to better mimic the “spiking,” impulse-like, and probabilistic behavior of natural neuronal systems. I proved that the RNN has a convenient and simple mathematical solution, together with a gradient learning algorithm that runs in polynomial time and space. Using this mathematical structure, we have derived the equations that describe a very large RNN with randomized connections, mimicking the randomization step of deep learning algorithms without having to carry out numerous Monte Carlo simulations, and hence saving computer time. We also exploit the RNN’s mathematical structure for the optimization step of deep learning. The combination of these two elements seems to provide excellent learning abilities at low computational cost on several example problems that we have addressed recently.
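To make the RNN’s mathematical solution concrete, here is a hedged Python sketch of its forward pass, in my own notation: each neuron’s excitation probability q_i satisfies q_i = excitation_i / (r_i + inhibition_i), where excitatory and inhibitory spikes travel over nonnegative weight matrices. The weights and inputs below are made up for illustration:

    import numpy as np

    # Fixed point of q_i = (Lambda_i + sum_j q_j W_plus[j, i]) /
    #                      (r_i + lam_i + sum_j q_j W_minus[j, i]),
    # where r_i, neuron i's firing rate, is the sum of its outgoing weights.
    def rnn_forward(Lambda, lam, W_plus, W_minus, iters=200):
        r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
        q = np.zeros(len(r))
        for _ in range(iters):
            q = (Lambda + q @ W_plus) / (r + lam + q @ W_minus)
        return np.clip(q, 0.0, 1.0)  # q_i is a probability of excitation

    # Three neurons with made-up excitatory and inhibitory weights.
    W_plus = np.array([[0.0, 0.3, 0.2],
                       [0.0, 0.0, 0.4],
                       [0.1, 0.0, 0.0]])
    W_minus = np.array([[0.0, 0.1, 0.0],
                        [0.0, 0.0, 0.1],
                        [0.0, 0.2, 0.0]])
    print(rnn_forward(np.array([0.4, 0.0, 0.0]), np.zeros(3), W_plus, W_minus))

The gradient learning algorithm mentioned above differentiates this fixed point with respect to the weights, which reduces to solving linear systems; this is what keeps learning within polynomial time and space.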

I am also interested in using such techniques to build intelligent search assistants that can help users retrieve data tuned to their own needs, rather than data that responds primarily to the commercial interests of the search engines themselves.

Sami Erol Gelenbe is the Dennis Gabor Professor of Electrical and Electronic Engineering at Imperial College London. Gelenbe invented trailblazing mathematical models, including the G-Network (Gelenbe Network) and the Random Neural Network (RNN), that allow for the performance evaluation and analysis of computer systems and networks. His fundamental contributions in these areas have also been instrumental in allowing networks to operate seamlessly without overloading. Along with colleagues, he is also credited with inventing an early computer architecture that allowed voice and images to travel over multi-hop, multi-path computer and communications networks. Gelenbe’s most recent research interests include software-defined networks (SDNs), energy savings in information and communications technology (ICT), network security, and reinforcement and deep learning in neural networks.

Gelenbe is a Fellow of ACM, IEEE, and the Institution of Engineering and Technology (IET) in the UK. He has also been elected a Fellow of the National Academy of Technologies of France and of the science academies of Belgium, Hungary, Poland and Turkey.

In 2017, Gelenbe received the Mustafa Prize, a $500,000 biennial science and technology award, bestowed by Iran’s Mustafa Foundation, that aims to rival the Nobel prizes. He is also a recipient of the 2008 ACM SIGMETRICS Achievement Award, given annually to an individual who has made long-lasting, influential contributions to the analysis and evaluation of computer and communication system performance, as well as several other awards, including the 1996 Grand Prix France Télécom of the French Academy of Sciences.