Deep learning is about to become less energy-intensive

2024-05-06 09:30:00

EPFL researchers have developed an algorithm to train analog neural networks with the same precision as digital ones. This development could reduce energy consumption while improving the networks' performance.

Deep learning models are based on multi-layer neural networks, loosely modeled on the human brain. These models are computationally expensive, however, particularly during the training phase, which can last several days.

According to several studies, a number of factors influence this energy cost, notably data size, network architecture, task type, and parameter-optimization choices.

Researchers from EPFL (École Polytechnique Fédérale de Lausanne) appear to have found a way to reduce this impact. They developed an algorithm that efficiently trains analog neural networks, providing an energy-efficient alternative to traditional digital ones.

Scalable algorithms

Their method, which aligns more closely with how humans learn (a goal also pursued with neuromorphic chips), has shown promising results. It has been successfully tested on three physical systems that use waves (sound, light, and microwaves) to transport information.

Without going into too much technical detail, training such a network involves two steps. The first is a forward pass, where data is sent through the network and an error function is calculated from the output. The second is a backward pass (also known as backpropagation), where the gradient of the error function with respect to all network parameters is calculated.
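
To make these two steps concrete, here is a minimal NumPy sketch of a standard digital training step (a generic illustration, not the EPFL code): a forward pass that caches each layer's activations, followed by a backward pass that propagates the error gradient back through them.

```python
import numpy as np

def forward(params, x):
    """Forward pass: push the input through each tanh layer,
    caching activations for the backward pass."""
    activations = [x]
    for W, b in params:
        x = np.tanh(W @ x + b)
        activations.append(x)
    return activations

def backward(params, activations, target):
    """Backward pass (backpropagation): gradient of a squared-error
    loss with respect to every weight and bias, computed layer by
    layer from the output back towards the input."""
    grads = []
    # Output-layer error for the loss 0.5 * ||y - target||^2
    delta = (activations[-1] - target) * (1 - activations[-1] ** 2)
    for (W, b), a in zip(reversed(params), reversed(activations[:-1])):
        grads.append((np.outer(delta, a), delta))
        delta = (W.T @ delta) * (1 - a ** 2)  # pass the error one layer back
    return list(reversed(grads))

# Tiny 2-layer network: 4 inputs -> 8 hidden -> 2 outputs
rng = np.random.default_rng(0)
params = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
acts = forward(params, rng.normal(size=4))
grads = backward(params, acts, target=np.array([1.0, -1.0]))
```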

But this second step is not ideal for an analog system: it requires a digital twin of the physical hardware and consumes a lot of energy, because the system must be updated over many iterations, based on the forward-pass calculations, to converge on increasingly precise parameter values.

The scientists' idea was to replace this second step with a second pass through the physical system itself, locally updating each layer of the network. Besides reducing power consumption and eliminating the need for a digital twin, this method better reflects how humans learn.
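
The researchers' exact physics-compatible update rule is not reproduced here, but the flavor of such local, backpropagation-free training can be sketched with a rule in the spirit of Hinton's forward-forward algorithm: each layer is adjusted using only its own inputs and outputs from two forward passes, one on real ("positive") data and one on corrupted ("negative") data, so no error signal ever travels backwards through a digital twin. The code below is a hypothetical illustration of this idea, not the EPFL method.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_layer_locally(W, x_pos, x_neg, lr=0.03):
    """Hypothetical local rule: raise the layer's 'goodness'
    (squared output activity) on positive data, lower it on
    negative data. Only this layer's own input and output are
    involved, so the update could be driven by measurements
    from a second pass through the physical system."""
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        a = np.tanh(W @ x)
        delta = 2 * a * (1 - a ** 2)        # d(sum a^2)/d(Wx)
        W += sign * lr * np.outer(delta, x)
    return W

# Train each layer in turn on the outputs of the previous one:
# two forward passes per layer, no backward pass anywhere.
layers = [rng.normal(scale=0.5, size=(16, 16)) for _ in range(3)]
x_pos, x_neg = rng.normal(size=16), rng.normal(size=16)
for i, W in enumerate(layers):
    layers[i] = train_layer_locally(W, x_pos, x_neg)
    x_pos = np.tanh(layers[i] @ x_pos)
    x_neg = np.tanh(layers[i] @ x_neg)
```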

EPFL is not the only institution researching energy-efficient neural networks. A team at Oak Ridge National Laboratory (ORNL) has demonstrated that evolutionary algorithms can produce neural networks that not only perform a task well but are also small, fast, and energy-efficient, which could help accelerate cancer research.
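
As a rough illustration of that idea (a toy sketch, not ORNL's actual system; the accuracy function is a made-up placeholder), a genetic algorithm can search over architectures using a fitness that rewards task performance while penalizing network size:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(genome, accuracy_of):
    """Reward task accuracy, penalize total size. 'genome' is a tuple
    of hidden-layer widths; 'accuracy_of' stands in for training and
    evaluating the network that the genome encodes."""
    return accuracy_of(genome) - 1e-4 * sum(genome)

def mutate(genome):
    """Randomly widen or narrow one layer."""
    g = list(genome)
    i = rng.integers(len(g))
    g[i] = max(1, g[i] + int(rng.integers(-8, 9)))
    return tuple(g)

def evolve(accuracy_of, pop_size=20, generations=30):
    """Keep the fittest quarter each generation; refill with mutants."""
    population = [tuple(rng.integers(4, 64, size=3)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda g: fitness(g, accuracy_of),
                        reverse=True)
        parents = ranked[:pop_size // 4]
        population = parents + [mutate(parents[rng.integers(len(parents))])
                                for _ in range(pop_size - len(parents))]
    return max(population, key=lambda g: fitness(g, accuracy_of))

# Toy placeholder: accuracy saturates as total width grows, so the
# size penalty favours the smallest network that is 'good enough'.
best = evolve(lambda g: 1.0 - np.exp(-sum(g) / 60.0))
print(best)
```

In practice the placeholder `accuracy_of` would train and evaluate each candidate network on real data, which is where most of the compute goes; the size penalty is what steers the search toward small, fast networks.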

“Creating software that can understand not only the meaning of words, but also the contextual relationships between them is not a simple task. Humans develop these skills over years of interaction and training. For specific tasks, deep learning can compress this process into a few hours,” announces the laboratory, which has been conducting this research since 2014.

The data challenge

These initiatives highlight ongoing efforts to improve the energy efficiency of neural networks, which could lead to significant advances in the field of artificial intelligence. Such networks could, in turn, be used to manage energy consumption in various sectors.

They could also make it possible to assess a building's current energy performance and to predict the energy-saving potential of renovation strategies.

However, the wider adoption of energy-efficient neural networks faces several challenges. As with artificial intelligence in general, the main one is the availability of relevant data, whose absence can undermine the effectiveness of these networks.

Another challenge is the shortage of skills in AI and machine learning: the field needs professionals who understand not only how to develop and train neural networks, but also how to implement them in an energy-efficient manner.

Finally, there is scaling. For the moment, the EPFL experiments rely on neural networks of no more than 10 layers. Would their process give equally good results with 100 layers and billions of parameters? That is the next step, and it will require overcoming the technical limitations of physical systems.


