AI Algorithms Deployed On-Chip Reduce Power Consumption Of Deep Learning Applications

Neuromorphic deep learning algorithm deployment may herald a low-power processing solution for artificial intelligence. Courtesy/LANL

LANL News:

  • Brain-inspired algorithm on neuromorphic hardware reduces cost of learning

Recent generations of machine learning, the methodology supporting artificial intelligence, have drawn inspiration from natural neural systems. These algorithmic approaches, which mirror the complex pathways of the human brain, are often paired with specialized integrated circuits to provide fast processing. Now, new research by a team led by Los Alamos National Laboratory researchers has tackled key challenges in running backpropagation algorithms on low-power neuromorphic hardware.

“It has long been argued that most modern machine learning algorithms are not neurophysiologically plausible,” said Los Alamos scientist Andrew Sornborger, who led the research. “In particular, in the neuroscience community, this has been claimed of the workhorse of modern deep learning, the backpropagation algorithm. This research demonstrates that this concern is unjustified.”

In a first-of-its-kind demonstration, the team employed a spiking neural network implementation of the backpropagation algorithm to classify digits and clothing items from two commonly used data sets. Running fully on-chip, without a computer in the loop, the implementation demonstrated a path toward using neuromorphic processors for low-power, modern deep learning applications.
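The release itself contains no code, but a minimal sketch of a leaky integrate-and-fire neuron, the kind of unit spiking neural networks are commonly built from, may help illustrate what "spiking" means here. The neuron model, parameter values, and function names below are illustrative assumptions, not details taken from the team's hardware or paper.

```python
import numpy as np

def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """Illustrative leaky integrate-and-fire neuron (not the paper's model).

    Integrates an input current over time, emitting a spike (1) whenever the
    membrane potential crosses threshold, then resetting.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Leaky integration of the membrane potential.
        v += dt * (-(v - v_reset) / tau + i_t)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant input current drives a regular spike train.
spike_train = lif_neuron(np.full(100, 0.08))
print("Spike count over 100 steps:", spike_train.sum())
```

Information in such a network is carried by the timing and count of these discrete spikes rather than by continuous activations, which is what makes spiking hardware so power-efficient.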

Impressive inference accuracy at low power

The backpropagation algorithm provides a framework for training neural networks by determining how to connect local synaptic weights with the global errors made by the network, but previous approaches had not been able to implement the algorithm directly on neuromorphic hardware. The team achieved neuromorphic backpropagation by using neuroscience-inspired concepts, such as synfire-gated synfire chains, to construct specially designed neural circuit mechanisms for coordinating information.
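To make the link between local weights and global errors concrete, here is a minimal, conventional (non-spiking) sketch of backpropagation on a tiny two-layer network. It is not the team's neuromorphic implementation; the network sizes, learning rate, and loss are arbitrary choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 3))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = rng.normal(size=(1, 4))           # one input example
target = np.array([[0.0, 1.0, 0.0]])  # one-hot target

lr = 0.5
for step in range(100):
    # Forward pass.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Global error at the output layer (squared-error gradient).
    delta_out = (y - target) * y * (1 - y)

    # Backpropagate the global error into a local error for the hidden layer.
    delta_hidden = (delta_out @ W2.T) * h * (1 - h)

    # Local weight updates driven by the propagated error signals.
    W2 -= lr * h.T @ delta_out
    W1 -= lr * x.T @ delta_hidden

print("Final output:", y.round(3))
```

The challenge the team solved is performing this backward routing of error signals with spiking circuits on the chip itself, which is where mechanisms such as synfire-gated synfire chains come in.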

Using MNIST, the Modified National Institute of Standards and Technology database of handwritten digits commonly used to benchmark recognition and prediction capabilities, the team's backpropagation algorithm achieved an average inference accuracy above 96%. This result, along with a benchmark on a similar dataset called Fashion MNIST, was competitive with the inference accuracy of the same algorithm implemented on standard computing hardware. The team's algorithm also processed the datasets at the same speed, but used only 2.5% of the power.

The primary importance of the team's advances is that their neuromorphic backpropagation algorithm is multi-layer and fully on-chip, meaning that all processing occurs "in-memory," without any off-chip aid. This addresses the vast energy usage of artificial intelligence algorithms. Such fully on-chip, multi-layer learning had never been accomplished before; at best, single-layer learning had been implemented, usually with off-chip pre-training.

The “On-Chip Neuromorphic Backpropagation Algorithm” earned the members of the team an R&D 100 Award in 2022 for innovation in science and technology, in large part because of the potential future impact of their methods on the power usage of deep learning hardware.

Paper: “The backpropagation algorithm implemented on spiking neuromorphic hardware.” Nature Communications. DOI: 10.1038/s41467-024-53827-9

Funding: The work at Los Alamos was supported by the National Nuclear Security Administration, the DOE’s ASCR/CRCNS program, and by the Laboratory Directed Research and Development program.

