Less is more: Efficient pruning for reducing AI memory and computational cost

June 12, 2025

Single filter performance. Credit: Physical Review E (2025). DOI: 10.1103/49t8-mh9k

Deep learning and AI systems have made great headway in recent years, especially in automating complex computational tasks such as image recognition, computer vision and natural language processing. Yet these systems consist of billions of parameters, requiring large amounts of memory and costly computation.

This reality raises the question: Can we optimize, or more precisely, prune, the parameters in these systems without compromising their capabilities? According to a study just published in Physical Review E by researchers from Bar-Ilan University, the answer is a resounding yes.

In the article, the researchers show how a better understanding of the mechanism underlying successful deep learning leads to efficient pruning of unnecessary parameters in a deep architecture without affecting its performance.

Researchers from Bar-Ilan University have developed a groundbreaking method to drastically reduce the size and energy consumption of deep learning systems—without compromising performance. Published in Physical Review E, their study reveals that by better understanding how deep networks learn, it's possible to prune up to 90% of parameters in certain layers while maintaining accuracy. This advancement, led by Prof. Ido Kanter and Ph.D. student Yarden Tzach, could make AI more efficient, sustainable, and scalable for real-world applications. Credit: Prof. Ido Kanter, Bar-Ilan University

"It all hinges on an initial understanding of what happens in deep networks, how they learn and what parameters are essential to its learning," said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.

"It's the ever-present reality of scientific research. The more we know, the better we understand, and in turn, the better and more efficient the technology we can create."

"There are many methods that attempt to improve memory and data usage," said Ph.D. student Yarden Tzach, a key contributor to this research.

"They were able to improve memory usage and computational complexity, but our method was able to prune up to 90% of the parameters of certain layers, without hindering the system's accuracy at all."

These results can lead to better use of AI systems, in terms of both memory and energy consumption. As AI becomes more and more prevalent in our day-to-day lives, reducing its energy cost will be of the utmost importance.

More information: Yarden Tzach et al, Advanced deep architecture pruning using single-filter performance, Physical Review E (2025). DOI: 10.1103/49t8-mh9k. On arXiv: DOI: 10.48550/arxiv.2501.12880

Journal information: Physical Review E, arXiv

Provided by Bar-Ilan University

Citation: Less is more: Efficient pruning for reducing AI memory and computational cost (2025, June 12) retrieved 12 June 2025 from https://techxplore.com/news/2025-06-efficient-pruning-ai-memory.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

Explore further

Towards a universal mechanism for successful deep learning
