February 12, 2025 feature
Dual-domain architecture shows nearly 40 times higher energy efficiency for running neural networks

Many conventional computer architectures are ill-equipped to meet the computational demands of machine learning-based models. Recently, some engineers have thus been trying to design alternative architectures that could be better suited for running these models.
Among the most promising solutions for running artificial neural networks (ANNs) are so-called compute-in-memory (CIM) systems. These systems are designed to process and store data in a single hardware device, which can reduce power consumption and boost the performance of ANN-based models.
CIM systems fall into two broad categories: digital CIM (DCIM) and analog CIM (ACIM) systems. In a paper published in Nature Electronics, researchers at Tsinghua University introduce a new dual-domain ACIM system that could run ANNs more efficiently, while also boosting their performance on general inference tasks.
"Research on ACIM for neural network computations has gained significant attention, but progress has largely been in classification tasks (e.g., image classification)," Ze Wang, first author of the paper, told Tech Xplore.
"So far, ACIM still faces significant challenges in handling complex regression tasks (e.g., YOLO for object detection), due to two key limitations. The first is high computational noise, and the second consists of floating-point (FP) data compatibility issues, which make it unsuitable for regression tasks that typically require high-precision FP computation."
To overcome the challenges typically associated with ACIM systems, Wang and his colleagues designed a new hybrid architecture that combines high-precision, FP-compatible digital computing with good energy efficiency. Their proposed architecture was found to be promising for running neural networks, also allowing these networks to complete general inference tasks.
"ACIM excels at performing highly parallel and energy-efficient matrix multiplications, a fundamental operation widely used in neural networks, offering significant advantages over digital computing," explained Wang.
"ACIM relies on Kirchhoff's current law and Ohm's law to perform voltage-based multiplication and current-based summation directly in the analog domain. However, fabrication variations introduce computational noise, affecting accuracy in neural network computing."
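The physics Wang describes can be sketched numerically: weights map to conductances, inputs to voltages, each cell sources a current I = G·V (Ohm's law), and the currents on a shared column wire sum (Kirchhoff's current law), computing a matrix-vector product in one step. The array sizes, conductance range, and 5% device-variation figure below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 memristor crossbar: weights stored as conductances G
# (siemens), inputs applied as row voltages V (volts).
G = rng.uniform(1e-6, 1e-4, size=(4, 3))
V = rng.uniform(0.0, 0.2, size=4)

# Ohm's law gives per-cell currents I = G * V; Kirchhoff's current law
# sums them along each column wire -> one matrix-vector product per step.
I_ideal = V @ G

# Fabrication variation modeled as multiplicative conductance noise
# (assumed 5% standard deviation, purely for illustration).
noise = rng.normal(1.0, 0.05, size=G.shape)
I_noisy = V @ (G * noise)

print("ideal column currents:", I_ideal)
print("relative error from device noise:", np.abs(I_noisy - I_ideal) / I_ideal)
```

The entire multiply-accumulate happens in the physics of the wires, which is the source of ACIM's parallelism and efficiency; the relative-error line shows how device variation corrupts the result, which is the noise problem the quote refers to.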
In addition to being prone to noise, which impairs the accuracy of ANNs, ACIM systems have so far been applied primarily to tasks that entail linear multiplication and addition. By contrast, they have so far proven incompatible with FP computations, more complex mathematical operations carried out on so-called FP numbers (i.e., computational representations of real numbers).
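The FP incompatibility can be made concrete: a floating-point number is a mantissa scaled by a power-of-two exponent, so an FP multiply-accumulate requires exponent alignment that a plain fixed-point analog array cannot perform. The decomposition below is a generic illustration of that obstacle, not the paper's specific dual-domain scheme.

```python
import numpy as np

# A float is mantissa * 2**exponent; np.frexp exposes the split.
x = np.array([3.75, -0.15625, 40.0])
mantissa, exponent = np.frexp(x)

# The mantissa part is bounded (|m| in [0.5, 1)) and maps naturally onto
# a fixed-point analog multiply-accumulate...
assert np.allclose(mantissa * 2.0 ** exponent, x)

# ...but products with different exponents cannot simply be summed on a
# column wire: they first need a digital shift-and-add to align exponents,
# which is work the analog crossbar alone cannot do.
print(mantissa, exponent)
```

This is the gap a hybrid analog-digital design targets: keep the bulk multiplications in the energy-efficient analog domain while handling the exponent bookkeeping digitally.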
"For the first time, we demonstrate a fully hardware-implemented multi-target and multi-class object detection task, YOLO (You Only Look Once), on a real ACIM system," wrote Wang. "This represents a significant milestone, extending ACIM's capabilities beyond merely supporting classification tasks to full support of general neural network inference with FP dataflow."
Wang and his colleagues evaluated their architecture in a series of tests and found that it exhibited remarkable energy efficiency, 39.2 times higher than that achieved by common FP32 multipliers running ANN-based models. The researchers also created a prototype memristor-based computing system based on their design, which was found to reach high precision (on average 2.7 times higher than that of pure ACIM systems).
This recent study could inspire the development of other hybrid ACIM architectures that are better suited for running machine learning-based models on complex computing tasks. Meanwhile, Wang and his colleagues plan to continue building on their architecture to further improve its precision and energy efficiency.
"There is still ample room for further exploration and improvement of our current work," added Wang.
"Our future research will focus on the co-design and optimization of architecture, algorithms, and hardware to enhance the energy efficiency and accuracy of hybrid analog-digital computing systems. Ultimately, we aim to support a broader range of neural network computations and unlock new application scenarios."
More information: Ze Wang et al, A dual-domain compute-in-memory system for general neural network inference, Nature Electronics (2025). DOI: 10.1038/s41928-024-01315-9.
Journal information: Nature Electronics
© 2025 Science X Community
Citation: Dual-domain architecture shows nearly 40 times higher energy efficiency for running neural networks (2025, February 12) retrieved 12 February 2025 from https://techxplore.com/news/2025-02-dual-domain-architecture-higher-energy.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
