October 29, 2025 feature
AI efficiency advances with spintronic memory chip that combines storage and processing
by Ingrid Fadelli, contributing writer; edited by Lisa Lock, reviewed by Robert Egan

To make accurate predictions and reliably complete desired tasks, most artificial intelligence (AI) systems need to rapidly analyze large amounts of data. This currently entails the transfer of data between processing and memory units, which are separate in existing electronic devices.
Over the past few years, many engineers have been trying to develop new hardware, known as compute-in-memory (CIM) systems, that could run AI algorithms more efficiently. CIM systems are electronic components that can both perform computations and store information, typically serving as processors and non-volatile memories at once. Non-volatile essentially means that they retain data even when they are turned off.
Most previously introduced CIM designs rely on analog computing, performing calculations by summing electrical currents. Despite their good energy efficiency, analog computing techniques are significantly less precise than digital methods and often fail to reliably handle large AI models or vast amounts of data.
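To see why, consider a minimal numerical sketch of the two approaches. The noise figures below are made-up, illustrative assumptions, not values from the paper: a digital multiply-accumulate reproduces the exact integer result, while an analog one, which sums currents subject to device variation and readout noise, drifts away from it.

import numpy as np

rng = np.random.default_rng(0)

inputs = rng.integers(0, 16, size=256)   # 4-bit activations
weights = rng.integers(-8, 8, size=256)  # signed 4-bit weights

# Digital MAC: exact integer arithmetic, reproducible bit for bit.
digital = int(np.dot(inputs, weights))

# Analog MAC: each product contributes a small current; assume ~2%
# per-cell conductance variation plus additive readout noise
# (illustrative numbers only).
currents = inputs * weights * rng.normal(1.0, 0.02, size=256)
analog = float(currents.sum() + rng.normal(0.0, 5.0))

print(digital, round(analog, 1))  # the analog result drifts from the exact sum

The gap between the two outputs grows with the size of the model and the amount of data, which is why analog CIM struggles to scale.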
Researchers at Southern University of Science and Technology, Xi'an Jiaotong University and other institutes recently developed a promising new CIM chip that could help to run AI models faster and more energy efficiently.
Their proposed system, outlined in a paper published in Nature Electronics, is based on a spin-transfer torque magnetic random-access memory (STT-MRAM), a spintronic device that stores binary units of information (i.e., 0s and 1s) in the magnetic orientation of one of its layers.
Using spintronics to run AI more efficiently
STT-MRAM devices, like the one employed by this research team, are built around a tiny structure known as a magnetic tunnel junction (MTJ). This structure has three layers: a magnetic layer with a "fixed" orientation, a magnetic layer whose orientation can be switched, and a thin insulating layer that separates the other two.
When the two magnetic layers have parallel magnetizations, electrons can tunnel easily through the device; when they are antiparallel (pointing in opposite directions), the resistance increases and the flow of electrons becomes more difficult. STT-MRAM devices use these two resistance states to store binary data, as the toy model below illustrates.
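Here is a toy model of a single MTJ bitcell. The resistance values and threshold are assumed for illustration, not measurements from the paper; real devices are characterized by the tunnel magnetoresistance (TMR) ratio between the two states.

R_PARALLEL = 5_000       # ohms: low-resistance ("parallel") state, assumed
R_ANTIPARALLEL = 12_500  # ohms: high-resistance ("antiparallel") state, assumed

def read_bit(resistance_ohms: float) -> int:
    """Read a stored bit by comparing resistance to a midpoint threshold."""
    threshold = (R_PARALLEL + R_ANTIPARALLEL) / 2
    return 1 if resistance_ohms > threshold else 0

tmr_ratio = (R_ANTIPARALLEL - R_PARALLEL) / R_PARALLEL
print(read_bit(5_100), read_bit(12_300), f"TMR = {tmr_ratio:.0%}")  # 0 1 TMR = 150%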
"Non-volatile CIM macros (i.e., pre-designed functional modules inside a chip that can both process and store data) can reduce data transfer between processing and memory units, providing fast and energy-efficient artificial intelligence computations," wrote Humiao Li, Zheng Chai and their colleagues in their paper.
"However, the non-volatile CIM architecture typically relies on analog computing, which is limited in terms of accuracy, scalability and robustness. We report a 64-kb non-volatile digital compute-in-memory macro based on 40-nm STT-MRAM technology."
A step toward more scalable AI hardware
The STT-MRAM-based module introduced by the researchers can reliably perform computations and store bits, all within a single device. In initial tests, it performed remarkably well, running two distinct types of neural networks with high speed and accuracy.
"Our macro features in situ multiplication and digitization at the bitcell level, precision-reconfigurable digital addition and accumulation at the macro level and a toggle-rate-aware training scheme at the algorithm level," wrote the authors. "The macro supports lossless matrix–vector multiplications with flexible input and weight precisions (4, 8, 12 and 16 bits), and can achieve a software-equivalent inference accuracy for a residual network at 8-bit precision and physics-informed neural networks at 16-bit precision.
"Our macro has computation latencies of 7.4–29.6 ns and energy efficiencies of 7.02–112.3 tera-operations per second per watt for fully parallel matrix–vector multiplications across precision configurations ranging from 4 to 16 bits."
In the future, the team's newly developed CIM module could support the energy-efficient deployment of AI directly on portable devices, without relying on large data centers. Over the next few years, it could also inspire similar CIM systems based on STT-MRAM or other spintronic devices.
More information: Humiao Li et al, A lossless and fully parallel spintronic compute-in-memory macro for artificial intelligence chips, Nature Electronics (2025). DOI: 10.1038/s41928-025-01479-y