Probing AI ‘thoughts’ reveals models use tree-like math to track shifting information

July 21, 2025

Lisa Lock, scientific editor; Andrew Zinin, lead editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

The unique, mathematical shortcuts language models use to predict dynamic scenarios
Credit: arXiv (2025). DOI: 10.48550/arxiv.2503.02854

Let's say you're reading a story, or playing a game of chess. You may not have noticed, but each step of the way, your mind kept track of how the situation (or "state of the world") was changing. You can imagine this as a sort of sequence of events list, which we use to update our prediction of what will happen next.

Language models like ChatGPT also track changes inside their own "mind" when finishing off a block of code or anticipating what you'll write next. They typically make educated guesses using transformers—internal architectures that help the models understand sequential data—but the systems are sometimes incorrect because of flawed thinking patterns.

Identifying and tweaking these underlying mechanisms could help make language models more reliable predictors, especially for dynamic tasks like forecasting weather and financial markets.

But do these AI systems process developing situations like we do? A new paper posted to the arXiv preprint server from researchers in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science shows that the models instead use clever mathematical shortcuts between each progressive step in a sequence, eventually making reasonable predictions.

The team made this observation by going under the hood of language models, evaluating how closely they could keep track of objects that change position rapidly. Their findings show that engineers can control when language models use particular workarounds as a way to improve the systems' predictive capabilities.

Shell games

The researchers analyzed the inner workings of these models using a clever experiment reminiscent of a classic concentration game. Ever had to guess the final location of an object after it's placed under a cup and shuffled with identical containers? The team used a similar test, where the model guessed the final arrangement of particular digits (also called a permutation). The models were given a starting sequence, such as "42135," and instructions about when and where to move each digit, like moving the "4" to the third position and onward, without knowing the final result.
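
To make the task concrete, here is a minimal sketch in Python (not the authors' code) of the kind of state tracking described above: start from a digit string and apply a list of move instructions one step at a time. The (from_position, to_position) move format is a hypothetical choice for illustration.

def apply_moves(start: str, moves: list[tuple[int, int]]) -> str:
    """Apply (from_pos, to_pos) moves to a digit string, one step at a time."""
    state = list(start)
    for src, dst in moves:
        digit = state.pop(src)    # remove the digit from its current position
        state.insert(dst, digit)  # re-insert it at the target position
        # a system tracking state step-by-step would update its "world model" here
    return "".join(state)

# Example: move the digit at position 0 (the "4") to the third position, then continue.
print(apply_moves("42135", [(0, 2), (4, 1)]))  # prints "25143"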

In these experiments, transformer-based models gradually learned to predict the correct final arrangements. Instead of shuffling the digits based on the instructions they were given, though, the systems aggregated information between successive states (or individual steps within the sequence) and calculated the final permutation.

One go-to pattern the team observed, called the "Associative Algorithm," essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the "root." As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together.
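
The grouping described above amounts to an associative scan: because composing permutations is associative, adjacent steps can be merged pairwise up a tree and still give the same final arrangement as stepping through them one by one. The Python sketch below illustrates that idea as described in the article; it is a schematic, not the model's internal computation.

from functools import reduce

def compose(p, q):
    """Compose two permutations of positions: apply p first, then q."""
    return tuple(q[i] for i in p)

def tree_reduce(perms):
    """Merge adjacent permutations pairwise, level by level, like branches of a tree."""
    level = list(perms)
    while len(level) > 1:
        nxt = [compose(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # carry an unpaired leftover up to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Three per-step permutations of five positions (hypothetical values).
steps = [(1, 0, 2, 3, 4), (0, 2, 1, 3, 4), (0, 1, 2, 4, 3)]
print(tree_reduce(steps) == reduce(compose, steps))  # True: grouping does not change the answer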

The other way language models guessed the final permutation was through a crafty mechanism called the "parity-associative algorithm," which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. Then, the mechanism groups adjacent sequences from different steps before multiplying them, just like the Associative Algorithm.
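
As a rough illustration of the first stage, the parity of each step's permutation (whether it amounts to an even or odd number of swaps) can be combined by addition mod 2 to decide whether the final arrangement is even or odd; the grouped, tree-like composition from the previous sketch then resolves which permutation of that parity it is. This follows the description above, not the model's exact internal mechanism.

def parity(perm):
    """Return 0 for an even permutation, 1 for an odd one, by counting inversions."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return inversions % 2

steps = [(1, 0, 2, 3, 4), (0, 2, 1, 3, 4), (0, 1, 2, 4, 3)]
total_parity = sum(parity(p) for p in steps) % 2  # parities add mod 2 under composition
print("odd" if total_parity else "even")          # narrows the candidate final arrangements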

"These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies," says MIT Ph.D. student and CSAIL affiliate Belinda Li SM '23, a lead author on the paper.

"How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes."

"One avenue of research has been to expand test-time computing along the depth dimension, rather than the token dimension—by increasing the number of transformer layers rather than the number of chain-of-thought tokens during test-time reasoning," adds Li. "Our work suggests that this approach would allow transformers to build deeper reasoning trees."

Through the looking glass

Li and her co-authors observed how the Associative and Parity-Associative algorithms worked using tools that allowed them to peer inside the "mind" of language models.

They first used a method called "probing," which shows what information flows through an AI system. Imagine you could look into a model's brain to see its thoughts at a specific moment—in a similar way, the technique maps out the system's mid-experiment predictions about the final arrangement of digits.
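
In practice, a probe is usually a small classifier trained to read a target property out of intermediate activations. The sketch below uses hypothetical stand-in data and shapes to show the general recipe, not the probes used in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 256))  # stand-in for a layer's activations
labels = rng.integers(0, 5, size=1000)        # stand-in for, e.g., "digit currently at position 3"

probe = LogisticRegression(max_iter=2000).fit(hidden_states[:800], labels[:800])
print("probe accuracy:", probe.score(hidden_states[800:], labels[800:]))
# High accuracy at some layer suggests the property is linearly decodable there;
# chance-level accuracy suggests the model has not (yet) represented it.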

A tool called "activation patching" was then used to show where the language model processes changes to a situation. It involves meddling with some of the system's "ideas," injecting incorrect information into certain parts of the network while keeping other parts constant, and seeing how the system will adjust its predictions.
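
One common way to implement activation patching is with forward hooks: cache a layer's activation from one run, then overwrite that layer's output in another run and measure how the prediction shifts. The PyTorch sketch below uses a toy stand-in model to show the mechanics; it is not the paper's setup.

import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 5))
clean_input, corrupted_input = torch.randn(1, 8), torch.randn(1, 8)

cache = {}
def save_hook(module, inputs, output):
    cache["act"] = output.detach()        # remember this layer's activation

def patch_hook(module, inputs, output):
    return cache["act"]                   # inject the cached ("incorrect") activation

layer = model[1]                          # the hidden activation to patch

handle = layer.register_forward_hook(save_hook)
_ = model(corrupted_input)                # run once just to cache the corrupted activation
handle.remove()

handle = layer.register_forward_hook(patch_hook)
patched_logits = model(clean_input)       # clean run with one activation swapped out
handle.remove()

clean_logits = model(clean_input)         # unpatched baseline
print("output shift from patching:", (patched_logits - clean_logits).abs().sum().item())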

These tools revealed when the algorithms would make errors and when the systems "figured out" how to correctly guess the final permutations. They observed that the Associative Algorithm learned faster than the Parity-Associative Algorithm, while also performing better on longer sequences. Li attributes the latter's difficulties with more elaborate instructions to an over-reliance on heuristics (or rules that allow us to compute a reasonable solution fast) to predict permutations.

"We've found that when language models use a heuristic early on in training, they'll start to build these tricks into their mechanisms," says Li. "However, those models tend to generalize worse than ones that don't rely on heuristics. We found that certain pre-training objectives can deter or encourage these patterns, so in the future, we may look to design techniques that discourage models from picking up bad habits."

The researchers note that their experiments were done on small-scale language models fine-tuned on synthetic data, but found that model size had little effect on the results. This suggests that fine-tuning larger language models, like GPT-4.1, would likely yield similar results. The team plans to examine their hypotheses more closely by testing language models of different sizes that haven't been fine-tuned, evaluating their performance on dynamic real-world tasks such as tracking code and following how stories evolve.

Harvard University postdoc Keyon Vafa, who was not involved in the paper, says that the researchers' findings could create opportunities to advance language models. "Many uses of large language models rely on tracking state: anything from providing recipes to writing code to keeping track of details in a conversation," he says.

"This paper makes significant progress in understanding how language models perform these tasks. This progress provides us with interesting insights into what language models are doing and offers promising new strategies for improving them."

More information: Belinda Z. Li et al, (How) Do Language Models Track State?, arXiv (2025). DOI: 10.48550/arxiv.2503.02854

Journal information: arXiv

Provided by Massachusetts Institute of Technology

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation: Probing AI 'thoughts' reveals models use tree-like math to track shifting information (2025, July 21), retrieved 21 July 2025 from https://techxplore.com/news/2025-07-probing-ai-thoughts-reveals-tree.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
