Test-time training could lead to LLMs that are better at complex reasoning

July 8, 2025

Figure: Example of ARC and BBH tasks that the model successfully solves only after applying test-time training. Credit: arXiv (2024). DOI: 10.48550/arXiv.2411.07279

For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.

While an accounting firm's LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.

To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model's performance on unfamiliar, difficult problems.

They show that test-time training, a method that involves temporarily updating some of a model's inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.

Their work could improve a model's flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.

"Genuine learning—what we did here with test-time training—is something these models can't do on their own after they are shipped. They can't gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen," says Ekin Akyürek Ph.D. '25, lead author of the study.

Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL.

The research will be presented at the International Conference on Machine Learning (ICML 2025), held in Vancouver July 13–19. The paper is available now on the arXiv preprint server.

Tackling hard domains

LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model's outputs.
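
To make the contrast with test-time training concrete, here is a minimal sketch of in-context learning, assuming a simple text-completion interface; the toy sequence task and the helper name build_icl_prompt are illustrative, not taken from the study.

```python
# Minimal sketch of in-context learning: solved examples of the new task are
# placed directly in the text prompt and the model's weights never change.
# The toy task and helper name are illustrative, not from the paper.

def build_icl_prompt(examples, query):
    """Concatenate solved examples with the new problem as plain text."""
    parts = []
    for problem, solution in examples:
        parts.append(f"Problem: {problem}\nSolution: {solution}")
    parts.append(f"Problem: {query}\nSolution:")
    return "\n\n".join(parts)

examples = [
    ("2, 4, 6, 8, ?", "10"),
    ("3, 6, 9, 12, ?", "15"),
]
prompt = build_icl_prompt(examples, "5, 10, 15, 20, ?")
# The prompt is sent to an off-the-shelf LLM; no parameters are updated.
```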

But in-context learning doesn't always work for problems that require logic and reasoning.

The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters—the internal variables it uses to make predictions—using a small amount of new data specific to the task at hand.
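
Test-time training, by contrast, briefly fine-tunes the model's weights on the task examples before answering. The sketch below assumes a Hugging Face-style causal language model passed in by the caller; the hyperparameters, prompt formatting, and helper name are illustrative placeholders rather than the study's exact recipe.

```python
# Illustrative sketch of test-time training: fine-tune a temporary copy of the
# model on a handful of task-specific examples, then use that copy to answer.
# The optimizer, learning rate, step count, and formatting are placeholders.
import copy
import torch

def test_time_train(model, tokenizer, task_examples, steps=20, lr=1e-4):
    tuned = copy.deepcopy(model)  # leave the original weights untouched
    tuned.train()
    optimizer = torch.optim.AdamW(tuned.parameters(), lr=lr)
    texts = [f"Problem: {p}\nSolution: {s}" for p, s in task_examples]
    for _ in range(steps):
        for text in texts:
            batch = tokenizer(text, return_tensors="pt")
            loss = tuned(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    tuned.eval()
    return tuned  # a temporary, task-specific version of the model
```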

The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.

"We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains," Damani says.

In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.

To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.
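
As a hedged illustration of this kind of augmentation, the sketch below applies a horizontal flip to grid-shaped inputs and outputs; the grid-of-integers representation (as in ARC-style puzzles) and the helper names are assumptions made for the example.

```python
# Sketch of expanding a small example set with simple, structure-preserving
# transformations such as a horizontal flip. The grid-of-ints representation
# (ARC-style puzzles) is an assumption for illustration.

def hflip(grid):
    """Flip a 2D grid (a list of rows) left to right."""
    return [list(reversed(row)) for row in grid]

def augment(examples):
    """Return the original (input, output) pairs plus flipped variants."""
    augmented = list(examples)
    for inp, out in examples:
        augmented.append((hflip(inp), hflip(out)))
    return augmented

examples = [([[1, 0], [0, 2]], [[0, 1], [2, 0]])]
print(len(augment(examples)))  # 2: the original pair plus its flipped copy
```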

In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.

"This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training," Akyürek says.

Developing new skills

Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.

A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.
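
Putting the pieces together, the sketch below shows one way this per-query workflow could look: train a temporary copy, answer with it, then discard it so the base model is unchanged. It reuses the illustrative helpers from the earlier sketches; answer_query itself is hypothetical.

```python
# Hypothetical per-query workflow: the fine-tuned copy exists only for this
# one task and is discarded afterward, so the deployed model reverts to its
# original form. Reuses the illustrative test_time_train and
# build_icl_prompt helpers sketched above.

def answer_query(base_model, tokenizer, task_examples, query):
    tuned = test_time_train(base_model, tokenizer, task_examples)
    prompt = build_icl_prompt(task_examples, query)
    batch = tokenizer(prompt, return_tensors="pt")
    output = tuned.generate(**batch, max_new_tokens=64)
    answer = tokenizer.decode(output[0], skip_special_tokens=True)
    del tuned  # the temporary copy is discarded; base_model is unchanged
    return answer
```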

"We wouldn't want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method," he says.

The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.

Tasks that involved structured patterns or completely unfamiliar types of data showed the largest performance improvements.

"For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model," Damani says.

In the future, the researchers want to use these insights toward the development of models that continually learn.

The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.

More information: Ekin Akyürek et al, The Surprising Effectiveness of Test-Time Training for Few-Shot Learning, arXiv (2024). DOI: 10.48550/arXiv.2411.07279

Journal information: arXiv

Provided by Massachusetts Institute of Technology

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation: Test-time training could lead to LLMs that are better at complex reasoning (2025, July 8), retrieved 8 July 2025 from https://techxplore.com/news/2025-07-llms-complex.html. This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
