Approach improves how new skills are taught to large language models

July 7, 2025


Image: ChatGPT. Credit: Unsplash/CC0 Public Domain

Researchers have developed a technique that significantly improves the performance of large language models without increasing the computational power necessary to fine-tune the models. The researchers demonstrated that their technique improves the performance of these models over previous techniques in tasks including commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

Large language models are artificial intelligence systems that are pretrained on huge data sets. After pretraining, these models predict which words should follow one another in order to respond to user queries. However, because pretraining is not targeted at any particular task, there is ample room to improve these models when user queries focus on specific topics, such as answering a math question or writing computer code.

"In order to improve a model's ability to perform more specific tasks, you need to fine-tune the model," says Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of computer engineering at North Carolina State University.

"However, these models are so large that it is not feasible to re-train the entire model. Instead, you want to determine the smallest number of changes necessary to improve the model's performance. We've developed a technique, called WeGeFT (pronounced wee-gift), that represents a significant advance for fine-tuning these large models."

The big breakthrough for fine-tuning these large models was LoRA (low-rank adaptation), which came out in 2022. LoRA works by using mathematical tools to identify a small subset of key parameters that are most likely to improve a model's performance on a specific task.
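
The sketch below shows the widely used LoRA formulation: the pretrained weight matrix stays frozen, and a trainable product of two small matrices is added on top, so only a tiny fraction of parameters is updated. It is a generic illustration rather than code from the WeGeFT paper, and the layer size, rank, and scaling factor are assumptions.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # A frozen linear layer plus a trainable low-rank update B @ A.
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad = False                      # pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        # Output of the frozen layer plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(4096, 4096, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"LoRA trains {trainable:,} of {total:,} parameters")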

There have been many attempts to improve upon LoRA, but Wu and his collaborators found these previous efforts either required significantly more computational power to improve performance, or used the same amount of computing power without improving performance.

"WeGeFT builds on LoRA, but incorporates additional mathematical tools that allow us to determine which of the key parameters the model is already familiar with and which parameters the model would need to 'learn,'" says Wu. "By placing more weight on the truly novel parameters, we are able to improve model performance compared to LoRA without incorporating significant new computational demands."

In proof-of-concept testing, the researchers found that WeGeFT performed as well as or better than LoRA and its many variants across a variety of downstream tasks: commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

"We think this is a valuable step forward," Wu says. "We are now exploring ways that WeGeFT could also be used to identify elements of the model that are responsible for harmful outputs, with the goal of improving AI alignment and 'surgery' to improve model safety and outputs. We expect that work to be forthcoming."

The paper, "WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models," will be presented July 17 at the International Conference on Machine Learning, being held in Vancouver, Canada. Co-corresponding author of the paper is Chinmay Savadikar, a Ph.D. student at NC State. The paper was co-authored by Xi Song, an independent researcher.

More information: "WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models"

Authors: Chinmay Savadikar and Tianfu Wu, North Carolina State University; Xi Song, independent researcher

Presented: July 13-19, International Conference on Machine Learning, Vancouver, Canada

Provided by North Carolina State University

Citation: Approach improves how new skills are taught to large language models (2025, July 7), retrieved 7 July 2025 from https://techxplore.com/news/2025-07-approach-skills-taught-large-language.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
