VFF-Net algorithm provides promising alternative to backpropagation for AI training

October 16, 2025

Sadie Harley, scientific editor; Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

SEOULTECH researchers develop VFF-Net, a promising alternative to backpropagation for training AI models.
VFF-Net introduces three new methodologies: label-wise noise labeling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG), addressing the challenges of applying a forward-forward network to the training of convolutional neural networks. Credit: Hyun Kim, Seoul National University of Science and Technology

Deep neural networks (DNNs), which power modern artificial intelligence (AI) models, are machine learning systems that learn hidden patterns from various types of data, be it images, audio or text, to make predictions or classifications. DNNs have transformed many fields with their remarkable prediction accuracy. Training DNNs typically relies on backpropagation (BP), an algorithm that propagates error signals backward through every layer of the network to update its weights.

While it has become indispensable for the success of DNNs, BP has several limitations, such as slow convergence, overfitting, high computational requirements, and its black box nature.

Recently, forward-forward networks (FFNs) have emerged as a promising alternative in which each layer is trained individually, bypassing BP. However, applying FFNs to convolutional neural networks (CNNs), which are widely used for image analysis, has proven difficult.
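
To make the contrast with BP concrete, the sketch below shows how a single layer can be trained in isolation in the spirit of the original forward-forward algorithm, using a "goodness" score (here, the mean squared activation) that is pushed above a threshold for positive data and below it for negative data. The layer sizes, threshold, and optimizer are illustrative assumptions, not the configuration used in VFF-Net.

```python
# Minimal sketch of per-layer forward-forward training (PyTorch).
# Assumptions: goodness = mean squared activation, a fixed threshold,
# and fully connected layers trained in isolation. Not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=1e-3):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only the direction of the activity carries
        # information to the next layer (as in the original FF algorithm).
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.fc(x))

    def train_step(self, x_pos, x_neg):
        # Goodness: mean squared activation; push it above the threshold
        # for positive data and below it for negative data.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()          # gradient stays local to this layer
        self.opt.step()
        return loss.item()

# Each layer is trained on its own; no error signal flows backward
# through the whole network.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos, x_neg = torch.randn(32, 784), torch.randn(32, 784)  # placeholder data
for layer in layers:
    layer.train_step(x_pos, x_neg)
    # Detach so the next layer sees inputs with no gradient history.
    x_pos, x_neg = layer(x_pos).detach(), layer(x_neg).detach()
```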

To address this challenge, a research team led by Mr. Gilha Lee and Associate Professor Hyun Kim from the Department of Electrical and Information Engineering at Seoul National University of Science and Technology has developed a new training algorithm, called visual forward-forward network (VFF-Net). The team also included Mr. Jin Shin.

Their study is published in Neural Networks.

Explaining the challenge of using FFNs to train CNNs, Mr. Lee says, "Directly applying FFNs for training CNNs can cause information loss in input images, reducing accuracy. Furthermore, for general-purpose CNNs with numerous convolutional layers, individually training each layer can cause performance issues. Our VFF-Net effectively addresses these issues."

VFF-Net introduces three new methodologies: label-wise noise labeling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG).

In LWNL, the network is trained on three types of data: the original image without any noise, positive images with correct labels, and negative images with incorrect labels. This helps eliminate the loss of pixel information in the input images.
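
As an illustration, the sketch below builds the three LWNL input types under one possible encoding: a fixed, label-specific noise pattern is added on top of the image instead of overwriting any pixels, so the original pixel information is preserved. The encoding, noise scale, and helper function are assumptions made for illustration; the paper's exact labeling scheme may differ.

```python
# Minimal sketch of building the three LWNL input types (PyTorch).
# Assumption: labels are encoded as small, label-specific noise patterns
# added to the image, so no original pixels are overwritten.
import torch

def make_lwnl_batch(images, labels, num_classes=10, noise_scale=0.1, seed=0):
    """images: (B, C, H, W), labels: (B,) integer class ids."""
    g = torch.Generator().manual_seed(seed)
    # One fixed pseudo-random pattern per class, shared across the dataset.
    patterns = noise_scale * torch.randn(
        (num_classes,) + images.shape[1:], generator=g)

    # Pick a wrong class for each sample to build the negative images.
    wrong = (labels + torch.randint(1, num_classes, labels.shape,
                                    generator=g)) % num_classes

    original = images                          # clean images, no label noise
    positive = images + patterns[labels]       # correct label pattern added
    negative = images + patterns[wrong]        # incorrect label pattern added
    return original, positive, negative

imgs = torch.rand(8, 3, 32, 32)                # placeholder CIFAR-like batch
lbls = torch.randint(0, 10, (8,))
orig, pos, neg = make_lwnl_batch(imgs, lbls)
```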

CSCL modifies the conventional goodness-based greedy algorithm, applying a contrastive loss function based on the cosine similarity between feature maps. Essentially, it compares the similarity between two feature representations based on the direction of the data patterns. This helps preserve the meaningful spatial information necessary for image classification.
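
A minimal sketch of such a loss is shown below, assuming it pulls the feature maps of positively labeled inputs toward those of the clean image while pushing negatively labeled inputs away; the exact pairing, margin, and weighting used in VFF-Net may differ.

```python
# Minimal sketch of a cosine-similarity-based contrastive loss between
# feature maps (PyTorch). Assumption: positives are attracted to the
# clean-image features and negatives are repelled, via a hinge on the
# similarity gap. The paper's exact formulation may differ.
import torch
import torch.nn.functional as F

def cscl_loss(feat_clean, feat_pos, feat_neg, margin=0.5):
    """All inputs: (B, C, H, W) feature maps from the same layer."""
    # Flatten channel and spatial dims so cosine similarity compares the
    # direction of the whole feature map, preserving spatial structure
    # rather than collapsing it into a single goodness scalar.
    f_clean = feat_clean.flatten(1)
    f_pos = feat_pos.flatten(1)
    f_neg = feat_neg.flatten(1)

    sim_pos = F.cosine_similarity(f_clean, f_pos, dim=1)   # want high
    sim_neg = F.cosine_similarity(f_clean, f_neg, dim=1)   # want low
    # Hinge-style contrastive objective on the similarity gap.
    return F.relu(margin - (sim_pos - sim_neg)).mean()

loss = cscl_loss(torch.randn(8, 64, 16, 16),
                 torch.randn(8, 64, 16, 16),
                 torch.randn(8, 64, 16, 16))
```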

Finally, LG solves the problem of individual layer training by grouping layers with the same output characteristics and adding auxiliary layers, significantly improving performance.
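
The sketch below illustrates one way such grouping could look: convolutional layers that produce feature maps of the same spatial resolution are wrapped into a group with a small auxiliary classification head, and each group is trained on its own local loss. The group boundaries and auxiliary head design are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of layer grouping (PyTorch). Assumption: layers sharing an
# output resolution are grouped and trained together through an auxiliary
# head, instead of training every layer in isolation.
import torch
import torch.nn as nn

class LayerGroup(nn.Module):
    def __init__(self, convs, feat_channels, num_classes=10):
        super().__init__()
        self.convs = nn.Sequential(*convs)
        # Auxiliary head used only to compute this group's local loss.
        self.aux = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(feat_channels, num_classes))

    def forward(self, x):
        feats = self.convs(x)
        return feats, self.aux(feats)

def conv_block(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1),
                         nn.BatchNorm2d(cout), nn.ReLU())

# Two groups: layers sharing a 32x32 output, then layers sharing 16x16.
groups = [
    LayerGroup([conv_block(3, 64), conv_block(64, 64)], 64),
    LayerGroup([conv_block(64, 128, stride=2), conv_block(128, 128)], 128),
]

x = torch.rand(4, 3, 32, 32)
for group in groups:
    feats, logits = group(x)         # local auxiliary output for this group
    # A local loss would be computed from `logits` here; gradients never
    # cross group boundaries because the input is detached below.
    x = feats.detach()
```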

Thanks to these innovations, VFF-Net significantly improves image classification performance compared to conventional FFNs. For a CNN model with four convolutional layers, test errors on the CIFAR-10 and CIFAR-100 datasets were reduced by 8.31% and 3.80%, respectively. Additionally, the fully connected layer-based VFF-Net achieved a test error of just 1.70% on the MNIST dataset.

"By moving away from BP, VFF-Net paves the way toward lighter and more brain-like training methods that do not need extensive computing resources," says Dr. Kim.

"This means powerful AI models could run directly on personal devices, medical devices, and household electronics, reducing reliance on energy-intensive data centers and making AI more sustainable."

Overall, VFF-Net could make AI training faster and cheaper while enabling more natural, brain-like learning, paving the way for more trustworthy AI systems.

More information: Gilha Lee et al, VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights, Neural Networks (2025). DOI: 10.1016/j.neunet.2025.107697

Provided by Seoul National University of Science and Technology

Citation: VFF-Net algorithm provides promising alternative to backpropagation for AI training (2025, October 16), retrieved 16 October 2025 from https://techxplore.com/news/2025-10-vff-net-algorithm-alternative-backpropagation.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

