Amplifying AI’s impact by making it understandable

August 29, 2025

Sadie Harley, scientific editor
Andrew Zinin, lead editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

Datasets used to train AI algorithms may underrepresent older people. Credit: Pixabay/CC0 Public Domain

As AI becomes a ubiquitous part of everyday life, few people understand how it actually works. Whereas traditional computer programs operate on clear logic—"If A happens, then do B"—AI models, especially neural networks, make decisions that are difficult to trace back to any single rule or line of code. As a result, traditional program analysis techniques such as code review are ineffective in addressing neural networks' vulnerabilities.

Yang Zhou, a winner of the SMU Research Staff Excellence Award for 2024, is part of SMU Professor of Computer Science Sun Jun's MOE Tier 3 project titled "The Science of Certified AI Systems" that will look into the issue. As listed in the proposal, the first aim of the project is:

"First, we will develop a scientific foundation for analyzing AI systems, with a focus on three fundamental concepts: abstraction, causality and interpretability. Note that these concepts are fundamental for program analysis and yet must be re-invented formally due to the difference between programs and neural networks. This scientific foundation will be the basis of developing systematic analysis tools."

Abstraction, causality and interpretability are core concepts in AI and computer science. Abstraction refers to the hidden process by which a program or model produces an output, such as a "calculate_area" function in a computer program that works with pi and a radius the user never sees.

In AI, a model learns to identify what a "circle" is through repeated training and learns to measure its area, but no one can point to a single line of code and say that is where or when it learned to do so.
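The program-level side of abstraction can be sketched in a few lines of Python. The function name follows the article's "calculate_area" example; everything else is illustrative:

```python
import math

def calculate_area(radius: float) -> float:
    """Return the area of a circle. The caller never sees pi or the formula;
    the abstraction exposes only an input (radius) and an output (area)."""
    return math.pi * radius ** 2

# The user of this function interacts with the abstraction, not its internals.
print(calculate_area(2.0))
```

In a neural network, by contrast, the "formula" is distributed across thousands of learned weights, so there is no single hidden line to inspect.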

Causality is simpler to understand. In programming it is an if-then relationship, e.g., if the water level exceeds 2 m, sound the alarm. It is less clear-cut in AI, where a car loan application could be rejected based on patterns and correlations. For example, one 50-year-old might have a loan application approved while another 50-year-old is rejected.

The screening model might have spotted other factors such as a history of hospitalization at an eye hospital or being issued a speeding ticket recently. As such, AI systems learn correlations but not necessarily causes.
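The contrast can be sketched in Python. The alarm rule is taken from the article; the loan-scoring function and its weights are entirely hypothetical, invented only to show how a learned score aggregates correlations rather than following one causal rule:

```python
# Rule-based causality: one explicit condition triggers the outcome.
def alarm(water_level_m: float) -> bool:
    return water_level_m > 2.0  # if water level > 2 m, sound the alarm

# Correlation-based scoring (toy, hypothetical weights): no single feature
# *causes* rejection; the score merely aggregates learned correlations.
def loan_score(age: int, recent_speeding_ticket: bool, hospital_visits: int) -> float:
    score = 0.6
    score -= 0.2 if recent_speeding_ticket else 0.0
    score -= 0.1 * hospital_visits
    return score

APPROVAL_THRESHOLD = 0.5

# Two 50-year-olds can receive different outcomes from the same model:
print(loan_score(50, False, 0) >= APPROVAL_THRESHOLD)  # approved
print(loan_score(50, True, 2) >= APPROVAL_THRESHOLD)   # rejected
```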

Interpretability, simply put, asks: Do you understand how the software reached its final output or decision? AI output can be opaque and may need special tools before its decisions make sense.
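A toy sketch of what an interpretable model looks like (this is illustrative only, not the project's actual tooling): in a linear model, each weight states exactly how much its feature pushed the decision, so the output can be decomposed into per-feature contributions.

```python
# Hypothetical feature weights for a toy linear scoring model.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.1}

def explain(features: dict) -> dict:
    """Return each feature's contribution to the final score."""
    return {name: weights[name] * value for name, value in features.items()}

applicant = {"income": 1.2, "debt": 0.5, "years_employed": 3.0}
contributions = explain(applicant)
print(contributions)                 # how much each feature contributed
print(sum(contributions.values()))   # the final score
```

Deep neural networks lack this property out of the box, which is why dedicated interpretability tools are needed.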

Once that is done, the following will be developed:

  • A set of effective tools for analyzing and repairing neural networks, including testing engines, debuggers and verifiers.
  • Certification standards that provide actionable guidelines for achieving different levels of quality control.
  • Principled processes for developing AI systems, with best practices and guidelines from researchers as well as practitioners.

"This project is a huge one, and the research group under each Co-PI works on a subset of the problems above," Yang explains. "I work with [UOB Chair Professor of Computer Science] David Lo, and our responsibility is to understand the concerns and challenges developers face when developing AI-enabled systems in practice, as well as to extract the best practices and guidelines from AI researchers and practitioners."

The impact

Examples of AI-enabled systems include autonomous driving, image recognition, and smart traffic lights. "My research in this project focuses on an important phase of AI: How AI is integrated into software in practice and what the challenges, solutions, concerns, and practices are in this important phase," Yang tells the Office of Research Governance & Administration (ORGA).

"For example, we suggest that it is important to write well-structured documentation for an open-source model to be more easily adapted in other software."

The real-world impact of Yang's work is substantial. Clear and comprehensive documentation could help smooth deployment by listing hardware requirements and alternatives for cases where software fails to work on certain devices.

Proper documentation also facilitates faster adoption by showing developers how to plug AI models into systems, be they for autonomous driving, supply chain optimization, or smart assistants such as Amazon's Alexa and Google Assistant.

Yang's work on the project ties in with some of his other collaborations, one of which involves interviewing AI practitioners from the industry to understand the challenges and solutions to ensure the quality of AI systems, and validating findings by conducting surveys to collect the opinions and practices of AI developers.

More research, more impact

Yang also recently published a paper titled "Unveiling Memorization in Code Models" that looked at AI models trained to understand and generate computer code. As written in the paper, these models "automate a series of critical tasks such as defect prediction, code review, code generation and software questions analysis."

While code models make it easier to write and maintain code, they do so by being trained on large amounts of data, so much so that they memorize frequently occurring code.

"Generally, language models are trained on a large corpus of code, aiming to learn 'given a piece of code, what are the next tokens/code snippets,'" explains Yang. "There exist many code clones (identical code) in the training data, and the code models will learn such information very well, just like memorizing some training data.

"Code models may memorize the information belonging to one developer and expose the information to another, which may cause some concerns," he adds.

These concerns include security breaches (models leaking passwords and database credentials), intellectual property theft (proprietary algorithms and licensed code being exposed), vulnerability propagation (insecure code patterns spreading to new applications), and privacy violations (exposure of personal information and sensitive business data).

How does Yang's work address this issue? "We prompt the model to generate a large number of code snippets and identify those that can also be found in the training data via a technique called 'code clone detection,'" says Yang.
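A minimal sketch of this idea in Python (the paper's actual clone detection is far more sophisticated; here we only illustrate the membership check with whitespace- and comment-insensitive exact matching, and all snippet data is invented):

```python
import re

def normalize(code: str) -> str:
    """Strip comments and collapse whitespace so formatting
    differences don't hide an otherwise identical clone."""
    code = re.sub(r"#.*", "", code)
    return re.sub(r"\s+", " ", code).strip()

def find_memorized(generated: list, training_corpus: list) -> list:
    """Flag generated snippets that also appear in the training data."""
    train = {normalize(s) for s in training_corpus}
    return [s for s in generated if normalize(s) in train]

corpus = ["def add(a, b):\n    return a + b  # helper"]
outputs = ["def add(a, b): return a + b",   # memorized from the corpus
           "def mul(a, b): return a * b"]   # novel
print(find_memorized(outputs, corpus))
```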

"In the paper, we aim to expose the problem of memorization and not to address it. We have recently published another paper on mitigating privacy information leakage in code models."

The impact of this particular piece of research lies in better preserving the privacy of developers in the era of large language models. He explains, "Specifically, we design a new 'machine unlearning' method to guide the model to 'forget' the privacy information while preserving its general knowledge. When the new model is deployed, it can still generate the correct code upon user request, but will use a placeholder when privacy information is likely to be involved."
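The paper's actual method operates inside the model via machine unlearning; the following Python sketch only imitates the deployed behavior Yang describes, substituting a placeholder where privacy information is likely. The regex patterns and placeholder names are assumptions for illustration:

```python
import re

# Hypothetical patterns for privacy-looking strings in generated code.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"(?i)(password\s*=\s*)['\"][^'\"]+['\"]"), r"\1'<SECRET>'"),
]

def redact(code: str) -> str:
    """Replace likely privacy information with placeholders,
    leaving the rest of the generated code untouched."""
    for pattern, placeholder in PATTERNS:
        code = pattern.sub(placeholder, code)
    return code

print(redact("password = 'hunter2'  # contact admin@example.com"))
```

An unlearned model would produce the placeholder directly rather than post-processing its output, but the user-visible effect is the same: correct code, with sensitive values withheld.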

Provided by Singapore Management University

Citation: Amplifying AI's impact by making it understandable (2025, August 29), retrieved 29 August 2025 from https://techxplore.com/news/2025-08-amplifying-ai-impact.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
