Does AI understand?

July 17, 2025

Scientific editor: Lisa Lock. Lead editor: Andrew Zinin.

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

Credit: Liz Zonarich/Harvard Staff

Imagine an ant crawling in sand, tracing a path that happens to look like Winston Churchill. Would you say the ant created an image of the former British prime minister? According to the late Harvard philosopher Hilary Putnam, most people would say no: The ant would need to know about Churchill, and lines, and sand.

The thought experiment has renewed relevance in the age of generative AI. As artificial intelligence firms release ever-more-advanced models that reason, research, create, and analyze, the meanings behind those verbs get slippery fast. What does it really mean to think, to understand, to know? The answer has big implications for how we use AI, and yet those who study intelligence are still reckoning with it.

"When we see things that speak like humans, that can do a lot of tasks like humans, write proofs and rhymes, it's very natural for us to think that the only way that thing could be doing those things is that it has a mental model of the world, the same way that humans do," said Keyon Vafa, a postdoctoral fellow at the Harvard Data Science Initiative. "We as a field are making steps trying to understand, what would it even mean for something to understand? There's definitely no consensus."

In human cognition, expression of a thought implies understanding of it, said senior lecturer on philosophy Cheryl Chen. We assume that someone who says "It's raining" knows about weather, has experienced the feeling of rain on the skin and perhaps the frustration of forgetting to pack an umbrella. "For genuine understanding," Chen said, "you need to be kind of embedded in the world in a way that ChatGPT is not."

Still, today's artificial intelligence systems can seem awfully convincing. Large language models, along with many other machine learning systems, are built from neural networks: computational models that pass information through layers of neurons loosely modeled after the human brain.

"Neural networks have numbers inside them; we call them weights," said Stratos Idreos, Gordon McKay Professor of Computer Science at SEAS. "Those numbers start by default randomly. We get data through the system, and we do mathematical operations based on those weights, and we get a result."

He gave the example of an AI trained to identify tumors in medical images. You feed the model hundreds of images that you know contain tumors, and hundreds of images that don't. Based on that information, can the model correctly determine if a new image contains a tumor? If the result is wrong, you give the system more data, and you tinker with the weights, and slowly the system converges on the right output. It might even identify tumors that doctors would miss.
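
The training loop he outlines can be sketched in a few lines, with synthetic feature vectors standing in for the labeled tumor and non-tumor images; the data, architectures, and optimizers in real medical-imaging systems are of course far more elaborate.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for labeled images: 200 examples, 16 features each.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)       # ground-truth labels from a hidden rule

w = np.zeros(16)                         # model weights, initially uninformative
lr = 0.1

for step in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))    # the model's current guesses
    grad = X.T @ (pred - y) / len(y)     # how wrong, and in which direction
    w -= lr * grad                       # tinker with the weights

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")   # approaches 1.0 as the weights converge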

Vafa devotes much of his research to putting AI through its paces, to figure out both what the models actually understand and how we would even know for sure. His criteria come down to whether the model can reliably demonstrate a world model, a stable yet flexible framework that allows it to generalize and reason even in unfamiliar conditions.

Sometimes, Vafa said, it sure seems like a yes.

"If you look at large language models and ask them questions that they presumably haven't seen before—like, 'If I wanted to balance a marble on top of an inflatable beach ball on top of a stove pot on top of grass, what order should I put them in?'—the LLM would answer that correctly, even though that specific question wasn't in its training data," he said. That suggests the model does have an effective world model—in this case, the laws of physics.

But Vafa argues the world models often fall apart under closer inspection. In a previous study, he and a team of colleagues trained an AI model on street directions around Manhattan, then asked it for routes between various points. Ninety-nine percent of the time, the model spat out accurate directions. But when they tried to build a cohesive map of Manhattan out of its data, they found the model had invented roads, leapt across Central Park, and traveled diagonally across the city's famously right-angled grid.

"When I turn right, I am given one map of Manhattan, and when I turn left, I'm given a completely different map of Manhattan," he said. "Those two maps should be coherent, but the AI is essentially reconstructing the map every time you take a turn. It just didn't really have any kind of conception of Manhattan."

Rather than operating from a stable understanding of reality, he argues, AI memorizes countless rules and applies them to the best of its ability, a kind of slapdash approach that looks intentional most of the time but occasionally reveals its fundamental incoherence.

Sam Altman, the CEO of OpenAI, has said we will reach AGI—artificial general intelligence, which can do any cognitive task a person can—"relatively soon." Vafa is keeping his eye out for more elusive evidence: that AIs reliably demonstrate consistent world models—in other words, that they understand.

"I think one of the biggest challenges about getting to AGI is that it's not clear how to define it," said Vafa. "This is why it's important to find ways to measure how well AI systems can 'understand' or whether they have good world models—it's hard to imagine any notion of AGI that doesn't involve having a good world model. The world models of current LLMs are lacking, but once we know how to measure their quality, we can make progress toward improving them."

Idreos' team at the Data Systems Laboratory is developing more efficient approaches so AI can process more data and reason more rigorously. He sees a future where specialized, custom-built models solve important problems, such as identifying cures for rare diseases—even if the models don't know what disease is. Whether or not that counts as understanding, Idreos said, it certainly counts as useful.

Provided by Harvard University

This story is published courtesy of the Harvard Gazette, Harvard University's official newspaper. For additional university news, visit Harvard.edu.

Citation: Does AI understand? (2025, July 17). Retrieved 17 July 2025 from https://techxplore.com/news/2025-07-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

Explore further: Despite its impressive output, generative AI doesn't have a coherent understanding of the world, researchers suggest
