Q&A: When talking about AI, definitions matter

June 27, 2025

Lisa Lock, scientific editor; Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

Credit: Pixabay/CC0 Public Domain

Artificial intelligence is everywhere lately—on the news, in podcasts and around every water cooler. A new, buzzy term, artificial general intelligence (AGI), is dominating conversations and raising more questions than it answers.

So, what is AGI? Will it replace jobs or unlock solutions to the world's biggest challenges? Will it align with societal priorities or redefine them? Is our future more "The Jetsons" or "The Terminator"? The answers aren't simple. They depend on how we define AI and what we expect it to do.

As the inaugural director of the Center for AI Learning, Emory's community hub for AI literacy and a core component of the AI.Humanity initiative, Joe Sutherland is no stranger to tough questions about AI. Last summer, he embarked on a statewide tour seeking to demystify AI and empower Georgians with skills to thrive in a technology-focused future. Diverse audiences made up of professionals, business owners, lawmakers and students shared many of the same core questions.

In this Q&A, Sutherland answers our most pressing concerns about AGI, helping us cut through the hype to find the hope.

What is the difference between artificial intelligence, artificial general intelligence and artificial super intelligence?

The definitions have changed over time, and that has caused some confusion. Artificial intelligence is not a monolithic technology. It's a set of technologies that automate tasks or mimic decisions humans normally make. We're delegating the authority to make those decisions to a machine.

Traditionally, when people talked about artificial general intelligence (AGI), they meant Skynet from "The Terminator" or HAL from "2001: A Space Odyssey," machines that supposedly had free will and approximated human abilities.

Today, some major research labs have redefined AGI to mean a computer program that can perform as well as, or better than, expert humans at specific tasks.

Artificial super intelligence (ASI) is the modern term for what we used to call superintelligence or the singularity: the idea we once lumped together with AGI, humanoid machines that surpass human intelligence.

Do we already have AGI?

It depends on what definition you use. If you're using the task-oriented definition from the labs, yes. AI is great at retrieving information and summarizing it in a way that any human evaluating it would say, "Oh, this is pretty good."

Large language models (LLMs) like ChatGPT can outperform medical school applicants on the MCAT. But that's not real intelligence. It's like giving a student Google during an exam. True AGI should show reasoning, not just information retrieval and pattern matching.

What's the difference between reasoning and what today's AI does?

Today's models give the impression they are reasoning, but they're just sequentially researching information and then summarizing it. They don't understand the world—they just predict what word comes next based on patterns. When tested on real reasoning tasks, like the Tower of Hanoi or logic puzzles, LLMs often fail unless they've memorized the answers.
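To make that distinction concrete, here is a toy sketch in Python. It is purely illustrative (the tiny corpus and function names are invented for the example, and real LLMs are vastly more sophisticated), but it contrasts pattern-based next-word prediction with an exact, rule-based solution to the Tower of Hanoi:

from collections import Counter, defaultdict

# "Prediction by pattern": count which word tends to follow which,
# then echo the most frequent continuation. Nothing is understood.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    return bigrams[word].most_common(1)[0][0] if bigrams[word] else None

print(predict_next("the"))  # 'cat' -- the commonest continuation in the data

# Tower of Hanoi, by contrast, is solved by an exact recursive rule rather
# than by recalling examples: move n-1 disks aside, move the largest, repeat.
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    moves = [] if moves is None else moves
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)
        moves.append((src, dst))
        hanoi(n - 1, aux, src, dst, moves)
    return moves

print(len(hanoi(3)))  # 7 moves, the optimal 2**n - 1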

Humor is another example where AI falls short. From a humanities perspective, humor lies at the intersection of comfort and discomfort. That boundary shifts all the time. Chatbots only regurgitate things they've seen in the past; they don't understand that boundary. True humor is something they can't do.

Similarly, businesses often don't have data indicative of broader trends taking place "outside" of the company. Their AI models, which are trained on internal data, can't synthesize where we've been with where we're going. That would require reasoning, intuition and values alignment—things we struggle to articulate even for ourselves.

So how close are we to AGI, really?

If we're using the old definition of AGI—like "The Terminator"—I think we're far away. LLMs won't bring us anywhere close because they don't have reasoning or intuitively creative abilities. We haven't given them a framework to efficiently discover new information. We're going to have to develop totally new architectures if we want to get closer.

One step in the right direction is joint embedding predictive architecture, or JEPA. Instead of stringing words together like an LLM, it infers deeper relationships between concepts and activates those inferences to achieve a higher-level objective.
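For readers who want a feel for that idea, below is a deliberately minimal PyTorch sketch of a JEPA-style objective: the model is trained to predict the embedding of a target from the embedding of its context, rather than predicting raw tokens one at a time. The layer sizes, names and random data are assumptions made for illustration; real JEPA systems are considerably more elaborate.

import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    """Illustrative joint-embedding predictive architecture:
    predict the target's representation from the context's representation,
    so the loss lives in embedding space rather than token space."""
    def __init__(self, dim=64):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.target_encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.predictor = nn.Linear(dim, dim)

    def forward(self, context, target):
        z_context = self.context_encoder(context)   # what the model sees
        with torch.no_grad():                        # target encoder provides a fixed goal here
            z_target = self.target_encoder(target)
        z_pred = self.predictor(z_context)           # guess the target's embedding
        return nn.functional.mse_loss(z_pred, z_target)

# Toy training step on random "context"/"target" pairs, batch of 8.
model = TinyJEPA()
loss = model(torch.randn(8, 64), torch.randn(8, 64))
loss.backward()
print(float(loss))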

It's refreshing to learn that throwing all of society's encyclopedia entries into an LLM doesn't produce human-level intelligence. I'm boiling it down, of course. There's more to humanity than meets the eye.

What are the biggest promises and perils of AGI?

The current technologies are fantastic. They enable people to do tasks in hours that they previously had to spend days on. The promise is efficiency—tools that can summarize research, assist in medical diagnosis, help you plan your shopping list for the week. That will help people earn more money, spend more time with their families or on hobbies and help them live longer, healthier lives.

The peril isn't the tech; it's the lack of public understanding. We need AI literacy, so people understand when AI is being used the right way and when it is not.

What kind of oversight or guardrails are needed?

The key is providing a framework that balances the value of what is being built with the intellectual property fueling these models, that is, the datasets.

Some companies have argued in court that scraping people's data without consent or compensation is justified because it advances society. That's a manipulative and troubling argument.

If the needs of the people contributing to these technologies are represented and they are adequately rewarded, it would incentivize greater innovation and usage. We also need thorough testing to identify where these models break or produce biased outcomes.

Can we align AI with human values?

That's more of a social question than a technical one. Last summer, I gave talks around Georgia for Emory's statewide AI workforce development tour with Rowen Foundation and the Georgia Chamber of Commerce, which we called "AI + You." One audience member asked, "How can we ensure that the AI models we are building have American values?"

What makes America special is that we value the opportunity for dissent. We're never going to agree on everything, so the bigger question is: How do we create a system with responsive guardrails that adapt to our evolving societal values? What's the framework that allows us to still have a robust debate and ultimately come to good decisions?

These questions aren't new to the advent of AI. We've been asking them for centuries. But the rapid rise of technology, and its increasingly centralized power, forces us to revisit how we approach collective action.

If we get AGI right, what does the best version of the future look like?

I think the best version of the future with all these technologies is one where they free us to do more meaningful work and spend less time on things we don't enjoy. This means more time for innovation and creative problem-solving.

Emory's AI research is already transforming health care, leading to improved diagnosis and treatment of diseases like cancer, heart disease and diabetes. We are using text analysis to uncover patterns in public policies that will make governance more efficient and equitable. Our scholars are also looking at how AI can protect people's rights and grow businesses.

AI offers more benefits than drawbacks, if we empower people through education and include them in the conversation so they can advocate for themselves and those they care about.

If deployed thoughtfully, these technologies can amplify human potential, not replace it.

Provided by Emory University

Citation: Q&A: When talking about AI, definitions matter (2025, June 27) retrieved 27 June 2025 from https://techxplore.com/news/2025-06-qa-ai-definitions.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
