Q&A with professor of computer science: What happens when AI faces the human problem of uncertainty?

July 23, 2025

Credit: Pixabay/CC0 Public Domain

In a world increasingly shaped by artificial intelligence, the question of how machines make decisions under uncertain conditions grows more urgent every day.

How do we weigh competing values when outcomes are uncertain? What constitutes reasonable choice when perfect information is unavailable? These questions, once confined to academic philosophy, are now front and center as we delegate increasingly complex decisions to AI.

A new large language model (LLM) framework developed by Willie Neiswanger, assistant professor of computer science at the USC Viterbi School of Engineering and the USC School of Advanced Computing, together with students in the computer science department, combines principles from classical decision theory and utility theory to significantly improve AI's ability to handle uncertainty and tackle such complex decisions.

Neiswanger's research was spotlighted at the 2025 International Conference on Learning Representations (ICLR) and published on the arXiv preprint server. He recently discussed how AI handles uncertainty with USC News.

What are your thoughts on the difference between artificial and human intelligence?

Neiswanger: At present, human intelligence has various strengths relative to machine intelligence. However, machine intelligence also has certain strengths relative to humans, which make it valuable.

Large language models (LLMs), AI systems trained on vast amounts of text to understand and generate humanlike responses, can, for instance, rapidly ingest and synthesize large amounts of information from reports and other data sources, and can generate analyses at scale, simulating many possible futures or proposing a wide range of forecast outcomes. In our work, we aim to take advantage of the strengths of LLMs while balancing them against the strengths and judgment of humans.

Why do current AI large language models struggle with uncertainty?

Neiswanger: Uncertainty is a fundamental challenge in real-world decision-making. When faced with unknown variables, current AI systems struggle to properly balance uncertainty, evidence, and user preferences while making predictions based on the likelihood of different outcomes.

Unlike human experts who can express degrees of confidence and acknowledge the limits of their knowledge, LLMs typically generate responses with apparent confidence regardless of whether they're drawing from well-established patterns or making uncertain predictions that go beyond the available data.

How does your research intersect with uncertainty?

Neiswanger: I focus on developing machine learning methods for decision-making under uncertainty, with an emphasis on sequential decision-making—situations where you make a series of choices over time, with each decision affecting future options—in settings where data is expensive to acquire.

This includes applications such as black-box optimization (finding the best solution when you can't see how the system works internally), experimental design (planning studies or tests to get the most useful information), and decision-making tasks in science and engineering—for example, materials or drug discovery, and the optimization of computer systems.
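To make "black-box optimization" concrete, here is a minimal Python sketch with an invented objective (illustrative only, not from Neiswanger's work): the optimizer can only query the system and observe noisy outputs, so it samples candidate inputs and keeps the best one. Practical methods such as Bayesian optimization choose each query far more carefully, which is what matters when every evaluation is expensive.

import random

def expensive_black_box(x: float) -> float:
    # Stand-in for a system we can only query, not inspect internally,
    # e.g. a material property measured at composition x, with noise.
    return -(x - 0.3) ** 2 + random.gauss(0.0, 0.01)

def random_search(n_queries: int = 20) -> tuple[float, float]:
    # Minimal black-box optimization: sample inputs, keep the best output.
    best_x, best_y = 0.0, float("-inf")
    for _ in range(n_queries):
        x = random.uniform(0.0, 1.0)
        y = expensive_black_box(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

x_star, y_star = random_search()
print(f"best input ~ {x_star:.3f}, best observed value ~ {y_star:.3f}")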

I'm also interested in how large foundation models (massive AI systems trained on enormous datasets that serve as the base for many applications), especially large language models, can both enhance and benefit from these decision-making frameworks: on one hand, helping humans make better decisions in uncertain environments, and on the other, using mathematical methods for optimal decision-making to improve the data efficiency and quality of LLM training and fine-tuning.

How did your research address the problem of uncertainty and AI?

Neiswanger: We focused on improving a machine's ability to quantify uncertainty, essentially teaching it to measure and express how confident it should be about different predictions.

In particular, we developed an uncertainty quantification approach that enables large language models to make decisions under incomplete information, make predictions with measurable confidence levels that can be verified, and choose actions that provide the greatest benefit in alignment with human preferences.

The process began by identifying the key uncertain variables relevant to a decision. The language model then assigned language-based probability scores to the different possibilities (such as the yield of a crop, the price of a stock, the date of an uncertain event, or the projected volume of warehouse shipments) based on reports, historical data and other contextual information, and those scores were then converted to numerical probabilities.
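As a minimal sketch of that pipeline (the phrase-to-number table, state names, and values below are illustrative assumptions, not DeLLMa's actual prompts or mappings), the idea is to take verbal likelihood judgments for each possible state of an uncertain variable, map them to rough numeric scores, and normalize those into a probability distribution:

# Map verbal likelihood phrases to rough numeric scores (assumed values).
VERBAL_TO_PROB = {
    "very unlikely": 0.05,
    "unlikely": 0.2,
    "somewhat likely": 0.4,
    "likely": 0.6,
    "very likely": 0.8,
}

# Hypothetical LLM judgments for an uncertain variable, e.g. crop yield.
llm_judgments = {
    "low yield": "unlikely",
    "average yield": "likely",
    "high yield": "somewhat likely",
}

def to_distribution(judgments: dict[str, str]) -> dict[str, float]:
    # Convert verbal likelihoods to numbers, then normalize to sum to 1.
    raw = {state: VERBAL_TO_PROB[phrase] for state, phrase in judgments.items()}
    total = sum(raw.values())
    return {state: p / total for state, p in raw.items()}

print(to_distribution(llm_judgments))
# e.g. {'low yield': 0.167, 'average yield': 0.5, 'high yield': 0.333} (rounded)

Normalization is needed because independently elicited verbal scores rarely sum to one on their own.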

Are there immediate applications?

Neiswanger: In business contexts, it may improve strategic planning by providing more realistic assessments of market uncertainties and competitive dynamics.

In medical settings, it may provide diagnostic support or help with treatment planning by helping physicians better account for uncertainty in symptoms and test results. In personal decision-making, it may help users get more informed, relevant advice from language models about everyday choices.

The system's ability to align with human preferences has been particularly valuable in contexts where letting computers find the mathematically "best" solution might miss important human values or constraints.

By explicitly modeling stakeholder preferences and incorporating them into mathematical assessments of how valuable different outcomes are to people, the framework produces recommendations that are not only technically optimal but also practically acceptable to the people who implement them.
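Continuing the sketch above (again with invented numbers), stakeholder preferences can enter as a utility for each action-state pair; the recommendation is then the action with the highest expected utility under the elicited state distribution:

# State distribution from the earlier sketch (illustrative values).
state_probs = {"low yield": 0.167, "average yield": 0.5, "high yield": 0.333}

# Utilities encode stakeholder preferences, not just raw profit; here
# "sell futures now" is valued partly for its downside protection.
utilities = {
    "sell futures now": {"low yield": 0.7, "average yield": 0.6, "high yield": 0.4},
    "wait for harvest": {"low yield": 0.1, "average yield": 0.5, "high yield": 0.9},
}

def best_action(probs: dict[str, float],
                utils: dict[str, dict[str, float]]) -> tuple[str, float]:
    # Return the action maximizing expected utility:
    # E[U(a)] = sum over states s of p(s) * U(a, s).
    scored = {action: sum(probs[s] * u[s] for s in probs)
              for action, u in utils.items()}
    action = max(scored, key=scored.get)
    return action, scored[action]

action, eu = best_action(state_probs, utilities)
print(f"recommended action: {action} (expected utility {eu:.2f})")

Because the utilities encode preferences such as risk aversion, the recommended action can differ from the one that would maximize expected profit alone.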

What's next for your research?

Neiswanger: We're now exploring how this framework can be extended to a broader range of real-world decision-making tasks under uncertainty, including applications in operations research (using mathematical methods to solve complex business problems), logistics and health care. One focus moving forward is improving human auditability: developing interfaces that give users clearer visibility into why an LLM makes a particular decision, and why that decision is optimal.

More information: Ollie Liu et al, DeLLMa: Decision Making Under Uncertainty with Large Language Models, arXiv (2024). DOI: 10.48550/arxiv.2402.02392

Journal information: arXiv

Provided by University of Southern California

Citation: Q&A with professor of computer science: What happens when AI faces the human problem of uncertainty? (2025, July 23) retrieved 23 July 2025 from https://techxplore.com/news/2025-07-qa-professor-science-ai-human.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
