Five ways to make AI more trustworthy

October 22, 2025


Stephanie Baum, scientific editor
Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

Image: self-driving car. Credit: Unsplash/CC0 Public Domain

Self-driving taxis are sweeping the country and will likely start service in Colorado in the coming months. How many of us will be lining up to take a ride? That depends on our level of trust, says Amir Behzadan, a professor in the Department of Civil, Environmental and Architectural Engineering, and a fellow in the Institute of Behavioral Science (IBS) at CU Boulder.

He and his team of researchers in the Connected Informatics and Built Environment Research (CIBER) Lab at CU Boulder are unearthing new insights into how the artificial intelligence (AI) technology we might encounter in daily life can earn our confidence. They've created a framework for developing trustworthy AI tools that benefit people and society.

In a new paper in the journal AI and Ethics, Behzadan and his Ph.D. student Armita Dabiri drew on that framework to create a conceptual AI tool that incorporates the elements of trustworthiness.

"As a human, when you make yourself vulnerable to potential harm, assuming others have positive intentions, you're trusting them," said Behzadan. "And now you can bring that concept from human–human relationships to human–technology relationships."

How trust forms

Behzadan studies the building blocks of human trust in AI systems used in the built environment, from self-driving cars and smart home security systems to mobile public transportation apps and systems that help people collaborate on group projects. He says trust has a critical impact on whether people will adopt and rely on these systems at all.

Trust is deeply embedded in human civilization, according to Behzadan. Since ancient times, trust has helped people cooperate, share knowledge and resources, form communal bonds and divvy up labor. Early humans began forming communities and trusting those within their inner circles.

Mistrust arose as a survival instinct, making people more cautious when interacting with people outside of their group. Over time, cross-group trade encouraged different groups to interact and become interdependent, but it didn't eliminate mistrust.

"We can see echoes of this trust-mistrust dynamic in modern attitudes toward AI," says Behzadan, "especially if it's developed by corporations, governments or others we might consider 'outsiders'."

So what does trustworthy AI look like? Here are five main takeaways from Behzadan's framework.

1. It knows its users

Many factors affect whether—and how much—we trust new AI technology. Each of us has our own individual inclination toward trust, which is influenced by our preferences, value system, cultural beliefs, and even the way our brains are wired.

"Our understanding of trust is really different from one person to the next," said Behzadan. "Even if you have a very trustworthy system or person, our reaction to that system or person can be very different. You may trust them, and I may not."

He said it's important for developers to consider who the users are of an AI tool. What social or cultural norms do they follow? What might their preferences be? How technologically literate are they?

For instance, Amazon Alexa, Google Assistant and other voice assistants offer simpler language, larger text displays on devices and a longer response time for older adults and people who aren't as technologically savvy, Behzadan said.

2. It's reliable, ethical and transparent

Technical trustworthiness generally refers to how well an AI tool works, how safe and secure it is, and how easy it is for users to understand how it works and how their data is used.

An optimally trustworthy tool must do its job accurately and consistently, Behzadan said. If it does fail, it should not harm people, property or the environment. It must also provide security against unauthorized access, protect users' privacy and be able to adapt and keep working amid unexpected changes. It should also be free from harmful bias and should not discriminate between different users.

Transparency is also key. Behzadan says some AI technologies, such as sophisticated tools used for credit scoring or loan approval, operate like a "black box" that doesn't allow us to see how our data is used or where it goes once it's in the system. If the system could share how it's using data and users could see how it makes decisions, he said, more people might be willing to share their data.
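To make the contrast with a "black box" concrete, here is a minimal, hypothetical Python sketch of a scoring function that returns the per-input contributions behind its decision rather than a bare verdict. The feature names, weights, and threshold are invented for illustration; real credit-scoring systems are far more complex and regulated.

```python
# Hypothetical, simplified loan-scoring sketch: instead of a "black box"
# verdict, it also returns the per-feature contributions behind the decision.

def score_applicant(features, weights, threshold=0.5):
    """Return (approved, contributions) so users can see *why*."""
    # Contribution of each input to the final score.
    contributions = {name: features[name] * weights[name] for name in weights}
    total = sum(contributions.values())
    return total >= threshold, contributions

# Invented example inputs (normalized to the 0..1 range for illustration only).
features = {"income": 0.8, "credit_history": 0.6, "debt_ratio": 0.3}
weights = {"income": 0.4, "credit_history": 0.4, "debt_ratio": 0.2}

approved, why = score_applicant(features, weights)
print(approved)  # whether the applicant clears the threshold
for name, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.2f}")  # transparent breakdown, largest first
```

Surfacing the breakdown alongside the verdict is one simple form of the transparency Behzadan describes: users can see which inputs mattered most, and they have something concrete to challenge.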

In many settings, like medical diagnosis, the most trustworthy AI tools should complement human expertise and be transparent about their reasoning with expert clinicians, according to Behzadan.

AI developers should not only try to develop trustworthy, ethical tools, but also find ways to measure and improve their tools' trustworthiness once they are launched for the intended users.

3. It takes context into account

There are countless uses for AI tools, but a particular tool should be sensitive to the context of the problem it's trying to solve.

In the newest study, Behzadan and co-researcher Dabiri created a hypothetical scenario where a project team of engineers, urban planners, historic preservationists and government officials had been tasked with repairing and maintaining a historical building in downtown Denver. Such work can be complex and involve competing priorities, like cost effectiveness, energy savings, historical integrity and safety.

The researchers proposed a conceptual AI assistive tool called PreservAI that could be designed to balance competing interests, incorporate stakeholder input, analyze different outcomes and trade-offs, and collaborate helpfully with humans rather than replacing their expertise.

Ideally, AI tools should incorporate as much contextual information as possible so they can work reliably.

4. It's easy to use and asks users how it's doing

The AI tool should not only do its job efficiently, but also provide a good user experience, keeping errors to a minimum, engaging users and building in ways to address potential frustrations, Behzadan said.

Another key ingredient for building trust? Actually allowing people to use AI systems and challenge AI outcomes.

"Even if you have the most trustworthy system, if you don't let people interact with it, they are not going to trust it. If very few people have really tested it, you can't expect an entire society to trust it and use it," he said.

Finally, stakeholders should be able to provide feedback on how well the tool is working. That feedback can be helpful in improving the tool and making it more trustworthy for future users.
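As a rough illustration of that feedback loop, the sketch below aggregates hypothetical stakeholder ratings into a single running score a team could monitor between releases. The class name, the 1-to-5 scale, and the rescaling rule are all invented for illustration, not taken from Behzadan's framework.

```python
# Hypothetical sketch: aggregate stakeholder feedback (1-5 star ratings)
# into a running trust score a development team could track over time.

class FeedbackTracker:
    def __init__(self):
        self.ratings = []

    def add_rating(self, stars):
        if not 1 <= stars <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings.append(stars)

    def trust_score(self):
        """Average rating rescaled to 0..1; None until feedback exists."""
        if not self.ratings:
            return None
        return (sum(self.ratings) / len(self.ratings) - 1) / 4

tracker = FeedbackTracker()
for stars in (5, 4, 2, 5):  # invented sample feedback
    tracker.add_rating(stars)
print(tracker.trust_score())  # 0.75 for this sample
```

A dip in such a metric after a release would be a signal to investigate what eroded users' confidence, in the spirit of measuring and improving trustworthiness after launch.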

5. When trust is lost, it adapts to rebuild it

Our trust in new technology can change over time. One person might generally trust new technology and be excited to ride in a self-driving taxi, but if they read news stories about the taxis getting into crashes, they might start to lose trust.

That trust can later be rebuilt, said Behzadan, although users may remain skeptical of the tool.

For instance, he said, the "Tay" chatbot by Microsoft failed within hours of its launch in 2016 because it picked up harmful language from social media and began to post offensive tweets. The incident caused public outrage. But later that same year, Microsoft released a new chatbot, "Zo," with stronger content filtering and other guardrails. Although some users criticized Zo as a "censored" chatbot, its improved design helped more people trust it.

There's no way to completely eliminate the risk that comes with trusting AI, Behzadan said. AI systems rely on people being willing to share data: the less data a system has, the less reliable it is. But there is always a risk of data being misused, or of the AI not working the way it's supposed to.

When we're willing to use AI systems and share our data with them, though, the systems become better at their jobs and more trustworthy. And while no system is perfect, Behzadan feels the benefits outweigh the downsides.

"When people trust AI systems enough to share their data and engage with them meaningfully, those systems can improve significantly, becoming more accurate, fair, and useful," he said.

"Trust is not just a benefit to the technology; it is a pathway for people to gain more personalized and effective support from AI in return."

More information: Amir Behzadan et al, Factors influencing human trust in intelligent built environment systems, AI and Ethics (2025). DOI: 10.1007/s43681-025-00813-6

Provided by University of Colorado at Boulder

Citation: Five ways to make AI more trustworthy (2025, October 22), retrieved 22 October 2025 from https://techxplore.com/news/2025-10-ways-ai-trustworthy.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
