June 23, 2025
Here's why the public needs to challenge the 'good AI' myth pushed by tech companies

While there has been much negative discussion about AI, including fears that it will take over the world, the public is also being bombarded with positive messages about the technology and what it can do.
This "good AI" myth is a key tool used by tech companies to promote their products. Yet there's evidence that consumers are wary of the presence of AI in some products. This means that positive promotion of AI may be putting unwanted pressure on people to accept the use of AI in their lives.
AI is becoming so ubiquitous that people may be losing their ability to say no to using it. It's in smartphones, smart TVs and smart speakers, and in virtual assistants like Alexa and Siri. We're constantly told that our privacy will be protected. But given the personal nature of the data that AI has access to in these devices, can we afford to trust such assurances?
Some politicians also propagate the "good AI" promise with immense conviction, mirroring the messages coming from tech companies.
My current research, part of which is presented in a new book, The Myth of Good AI, shows that the data feeding our AI systems is biased: it often over-represents privileged sections of the population and mainstream attitudes.
This means that any AI products that don't include data from marginalized people, or minorities, might discriminate against them. This explains why AI systems continue to be riddled with racism, ageism and various forms of gender discrimination, for instance.
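To make this mechanism concrete, here is a minimal, purely illustrative simulation; the group sizes, score distributions and selection rule are my own assumptions, not from the book. It shows how a rule calibrated on a pool dominated by a well-represented group can systematically reject qualified members of an under-represented group whose data is measured with bias.

```python
import random

random.seed(42)

def proxy_scores(n, shift):
    """Each person has a latent ability ~ N(0, 1); the recorded proxy
    score is ability + shift + noise. A negative shift stands in for
    systematic measurement bias against an under-represented group."""
    people = []
    for _ in range(n):
        ability = random.gauss(0, 1)
        proxy = ability + shift + random.gauss(0, 0.5)
        people.append((ability, proxy))
    return people

majority = proxy_scores(10_000, shift=0.0)  # well represented, unbiased proxy
minority = proxy_scores(500, shift=-0.5)    # under-represented, biased proxy

# "Train" a selection rule on the pooled data: accept the top half by
# proxy score. Because the pool is dominated by the majority group,
# the threshold is effectively calibrated to them.
pool = sorted(score for _, score in majority + minority)
threshold = pool[len(pool) // 2]

def false_rejection_rate(group):
    # Among people who are genuinely qualified (ability > 0), what
    # fraction does the proxy-score rule wrongly reject?
    qualified = [(a, s) for a, s in group if a > 0]
    rejected = sum(1 for _, s in qualified if s <= threshold)
    return rejected / len(qualified)

maj_rate = false_rejection_rate(majority)
min_rate = false_rejection_rate(minority)
print(f"majority false-rejection rate: {maj_rate:.2f}")
print(f"minority false-rejection rate: {min_rate:.2f}")
```

Running this, the minority group's false-rejection rate comes out higher than the majority's, even though both groups have identical underlying ability; the gap is produced entirely by the biased proxy data and the majority-calibrated threshold.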
The speed with which this technology is impinging on our everyday lives makes it very hard to properly assess the consequences. And a more critical approach to how AI actually works does not make for good marketing for the tech companies.
Power structures
Positive ideas about AI and its abilities are currently dominating all aspects of AI innovation. This is partly determined by state interests and by the profit margins of the tech companies.
These are tied into the power structures held up by tech multi-billionaires, and, in some places, their influence on governments. The relationship between Donald Trump and Elon Musk, despite its recent souring, is a vivid manifestation of this.
And so, the public is at the receiving end of a distinctly hierarchical top-down system, from the big tech companies and their governmental enablers to users. In this way, we are made to consume, with little to no influence over how the technology is used. This positive AI ideology is therefore primarily about money and power.
As it stands, there is no global movement with a unifying manifesto that would bring together societies to leverage AI for the benefit of communities of people, or to safeguard our right to privacy. This "right to be left alone," recognized in US constitutional law and codified in international human rights law, is a central pillar of my argument. It is also something that is almost entirely absent from the assurances about AI made by the big tech companies.
Yet some of the risks of the technology are already evident. A database compiling cases in which lawyers around the world used AI has identified 157 cases in which false AI-generated information (so-called hallucinations) skewed legal rulings.
Some forms of AI can also be manipulated to blackmail and extort, or create blueprints for murder and terrorism.
Tech companies need to train their algorithms on data that represents everyone, not just the privileged, in order to reduce discrimination. That way, the public would not be forced to give in to the consensus that AI will solve many of our problems without proper supervision by society. The human ability to think creatively, ethically and intuitively may be the most fundamental faultline between human and machine.
It's up to ordinary people to question the good AI myth. A critical approach to AI should contribute to the creation of more socially relevant and responsible technology, and to scrutiny of its darker uses, such as the torture scenarios the book also discusses.
Some predict that AI systems will outdo us in every task within a decade or so. In the meantime, there needs to be resistance to this attack on our right to privacy, and more awareness of how AI actually works.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Here's why the public needs to challenge the 'good AI' myth pushed by tech companies (2025, June 23) retrieved 23 June 2025 from https://techxplore.com/news/2025-06-good-ai-myth-tech-companies.html