March 24, 2025
Apple's missteps highlight risks of AI generating automated headlines, researcher says

"Luigi Mangione shoots himself," learn the BBC Information headline.
Besides Mangione, the person charged with murdering UnitedHealthcare chief government Brian Thompson, had achieved no such factor. And neither had the BBC reported that—however but that was the headline that Apple Intelligence exhibited to its customers as a part of a notifications abstract.
It was considered one of many high-profile errors made by the synthetic intelligence-powered software program that led to the tech large suspending the notifications characteristic in Apple Intelligence in relation to information and leisure classes.
Anees Baqir says the inadvertent spread of misinformation by such an AI source "posed a significant risk by eroding public trust."
The assistant professor of data science at Northeastern University in London, who researches misinformation online, says errors like those made by Apple Intelligence were likely to "create confusion" and could have led news consumers to doubt media brands they previously trusted.
"Imagine what this could do to people's opinions if there is misinformation-related content coming from a very high-profile news source that is usually considered a reliable news source," Baqir said. "That could be really dangerous, in my opinion."
The episode with Apple Intelligence sparked a wider debate in Britain about whether publicly available, mainstream generative AI tools are capable of accurately summarizing and understanding news articles.
BBC News chief executive Deborah Turness said that, while AI brings "endless opportunities," the companies developing the tools are currently "playing with fire."
There are reasons why generative AI like Apple Intelligence may not always get it right when handling news stories, says Mariana Macedo, a data scientist at Northeastern.
When developing generative AI, the "processes are not deterministic, so they have some stochasticity," says the London-based assistant professor, meaning that there can be randomness in the output.
"Things can be written in a way that you can't predict," she explains. "It's like when you bring up a child. When you educate a child, you teach them with values, with rules, with instructions, and then you say, 'Now live your life.'
"The child knows more or less what can be right or wrong, but the child doesn't know everything. The child doesn't have all the experience or the knowledge to react and create new actions in a perfect way. It's the same with AI and algorithms."
Macedo says the challenge with news and AI learning is that news is mostly about things that have just happened; there is little to no past context to help the software understand the reporting it is being asked to summarize.
"When you talk about news, you are talking about things that are novel," the researcher continues. "You are not talking about things that we have known about for a long period.
"AI is very good at things that are well established in society. AI doesn't know what to do when it comes to conflicting or new things. So whenever the AI is not trained with enough information, it is going to get it even more wrong."
To ensure accuracy, Macedo argues that developers need to "find a way of automatically double-checking that information" before it is published.
Allowing AI to learn from news articles by being trained on them could also mean the models are "more likely to improve" their accuracy, Macedo says.
The BBC currently blocks developers from using its content to train generative AI models. But other U.K. news outlets have moved to collaborate, with a partnership deal between the Financial Times and OpenAI allowing ChatGPT users to see select attributed summaries, quotes and links.
Baqir suggests that collaboration between tech companies, media organizations and communications regulators could be the best way to confront the problem of AI news misinformation.
"I think they all need to come together," he says. "Only then can we come up with a way that can help us mitigate these impacts. There cannot be one single solution."
Provided by Northeastern University
This story is republished courtesy of Northeastern Global News news.northeastern.edu.
Citation: Apple's missteps highlight risks of AI generating automated headlines, researcher says (2025, March 24) retrieved 24 March 2025 from https://techxplore.com/news/2025-03-apple-missteps-highlight-ai-automated.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.