June 12, 2025
Can AI help you identify a scam? An expert explains
Imagine that you've received an email asking you to transfer money to a bank account. Some of the details look right, but how can you be sure the message is legitimate?
Some people might ask an AI tool like Gemini or ChatGPT to sniff out signs of fraud. But are large language models like these good at detecting fraud? And what prompt language will get the most accurate results?
Testing large language models for fraud detection
Consumers need clear guidance about how to use AI to protect themselves from fraud, says Jessica Staddon, a Northeastern University professor of the practice in computer science. Together with two graduate student researchers, Staddon is testing how well Gemini and ChatGPT can spot fraud, with the goal of building a corpus of reliable prompt techniques.
"This idea that LLMs could help with this is very popular right now," Staddon says, "but people really have not been given a lot of support in terms of knowing how to interact with them."
Real complaints fuel prompt-building research
Using language pulled directly from consumer fraud reports filed with the Consumer Financial Protection Bureau (CFPB), researchers are building prompts to see whether LLMs can distinguish between scams (when a user is being tricked), frauds (when a user's money is taken without their consent), and legitimate communications.
First they trained Gemini and ChatGPT-4 to understand what a scam is, Staddon said. Then, sifting through 300 complaints in the CFPB database (192 of which were labeled as scams), they used complaint language to build prompts describing various scenarios.
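The researchers' actual prompts aren't reproduced in this article, but the general pattern is straightforward to sketch. The snippet below is a minimal illustration using the OpenAI Python client; the model name, instruction wording, and label set are assumptions for demonstration, not the study's materials:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    INSTRUCTIONS = (
        "You will be shown a consumer's description of a financial "
        "interaction. Classify it as 'scam' (the consumer was tricked into "
        "acting), 'fraud' (money was taken without the consumer's consent), "
        "or 'legitimate'. Reply with the single label only."
    )

    def classify_complaint(complaint_text: str) -> str:
        """Ask the model for a one-word label for a complaint narrative."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; the study used ChatGPT-4 and Gemini
            messages=[
                {"role": "system", "content": INSTRUCTIONS},
                {"role": "user", "content": complaint_text},
            ],
        )
        return response.choices[0].message.content.strip()

    # Scenario paraphrased from the kind of language found in CFPB reports:
    print(classify_complaint(
        "A caller claiming to be from my bank told me to move my money "
        "to a 'safe' account to protect it."
    ))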
Preliminary results: Some promise, some pitfalls
Staddon, along with co-investigators Isha Chadalavada and Tianhui Huang, shared their ongoing research in May at the Workshop on Technology and Consumer Protection in San Francisco.
Initial findings indicate that LLMs need more training to be effective. In one case, the tools detected a scam even when typical scam details weren't included, basing their assessment on the way the customer was treated by their bank.
The tools also showed some bias in favor of named organizations. Of four complaints about a credit repair company that the CFPB has fined for deceptive advertising, the LLMs flagged only two as potential scams. Overall, the researchers found that LLMs perform better when company names are not mentioned at all.
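If that finding generalizes, one practical takeaway is to strip company names from a message before asking a model about it. A minimal, hypothetical sketch (the organization list and placeholder text are illustrative, not from the study):

    import re

    # Illustrative only: organizations to redact before prompting, since
    # the researchers found models judge scenarios more reliably when
    # company names are absent.
    KNOWN_ORGS = ["Acme Credit Repair", "Example Bank"]

    def redact_org_names(text: str) -> str:
        """Replace known organization names with a neutral placeholder."""
        for org in KNOWN_ORGS:
            text = re.sub(re.escape(org), "a company", text, flags=re.IGNORECASE)
        return text

    complaint = "Acme Credit Repair promised to erase my debt for an upfront fee."
    print(redact_org_names(complaint))
    # -> "a company promised to erase my debt for an upfront fee."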
Why effective prompting matters now more than ever
The need for clear guidance about crafting effective prompts has never been greater, Staddon said. Complaints of potential fraud to the CFPB more than doubled from 2020 to 2023, with 2.6 million reports filed in 2023 alone. Most relate to scams involving bank account and credit card transactions.
Researchers chose to study free LLM tools because they are so heavily used by potential victims of fraud.
"One of the clearest drivers of scam susceptibility is social isolation," Staddon said, "when folks don't have someone to turn to and say, 'Hey, I got this strange text. Does this look right?'"
Training AI with the language scammers use
Building prompts with the language that consumers use when reporting fraud was also intentional, she said. Scammers construct their communications using specific language and tend to repeat their pitches, modifying them based on their rates of success. One example, Staddon said, is "the guy on the phone told me to move my money to protect it."
Using the exact wording of a suspicious communication as an LLM prompt, she said, helps the tool recognize these repeated patterns and become more effective at identifying fraud.
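In practice, that means pasting the suspicious message verbatim rather than summarizing it. A minimal sketch using Google's generativeai Python client, where the model name and prompt framing are illustrative assumptions:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key
    model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

    # Paste the suspicious message verbatim: exact wording preserves the
    # repeated phrasings that scammers reuse across pitches.
    suspicious_text = "The guy on the phone told me to move my money to protect it."
    prompt = (
        "Does the following message show signs of a scam? Answer yes or no, "
        "then explain briefly.\n\n" + suspicious_text
    )
    print(model.generate_content(prompt).text)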
"It's an arms race," she said. "They're always developing new techniques."
Provided by Northeastern University
This story is republished courtesy of Northeastern Global News (news.northeastern.edu).