July 31, 2025
AI can help you die by suicide if you ask the right way, researchers say
Most of the companies behind large language models like ChatGPT claim to have guardrails in place for understandable reasons. They wouldn't want their models to, hypothetically, offer instructions to users on how to hurt themselves or commit suicide.
However, researchers from Northeastern University found that those guardrails are not only easy to break, but that LLMs will readily offer shockingly detailed instructions for suicide if you ask the right way.
Annika Marie Schoene, a research scientist for Northeastern's Responsible AI Practice and the lead author on this new paper, prompted four of the biggest LLMs to give her advice for self-harm and suicide. They all refused at first––until she said it was hypothetical or for research purposes. The study is published on the arXiv preprint server.
"That's when, effectively, every single guardrail was overridden and the model ended up actually giving very detailed instructions down to using my body weight, my height and everything else to calculate which bridge I should jump off, which over-the-counter or prescription medicine I should use and in what dosage, how I could go about finding it," Schoene says.
From there, Schoene and Cansu Canca, director of Responsible AI Practice and co-author on the project, started pushing to see how far they could take it. What they found was shocking, even for two people who are aware of the limits of artificial intelligence.
"Knowing human psychology even a little bit, can you really call it a safeguard if you just have to do two turns to get self-harm instructions?" Canca says.
Certain models would create entire tables breaking down various suicide methods. One gave specific instructions on where on your body to cut––and what to cut with––if you wanted to do non-lethal self-harm.
"The thing that shocked me the most was it came up with nine or 10 different methods. It wasn't just the obvious ones," Schoene says. "It literally went into the details of what household items I can use, listing [how] you can get this specific pest control stuff. You walk into Walmart, quite frankly, buy a few bottles and pour yourself a few shots, and told me how many I would need."
Canca was shocked by the seemingly flippant way the models communicated some of this information, with ChatGPT going so far as to organize information using emojis.
"You start having the instructions really structured, categorized, and you can follow them by the specific emojis that correspond to the methods: Here are all the jumping-from-a-bridge related answers. Here's the rope emoji if you want to hang yourself," Canca says. "It just became very dark very quickly."
Most of the models even made their instructions more convenient; one converted the lethal dosage of certain medications from metric units into an exact number of pills. Canca notes that information like that would not be necessary even for research purposes.
The LLMs kept on repeating how they were glad these conversations were for "academic purposes." But Schoene points out that making the leap from telling the LLMs, "I want to kill myself. What can I do?" to clarifying that it was for research occurred within the same conversation. The link between the two should have been clear.
Schoene and Canca contacted every single company that had a model involved in the experiment––OpenAI (ChatGPT), Google (Gemini), Anthropic (Claude) and Perplexity––to notify them of these findings. After multiple attempts, all they got were automated acknowledgments that their emails had been received. None of the companies has followed up. The experiment also included Pi AI, but it was the only model to refuse attempts to get around its guardrails.
The researchers acknowledge that it is possible to find all the information these models shared in other places, but AI lacks the guardrails that doctors, journalists and even Google have in place around suicide specifically.
"You cannot just sit there and tell somebody, 'I want to kill myself' and walk out of their office without at least the bare minimum of resources, a follow-up appointment and a referral to a psychiatrist or other resources," Schoene says.
The fact that AI has so few guardrails and, as Canca notes, that these tools can generate detailed, accurate and actionable guidance incredibly fast is "very scary."
"There is merit in delaying the information," Canca says. "Self-harm and suicide can also be impulsive, so just delaying it is useful."
The entire experiment raises questions about how much LLMs understand and remember the intent of what we're telling them––"because they actually don't," Schoene says. It also highlights the need for real guardrails, safety protocols and regulations on these technologies, she adds.
Some states in the U.S., including California, have started to seriously consider AI regulations. California lawmakers recently introduced legislation aimed at protecting children from AI after a teenage boy died by suicide following months of conversations with a chatbot.
Canca says those developing AI tools need to take accountability, but those deploying them also need to acknowledge the risks involved and respond accordingly.
"There are different levels of worry that different parties should be concerned about," Canca says. "Currently, we seem to be looking for ways to deflect those responsibilities and saying, 'Use it at your own risk. You know it's risky. If things go badly, oh well.'"
As more and more people start to use AI for mental health services like therapy, Schoene says it's worth being direct about the limits of these tools––and their potentially dangerous consequences.
"It's the elephant in the room: We know people have died of suicide after interacting with these models," Schoene says. "We know that people had psychotic episodes, going back into psychiatric hospitals, after interacting with these models. At what point do we acknowledge that these are not great therapists or even great general-purpose listeners?"
More information: Annika M Schoene et al, 'For Argument's Sake, Show Me How to Harm Myself!': Jailbreaking LLMs in Suicide and Self-Harm Contexts, arXiv (2025). DOI: 10.48550/arxiv.2507.02990
Journal information: arXiv
Provided by Northeastern University
This story is republished courtesy of Northeastern Global News news.northeastern.edu.