September 18, 2025
The AI model that teaches itself to think through problems, no humans required
Paul Arnold
contributing writer
Lisa Lock
scientific editor
Robert Egan
associate editor

Artificial intelligence is getting smarter every day, but it still has its limits. One of the biggest challenges has been teaching advanced AI models to reason, that is, to solve problems step by step. But in a new paper published in the journal Nature, DeepSeek AI, a Chinese artificial intelligence company, reports that it was able to teach its R1 model to reason on its own, without human input.
When many of us try to solve a problem, we typically don't get the answer straight away. We follow a methodical process that may involve gathering information and taking notes until we reach a solution. Traditionally, training AI models to reason has meant copying this approach: a long, drawn-out process in which people show a model countless worked examples of how to solve a problem. It also means the AI is only as good as the examples it is given, and it can pick up human biases along the way.
Instead of showing the R1 model every step, researchers at DeepSeek AI used a technique called reinforcement learning. This trial-and-error approach, using rewards for correct answers, encouraged the model to reason for itself.
"Rather than explicitly teaching the model how to solve a problem, we simply provide it with the right incentives and it autonomously develops advanced problem-solving strategies," wrote the researchers in their paper.
DeepSeek's R1 model was trained on difficult math, coding and science problems. The only reward it received was a signal that its final answer was correct. During its training, the researchers saw it develop skills such as checking its own work and exploring different strategies to find a solution. It even started to use words like "wait" as it reflected on its own thinking process. If a path led to the right answer, that strategy was reinforced. If it was wrong, the model learned not to repeat it. There was some human intervention, but only to polish R1's skills later in the process.
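The key idea described above, scoring only the final answer and leaving the reasoning steps unscored, can be illustrated with a small sketch. This is not DeepSeek's actual training code; the `extract_final_answer` helper and the "Answer:" marker are hypothetical stand-ins for however a given setup delimits the model's final response.

```python
def extract_final_answer(completion: str) -> str:
    """Pull the text after the last 'Answer:' marker (hypothetical format)."""
    marker = "Answer:"
    idx = completion.rfind(marker)
    return completion[idx + len(marker):].strip() if idx != -1 else ""

def outcome_reward(completion: str, ground_truth: str) -> float:
    """Reward 1.0 only if the final answer matches the ground truth.

    The intermediate reasoning text contributes nothing to the score,
    so the model is free to develop its own problem-solving strategies.
    """
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

# A completion full of self-checking ("wait...") earns the same reward
# as a terse one, as long as the final answer is correct.
print(outcome_reward("Hmm, let me check that again... wait. Answer: 42", "42"))
print(outcome_reward("Answer: 41", "42"))
```

Because the reward depends only on the outcome, any strategy that reliably leads to correct answers, such as double-checking work or trying alternative approaches, gets reinforced without ever being demonstrated by a human.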

The results were impressive. R1 performed better on math, coding and science tasks than older models trained with human guidance. One of the most noteworthy results was that it achieved an accuracy of 86.7% on the American Invitational Mathematics Examination (AIME) 2024, a tough math competition for the smartest high school students.
Even with these outstanding results, the researchers admit their model has some limitations to work through. For example, it sometimes mixed languages when given a non-English prompt and made some simple problems more complicated than they needed to be. But once these issues are ironed out, the researchers believe that an AI model that can reason for itself will lead to a new era of more capable and autonomous models.
Written by Paul Arnold, edited by Lisa Lock, and fact-checked and reviewed by Robert Egan.
More information: Daya Guo et al, DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning, Nature (2025). DOI: 10.1038/s41586-025-09422-z
Journal information: Nature
© 2025 Science X Network
Citation: The AI model that teaches itself to think through problems, no humans required (2025, September 18) retrieved 18 September 2025 from https://techxplore.com/news/2025-09-ai-problems-humans-required.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.