Can AI really code? Study maps the roadblocks to autonomous software engineering

July 17, 2025

Credit: Pixabay/CC0 Public Domain

Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine's reach.

Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that realizing that future demands a hard look at present-day challenges.

Titled "Challenges and Paths Towards AI for Software Engineering," the work maps the many software-engineering tasks beyond code generation, identifies current bottlenecks, and highlights research directions to overcome them, aiming to let humans focus on high-level design while routine work is automated. The paper is available on the arXiv preprint server, and the researchers are presenting their work at the International Conference on Machine Learning (ICML 2025) in Vancouver.

"Everyone is talking about how we don't need programmers anymore, and there's all this automation now available," says Armando Solar-Lezama, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and senior author of the study.

"On the one hand, the field has made tremendous progress. We have tools that are way more powerful than any we've seen before. But there's also a long way to go toward really getting the full promise of automation that we would expect."

Solar-Lezama argues that popular narratives often shrink software engineering to "the undergrad programming part: Someone hands you a spec for a little function and you implement it, or solving LeetCode-style programming interviews."

Real practice is far broader. It includes everyday refactors that polish design, plus sweeping migrations that move millions of lines from COBOL to Java and reshape entire businesses. It requires nonstop testing and analysis—fuzzing, property-based testing, and other methods—to catch concurrency bugs, or patch zero-day flaws. And it involves the maintenance grind: documenting decade-old code, summarizing change histories for new teammates, and reviewing pull requests for style, performance, and security.
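
To make one of those techniques concrete, here is what a property-based test looks like in Python with the hypothesis library; the run-length encoder is a made-up example chosen for brevity, not code from the paper.

```python
# A minimal property-based test using the hypothesis library (pip install hypothesis).
# The run-length encoder/decoder below is an invented example; the point is the
# technique: assert a property over many generated inputs instead of a handful
# of hand-written cases.
from hypothesis import given, strategies as st

def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (char, count) pairs."""
    pairs: list[tuple[str, int]] = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    """Expand (char, count) pairs back into the original string."""
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_roundtrip(s: str) -> None:
    # The property: decoding an encoding reproduces the input, for any string.
    assert run_length_decode(run_length_encode(s)) == s
```

Instead of enumerating cases by hand, the test states an invariant and lets the framework hunt for counterexamples, which is why such methods are part of the nonstop testing the paper describes.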

Industry-scale code optimization—think re-tuning GPU kernels or the relentless, multi-layered refinements behind Chrome's V8 engine—remains stubbornly hard to evaluate. Today's headline metrics were designed for short, self-contained problems, and while multiple-choice tests still dominate natural-language research, they were never the norm in AI-for-code.

The field's de facto yardstick, SWE-Bench, simply asks a model to patch a GitHub issue: useful, but still akin to the "undergrad programming exercise" paradigm. It touches only a few hundred lines of code, risks data leakage from public repositories, and ignores other real-world contexts—AI-assisted refactors, human–AI pair programming, or performance-critical rewrites that span millions of lines. Until benchmarks expand to capture those higher-stakes scenarios, measuring progress—and thus accelerating it—will remain an open challenge.

If measurement is one obstacle, human-machine communication is another. First author Alex Gu, an MIT graduate student in electrical engineering and computer science, sees today's interaction as "a thin line of communication." When he asks a system to generate code, he often receives a large, unstructured file and even a set of unit tests, yet those tests tend to be superficial. This gap extends to the AI's ability to effectively use the wider suite of software engineering tools, from debuggers to static analyzers, that humans rely on for precise control and deeper understanding.

"I don't really have much control over what the model writes," he says. "Without a channel for the AI to expose its own confidence—'this part's correct … this part, maybe double-check'—developers risk blindly trusting hallucinated logic that compiles, but collapses in production. Another critical aspect is having the AI know when to defer to the user for clarification."

Scale compounds these difficulties. Current AI models struggle profoundly with large code bases, often spanning millions of lines. Foundation models learn from public GitHub, but "every company's code base is kind of different and unique," Gu says, making proprietary coding conventions and specification requirements fundamentally out of distribution.

The result is code that "hallucinates": it looks plausible but calls non-existent functions, violates internal style rules, fails continuous-integration pipelines, or ignores the specific conventions, helper functions, and architectural patterns of a given company.
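
As a toy illustration of how such failures surface, the sketch below uses Python's ast module to flag calls to plain function names that are neither defined in the file nor built in, roughly the kind of cheap check a continuous-integration pipeline might run. It is deliberately crude (no imports, methods, or cross-file resolution) and is not from the paper.

```python
# Toy static check: flag calls to bare names that are neither defined in this
# file nor Python built-ins -- the sort of lint that catches a hallucinated helper
# before it reaches CI. Deliberately ignores imports, methods, and other files.
import ast
import builtins

def undefined_calls(source: str) -> set[str]:
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))}
    known = defined | set(dir(builtins))
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    return called - known

generated = "def report(xs):\n    return summarize_metrics(xs)\n"
print(undefined_calls(generated))  # {'summarize_metrics'} -- helper does not exist
```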

Models also often retrieve the wrong context, pulling in code with a similar name (syntax) rather than similar functionality and logic, which is what a model actually needs in order to write the function. "Standard retrieval techniques are very easily fooled by pieces of code that are doing the same thing but look different," says Solar-Lezama.
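
A small, invented example shows the failure mode: scored by token overlap, a lookalike snippet with the same name but different behavior beats a snippet that is functionally equivalent but written differently.

```python
# Invented illustration of the retrieval failure: bag-of-tokens similarity favors
# a snippet that merely *looks* like the query over one that *behaves* like it.
import re

def token_overlap(a: str, b: str) -> float:
    ta, tb = set(re.findall(r"[A-Za-z_]+", a)), set(re.findall(r"[A-Za-z_]+", b))
    return len(ta & tb) / len(ta | tb)

query = "def clamp(value, low, high): return max(low, min(value, high))"

lookalike = "def clamp(label, low, high): return label.strip().lower()"
equivalent = "def bound(x, lo, hi):\n    if x < lo: return lo\n    return hi if x > hi else x"

print(token_overlap(query, lookalike))   # higher score: similar names, wrong behavior
print(token_overlap(query, equivalent))  # lower score: right behavior, different names
```

Real retrieval systems use embeddings rather than raw token overlap, but the failure Solar-Lezama describes is the same in kind: similarity of form, not of function.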

Since there is no silver bullet for these issues, the authors call instead for community-scale efforts: richer data that captures the process of developers writing code (for example, which code developers keep versus throw away, and how code gets refactored over time); shared evaluation suites that measure progress on refactor quality, bug-fix longevity, and migration correctness; and transparent tooling that lets models expose uncertainty and invite human steering rather than passive acceptance.

Gu frames the agenda as a "call to action" for larger open-source collaborations that no single lab could muster alone. Solar-Lezama imagines incremental advances—"research results taking bites out of each one of these challenges separately"—that feed back into commercial tools and gradually move AI from autocomplete sidekick toward genuine engineering partner.

"Why does any of this matter? Software already underpins finance, transportation, health care, and the minutiae of daily life, and the human effort required to build and maintain it safely is becoming a bottleneck. An AI that can shoulder the grunt work—and do so without introducing hidden failures—would free developers to focus on creativity, strategy, and ethics" says Gu.

"But that future depends on acknowledging that code completion is the easy part; the hard part is everything else. Our goal isn't to replace programmers. It's to amplify them. When AI can tackle the tedious and the terrifying, human engineers can finally spend their time on what only humans can do."

"With so many new works emerging in AI for coding, and the community often chasing the latest trends, it can be hard to step back and reflect on which problems are most important to tackle," says Baptiste Rozière, an AI scientist at Mistral AI, who wasn't involved in the paper. "I enjoyed reading this paper because it offers a clear overview of the key tasks and challenges in AI for software engineering. It also outlines promising directions for future research in the field."

More information: Alex Gu et al, Challenges and Paths Towards AI for Software Engineering, arXiv (2025). DOI: 10.48550/arxiv.2503.22625

Journal information: arXiv

Provided by Massachusetts Institute of Technology

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation: Can AI really code? Study maps the roadblocks to autonomous software engineering (2025, July 17), retrieved 18 July 2025 from https://techxplore.com/news/2025-07-ai-code-roadblocks-autonomous-software.html
