Conversations between LLMs could automate the creation of exploits, study shows

July 19, 2025

By Ingrid Fadelli, contributing writer. Edited by Gaby Clark, scientific editor, and Andrew Zinin, lead editor.

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

High-level application architecture, consisting of multiple interconnected modules that work together to automate vulnerability analysis and exploit generation. Credit: Caturano et al. (2025), Elsevier.

As computers and software become increasingly sophisticated, hackers need to rapidly adapt to the latest developments and devise new strategies to plan and execute cyberattacks. One common strategy to maliciously infiltrate computer systems is known as software exploitation.

As its name suggests, this strategy involves exploiting bugs, vulnerabilities or flaws in software to execute unauthorized actions. These actions include gaining access to a user's personal accounts or computer, remotely executing malware or specific commands, stealing or modifying a user's data, or crashing a program or system.

Understanding how hackers devise potential exploits and plan their attacks is of the utmost importance, as it can ultimately inform the development of effective security measures against them. Until now, creating exploits has primarily been possible for individuals with extensive knowledge of programming, the protocols governing the exchange of data between devices or systems, and operating systems.

A recent paper published in Computer Networks, however, shows that this might no longer be the case. Exploits could also be automatically generated by leveraging large language models (LLMs), such as the model underlying the well-known conversational platform ChatGPT. In fact, the authors of the paper were able to automate the generation of exploits via a carefully prompted conversation between ChatGPT and Llama 2, the open-source LLM developed by Meta.

"We work in the field of cybersecurity, with an offensive approach," Simon Pietro Romano, co-senior author of the paper, told Tech Xplore. "We were interested in understanding how far we could go with leveraging LLMs to facilitate penetration testing activities."

As part of their recent study, Romano and his colleagues initiated a conversation between ChatGPT and Llama 2 aimed at generating software exploits. By carefully engineering the prompts they fed to the two models, they ensured that the models took on different roles and completed five distinct steps known to support the creation of exploits.

Iterative AI-driven conversation between the two LLMs, culminating in the generation of a valid exploit for the vulnerable code under attack. Credit: Caturano et al. (2025), Elsevier.

These steps included analyzing a vulnerable program, identifying possible exploits, planning an attack based on those exploits, understanding the behavior of the targeted hardware and, ultimately, generating the actual exploit code.

"We let two different LLMs interoperate in order to get through all of the steps involved in the process of crafting a valid exploit for a vulnerable program," explained Romano. "One of the two LLMs gathers 'contextual' information about the vulnerable program and its run-time configuration. It then asks the other LLM to craft a working exploit. In a nutshell, the former LLM is good at asking questions. The latter is good at writing (exploit) code."

So far, the researchers have only tested their LLM-based exploit generation method in an initial experiment. Nonetheless, they found that it ultimately produced fully functional code for a buffer overflow exploit, an attack in which more data is written to a memory buffer than it can hold, overwriting adjacent memory and altering the behavior of the targeted program.
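For readers unfamiliar with the attack class, the snippet below sketches the textbook layout of a stack-based buffer overflow payload: filler bytes up to the saved return address, followed by an attacker-chosen address. The offset and address are made-up placeholders for illustration only, not values from the exploit produced in the study.

```python
# Schematic layout of a classic stack-based buffer overflow payload.
# The offset and target address are hypothetical placeholders; in practice
# they come from analyzing the specific vulnerable binary.
import struct

OFFSET_TO_RETURN_ADDRESS = 72      # hypothetical distance to the saved return address, in bytes
TARGET_ADDRESS = 0x00401196        # hypothetical address the attacker wants to jump to

padding = b"A" * OFFSET_TO_RETURN_ADDRESS      # fills the buffer and adjacent stack data
overwrite = struct.pack("<Q", TARGET_ADDRESS)  # little-endian 64-bit address
payload = padding + overwrite

print(f"payload length: {len(payload)} bytes")
```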

"This is a preliminary study, yet it clearly proves the feasibility of the approach," said Romano. "The implications concern the possibility of arriving at fully automated Penetration Testing and Vulnerability Assessment (VAPT)."

The recent study by Romano and his colleagues raises important questions about the risks of LLMs, as it shows how hackers could use them to automate the generation of exploits. In their next studies, the researchers plan to continue investigating the effectiveness of the exploit generation strategy they devised to inform the future development of LLMs, as well as the advancement of cybersecurity measures.

"We are now exploring further avenues of research in the same field of application," added Romano. "Namely, we feel like the natural prosecution of our research falls in the field of the so-called 'agentic' approach, with minimal human supervision."


More information: A chit-chat between Llama 2 and ChatGPT for the automated creation of exploits, Computer Networks (2025). DOI: 10.1016/j.comnet.2025.111501

© 2025 Science X Network

Citation: Conversations between LLMs could automate the creation of exploits, study shows (2025, July 19), retrieved 19 July 2025 from https://techxplore.com/news/2025-07-conversations-llms-automate-creation-exploits.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

Explore further: DarkMind: A new backdoor attack that leverages the reasoning capabilities of LLMs
