August 28, 2025
The GIST Study examines how AI can ease workloads for frontline cybersecurity teams

CSIRO, Australia's national science agency, has analyzed data from a 10-month trial conducted by global cybersecurity firm eSentire, exploring how large language models (LLMs) such as ChatGPT-4 can help cybersecurity analysts detect and stop threats while reducing fatigue. The preprint results are available on the arXiv server.
The anonymized data was collected at eSentire's Security Operations Centers (SOCs) in Ireland and Canada, where analysts identify, investigate and respond to cyberattacks.
During the trial, 45 cybersecurity analysts asked ChatGPT-4 more than 3,000 questions, mostly for routine, low-risk tasks such as interpreting technical data, editing text and analyzing malware code.
Dr. Mohan Baruwal Chhetri, Principal Research Scientist at CSIRO's Data61, said the study shows AI can be embedded into real workflows to support, not replace, human expertise.
"ChatGPT-4 supported analysts with tasks like interpreting alerts, polishing reports, or analyzing code, while leaving judgment calls to the human expert," Dr. Baruwal Chhetri said.
"This collaborative approach adapts to the user's needs, builds trust, and frees up time for higher-value tasks."
The study was conducted under CSIRO's Collaborative Intelligence (CINTEL) program, which explores how human-AI collaboration can enhance performance and well-being across domains including cybersecurity, where analyst fatigue is a growing challenge.
SOC teams face rising volumes of alerts, many of them false positives, leading to missed threats, reduced productivity, and potential burnout.
Dr. Baruwal Chhetri said human-AI collaboration could also benefit other high-pressure environments, such as emergency response and health care.
Dr. Martin Lochner, Data Scientist and Research Coordinator, explained that the trial is the first long-term industrial study to show how LLMs can be used in real-world cybersecurity operations, helping shape the next generation of AI tools for SOC teams.
"This collaboration uniquely combined academic rigor with industry reality, producing insights that neither pure laboratory studies nor industry-only analysis could achieve," Dr. Lochner said.
"For instance, we found that only 4% of analyst requests to ChatGPT-4 asked for a direct answer, such as 'is this malicious?' Instead, analysts preferred receiving evidence and context to support their own decision-making.
"This highlights the value of LLMs as decision-support tools that enhance analyst autonomy rather than replace it."
Building on the 10-month study, the next phase of research will be a longer-term investigation using a two-year dataset to examine how analysts' use of ChatGPT-4 evolves over time.
This phase will also incorporate qualitative analysis of analyst experiences, comparing outcomes with log data to better understand how AI tools can improve productivity and be refined for broader adoption in SOC environments.
More information: Ronal Singh et al, LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres, arXiv (2025). DOI: 10.48550/arxiv.2508.18947
Journal information: arXiv
Provided by CSIRO