October 30, 2025
The GIST Korean researchers propose international standards to ensure AI safety and trustworthiness
Sadie Harley
scientific editor
Robert Egan
associate editor
Editors' notes
This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:
fact-checked
trusted source
proofread

As artificial intelligence (AI) technology rapidly pervades our lives and industries, ensuring its safety and trustworthiness is a global challenge. In this context, Korean researchers are gaining traction by leading the development of two key international standards.
The Electronics and Telecommunications Research Institute (ETRI) has proposed the "AI Red Team Testing" standard, which aims to proactively identify risks in AI systems, and the "Trustworthiness Fact Label (TFL)" standard, which aims to help consumers easily understand the authenticity level of AI, to the International Organization for Standardization (ISO/IEC) and has begun full-scale development.
With this, Korea has elevated its status beyond being a mere "fast follower" of technology to a "first mover," setting the rules for the AI era.
"AI Red Team Testing" is a method of aggressively exploring and testing how secure an AI system is. For example, it proactively identifies situations where generative AI may produce incorrect information or be exploited to circumvent user protections.
ETRI serves as the editor of ISO/IEC 42119-7, the international standard in this field, creating common international test procedures and methods that can be applied across a wide range of fields, including medicine, finance, and national defense.
Meanwhile, ETRI, together with the Ministry of Food and Drug Safety, hosted the first "Advanced AI Digital Medical Product Red Team Challenge and Technology Workshop" in Korea at the Novotel Seoul Dongdaemun Hotel on September 4th and 5th.
This challenge is the first event of its kind in Asia and Korea for advanced AI medical devices, where medical professionals, security experts, and the general public will participate to examine the biases and risks of AI.
ETRI is also developing a medical-specific red team evaluation methodology in collaboration with Seoul Asan Medical Center, and will create a red team test system for digital medical products that applies advanced AI technology and conduct empirical testing. In addition, it has formed a council with major companies such as STA, NAVER, Upstage, SelectStar, KT, and LG AI Research Institute to strengthen cooperation on international AI red team standardization.

Another key standard is trustworthiness fact labels (TFLs)
The label is an at-a-glance visualization of how trustworthy an AI system is, providing transparent information to consumers, much like a nutrition label on a food product.
ETRI is leading the development of the ISO/IEC 42117 series of standards, which can be operationalized in a variety of ways, with companies providing the information themselves or having it verified and certified by a third-party organization.
In the future, ETRI is even considering incorporating ESG factors, such as AI's carbon footprint.
In conjunction with the AI Management System Standard (ISO/IEC 42001), which is used as an international certification standard for organizations using AI, the standard will serve as a framework to demonstrate how trustworthy the developed products and services are.
These two standards align with the government's Sovereign AI and AI G3 leapfrog strategies. This is evaluated as an example that goes beyond simply securing technological prowess and makes a practical contribution to the competition for leadership in creating global AI rules.
Just as the National Institute of Standards and Technology (NIST) in the United States supports national and international standardization to realize the national AI strategy, ETRI envisions supporting the realization of the national AI strategy by developing AI security technologies and leading international standardization of AI safety and trustworthiness, including the activities of the AI Safety Research Institute.
Kim Wook, PM of the Institute of Information & Communications Technology Planning & Evaluation (IITP), said, "Providing AI safety and trustworthiness will make it easier for everyone to use AI, and leading the way in international standards this time is a turning point toward becoming a country that leads AI norms."
Lee Seung Yun, Assistant Vice President of ETRI's Standards & Open Source Research Division, also said, "AI red team testing and trustworthiness labels are key technical elements included in AI regulatory policies in the U.S., EU, and other countries, and these international standards will serve as common criteria for evaluating the safety and trustworthiness of AI systems around the world.
"ETRI will continue to lead international standardization in the field of AI safety and trustworthiness, making Korea the center of excellence for not only Sovereign AI but also Sovereign AI safety technologies."
Provided by National Research Council of Science and Technology Citation: Korean researchers propose international standards to ensure AI safety and trustworthiness (2025, October 30) retrieved 30 October 2025 from https://techxplore.com/news/2025-10-korean-international-standards-ai-safety.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Explore further
Integrated terrestrial/satellite 6G hyper-space communication successful in real-time flight tests
Feedback to editors










