October 14, 2025
Lancelot federated learning system combines encryption and robust aggregation to resist poisoning attacks
by Ingrid Fadelli, contributing writer

Federated learning is a machine learning technique that allows several parties, known as "clients," to collaboratively train a model without sharing their raw training data with one another. This "shared training" approach could be particularly advantageous in finance and health care, where models must complete tasks without accessing people's personal data.
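In its simplest form, each client trains on its own data and a central server averages the resulting models. The following minimal sketch of one "federated averaging" round uses a toy least-squares model; the function names and training details are illustrative, not part of the paper:

```python
import numpy as np

def compute_gradient(w, data):
    # Toy least-squares gradient on a client's private (X, y) data.
    X, y = data
    return 2 * X.T @ (X @ w - y) / len(y)

def local_train(w, data, lr=0.1):
    # One local gradient step; the raw data never leaves the client.
    return w - lr * compute_gradient(w, data)

def federated_round(global_w, client_datasets):
    # Clients send back only updated weights; the server averages them.
    updates = [local_train(global_w, d) for d in client_datasets]
    return np.mean(updates, axis=0)

# Usage: three clients with private samples of the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=20)))

w = np.zeros(3)
for _ in range(50):
    w = federated_round(w, clients)  # w converges toward true_w
```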
Despite its potential, past studies have highlighted the vulnerability of federated learning techniques to so-called poisoning attacks. These attacks consist of malicious users submitting corrupted data or model updates, which adversely impacts the model's performance.
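To see why plain averaging is fragile, consider a single malicious client that reverses and amplifies its update, a sign-flipping attack of the kind studied in the poisoning literature. Continuing the illustrative sketch above:

```python
def poisoned_update(global_w, data, scale=-5.0):
    # A malicious client computes an honest update, then reverses and
    # amplifies the change before sending it to the server. Averaged
    # with honest updates, this drags the global model off course.
    honest = local_train(global_w, data)
    return global_w + scale * (honest - global_w)
```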
One proposed approach to minimizing the effect of corrupted data or updates on a model's performance is known as Byzantine-robust federated learning. This approach relies on mathematical strategies to ensure that unreliable contributions are ignored. However, it does not by itself prevent breaches of sensitive information: neural networks can memorize training data, and attackers can sometimes reconstruct it from the model updates.
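Byzantine-robust aggregators replace the plain average with statistics that tolerate outliers. A coordinate-wise median is one standard choice from this literature (Lancelot itself ranks clients by similarity, as described below); continuing the sketch:

```python
def robust_aggregate(updates):
    # The coordinate-wise median ignores extreme values in every
    # dimension, so a minority of poisoned updates cannot move it far.
    return np.median(np.stack(updates), axis=0)
```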
Researchers at the Chinese University of Hong Kong, City University of Hong Kong and other institutes recently developed an efficient Byzantine-robust federated learning system that also incorporates advanced cryptographic techniques, thus minimizing the risk of both poisoning attacks and personal data breaches. This new system, called Lancelot, was introduced in a paper published in Nature Machine Intelligence.
"We set out to fix a problem we kept seeing in regulated fields: federated learning can resist malicious participants through robust aggregation, and fully homomorphic encryption can keep updates secret, but doing both at once has been too slow to use," Siyang Jiang, first author of the paper, told Tech Xplore. "Our goal was to build a system that stays reliable even when some clients try to poison the model, keeps every update encrypted from start to finish, and is fast enough for everyday work."
Lancelot, the system developed by Jiang and his colleagues, keeps local model updates encrypted throughout training while selecting trustworthy client updates without revealing the selection to anyone. The system also cuts computation through two cryptographic optimizations, described below, and offloads the heaviest mathematical operations to graphics processing units.
"In short, Lancelot closes the privacy and security gap in federated learning while significantly reducing training time," explained Jiang. "Lancelot has three roles that work together. Clients train on their own data and send only encrypted model updates. The central server, which follows the rules (honest) but may be curious, works directly on the encrypted data to measure how similar the updates are and to combine them."
In the team's system, the secret key used to encrypt and decrypt data is held by a separate, trusted key generation center. This center decrypts only the information required to rank clients by trustworthiness, then returns an encrypted "mask" (i.e., a hidden list of the clients whose updates should be included in training). This ultimately allows the server to aggregate the trustworthy updates without learning which clients were chosen.
"The core idea is this mask‑based encrypted sorting: instead of doing slow comparisons on encrypted data, the trusted center does the sorting and sends back only the hidden selection," said Jiang.
"To make the system fast," Jiang continued, "we use two simple but powerful cryptographic techniques. First, we adopt lazy relinearization to reduce the number of relinearizations and thereby decrease computational overhead. Second, dynamic hoisting groups and parallelizes repeated operations so they run more efficiently. We also offload heavy encrypted operations, such as polynomial multiplications, to graphics processing units for large-scale parallelism."
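Lazy relinearization is worth unpacking. In CKKS-style homomorphic encryption, multiplying two ciphertexts enlarges the result, and an expensive "relinearization" step normally shrinks it back after every product. Because enlarged ciphertexts can still be added together, that step can be deferred until the end of a sum. A conceptual sketch, using a hypothetical homomorphic-encryption handle `he` rather than any real library:

```python
def masked_sum_eager(enc_mask, enc_updates, he):
    # Naive version: relinearize after every multiplication,
    # paying the costly step once per client.
    acc = None
    for m, u in zip(enc_mask, enc_updates):
        prod = he.relinearize(he.multiply(m, u))
        acc = prod if acc is None else he.add(acc, prod)
    return acc

def masked_sum_lazy(enc_mask, enc_updates, he):
    # Lazy relinearization: additions also work on the enlarged
    # products, so one relinearization at the end replaces one per
    # client.
    acc = None
    for m, u in zip(enc_mask, enc_updates):
        prod = he.multiply(m, u)  # deferred: no relinearization here
        acc = prod if acc is None else he.add(acc, prod)
    return he.relinearize(acc)
```

As the quote above notes, dynamic hoisting plays a similar cost-saving role by grouping repeated operations so their shared work is done once, and the heavy polynomial arithmetic underneath all of these calls is what Lancelot offloads to GPUs.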
This design ultimately ensures that every update submitted by clients remains confidential throughout the federated learning process. The researchers found that it protects the system from malicious or faulty clients while significantly reducing the time required to train models.
"Our work delivers the first practical system that truly combines Byzantine‑robust federated learning (BRFL) with fully homomorphic encryption," said Jiang. "Instead of doing many slow comparisons on encrypted data, we use mask‑based encrypted sorting: a trusted party ranks the client updates and returns only an encrypted selection list, so the server can combine the right updates without ever seeing who was chosen. Two simple ideas make this efficient in practice: lazy relinearization postpones an expensive cryptographic step until the end, and dynamic hoisting groups and parallelizes repeated operations; together with running the heavy encrypted mathematics on graphics processing units, these changes cut processing time and move data through memory much more quickly."
In the future, the Byzantine-robust federated learning system developed by this research team could be used to train models for various applications. Most notably, it could aid the development of AI tools that could boost the efficiency of operations in hospitals, banks and various other organizations that store sensitive information. Jiang and his colleagues are now working on further improving Lancelot, which is still in its pilot version, so that it can be scaled up and deployed in real-world settings.
"In parallel, we're exploring threshold and multi‑key CKKS to strengthen the trust model without blowing up bandwidth or latency, keeping Byzantine‑robust federated learning practical at scale," added Jiang. "We're also deepening the combination with differential privacy and adding asynchronous and clustered aggregation, so the system handles highly heterogeneous clients and flaky networks gracefully."
More information: Siyang Jiang et al, Towards compute-efficient Byzantine-robust federated learning with fully homomorphic encryption, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01107-6.