July 9, 2025
New open-source language model offers multilingual support and public transparency
Edited by Lisa Lock, scientific editor, and Andrew Zinin, lead editor

This summer, EPFL and ETH Zurich will release a large language model (LLM) developed on public infrastructure. Trained on the Alps supercomputer at the Swiss National Supercomputing Center (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.
Earlier this week in Geneva, about 50 leading global initiatives and organizations dedicated to open-source LLMs and trustworthy AI convened at the International Open-Source LLM Builders Summit. Hosted by the AI centers of EPFL and ETH Zurich, the event marked a significant step in building a vibrant and collaborative international ecosystem for open foundation models. Open LLMs are increasingly viewed as credible alternatives to commercial systems, most of which are developed behind closed doors in the United States or China.
Participants of the summit previewed the forthcoming release of a fully open, publicly developed LLM—co-created by researchers at EPFL, ETH Zurich and other Swiss universities in close collaboration with engineers at CSCS. Currently in final testing, the model will be downloadable under an open license, with an emphasis on transparency, multilingual performance, and broad accessibility.
The model will be fully open: source code and weights will be publicly available, and the training data will be transparent and reproducible, supporting adoption across science, government, education, and the private sector. This approach is designed to foster both innovation and accountability.
"Fully open models enable high-trust applications and are necessary for advancing research about the risks and opportunities of AI. Transparent processes also enable regulatory compliance," says Imanol Schlag, research scientist at the ETH AI Center, who is leading the effort alongside EPFL AI Center faculty members and professors Antoine Bosselut and Martin Jaggi.
Multilingual by design
A defining characteristic of the LLM is its fluency in more than 1,000 languages. "We have emphasized making the models massively multilingual from the start," says Bosselut.
The base model was trained on a large text dataset covering more than 1,500 languages—approximately 60% English and 40% non-English—as well as code and mathematics data. This broad representation of languages and cultures is intended to give the resulting model the widest possible global applicability.
The model will be released in two sizes—8 billion and 70 billion parameters—meeting a broad range of users' needs. The 70B version will rank among the most powerful fully open models worldwide. The number of parameters reflects a model's capacity to learn and generate complex responses.
High reliability is achieved through training on more than 15 trillion high-quality tokens (units representing a word or part of a word), enabling robust language understanding and versatile use cases.
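To illustrate what a token is, the short Python sketch below uses the Hugging Face transformers library with the publicly available GPT-2 tokenizer purely as an example; the new model's own tokenizer has not yet been released, so the exact splits it produces will differ.

# Illustrative sketch only: the Swiss model's tokenizer has not been published,
# so the public GPT-2 tokenizer stands in to show how text becomes tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Supercomputing enables multilingual language models."
token_ids = tokenizer.encode(text)                    # integer IDs the model consumes
tokens = tokenizer.convert_ids_to_tokens(token_ids)   # whole words and sub-word pieces

print(tokens)
print(len(token_ids))  # number of tokens for this one sentence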
The LLM is being developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. In a recent study posted to the arXiv preprint server, the project leaders demonstrated that for most everyday tasks and general knowledge acquisition, respecting web-crawling opt-outs during data acquisition produces virtually no performance degradation.
Supercomputer as an enabler of sovereign AI
The model is trained on the Alps supercomputer at CSCS in Lugano, one of the world's most advanced AI platforms, equipped with more than 10,000 NVIDIA Grace Hopper Superchips. The system's scale and architecture made it possible to train the model efficiently using 100% carbon-neutral electricity.
The realization of Alps was made possible in large part by a long-standing collaboration with NVIDIA and HPE/Cray spanning more than 15 years. This partnership has been pivotal in shaping the capabilities of Alps, ensuring it meets the demanding requirements of large-scale AI workloads, including the pre-training of complex LLMs.
"Training this model is only possible because of our strategic investment in Alps, a supercomputer purpose-built for AI," says Thomas Schulthess, Director of CSCS and professor at ETH Zurich. "Our enduring collaboration with NVIDIA and HPE exemplifies how joint efforts between public research institutions and industry leaders can drive sovereign infrastructure, fostering open innovation—not just for Switzerland, but for science and society worldwide."
Public access and global reuse
In late summer, the LLM will be released under the Apache 2.0 License. Accompanying documentation will detail the model architecture, training methods, and usage guidelines to enable transparent reuse and further development.
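Once the weights are published, an openly licensed model of this kind can typically be loaded with standard open-source tooling. The sketch below assumes a Hugging Face-style release and uses a placeholder repository name, since the official model identifier has not been announced.

# Minimal sketch assuming a Hugging Face-style release under Apache 2.0.
# The repository name below is a placeholder, not the official identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/swiss-open-llm-8b"  # hypothetical model name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Die Alpen sind"  # a non-English prompt, reflecting the multilingual focus
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))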
"As scientists from public institutions, we aim to advance open models and enable organizations to build on them for their own applications," says Bosselut.
"By embracing full openness—unlike commercial models that are developed behind closed doors—we hope that our approach will drive innovation in Switzerland, across Europe, and through multinational collaborations. Furthermore, it is a key factor in attracting and nurturing top talent," says EPFL professor Jaggi.
More information: Dongyang Fan et al, Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs, arXiv (2025). DOI: 10.48550/arxiv.2504.06219
Journal information: arXiv
Provided by Ecole Polytechnique Federale de Lausanne
Citation: New open-source language model offers multilingual support and public transparency (2025, July 9), retrieved 10 July 2025 from https://techxplore.com/news/2025-07-source-language-multilingual-transparency.html