CRYPTOREPORTCLUB
  • Crypto news
  • AI
  • Technologies
Thursday, June 26, 2025
Interactive virtual companion to accelerate discoveries at scientific user facilities

June 26, 2025

Lisa Lock, scientific editor; Andrew Zinin, lead editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

VISION aims to lead the natural-language-controlled scientific expedition with joint human-AI force for accelerated scientific discovery at user facilities. Credit: Brookhaven National Laboratory

A team of scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory has dreamed up, developed, and tested a novel voice-controlled artificial intelligence (AI) assistant designed to break down everyday barriers for busy scientists.

Known as the Virtual Scientific Companion, or VISION, the generative AI tool—developed by researchers at the Lab's Center for Functional Nanomaterials (CFN) with support from experts at the National Synchrotron Light Source II (NSLS-II)—offers an opportunity to bridge knowledge gaps at complex instruments, carry out more efficient experiments, save scientists' time, and overall, accelerate scientific discovery.

The idea is that a user simply has to tell VISION in plain language what they'd like to do at an instrument and the AI companion, tailored to that instrument, will take on the task—whether it's running an experiment, launching data analysis, or visualizing results. The Brookhaven team recently shared details about VISION in a paper published in Machine Learning: Science and Technology.

"I'm really excited about how AI can impact science and it's something we as the scientific community should definitely explore," said Esther Tsai, a scientist in the AI-Accelerated Nanoscience group at CFN. "What we can't deny is that brilliant scientists spend a lot of time on routine work. VISION acts as an assistant that scientists and users can talk to for answers to basic questions about the instrument capability and operation."

VISION highlights the close partnership between CFN and NSLS-II, two DOE Office of Science user facilities at Brookhaven Lab. Together they collaborate with facility users on the setup, scientific planning, and analysis of data from experiments at three NSLS-II beamlines, highly specialized measurement tools that enable researchers to explore the structure of materials using beams of X-rays.

Tsai, inspired to alleviate bottlenecks that come with using NSLS-II's in-demand beamlines, received a DOE Early Career Award in 2023 to develop this new concept. Tsai now leads the CFN team behind VISION, which has collaborated with NSLS-II beamline scientists to launch and test the system at the Complex Materials Scattering (CMS) beamline at NSLS-II, demonstrating the first voice-controlled experiment at an X-ray scattering beamline and marking progress towards the world of AI-augmented discovery.

"At Brookhaven Lab, we're not only leading in researching this frontier scientific virtual companion concept, we're also being hands-on, deploying this AI technique on the experimental floor at NSLS-II and exploring how it can be useful to users," Tsai said.

Talking to AI for flexible workflows

VISION leverages the growing capabilities of large language models (LLMs), the technology at the heart of popular AI assistants such as ChatGPT.

An LLM is an expansive program that creates text modeled on natural human language. VISION exploits this concept, not just to generate text for answering questions but also to generate decisions about what to do and computer code to drive an instrument. Internally, VISION is organized into multiple "cognitive blocks," or cogs, each comprising an LLM that handles a specific task. Multiple cogs can be put together to form a capable assistant, with the cogs carrying out work transparently for the scientist.
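The paper describes cogs only at a high level, but the pattern of pairing each LLM with a narrow, task-specific role can be illustrated with a minimal sketch. All names below are hypothetical stand-ins, not VISION's actual API, and a stub function replaces the real LLM so the example runs on its own:

```python
from dataclasses import dataclass
from typing import Callable

# A "cog" pairs a task-specific system prompt with a text-generation
# backend (an LLM in the real system; a stub function here).
@dataclass
class Cog:
    name: str
    system_prompt: str
    llm: Callable[[str], str]  # prompt -> generated text

    def run(self, user_input: str) -> str:
        # Each cog frames the same user input with its own instructions,
        # so the same request yields task-appropriate output per cog.
        return self.llm(f"{self.system_prompt}\n\nUser: {user_input}")

# Stub LLM so the sketch is runnable without a model server.
def fake_llm(prompt: str) -> str:
    return f"[generated for: {prompt.splitlines()[-1]}]"

operator = Cog("operator", "Translate the request into instrument-control code.", fake_llm)
analyst = Cog("analyst", "Translate the request into data-analysis code.", fake_llm)

print(operator.run("take a measurement every minute for five seconds"))
```

Composing several such cogs, each with its own prompt and potentially its own model, is what lets the assistant stay modular: one cog can be swapped out without touching the others.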

"A user can just go to the beamline and say, 'I want to select certain detectors' or 'I want to take a measurement every minute for five seconds' or 'I want to increase the temperature' and VISION will translate that command into code," Tsai said.

Those examples of natural language inputs, whether speech, text, or both, are first fed to VISION's "classifier" cog, which decides what type of task the user is asking about. The classifier routes to the right cog for the task, such as an "operator" cog for instrument control or "analyst" cog for data analysis.

Then, in just a few seconds, the system translates the input into code that's passed back to the beamline workstation, which the user can review before executing. On the back end, everything is being run on "HAL," a CFN server optimized for running AI workloads on graphics processing units.
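The flow described above (a classifier picks the right cog, the chosen cog generates code, and the user reviews that code before it runs) can be sketched roughly as follows. This is a hypothetical illustration: simple keyword matching stands in for the classifier LLM, and the function and object names are invented for the example:

```python
# Classifier stand-in: route a request to the "analyst" or "operator" cog.
def classify(user_input: str) -> str:
    analysis_words = ("analyze", "plot", "fit", "visualize")
    return "analyst" if any(w in user_input.lower() for w in analysis_words) else "operator"

# Stand-in for the chosen cog's LLM code generation.
def generate_code(cog: str, user_input: str) -> str:
    if cog == "operator":
        return f"instrument.execute({user_input!r})  # control code"
    return f"pipeline.run({user_input!r})  # analysis code"

def handle(user_input: str, approve=lambda code: True) -> str:
    cog = classify(user_input)
    code = generate_code(cog, user_input)
    # Key safety step from the article: the user reviews the generated
    # code at the beamline workstation before anything executes.
    if approve(code):
        return f"running via {cog}: {code}"
    return "rejected by user"

print(handle("increase the temperature"))
print(handle("plot the scattering pattern"))
```

The review gate is the important design choice here: generated code is proposed, not silently executed, keeping the scientist in the loop.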

VISION's use of natural language—that is, how people normally speak—is its key advantage. Since the system is tailored to the instrument the researcher is using, they are liberated from spending time setting up software parameters and can instead focus on the science they are pursuing.

"VISION acts as a bridge between users and the instrumentation, where users can just talk to the system and the system takes care of driving experiments," said Noah van der Vleuten, a co-author who helped develop VISION's code generation capability and testing framework. "We can imagine this making experiments more efficient and giving people a lot more time to focus on the science, rather than becoming experts in each instrument's software."

The ability to speak to VISION, not only type a prompt, could make workflows even faster, team members noted.

In the fast-paced, ever-evolving world of AI, VISION's creators also set out to build a scientific tool that can keep up with improving technology, incorporate new instrument capabilities, and scale up as needed to seamlessly navigate multiple tasks.

"A key guiding principle is that we wanted to be modular and adaptable, so we can quickly swap out or replace with new AI models as they become more powerful," said Shray Mathur, first author on the paper who worked on VISION's audio-understanding capabilities and overall architecture. "As underlying models become better, VISION becomes better. It's very exciting because we get to work on some of the most recent technology, and deploy it immediately. We're building systems that can really benefit users in their research."

This work builds on a history of AI and machine learning (ML) tools developed by CFN and NSLS-II to aid facility scientists, including for autonomous experiments, data analytics, and robotics. Future versions of VISION could act as a natural interface to these advanced AI/ML tools.

The team behind VISION at the CMS beamline. From left to right, co-authors and Center for Functional Nanomaterials scientists Shray Mathur, Noah van der Vleuten, Kevin Yager, and Esther Tsai. Credit: Kevin Coughlin/Brookhaven National Laboratory

A VISION for human-AI partnership

Now that the architecture for VISION is developed and at a stage where it has an active demonstration at the CMS beamline, the team aims to test it further with beamline scientists and users and eventually bring the virtual AI companion to additional beamlines.

This way, the team can have real discussions with users about what's truly helpful to them, Tsai said.

"The CFN/NSLS-II collaboration is really unique in the sense that we are working together on this frontier AI development with language models on the experimental floor, at the front-line supporting users," Tsai said. "We're getting feedback to better understand what users need and how we can best support them."

Tsai offered a huge thanks to CMS lead beamline scientist Ruipeng Li for his support and openness to ideas for VISION.

The CMS beamline has already been a testing ground for AI/ML capabilities, including autonomous experiments. When the idea of bringing VISION to the beamline came up, Li saw it as an exciting and fun opportunity.

"We've been close collaborators and partners since the beamline was built more than eight years ago," Li noted. "These concepts enable us to build on our beamline's potential and continue to push the boundaries of AI/ML applications for science. We want to see how we can learn from this process because we are riding the AI wave now."

In the bigger picture of AI-augmented scientific research, the development of VISION is a step towards realizing other AI concepts across the DOE complex, including a science exocortex.

Kevin Yager, the AI-Accelerated Nanoscience Group leader at CFN and VISION co-author, envisions the exocortex as an extension of the human brain that researchers can interact with through conversation to generate inspiration and imagination for scientific discovery.

"When I imagine the future of science, I see an ecosystem of AI agents working in coordination to help me advance my research," Yager said. "The VISION system is an early example of this future: an AI assistant that helps you operate an instrument. We want to build more AI assistants, and connect them together into a really powerful network."

More information: Shray Mathur et al, VISION: a modular AI assistant for natural human-instrument interaction at scientific user facilities, Machine Learning: Science and Technology (2025). DOI: 10.1088/2632-2153/add9e4

Provided by Brookhaven National Laboratory

Citation: Interactive virtual companion to accelerate discoveries at scientific user facilities (2025, June 26), retrieved 26 June 2025 from https://techxplore.com/news/2025-06-interactive-virtual-companion-discoveries-scientific.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

Advertising: digestmediaholding@gmail.com

Disclaimer: Information found on cryptoreportclub.com reflects the views of the writers quoted. It does not represent the opinion of cryptoreportclub.com on whether to sell, buy, or hold any investments. You are advised to conduct your own research before making any investment decisions. Use the provided information at your own risk.
cryptoreportclub.com covers fintech, blockchain and Bitcoin bringing you the latest crypto news and analyses on the future of money.

© 2023-2025 Cryptoreportclub. All Rights Reserved
