Vision-language models gain spatial reasoning skills through artificial worlds and 3D scene descriptions

June 13, 2025

by Ingrid Fadelli, contributing writer; edited by Lisa Lock, scientific editor; reviewed by Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

A framework to boost the visual perspective taking and spatial reasoning of vision-language models. On the left, the simulated environment: a cuboid placed on a plane, observed by a camera positioned directly above the object at varying distances. On the right, an example of the dataset elements used to train the model: an image and textual prompt as input, with the spatial relationship between the cuboid and camera, represented as a transformation matrix, as the desired output. Credit: Gioele Migno.

Vision-language models (VLMs) are advanced computational techniques designed to process both images and written text, and to make predictions that draw on the two combined. Among other things, these models could be used to improve the capabilities of robots, helping them to accurately interpret their surroundings and interact with human users more effectively.

A team of researchers from the Italian Institute of Technology (IIT) and the University of Aberdeen has introduced a new conceptual framework, along with a dataset of computationally generated data, for training VLMs on spatial reasoning tasks. The framework and dataset, presented in a paper posted to the arXiv preprint server, could contribute to the development of embodied artificial intelligence (AI) systems that are better equipped to navigate real-world environments and communicate with humans.

The research is the outcome of the FAIR* project and stems from a recent collaboration between the Social Cognition in Human-Robot Interaction (S4HRI) research line at IIT, led by Prof. Agnieszka Wykowska, and the Action Prediction Lab at the University of Aberdeen, led by Prof. Patric Bach.

"Our research group investigates how human social cognition mechanisms are engaged during interactions with artificial agents," Davide De Tommaso, technologist at IIT and co-senior author of the paper, told Tech Xplore. "Our previous studies indicated that, under specific conditions, people attribute intentionality to robots and interact with them in ways that closely resemble interactions with other social partners.

"Therefore, understanding these mechanisms, particularly the role of nonverbal cues such as gaze, gestures, and spatial behaviors, is crucial for developing effective computational models of social cognition in robots."

Visual perspective taking (VPT), the ability to understand what a visual scene looks like from another's point of view, could be greatly advantageous for robotic systems, as it could allow them to make sense of instructions they are given, cooperate with other agents and successfully complete missions. De Tommaso and his colleagues have recently been trying to reproduce this key ability in robots, while also ensuring that the robots can apply it across a wide range of contexts.

"Our primary objective was to enable robots to reason effectively about what other agents (human or artificial) can or cannot perceive from their vantage points within shared environments," said De Tommaso. "For example, robots should accurately assess whether text is readable from another person's viewpoint, if an object is hidden behind an obstacle, or whether an object is suitably oriented for a human to grasp or point to it.

"Despite current foundational models often lacking sophisticated spatial reasoning capabilities, we strongly believe that harnessing large-language models for scene understanding, alongside synthetic scene representations, holds significant promise for modeling human-like VPT capabilities in embodied artificial agents."

To improve the VPT capabilities of VLMs, the researchers compiled a dataset that could support their training on spatial reasoning tasks. Using NVIDIA's Omniverse Replicator, a platform for generating synthetic data, they created a new "artificial world": a simple scene containing a cube, observed from different angles and distances.

They then captured images of the cube in this synthetic world, pairing each image with a natural language description and a 4×4 transformation matrix, a mathematical structure that encodes the position and orientation of the cube relative to the camera. The dataset has been published online and can be used by other teams to train their VLMs.
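To make this data format concrete, here is a minimal sketch of the kind of 4×4 homogeneous transformation matrix described above: a 3×3 rotation block and a 3-element translation vector packed into a single matrix. The pose values (a camera 0.5 m directly above the object, looking straight down, loosely matching the setup shown in the figure) are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical pose: the camera sits 0.5 m directly above the cube, looking straight
# down. A 180-degree rotation about the x-axis points the camera's viewing axis at
# the object below it.
theta = np.pi
R_down = np.array([
    [1.0, 0.0,            0.0],
    [0.0, np.cos(theta), -np.sin(theta)],
    [0.0, np.sin(theta),  np.cos(theta)],
])
t = np.array([0.0, 0.0, 0.5])  # translation in metres

T_cam_obj = make_transform(R_down, t)
print(T_cam_obj)  # the 4x4 matrix a model would be trained to predict
```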

"Each image captured by the virtual camera comes with a text prompt containing the cube's dimensions, and a precise transformation matrix that encodes the spatial relationship between the camera and the object, the kind of data robots use to plan movements and interact with the world," explained Joel Currie, the first author of the paper, who is a Ph.D. student at the University of Aberdeen and a Research Fellow at the Italian Institute of Technology.

"Because the environment is synthetic, we control every aspect and generate tens of thousands of image-matrix pairs quickly (something nearly impossible with real-world setups). It's a way of teaching robots to not just see, but to understand space like a physical being would."

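As a rough illustration of how such a pipeline can mass-produce image-matrix pairs, the sketch below samples random camera distances above the object and emits one (image, prompt, matrix) record per sample. The renderer here is a placeholder (`render_view`); the paper's pipeline uses Omniverse Replicator, and the prompt wording, object size, and distance range below are hypothetical.

```python
import json
import numpy as np

def render_view(camera_pose: np.ndarray) -> np.ndarray:
    """Placeholder for the renderer (the paper uses NVIDIA's Omniverse Replicator);
    here it just returns a blank RGB image of the right shape."""
    return np.zeros((256, 256, 3), dtype=np.uint8)

rng = np.random.default_rng(seed=0)
cube_size = 0.1  # edge length in metres; an illustrative value
dataset = []

for i in range(10):  # a real run would produce tens of thousands of samples
    # Sample a camera directly above the cube at a random distance.
    # Orientation is left as the identity for brevity.
    distance = rng.uniform(0.3, 1.5)
    T = np.eye(4)
    T[:3, 3] = [0.0, 0.0, distance]

    image = render_view(T)  # in a real pipeline the image is saved alongside the record

    dataset.append({
        "image_id": i,
        "prompt": f"A cube with edge length {cube_size} m viewed from above.",
        "transform": T.tolist(),  # the ground-truth 4x4 matrix
    })

print(json.dumps(dataset[0], indent=2))
```

Because every pose is sampled programmatically, the ground-truth matrix is exact by construction, which is what makes synthetic generation so much faster and cheaper than annotating real-world images.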
So far, the framework introduced by the researchers is purely conceptual, but it could soon open new possibilities for training real VLMs. The researchers themselves could assess its potential by training a model on the dataset they compiled or on similar synthetically generated data.

"What we've done is fundamentally conceptual," Currie said. "We're proposing a new way for AI to learn space, not just from its own viewpoint, but from someone else's. Instead of hardcoded geometry, we treat Visual Perspective Taking as something the model can learn using vision and language. It's a step toward embodied cognition—robots that don't just see the world, but can imagine how it looks to others. We see this as foundational for true social intelligence in machines."

The recent work by De Tommaso, Currie, Migno and their colleagues could inspire the generation of other similar synthetic datasets for training VLMs on spatial reasoning tasks. These efforts could collectively contribute to the improvement of humanoid robots and other embodied AI agents, potentially facilitating their deployment in real-world settings.

"Our next step will be to make the virtual environment as realistic as possible, bringing the distance between a scene from the simulated space and the real world closer," added Gioele Migno, who graduated in Artificial Intelligence and Robotics from Sapienza University of Rome and recently joined the S4HRI research unit at IIT as a Research Fellow.

"This step is crucial to transfer the knowledge acquired by the model in simulation into the real world, and to make it possible for an embodied robot to exploit spatial reasoning. Once this is achieved, we are then interested in investigating how these capabilities can make interactions with humans more effective in scenarios where they share a spatial understanding of the scene."


More information: Joel Currie et al, Towards Embodied Cognition in Robots via Spatially Grounded Synthetic Worlds, arXiv (2025). DOI: 10.48550/arXiv.2505.14366

Journal information: arXiv

© 2025 Science X Network

Citation: Vision-language models gain spatial reasoning skills through artificial worlds and 3D scene descriptions (2025, June 13), retrieved 13 June 2025 from https://techxplore.com/news/2025-06-vision-language-gain-spatial-skills.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

