August 19, 2025
AI tech breathes life into virtual companion animals

Researchers at UNIST have developed an innovative AI technology capable of reconstructing highly detailed three-dimensional (3D) models of companion animals from a single photograph, enabling realistic animations. This breakthrough allows users to experience lifelike digital avatars of their companion animals in virtual reality (VR), augmented reality (AR), and metaverse environments.
The findings were published in the International Journal of Computer Vision.
Led by Professor Kyungdon Joo at the Artificial Intelligence Graduate School of UNIST, the research team announced the development of DogRecon, a novel AI framework that can reconstruct an animatable 3D Gaussian model of a dog from a single image.
Reconstructing 3D models of dogs presents unique challenges due to the diversity of breeds, wide variation in body shape, and frequent occlusion of joints caused by their quadrupedal stance. Moreover, creating accurate 3D structures from a single 2D photo is inherently difficult, often resulting in distorted or unrealistic representations.
DogRecon overcomes these challenges by utilizing breed-specific statistical models to capture variations in body shape and posture. It also employs advanced generative AI to produce multiple viewpoints, effectively reconstructing occluded areas with high fidelity. Additionally, the application of Gaussian Splatting techniques enables the model to accurately reproduce the curvilinear body contours and fur textures characteristic of dogs.
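The article does not spell out the full pipeline, but the description above suggests a three-stage flow: fit a breed-aware statistical body prior to the photo, use a generative model to produce unseen viewpoints that cover occluded regions, then optimize 3D Gaussians against those views. The minimal Python sketch below illustrates that flow under these assumptions; every class and function name (GaussianCloud, fit_breed_prior, generate_novel_views, optimize_gaussians) is a hypothetical placeholder, not DogRecon's actual code or any real library API.

```python
# Conceptual sketch of a single-image dog reconstruction pipeline in the
# spirit of DogRecon. All names are illustrative placeholders; the real
# method's models and optimization are not reproduced here.
from dataclasses import dataclass
import numpy as np


@dataclass
class GaussianCloud:
    """A set of 3D Gaussians: centers, scales, rotations (quaternions), colors."""
    centers: np.ndarray    # (N, 3)
    scales: np.ndarray     # (N, 3)
    rotations: np.ndarray  # (N, 4)
    colors: np.ndarray     # (N, 3)


def fit_breed_prior(image: np.ndarray) -> np.ndarray:
    """Placeholder: fit a breed-aware statistical body model (shape and pose
    parameters) to the input photo. Returns dummy parameters here."""
    return np.zeros(64)


def generate_novel_views(image: np.ndarray, n_views: int = 8) -> list[np.ndarray]:
    """Placeholder: hallucinate additional viewpoints with a generative model,
    covering joints and fur that are occluded in the single input photo."""
    return [image.copy() for _ in range(n_views)]


def optimize_gaussians(views: list[np.ndarray], body_params: np.ndarray) -> GaussianCloud:
    """Placeholder: optimize 3D Gaussians so their renderings match the
    generated views, regularized by the breed-prior body shape."""
    n = 1024
    return GaussianCloud(
        centers=np.random.randn(n, 3) * 0.1,
        scales=np.full((n, 3), 0.01),
        rotations=np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)),
        colors=np.random.rand(n, 3),
    )


def reconstruct_dog(image: np.ndarray) -> GaussianCloud:
    """End-to-end sketch: prior fitting -> novel-view generation -> splatting."""
    body_params = fit_breed_prior(image)
    views = generate_novel_views(image)
    return optimize_gaussians(views, body_params)


if __name__ == "__main__":
    photo = np.zeros((512, 512, 3), dtype=np.float32)  # stand-in for a dog photo
    avatar = reconstruct_dog(photo)
    print(f"Reconstructed {avatar.centers.shape[0]} Gaussians")
```

Because the reconstruction is a set of animatable Gaussians rather than a fixed mesh, the same representation can, in principle, be driven by a skeleton or motion signal to produce the lifelike animations described in the article.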
Performance evaluations using various datasets demonstrated that DogRecon can generate natural, precise 3D dog avatars comparable to those produced by existing video-based methods, but from only a single image.
Prior models often rendered dogs with unnatural postures, such as stretched bodies with bent joints or clumped ears, tails, and fur, particularly when the dogs were in relaxed or crouched positions. DogRecon delivers noticeably more realistic results in these cases.
Furthermore, thanks to its scalable architecture, DogRecon holds significant promise for text-driven animation generation as well as AR/VR applications.
This research was led by first author Gyeongsu Cho, with contributions from Changwoo Kang (UNIST) and Donghyeon Soon (DGIST).
Gyeongsu Cho said, "With over a quarter of households owning pets, expanding 3D reconstruction technology—traditionally focused on humans—to include companion animals has been a goal," adding, "DogRecon offers a tool that enables anyone to create and animate a digital version of their companion animals."
Professor Joo added, "This study represents a meaningful step forward by integrating generative AI with 3D reconstruction techniques to produce realistic models of companion animals. We look forward to expanding this approach to include other animals and personalized avatars in the future."
More information: Gyeongsu Cho et al, DogRecon: Canine Prior-Guided Animatable 3D Gaussian Dog Reconstruction From A Single Image, International Journal of Computer Vision (2025). DOI: 10.1007/s11263-025-02485-5
Provided by Ulsan National Institute of Science and Technology