Anyone can now train a robot: New tool makes teaching skills hands-on and easy

July 17, 2025

Sadie Harley, scientific editor
Andrew Zinin, lead editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

New tool gives anyone the ability to train a robot
Image caption: A new handheld interface developed by MIT engineers enables a person to teach a robot new skills using any of three training approaches: natural teaching (top left), kinesthetic training (middle), and teleoperation. Credit: Massachusetts Institute of Technology

Teaching a robot new skills used to require coding expertise. But a new generation of robots could potentially learn from just about anyone.

Engineers are designing robotic helpers that can "learn from demonstration." This more natural training strategy enables a person to lead a robot through a task, typically in one of three ways: via remote control, such as operating a joystick to remotely maneuver a robot; by physically moving the robot through the motions; or by performing the task themselves while the robot watches and mimics.

Learning-by-doing robots usually train in just one of these three demonstration approaches. But MIT engineers have now developed a three-in-one training interface that allows a robot to learn a task through any of the three training methods.

The interface is in the form of a handheld, sensor-equipped tool that can attach to many common collaborative robotic arms. A person can use the attachment to teach a robot to carry out a task by remotely controlling the robot, physically manipulating it, or demonstrating the task themselves—whichever style they prefer or best suits the task at hand.
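A rough way to picture the "three-in-one" idea is that all three teaching styles feed a single, mode-agnostic demonstration format that the robot learns from. The sketch below is illustrative only, assuming hypothetical names (`TeachingMode`, `Demonstration`) rather than the authors' actual software:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Tuple


class TeachingMode(Enum):
    """The three demonstration styles described in the article (names assumed)."""
    TELEOPERATION = auto()   # remote control, e.g. maneuvering the robot with a joystick
    KINESTHETIC = auto()     # physically guiding the robot arm through the motions
    NATURAL = auto()         # the person performs the task while the robot watches


@dataclass
class Demonstration:
    """A hypothetical mode-agnostic record of one demonstration."""
    mode: TeachingMode
    # Each sample: (timestamp in seconds, 6-DoF tool pose, applied force in newtons)
    samples: List[Tuple[float, list, list]] = field(default_factory=list)


def collect_demonstration(mode: TeachingMode) -> Demonstration:
    """Whichever mode the teacher chooses, the learner receives the same data type."""
    return Demonstration(mode=mode)
```

The point of the sketch is the shared container: the downstream learning code would not need to care whether the samples came from teleoperation, kinesthetic guidance, or natural teaching.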

The MIT team tested the new tool, which they call a "versatile demonstration interface," on a standard collaborative robotic arm. Volunteers with manufacturing expertise used the interface to perform two manual tasks that are commonly carried out on factory floors.

The researchers say the new interface offers increased training flexibility that could expand the type of users and "teachers" who interact with robots. It may also enable robots to learn a wider set of skills. For instance, a person could remotely train a robot to handle toxic substances, while further down the production line another person could physically move the robot through the motions of boxing up a product, and at the end of the line, someone else could use the attachment to draw a company logo as the robot watches and learns to do the same.

"We are trying to create highly intelligent and skilled teammates that can effectively work with humans to get complex work done," says Mike Hagenow, a postdoc at MIT in the Department of Aeronautics and Astronautics. "We believe flexible demonstration tools can help far beyond the manufacturing floor, in other domains where we hope to see increased robot adoption, such as home or caregiving settings."

Hagenow will present a paper detailing the new interface at the IEEE Intelligent Robots and Systems (IROS) conference in October. The paper is also published on the arXiv preprint server.

The paper's MIT co-authors are Dimosthenis Kontogiorgos, a postdoc at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Yanwei Wang, who recently earned a doctorate in electrical engineering and computer science; and Julie Shah, MIT professor and head of the Department of Aeronautics and Astronautics.

Training together

Shah's group at MIT designs robots that can work alongside humans in the workplace, in hospitals, and at home. A main focus of her research is developing systems that enable people to teach robots new tasks or skills "on the job," as it were.

Such systems would, for instance, help a factory floor worker quickly and naturally adjust a robot's maneuvers to improve its task in the moment, rather than pausing to reprogram the robot's software from scratch—a skill that a worker may not necessarily have.

The team's new work builds on an emerging strategy in robot learning called "learning from demonstration," or LfD, in which robots are designed to be trained in more natural, intuitive ways.

In looking through the LfD literature, Hagenow and Shah found that LfD training methods developed so far fall generally into the three main categories of teleoperation, kinesthetic training, and natural teaching.

One training method may work better than the other two for a particular person or task. Shah and Hagenow wondered whether they could design a tool that combines all three methods to enable a robot to learn more tasks from more people.

"If we could bring together these three different ways someone might want to interact with a robot, it may bring benefits for different tasks and different people," Hagenow says.

Tasks at hand

With that goal in mind, the team engineered a new versatile demonstration interface (VDI). The interface is a handheld attachment that can fit onto a typical collaborative robotic arm. The attachment is equipped with a camera and markers that track the tool's position and movements over time, along with force sensors that measure the amount of pressure applied during a given task.
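As a loose illustration of what those sensors might produce, one could imagine each recorded instant pairing the tool's tracked pose with the measured force. The field names, units, and the `read_sensors` callback below are assumptions for illustration, not the actual VDI data format:

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class VDISample:
    """One hypothetical sensor reading from the handheld interface."""
    t: float                 # time since the demonstration started, in seconds
    position: np.ndarray     # tool position (x, y, z) in meters, from camera/marker tracking
    orientation: np.ndarray  # tool orientation as a quaternion (w, x, y, z)
    force: np.ndarray        # applied force (fx, fy, fz) in newtons, from the force sensors


def record_demo(read_sensors, duration_s: float = 5.0, rate_hz: float = 100.0) -> list:
    """Poll a user-supplied sensor-reading callback at a fixed rate and accumulate samples."""
    samples = []
    n = int(duration_s * rate_hz)
    for i in range(n):
        t = i / rate_hz
        pos, quat, force = read_sensors()  # hypothetical callback returning (position, quaternion, force)
        samples.append(VDISample(t, np.asarray(pos), np.asarray(quat), np.asarray(force)))
    return samples
```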

When the interface is attached to a robot, the entire robot can be controlled remotely, and the interface's camera records the robot's movements, which the robot can use as training data to learn the task on its own. Similarly, a person can physically move the robot through a task, with the interface attached.

The VDI can also be detached and physically held by a person to perform the desired task. The camera records the VDI's motions, which the robot can also use to mimic the task when the VDI is reattached.
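One simple way a recorded tool path could be turned back into robot motion is to resample it at the controller's rate and track the resulting waypoints. This is a generic trajectory-resampling sketch under that assumption, not the learning method described in the paper:

```python
import numpy as np


def resample_waypoints(times: np.ndarray, positions: np.ndarray, rate_hz: float = 50.0) -> np.ndarray:
    """Linearly interpolate recorded tool positions onto a fixed control-rate timeline.

    times:     shape (N,) timestamps of the recorded samples, in seconds
    positions: shape (N, 3) recorded tool positions (x, y, z), in meters
    Returns waypoints of shape (M, 3) that a robot controller could track.
    """
    t_new = np.arange(times[0], times[-1], 1.0 / rate_hz)
    # Interpolate each Cartesian axis independently.
    return np.stack([np.interp(t_new, times, positions[:, k]) for k in range(3)], axis=1)


# Tiny usage example with made-up data: a 2-second straight-line demonstration.
if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 20)
    xyz = np.stack([t * 0.1, np.zeros_like(t), 0.3 + 0.05 * t], axis=1)
    print(resample_waypoints(t, xyz).shape)  # (100, 3) at 50 Hz
```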

To test the attachment's usability, the team brought the interface, along with a collaborative robotic arm, to a local innovation center where manufacturing experts learn about and test technology that can improve factory-floor processes.

The researchers set up an experiment where they asked volunteers at the center to use the robot and all three of the interface's training methods to complete two common manufacturing tasks: press-fitting and molding. In press-fitting, the user trained the robot to press and fit pegs into holes, similar to many fastening tasks.

For molding, a volunteer trained the robot to push and roll a rubbery, dough-like substance evenly around the surface of a center rod, similar to some thermomolding tasks.

For each of the two tasks, the volunteers were asked to use each of the three training methods, first teleoperating the robot using a joystick, then kinesthetically manipulating the robot, and finally, detaching the robot's attachment and using it to "naturally" perform the task as the robot recorded the attachment's force and movements.

The researchers found the volunteers generally preferred the natural method over teleoperation and kinesthetic training. The users, who were all experts in manufacturing, did offer scenarios in which each method might have advantages over the others. Teleoperation, for instance, may be preferable in training a robot to handle hazardous or toxic substances.

Kinesthetic training could help workers adjust the positioning of a robot that is tasked with moving heavy packages. And natural teaching could be beneficial in demonstrating tasks that involve delicate and precise maneuvers.

"We imagine using our demonstration interface in flexible manufacturing environments where one robot might assist across a range of tasks that benefit from specific types of demonstrations," says Hagenow, who plans to refine the attachment's design based on user feedback and will use the new design to test robot learning.

"We view this study as demonstrating how greater flexibility in collaborative robots can be achieved through interfaces that expand the ways that end-users interact with robots during teaching."

More information: Michael Hagenow et al, Versatile Demonstration Interface: Toward More Flexible Robot Demonstration Collection, arXiv (2024). DOI: 10.48550/arXiv.2410.19141

Journal information: arXiv

Provided by Massachusetts Institute of Technology

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation: Anyone can now train a robot: New tool makes teaching skills hands-on and easy (2025, July 17), retrieved 18 July 2025 from https://techxplore.com/news/2025-07-robot-tool-skills-easy.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
