July 31, 2025
Turning gestures into speech for people with limited communication

Communication is a fundamental human right. To communicate effectively, many individuals need augmentative and alternative communication (AAC) approaches or tools, such as a notebook or an electronic tablet with symbols the user can select to create messages.
While access to speech-language therapies and interventions can promote successful communication outcomes for some, many existing AAC systems are not designed to support the needs of individuals with motor or visual impairments.
By integrating movement sensors with artificial intelligence (AI), researchers at Penn State are finding new ways to further support expressive communication for AAC users.
Led by Krista Wilkinson, distinguished professor of communication sciences and disorders at Penn State, and Syed Billah, assistant professor of information sciences and technology at Penn State, researchers developed and tested a prototype application that uses sensors to interpret body-based communicative movements and convert them into speech output.
This initial test included three individuals with motor or visual impairment who served as community advisors to the project. All participants said that the prototype improved their ability to communicate quickly and with people outside their immediate social circle. The theory behind the technology and initial findings were published in the journal Augmentative and Alternative Communication.
Aided and unaided AAC
There are two different types of AAC individuals can use. Aided AAC is typically technology-assisted—pointing at pictures or selecting symbols in a specialized app on an electronic tablet. For example, a person might be presented with three different food options via images on their tablet and will point to the choice they want, communicating it to their communication partner. While aided AAC can be understood easily, even by individuals not familiar with the user, it can be physically taxing for those with visual or motor impairments, according to Wilkinson.
The other form of AAC is unaided, or body-based, AAC—facial expressions, shrugs or gestures that are specific to the individual. For example, a person with little to no speech who also has motor impairments, but who can move their arms and hands, may raise their hand when shown a specific object to signal, "I want."
"Unaided AAC is fast, efficient and often less physically taxing for individuals as the movements and gestures are routinely used in their everyday lives," Wilkinson said. "The downside is these gestures are typically only known by people familiar with the individual and cannot be understood by those they may interact with on a less frequent basis, making it more difficult for AAC users to be independent."
According to Wilkinson, the goal of developing the prototype was to begin breaking down the wall between aided and unaided AAC, giving individuals the tools they need to open more of the world and communicate freely with those outside their immediate circles.
How AI can help
Current technologies have already begun incorporating AI for natural gesture recognition. However, mainstream technologies are based on large numbers of movements produced by people without disabilities. For individuals with motor or visual disabilities, it is necessary to make the technologies capable of learning idiosyncratic movements—movements and gestures with specific meaning to individuals—and of mapping them to specific commands.
The ability of these systems to adjust to individual movement patterns reduces the potential for error and the demands placed on the individual to perform specific pre-assigned movements, according to Wilkinson.
The utility and user experience of AI algorithms, however, are largely unexplored. There are gaps in the understanding of how these algorithms are developed, how they can be adapted for AAC users with diverse disabilities and how they can be seamlessly integrated into existing AAC systems, according to Wilkinson.
Building the prototype
When developing and testing the prototype, Wilkinson said it was important to her and her team to gather input and feedback from individuals who would be likely to use, and benefit from, this technology.
Emma Elko is one of three community advisors the researchers worked with, along with her mother, Lynn Elko—Emma's primary communication partner. Emma has cortical visual impairment—a visual disability caused by damage to the brain's visual pathways rather than the eyes themselves—and uses aided AAC to communicate. She also has specific gestures she makes to say, "I want" and "come here."
Using a sensor worn on Emma's wrist, the researchers captured her communicative movements. The sensor detected the kinematics of each movement—how it moves, in terms of position and speed—allowing the system to distinguish between gestures such as an up-and-down motion and a side-to-side motion.
Emma was prompted to repeat a movement three times, with Lynn signaling the beginning and end of each movement for the algorithm to capture. The researchers found that three repetitions of a gesture gathered sufficient data while minimizing user fatigue.
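The published article describes this approach conceptually rather than releasing code, but the general workflow can be pictured with a short sketch. The Python example below is an illustration under stated assumptions, not the Penn State team's implementation: it summarizes a wrist-sensor recording into simple kinematic features (range of motion and speed) and labels new movements by comparing them against templates averaged from a few recorded repetitions, such as the three used in the study. The feature set, the nearest-template matching and the distance threshold are all illustrative choices.

```python
# Illustrative sketch: personalized gesture recognition from wrist-sensor kinematics.
# Assumes each recording is an N x 3 array of accelerometer samples; the features,
# matching rule and threshold are invented for demonstration, not from the study.
import numpy as np

def kinematic_features(samples: np.ndarray) -> np.ndarray:
    """Summarize one gesture recording as a small feature vector:
    per-axis range of motion plus mean and peak frame-to-frame speed."""
    samples = np.asarray(samples, dtype=float)
    motion_range = samples.max(axis=0) - samples.min(axis=0)      # extent on x, y, z
    speed = np.linalg.norm(np.diff(samples, axis=0), axis=1)      # change between frames
    return np.concatenate([motion_range, [speed.mean(), speed.max()]])

class GestureTemplates:
    """Builds one averaged feature template per gesture from a few repetitions,
    then labels new movements by nearest template."""
    def __init__(self):
        self.templates = {}  # gesture name -> averaged feature vector

    def train(self, name: str, repetitions: list) -> None:
        feats = np.stack([kinematic_features(r) for r in repetitions])
        self.templates[name] = feats.mean(axis=0)

    def classify(self, samples: np.ndarray, max_distance: float = 2.0):
        feat = kinematic_features(samples)
        best, best_dist = None, np.inf
        for name, template in self.templates.items():
            dist = np.linalg.norm(feat - template)
            if dist < best_dist:
                best, best_dist = name, dist
        # Returning None leaves movements outside the dictionary unlabeled,
        # which is one simple way to ignore non-communicative motion.
        return best if best_dist <= max_distance else None
```

In a scheme like this, each user's templates come only from their own movements, which is one way a system could adapt to idiosyncratic gestures rather than relying on movement data from people without disabilities.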
Once the AI algorithm captured the gesture and an associated communicative output was assigned, a connected smartphone application translated the gesture into speech output, producing it any time the sensor detected the gesture being made. In this way, Emma could communicate directly with someone unfamiliar with the specific meaning of her gestures.
"The idea is that we can create a small dictionary of an individual's most commonly used gestures that have communicative meaning to them," Wilkinson said. "The great thing about it is the sensor technology allows individuals to be disconnected from their computer or tablet AAC, allowing them to communicate with people more freely."
Bringing this technology to the people who need it
While the technology is still in the prototype stage, Lynn said she has already seen it make a positive impact on Emma's life.
"It has been exciting to see a lightweight, unobtrusive sensor detect Emma's communicative movements and speak them for her, allowing people less familiar with her to understand her instantly," Lynn said.
While this initial testing showed that the idea works on a conceptual level, questions remain around fine-tuning the sensor technology. The next step for Wilkinson and her team is to get the technology into the hands of more people with motor or visual impairments who use AAC for more widespread testing and data collection. The researchers' goal is to determine not only how well the algorithm identifies target motions, but also how well it can disregard involuntary movements and how it can be refined to distinguish between similar gestures that carry different communicative meanings.
"Each individual will have different priorities and different communication needs," Wilkinson said. "While the sensor is great for capturing movements that are very distinct from one another, we need to develop a way to capture gestures that require more precision. The next step for us is to develop camera-based algorithms that will work in tandem with the sensor, ultimately making this technology accessible for as many people as possible."
Lynn and Emma are continuing to work with the sensor and integrated app and can see it making a larger impact in Emma's life as the technology continues to evolve.
"We're looking forward to skiing season when Emma can wear the sensor to communicate on the slopes instead of only communicating with her paper-based AAC on the chair lift," Lynn said. "Living without spoken words can bring isolation and a limited social circle. This technology will widen Emma's world, and I look forward to witnessing the impact of that on her life."
More information: Krista M. Wilkinson et al, Consideration of artificial intelligence applications for interpreting communicative movements by individuals with visual and/or motor disabilities, Augmentative and Alternative Communication (2025). DOI: 10.1080/07434618.2025.2495905
Provided by Pennsylvania State University