Researchers have unveiled a vision-integrated bionic hand system that automatically adjusts grip strength, addressing one of the most persistent challenges in modern prosthetics: how hard to squeeze.
Current advanced prosthetic hands typically rely on electromyography (EMG) sensors to detect muscle signals in the user’s forearm, allowing the wearer to open or close the hand. While effective at identifying intent, EMG alone struggles to determine how much force a given object requires.
As a result, amputees are often forced to consciously calculate grip strength, turning simple actions such as holding an egg, a can, or a bottle into mentally exhausting tasks.
To eliminate that burden, a Chinese research team combined machine learning, a palm-mounted camera, and fingertip pressure sensors to automate grip modulation. The system visually identifies an object and instantly applies an appropriate level of force, removing the need for constant manual adjustment.
“We want to free the user from thinking about how to control [an object] and allow them to focus on what they want to do, achieving a truly natural and intuitive interaction,” said author Hua Li in a press release.
Pressure sensors embedded in the fingertips monitor contact force, while the camera near the palm captures visual data about the object being grasped. A machine-learning model then matches the object to a database of grip-force requirements that includes everyday items such as eggs, pens, keys, cans, and USB sticks.
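The authors have not published their model or force database, but the matching step can be pictured as a simple lookup from a vision-recognized object class to a target fingertip force. Here is a minimal sketch in Python; the class names, force values, and function names are illustrative assumptions, not figures from the paper:

```python
# Hypothetical mapping from a recognized object class to a target
# fingertip contact force. All classes and values are illustrative
# assumptions, not the authors' published data.
GRIP_FORCE_N = {
    "egg": 1.0,        # fragile: light touch
    "pen": 2.0,
    "key": 3.0,
    "usb_stick": 2.5,
    "can": 6.0,        # rigid and heavier: firmer grip
}

DEFAULT_FORCE_N = 4.0  # fallback for objects not in the database

def target_force(object_class: str) -> float:
    """Return the grip force, in newtons, the controller should aim for."""
    return GRIP_FORCE_N.get(object_class, DEFAULT_FORCE_N)
```

In the real system, the classifier would presumably drive such a lookup each time the camera identifies a new object in the palm’s field of view.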
“An EMG signal can clearly convey the intent to grasp, but it struggles to answer the critical question: how much force is needed? This often requires complex training or user calibration,” Li explained. “Our approach was to offload that ‘how much’ question to the vision system.”
The system works by letting EMG signals initiate the grasp, while vision determines force, allowing users to focus on action rather than calculation. In lab tests, the prosthetic successfully handled fragile and rigid objects without crushing or dropping them.
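Conceptually, that division of labor resembles a simple control loop: EMG supplies the “go” signal, vision supplies the force setpoint, and the fingertip sensors close the loop. A hedged sketch of that logic, in which every hardware interface (`emg`, `camera`, `classifier`, `fingers`) is invented for illustration rather than taken from the paper:

```python
import time

def grasp_loop(emg, camera, classifier, fingers, dt=0.01):
    """Illustrative control loop: EMG triggers the grasp, vision sets the
    force target, and fingertip pressure feedback stops the close.
    All four interface objects are hypothetical, not the authors' API."""
    # Wait for the user's muscle signal indicating intent to grasp.
    while not emg.grasp_intended():
        time.sleep(dt)

    # Identify the object and look up its force target
    # (target_force as in the earlier sketch).
    obj = classifier.predict(camera.frame())
    setpoint = target_force(obj)  # newtons

    # Close the fingers until measured contact force reaches the setpoint.
    while fingers.contact_force() < setpoint:
        fingers.close_step()
        time.sleep(dt)

    fingers.hold()  # maintain grip without exceeding the setpoint
```

The appeal of this split is that the user never has to modulate the EMG signal itself; intent remains a binary trigger while the force question is answered elsewhere.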
The work builds on a growing body of global research into multimodal prosthetics, where vision, touch, and muscle signals are combined. Similar efforts by institutions in the US and Europe have shown that adding cameras and AI to prosthetic limbs can significantly improve object recognition, task speed, and user confidence, especially in fine motor activities.
Looking ahead, the research team aims to integrate haptic feedback, creating a two-way communication loop where the prosthetic not only moves intelligently but also sends sensory information back to the user.
“What we are most looking forward to, and currently focused on, is enabling users with prosthetic hands to seamlessly and reliably perform the fine motor tasks of daily living,” said Li. “We hope to see users be able to effortlessly tie their shoelaces or button a shirt, confidently pick up an egg or a glass of water without consciously calculating the force, and naturally peel a piece of fruit or pass a plate to a family member.”
Globally, over 1 million amputations occur annually. Experts say vision-assisted, AI-driven prosthetics like this one could represent a meaningful step toward devices that feel less like tools and more like natural extensions of the body.
The article “Design of intelligent artificial limb hand with force control based on machine vision” is authored by Yao Li, Xiaoxia Du, and Hua Li. The research was conducted at Guilin University of Electronic Technology in China and published in Nanotechnology and Precision Engineering on January 20.