Technology

MIT has developed a robot that can “touch and feel” objects

The state of the art in artificial intelligence keeps improving steadily, especially in the domains of image, speech and text recognition. In other words, we have reached a point where AI can “see” and “hear” once it has acquired sufficient training, but that’s certainly not the end of it. MIT is all set to turn things up a notch by equipping a robot with the ability to recognize things by “touch” for the very first time.

Recognition by touch is one of the easiest and most natural ways for us humans to familiarize ourselves with an object and ultimately learn to recognize it. Unsurprisingly, it is a whole different ball game for machines, since they lack our complicated nervous system, including the brain.

Therefore, teaching a robot to “touch and feel” proceeds just like every other machine learning recognition task: feed it enough relevant data so that it can form connections between an object and the sensations produced by interacting with that object. At the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, this is exactly what a group of students set out to do.

The team first acquired a KUKA robot arm and attached a special sensor called GelSight to it. Created by Ted Adelson’s group at CSAIL, GelSight is a tactile sensor, i.e. it collects relevant incoming information whenever it touches an object. Allowing the robot arm to obtain touch-related information was the first step in enabling it to connect tactile and visual data.

The next step was training, and that involved feeding the underlying machine learning algorithm 12,000 videos of 200 household objects like fabrics and tools being touched. These videos were subsequently broken down into individual frames that were used by the algorithm to correlate touch-related and vision-related data so that it could start recognizing those objects by “touch”.
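The core idea of this training step — pairing visual features from video frames with tactile features from the sensor, then recognizing objects from touch alone — can be illustrated with a toy sketch. Everything below is hypothetical: the feature vectors, object labels, and nearest-neighbor matching are stand-ins for the actual learned representations described in the paper.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Paired training data: (visual_feature, tactile_feature, object_label).
# In the real system, the visual features would come from video frames and
# the tactile features from GelSight readings; here they are made up.
training_pairs = [
    ([0.9, 0.1, 0.0], [0.8, 0.2, 0.1], "fabric"),
    ([0.1, 0.9, 0.1], [0.2, 0.9, 0.0], "screwdriver"),
    ([0.0, 0.2, 0.9], [0.1, 0.1, 0.9], "mug"),
]

def recognize_by_touch(tactile_feature):
    """Recognize an object purely from a tactile reading,
    by finding the most similar stored tactile feature."""
    best = max(training_pairs, key=lambda p: cosine(p[1], tactile_feature))
    return best[2]

def imagine_touch_from_sight(visual_feature):
    """Given a visual feature, return the tactile feature of the
    closest visual match — 'imagining' what touching it feels like."""
    best = max(training_pairs, key=lambda p: cosine(p[0], visual_feature))
    return best[1]

print(recognize_by_touch([0.85, 0.15, 0.05]))   # → fabric
print(imagine_touch_from_sight([0.95, 0.05, 0.0]))
```

The actual system learns these cross-modal associations with deep networks rather than nearest-neighbor lookup, but the sketch captures the two directions the researchers describe: predicting touch from sight, and recognizing from touch alone.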

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge”, said Yunzhu Li, CSAIL Ph.D. student and lead author on a new paper about the system. “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”

Currently, the robot can only recognize objects by touch in a controlled environment, but this achievement is nonetheless a profound one as we continue to expand upon the abilities of artificial intelligence.

Hamza Zakir

Platonist. Humanist. Unusually edgy sometimes.

Tags: AI, Robots
