Newswise — Even the most advanced robots struggle to sense touch as effectively as humans do -- just think of all the “fail” videos online where robots grasp eggs and accidentally crack them, or a robot arm moves too fast and smashes a bottle to pieces. Touch has been a huge challenge for roboticists who are working to develop more sophisticated systems that enable robots to accomplish tasks with human-like precision and adaptability.

New tool gives machines a new level of dexterity and precision

To address this challenge, Yunzhu Li, an assistant professor of computer science at Columbia Engineering, has developed a robot learning system that fuses the sense of touch with sight, giving machines a new level of dexterity and precision. His latest paper introduces 3D-ViTac, a multi-modal robot manipulation system that is the first to integrate visual and tactile data into a unified 3D point cloud space for imitation learning. This system allows robots to “see” and “feel” objects, handle fragile items, and perform long-horizon, precise tasks far more effectively than vision-only systems. Li and collaborators presented the paper Nov. 6 at the Conference on Robot Learning (CoRL) in Munich, Germany.

“Our tactile sensors bring robots' sensing capabilities a step closer to human levels,” said Li, who joined Columbia this year. This research builds on his previous work on a tactile glove for understanding how humans use their hands. “The sensors are flexible, scalable, cost-effective, and able to detect detailed contact patterns, making them superior to previous tactile sensors used by robots.”

New approach 

In this work, the researchers introduce a high-resolution force sensor, made from force-sensitive materials (specifically, piezoresistive materials), to the robot learning community to mimic human tactile sensing capabilities. The piezoresistive tactile sensor converts applied pressure into changes in electrical resistance. This change in resistance enables the robot to “feel” the objects it interacts with, allowing for precise grip strength and accurate in-hand object manipulation.
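To make the sensing principle concrete, here is a minimal sketch of how a single piezoresistive taxel could be read out through a voltage divider and mapped to an approximate force. This is an illustration only, not the authors' implementation: the supply voltage, fixed resistor value, and calibration constant are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): reading one piezoresistive taxel
# through a voltage divider and converting the reading to an approximate force.
# All constants and the calibration curve are hypothetical.

V_SUPPLY = 3.3        # supply voltage across the divider (volts), assumed
R_FIXED = 10_000.0    # fixed reference resistor (ohms), assumed

def taxel_resistance(v_out: float) -> float:
    """Infer the sensor's resistance from the divider's output voltage."""
    v_out = min(max(v_out, 1e-6), V_SUPPLY - 1e-6)  # guard against division by zero
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def estimated_force(v_out: float, k: float = 50.0) -> float:
    """Map resistance to force with a made-up inverse calibration:
    pressing harder lowers resistance, so conductance rises with force."""
    return k / taxel_resistance(v_out) * 1_000.0  # arbitrary units

if __name__ == "__main__":
    for v in (0.2, 1.0, 2.5):  # light, medium, firm press
        print(f"V_out={v:.1f} V -> force ~ {estimated_force(v):.2f} (a.u.)")
```

Scanning an array of such taxels row by row yields a pressure image of the contact patch, which is the detailed contact pattern the researchers describe.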

3D-ViTac in Action


 

[Video: “3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing”]

This contact-rich data provides a clear demonstration of how touch can enhance manipulation and how combining touch with vision can bring robots closer to human-level dexterity. The system was tested on tasks such as grasping fragile items, including eggs and grapes, and performing in-hand manipulation, such as reorienting a hex key between fingers or adjusting the grip on a spatula. The fusion of tactile and visual data significantly outperformed systems relying solely on visual inputs, especially when handling fragile items or performing tasks with limited visibility.

Thin, flexible tactile sensors transform robots from clunky tools into ones capable of precise, fluid manipulation 

The researchers developed a dense, flexible tactile sensor array integrated into a soft robotic gripper. The data from the sensors, combined with visual data, generates a 3D point cloud, a unified representation of the scene that enables the robot to both “see” and “feel” its surroundings. The tactile feedback allows the robot to adjust its grip strength in real time, which is especially crucial when visual information is limited or occluded.
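The sketch below shows one simple way such a fusion could be assembled: camera points and taxel readings are stacked into a single point cloud in which every point carries its 3D position, a modality flag, and a signal value. It is a minimal illustration, not the released 3D-ViTac code; the function name, the five-column layout, and the assumption that taxel positions arrive precomputed (in practice they would come from the gripper's kinematics) are all stand-ins.

```python
# Illustrative sketch (not the released 3D-ViTac code): fusing camera points and
# tactile readings into one point cloud. Each point carries (x, y, z, modality, signal).
import numpy as np

def fuse_visuo_tactile(camera_xyz: np.ndarray,     # (N, 3) points from a depth camera
                       taxel_xyz: np.ndarray,      # (M, 3) taxel positions in the world frame
                       taxel_pressure: np.ndarray  # (M,) normalized pressure readings
                       ) -> np.ndarray:
    """Return an (N + M, 5) array: xyz, modality (0 = vision, 1 = touch), signal."""
    vision = np.hstack([camera_xyz,
                        np.zeros((len(camera_xyz), 1)),   # modality flag: vision
                        np.zeros((len(camera_xyz), 1))])  # no pressure signal
    touch = np.hstack([taxel_xyz,
                       np.ones((len(taxel_xyz), 1)),      # modality flag: touch
                       taxel_pressure.reshape(-1, 1)])
    return np.vstack([vision, touch])

# Tiny usage example with random stand-in data
cloud = fuse_visuo_tactile(np.random.rand(1000, 3),
                           np.random.rand(32, 3),
                           np.random.rand(32))
print(cloud.shape)  # (1032, 5)
```

A policy trained by imitation learning can then consume this unified cloud directly, so that touch information is available even where the cameras' view of the grasped object is occluded.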

Equipped with “fingers” capable of feeling the world around them, these robots can now handle fragile objects with care. Thin, flexible tactile sensors cover their hands, enabling them to perceive the slightest pressure and adjust their movements accordingly. This innovation has transformed the robots from clunky tools into ones capable of precise, fluid manipulation once thought impossible for machines.

“This breakthrough also enables robots to handle occluded objects more reliably and effectively,” said Binghao Huang, the project lead and a Columbia Engineering PhD student who works with Li. Occlusion occurs when an object is hidden from view, which is problematic for robots that rely on visual information to manipulate objects. “As the demand for humanoid robots to assist with household chores grows, our bimanual system equipped with tactile sensors presents a promising solution for achieving such tasks.”

What’s next?

With this leap forward in robotics, the line between human and machine skills begins to blur, opening the door to a future where robots can not only see the world but feel it, too.

The next step for the researchers is to improve the system's scalability and scale up data collection. They are also developing a tactile simulation and integrating it into the robot learning process. This will allow for larger-scale data collection and better generalization of the policy, enabling the system to perform well in new situations for which it was not explicitly trained.


About the Study

Conference: Conference on Robot Learning (CoRL), Munich, Germany, November 6-9, 2024.

Title: 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing

Authors: Binghao Huang (1), Yixuan Wang (1), Xinyi Yang (2), Yiyue Luo (3), Yunzhu Li (1)

  1. Columbia University
  2. University of Illinois Urbana-Champaign
  3. University of Washington

Funding: This work is partly funded by the Toyota Research Institute (TRI). This research solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.