A tactile sensor that uses cameras instead of pressure pads to measure the force applied to a surface has been demonstrated by researchers at ETH Zurich.
The technology could serve as the basis for soft robotic skins or prosthetics that simulate touch using computer vision algorithms rather than pressure pads.
While machine vision has made great strides in recent years thanks to advanced cameras, fast CPUs, and powerful algorithms, the sense of touch has not seen much improvement. “Vision-based tactile skins aim at bridging this gap,” write researchers Camill Trueeb, Carmelo Sferrazza, and Raffaello D’Andrea, “exploiting the capabilities of vision sensors and state-of-the-art artificial intelligence algorithms, benefiting from the accessibility of large amounts of data and computational power.”
The researchers developed the tactile sensor along with a machine learning architecture that analyzes motion data supplied by cameras. When force is applied to the sensor's soft material, the material deforms, displacing spherical particles embedded beneath the surface. The AI software tracks the motion of these particles and converts it into tactile data. The researchers say the system offers both a larger sensing surface and a thinner structure than competing solutions.
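The pipeline described above can be sketched loosely in code: track the embedded particles between a reference frame and a deformed frame, compute their displacements, then map the displacement field to a coarse force estimate. The function names and the linear stand-in model below are illustrative assumptions; the actual system uses a trained deep neural network, and real particle positions would come from camera-based tracking rather than synthetic data.

```python
import numpy as np

def particle_displacements(ref_positions, cur_positions):
    """Displacement vectors (in pixels) of tracked subsurface particles."""
    return cur_positions - ref_positions

# Synthetic stand-in data: 16 tracked particles before and after deformation.
rng = np.random.default_rng(0)
n_particles = 16
ref = rng.uniform(0, 100, size=(n_particles, 2))        # reference positions
cur = ref + rng.normal(0, 1.5, size=(n_particles, 2))   # deformed positions

disp = particle_displacements(ref, cur)   # (16, 2) displacement field
features = disp.reshape(-1)               # flatten to one feature vector

# Hypothetical linear map from displacements to a coarse 3x3 normal-force grid.
# This stands in for the learned model; weights here are random, not trained.
W = rng.normal(0, 0.1, size=(9, features.size))
force_map = (W @ features).reshape(3, 3)

print(force_map.shape)
```

The key design point is that force is never measured directly: the camera observes only particle motion, and the mapping from motion to force is learned from data.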
Scaled to larger surfaces, the technology could lead to soft skins with a sense of touch – perfect for robotic and medical applications.