
Optical tactile sensor to improve robotic performance


By Andy Tay

Robots are increasingly being used in our society, from manufacturing to daily life. However, it remains challenging for robots to perform dexterous manipulation tasks, such as moving small objects, and this limits their usefulness in manufacturing and in assisting humans with tasks that require grasping. Improvements in hardware, such as sensors, and in vision algorithms have achieved some advances toward mimicking human dexterity. However, the lack of tactile feedback with sufficiently high resolution and accuracy is hindering greater use of robots.

Professor Monroe Kennedy III, who leads a research group in the Department of Mechanical Engineering at Stanford University, explains, “Humans are very good at sensing the shape and forces of objects they are holding between their fingers with high resolution. Humans achieve this with mechanoreceptors, cells that sense local pressure, embedded in the skin. Traditionally, biomimicry for tactile sensing has been achieved through physical transduction (fingertips are compressed, and that is directly converted to an electrical signal) or vision-based sensing (deformation is observed and correlated to change in shape or applied forces).”

In a recent paper posted on arXiv and appearing at the 2022 IEEE International Conference on Robotics and Automation (ICRA), Prof. Kennedy and Ph.D. student Won Kyung Do present a novel vision-based solution. Kennedy says, “Vision-based sensing has demonstrated the ability to sense at higher resolutions with multi-modal sensing compared to most physical transduction methods. But with most available vision-based sensors, robots still find it very challenging to perform dexterous tasks and to take what was learned from one task and apply it to a similar manipulation task.”

Their solution, named DenseTact, combines a vision sensor, an inexpensive fisheye lens camera, with a soft elastomer cover that serves as the contact surface (Figure 1). The interior of the sensor cover is illuminated, which allows its shape to be estimated: the deformation of the cover's interior is captured in a single image, which is then used to construct a model of the object being grasped. The sensor can be used in applications including in-hand pose estimation of a held object. “Our major findings from this work were developing the DenseTact sensor that can predict, for previously unseen objects, the depth of each point sensed by the sensor through camera pixels (570 x 570 pixels per image) for 1000 images with an average accuracy of 0.28 mm,” says Kennedy. A sketch of how such a depth map becomes a reconstructed surface follows Figure 1.

Figure 1. The DenseTact optical tactile sensor measuring the shape of a pen. The bottom left image is the raw camera image, and the bottom right image shows the 3D reconstructed surface. Image taken from (Do & Kennedy, 2022).
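To make that output concrete, below is a minimal sketch of turning a predicted 570 x 570 per-pixel depth map into a 3D point cloud like the surface shown in Figure 1. The orthographic pixel grid and the pixel pitch used here are illustrative assumptions; the actual sensor maps pixels through a calibrated fisheye lens model.

```python
import numpy as np

def depth_map_to_point_cloud(depth_mm: np.ndarray, pitch_mm: float = 0.05) -> np.ndarray:
    """Convert a (570, 570) per-pixel depth map into an (N, 3) point cloud.

    Assumes a simple orthographic pixel grid with spacing `pitch_mm`
    (a placeholder; the real sensor uses a calibrated fisheye projection).
    """
    h, w = depth_mm.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float) * pitch_mm  # pixel grid in mm
    return np.stack([xs.ravel(), ys.ravel(), depth_mm.ravel()], axis=1)

# Example: a synthetic dome-shaped depth map standing in for a prediction.
yy, xx = np.mgrid[-1:1:570j, -1:1:570j]
depth = np.clip(1.0 - (xx**2 + yy**2), 0.0, None) * 2.0  # dome, 0 to 2 mm
cloud = depth_map_to_point_cloud(depth)
print(cloud.shape)  # (324900, 3): one 3D point per sensed pixel
```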

Optical sensors for high-resolution robotic feedback

Optical sensors have proven useful for purposes like estimating forces, controlling grasping motion, and adjusting grasp. However, most vision-based tactile sensors are expensive, bulky, and limited to flat, 2D sensing surfaces, while tactile sensors with 3D curved surface designs are costly, especially for multi-finger applications.

The authors set out to combine a fisheye lens camera, a hemispherical 3D cover with a soft contact surface for versatile small-object manipulation, and a high-resolution surface deformation model for shape reconstruction. To create the soft surface, they used an extra-soft silicone elastomer with deformation properties similar to human skin; the softness also maximizes surface deformation under small shear forces. The interior of the sensor is illuminated with light-emitting diodes (LEDs). When the silicone cover is depressed, the camera captures color patterns indicative of the surface shape, with deformation correlated to the reflectivity in each color channel.
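To build intuition for how colored illumination encodes shape, the toy calibration below fits, at every pixel, a linear map from the three color-channel intensities to a known indentation depth. This is a deliberately simplified stand-in: DenseTact itself learns the image-to-depth mapping end to end with a deep neural network, and the function and data shapes here are assumptions for illustration.

```python
import numpy as np

def fit_pixel_calibration(images: np.ndarray, depths: np.ndarray) -> np.ndarray:
    """Fit a per-pixel linear map from RGB intensities to indentation depth.

    images: (N, H, W, 3) frames of the illuminated cover under known presses
    depths: (N, H, W)    ground-truth indentation depth for each frame
    Returns (H, W, 4) coefficients so that depth ~ w . [r, g, b, 1] per pixel.
    """
    n, h, w, _ = images.shape
    feats = np.concatenate([images, np.ones((n, h, w, 1))], axis=-1)  # bias term
    coeffs = np.zeros((h, w, 4))
    for i in range(h):
        for j in range(w):
            # Least-squares fit at this pixel across all calibration frames.
            coeffs[i, j] = np.linalg.lstsq(
                feats[:, i, j], depths[:, i, j], rcond=None)[0]
    return coeffs

# Toy usage: 20 calibration frames over a small 16 x 16 pixel patch.
rng = np.random.default_rng(0)
coeffs = fit_pixel_calibration(rng.random((20, 16, 16, 3)),
                               rng.random((20, 16, 16)))
print(coeffs.shape)  # (16, 16, 4)
```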

Next, using 3D printers, the authors printed training shapes of defined dimensions and used them as ‘ground truths’ to train a deep neural network for accurate shape reconstruction. When the re-projection error was calculated, the DenseTact sensor performed shape reconstruction with an absolute mean error of 0.28 mm, which is small compared to the typical error tolerated in current manufacturing robotics and in robots that assist humans with dexterous tasks.
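A training loop of this kind might look like the minimal sketch below, which penalizes per-pixel depth error with a mean-absolute-error loss (the metric behind the reported 0.28 mm figure). The small network and the random stand-in data are illustrative assumptions, not the authors' released model or dataset.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in network: a tiny conv net mapping a 3-channel image to a 1-channel
# depth map. The paper's actual architecture is a deeper encoder-decoder.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# Stand-in data: random tensors in place of (camera frame, depth map) pairs;
# real ground-truth depth maps come from the 3D-printed calibration shapes.
images = torch.rand(8, 3, 570, 570)
depths = torch.rand(8, 1, 570, 570)
loader = DataLoader(TensorDataset(images, depths), batch_size=4, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
mae = nn.L1Loss()  # mean absolute error, the metric reported as 0.28 mm

for epoch in range(5):
    for image, gt_depth in loader:
        pred = model(image)          # predicted per-pixel depth map
        loss = mae(pred, gt_depth)   # average absolute depth error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```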

Importantly, the entire sensor costs less than US$80, and the full fabrication process takes less than two days. This will help to democratize the use of optical tactile sensors in robotics.

“Our work presents a significant step forward in robotic dexterity approaching human sensing capabilities through accurate shape reconstruction. The second generation of this sensor (Figure 2) is one-third the size of the first generation and adds the capability to obtain calibrated forces over the surface of the sensor (a stress vector field). These combined sensing abilities of high-resolution, calibrated shape reconstruction, and force sensing will enable many new avenues of research in robotic manipulation including the manipulation of fragile, soft, or small objects,” says Kennedy.
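The “stress vector field” Kennedy mentions can be pictured as a 3-vector at every sensed pixel: two shear components and one normal component. Below is a minimal sketch, under assumed units and array shapes, of how such a field could be integrated into a net contact force; the second-generation sensor's actual output format is not described in this article.

```python
import numpy as np

def net_contact_force(stress: np.ndarray, pixel_area_mm2: float = 0.0025) -> np.ndarray:
    """Integrate a per-pixel stress field into a net contact force.

    stress: (H, W, 3) array of [shear_x, shear_y, normal] in N/mm^2.
    Returns a 3-vector: total force in newtons over the contact patch.
    """
    return stress.sum(axis=(0, 1)) * pixel_area_mm2

# Example: a uniform 0.01 N/mm^2 normal press over a 100 x 100 pixel patch.
field = np.zeros((570, 570, 3))
field[200:300, 200:300, 2] = 0.01
print(net_contact_force(field))  # ~[0, 0, 0.25] N
```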

Beyond robotic manipulation capabilities, Kennedy adds that these techniques of calibration for shape and force sensing using computer vision may be applied to wearable technology where it is valuable to sense contact shape or forces on a surface, with the primary limitation being the form-factor of the embedded camera.

Figure 2. The second generation of the DenseTact sensor. Image credit: Assistive Robotics and Manipulation Laboratory, Stanford University.

A path toward better tactile sensors

In this paper, Do and Kennedy developed a low-cost optical tactile sensor using a fisheye lens camera, a soft illuminated cover, and deep neural network algorithms. Integrated with the neural network, the DenseTact sensor achieved shape reconstruction with low mean error. In the future, multiple LEDs may be used in various configurations to improve sensor accuracy, the sensor design can be varied in size and shape to enhance versatility, and the training set can be expanded to make the deep neural network more powerful for shape reconstruction.

Source article: Do, W. K., & Kennedy, M. (2022). DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction. In 2022 IEEE International Conference on Robotics and Automation (ICRA).

The eWEAR-TCCI awards for science writing are a project commissioned by the Wearable Electronics Initiative (eWEAR) at Stanford University and made possible by funding through eWEAR industrial affiliates program member Shanda Group and the Tianqiao and Chrissy Chen Institute (TCCI®).