
Spray-on smart skin uses AI to rapidly understand hand tasks


A novel, electrically active smart skin can rapidly decipher typing, sign language, and even the shape of a familiar object from the movements of a human hand, even with limited data.

By Andrew Myers

A new smart skin developed at Stanford University might foretell a day when people type on invisible keyboards, identify objects by touch alone, or allow users to communicate by hand gestures with apps in immersive environments.

In a just-published paper in the journal Nature Electronics, the researchers describe a new type of stretchable, biocompatible material that gets sprayed on the back of the hand, like suntan spray. Integrated into the mesh is a tiny electrical network that senses as the skin stretches and bends and, using AI, the researchers can interpret myriad daily tasks from hand motions and gestures. The researchers say it could have applications and implications in fields as far-ranging as gaming, sports, telemedicine, and robotics.

So far, several promising methods, such as measuring the electrical activity of muscles with wristbands or wearable gloves, have been actively explored to recognize various hand tasks and gestures. However, these devices are bulky because multiple sensing components are needed to pinpoint movements at every single joint. Moreover, a large amount of data must be collected for each user and task in order to train the algorithm. These challenges make it difficult to adopt such devices as daily-use electronics.

This work is the first practical approach that is both lean enough in form and adaptable enough in function to work for essentially any user—even with limited data. Current technologies require multiple sensor components to read each joint of the finger, making them bulky. The new device also takes a leaner approach to software to allow faster learning. Such precision could be key in virtual reality applications to convey finely detailed motions for a more realistic experience. 

The enabling innovation is a sprayable, electrically sensitive mesh network embedded in polyurethane, the same durable-yet-stretchable material used to make skateboard wheels and to protect hardwood floors from damage. The mesh comprises millions of gold-coated silver nanowires that are in contact with each other to form dynamic electrical pathways. This mesh is electrically active, biocompatible, breathable, and stays on unless rubbed off with soap and water. It conforms intimately to the wrinkles and folds of each human finger that wears it. A lightweight Bluetooth module can then simply be attached to the mesh to wirelessly transmit the signal changes.

“As the fingers bend and twist, the nanowires in the mesh get squeezed together and stretched apart, changing the electrical conductivity of the mesh. These changes can be measured and analyzed to tell us precisely how a hand or a finger or a joint is moving,” explained Zhenan Bao, the K.K. Lee Professor of Chemical Engineering and senior author of the study.
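The basic signal is simple to picture: resistance rises as the mesh stretches and falls as it relaxes. The minimal sketch below, which is an illustration rather than the authors' code, turns a window of resistance samples into a relative-change signal of the kind such an analysis could start from; the synthetic "window" array stands in for data streamed from the Bluetooth module.

```python
import numpy as np

def to_relative_change(resistance: np.ndarray) -> np.ndarray:
    """Express a window of mesh resistance samples (ohms) as relative change
    from the relaxed-hand baseline: delta-R / R0 grows as the mesh stretches."""
    baseline = resistance[:10].mean()          # assume the hand starts relaxed
    return (resistance - baseline) / baseline

# Synthetic stand-in for one streamed window: a relaxed hand (~1 kOhm)
# followed by a finger bend that raises the mesh resistance.
window = np.concatenate([np.full(50, 1000.0), np.linspace(1000.0, 1400.0, 150)])
signal = to_relative_change(window)            # peaks near 0.4 at full bend
```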

The researchers chose to spray the mesh directly onto the skin so that it is supported without a substrate. This key engineering decision eliminated unwanted motion artifacts and allowed them to use a single trace of conductive mesh to generate multi-joint information about the fingers.

Spray-on sensory system consisting of a printed, biocompatible nanomesh directly connected to a wireless Bluetooth module and further trained through meta-learning (Image credit: Kyun Kyu “Richard” Kim, Bao Group, Stanford U.)

The spray-on nature of the device allows it to conform to a hand of any size or shape, and it also opens the possibility that the device could be adapted to the face to capture subtle emotional cues. That might enable new approaches to computer animation or lead to avatar-led virtual meetings with more realistic facial expressions and hand gestures.

Machine learning then takes over. Computers monitor the changing patterns in conductivity and map those changes to specific physical tasks and gestures. Type an X on a keyboard, for instance, and the algorithm learns to recognize that task from the changing patterns in the electrical conductivity. Once the algorithm is suitably trained, the physical keyboard is no longer necessary. The same principles can be used to recognize sign language or even to recognize objects by tracing their exterior surfaces.
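In spirit, this is an ordinary supervised-learning problem: windows of the conductivity signal in, gesture or keystroke labels out. The sketch below shows that mapping with an off-the-shelf classifier on synthetic arrays; it is a hedged baseline for illustration, not the published model (which, as described later, uses meta-learning), and `X` and `y` are stand-ins for recordings captured while the wearer types on a real keyboard.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 200))      # 600 typing windows, 200 delta-R/R0 samples each
y = rng.integers(0, 26, size=600)    # stand-in labels: which letter was typed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # near chance on random data
```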

And, whereas existing technologies are computationally intensive and require vast amounts of data that must be laboriously labelled by humans—by hand, if you will—the Stanford team has developed a learning scheme that is far more computationally efficient.

Two-handed QWERTY keyboard typing recognition with the nanomesh printed on both hands, and real-time recognition of interacting objects (Image credit: Kyun Kyu “Richard” Kim, Bao Group, Stanford U.)

“We brought in the aspects of human learning that rapidly adapt to tasks with only a handful of trials, known as ‘meta-learning.’ This allows the device to rapidly recognize arbitrary new hand tasks and users with a few quick trials,” said Kyun Kyu “Richard” Kim, a postdoctoral scholar in Bao’s lab, who is first author of the study.

“Moreover, it’s a surprisingly simple approach to this complex challenge that means we can achieve faster computational processing time with less data because our nanomesh captures subtle details in its signals,” Kim added. The precision with which the device can map subtle motions of the fingers is one of the leading features of this innovation. 
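The practical payoff of such a scheme is that a model pre-trained across many users and tasks only needs a few labelled trials from a new wearer before it is useful. The sketch below illustrates that few-shot adaptation step in a simplified form; it is not the authors' meta-learning algorithm, and the network, tensors, and "5 trials per gesture" setup are assumptions made purely for illustration.

```python
import torch
from torch import nn

# A small classifier assumed to have been pre-trained across many users/tasks
# (pre-training omitted for brevity): 200-sample signal windows -> 26 classes.
model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 26))

# A handful of labelled trials from a new wearer: 5 synthetic examples per class.
support_x = torch.randn(5 * 26, 200)
support_y = torch.arange(26).repeat_interleave(5)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):                     # a few quick adaptation steps
    optimizer.zero_grad()
    loss = loss_fn(model(support_x), support_y)
    loss.backward()
    optimizer.step()

# The adapted model can now label the new wearer's incoming signal windows.
new_window = torch.randn(1, 200)
predicted_class = model(new_window).argmax(dim=1)
```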

The researchers have built a prototype that recognizes simple objects by touch and can even do predictive two-handed typing on an invisible keyboard. The algorithm was able to type, “No legacy is so rich as honesty” from William Shakespeare and “I am the master of my fate, I am the captain of my soul” from William Ernest Henley’s poem “Invictus.”

Video: keyboard typing demonstration

Co-first authors are Kyun Kyu (Richard) Kim, a postdoctoral scholar in the Bao Group in Chemical Engineering, and Min Kim, a Ph.D. student in the School of Computing, Korea Advanced Institute of Science and Technology (KAIST).

Additional Stanford authors are Samuel Root and Bao-Nguyen Nguyen, postdoctoral scholars in the Bao Group in Chemical Engineering; Yuya Nishio, a Ph.D. student in the Bao Group in Electrical Engineering; and Jeffrey B.-H. Tok, the Uytengsu Teaching Center Laboratory Director in Chemical Engineering. Zhenan Bao is also a member of Stanford Bio-X, the Stanford Cardiovascular Institute, the Maternal & Child Health Research Institute (MCHRI), the Precourt Institute for Energy, Sarafan ChEM-H, the Stanford Woods Institute for the Environment, the Wu Tsai Human Performance Alliance, and the Wu Tsai Neurosciences Institute, and is an investigator of CZ Biohub.

Additional authors are Kyungrok Pyun, Jinki Min, Jaewon Kim, Seonggeun Han, and Joonhwa Choi, Ph.D. students in the Department of Mechanical Engineering, Seoul National University; Jin Kim, a researcher in the College of Veterinary Medicine, Seoul National University; Seunghun Koh, a Ph.D. student in the School of Computing, Korea Advanced Institute of Science and Technology (KAIST); C-Yoon Kim, an assistant professor in the College of Veterinary Medicine, Konkuk University; Sungho Jo, a professor in the School of Computing at KAIST, a member of the Soft Robotics Research Center, and an affiliate of the KAIST Institute for AI and the KAIST Institute for Robotics; and Seung Hwan Ko, a professor in the Department of Mechanical Engineering, Seoul National University, associate head of the Institute of Engineering Research, and a member of the Soft Robotics Research Center.

The eWEAR-TCCI awards for science writing are a project commissioned by the Wearable Electronics Initiative (eWEAR) at Stanford University and made possible by funding through eWEAR industrial affiliates program member Shanda Group and the Tianqiao and Chrissy Chen Institute (TCCI®).