
Cyborg delight: with a hybrid machine learning method, prosthetic hands can be more flexible and accurate

To study muscle-based gesture recognition for prosthetic hands, engineering researchers developed a hybrid machine learning method that combines image-recognition techniques from artificial intelligence with methods used for handwriting and speech recognition. The hybrid's performance is far better than that of traditional machine learning methods.

On November 8, 2021, a paper describing this hybrid approach was published in the journal Cyborg and Bionic Systems.

Schematic diagram of the device and the electrode locations for collecting surface EMG signals | Reference [1]

Motor neurons are the part of the central nervous system that directly controls our muscles: they make muscles contract by transmitting electrical signals. Electromyography records muscle responses by inserting needle electrodes into the muscle. Surface electromyography (sEMG) instead places electrodes on the skin above the muscle, capturing the same signals without punctures, which makes it suitable for non-clinical settings such as exercise and physiotherapy studies.

Over the past decade, researchers have investigated using surface EMG signals to control prosthetics, particularly the complex movements and gestures a prosthetic hand needs for smoother, more sensitive, and more intuitive operation.


The prosthesis is as silky as a real hand | Alita: Battle Angel

Unfortunately, unexpected environmental disturbances, such as electrode movement, introduce a lot of "noise" when the device tries to interpret surface EMG signals, and during everyday wear and use the electrodes often shift. To overcome this, users must undergo lengthy, exhausting sEMG training before they can use a prosthesis, laboriously collecting and classifying their own surface EMG signals to achieve control of the prosthetic hand.

To reduce or eliminate this training burden, the researchers explored a variety of machine learning methods, especially deep-learning pattern recognition, that can distinguish complex gestures and movements despite ambient signal interference.

Optimizing the network structure of a deep learning model can reduce the amount of training required. One promising candidate is the convolutional neural network (CNN), whose connectivity structure resembles that of the human visual cortex. CNNs excel at image and speech recognition tasks and have therefore become the core of computer vision.

So far, researchers have had some success with CNNs, making significant strides in extracting the spatial features of surface EMG signals associated with gestures. But despite this spatial acumen, CNNs struggle with time: gestures are not static but unfold over time, and a CNN ignores the temporal information in a continuing muscle contraction.
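To make the spatial-extraction idea concrete, the hypothetical numpy sketch below treats one window of surface EMG as a small 2-D "image" (electrode channels by time samples) and applies a single convolution filter with a ReLU, as one CNN layer would. The channel count, window length, and filter size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed window: 8 electrode channels x 200 time samples of sEMG
semg_window = rng.standard_normal((8, 200))

def conv2d_valid(x, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation), no padding."""
    kh, kw = kernel.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

# One 3x5 filter spanning neighbouring electrodes and a short time span
kernel = rng.standard_normal((3, 5))
feature_map = np.maximum(conv2d_valid(semg_window, kernel), 0.0)  # ReLU

print(feature_map.shape)  # (6, 196)
```

Note that the output keeps the spatial layout of the electrodes but says nothing about how the signal evolves across successive windows, which is exactly the temporal blind spot described above.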

In recent years, some researchers have turned to the long short-term memory (LSTM) network to address this problem. An LSTM contains feedback connections that make it good at classifying and predicting from sequential data, especially when the important events are separated by pauses, lulls, or interference of unpredictable duration. LSTM is a form of deep learning best suited to unsegmented, connected activities such as handwriting and speech recognition.

The remaining challenge is that although researchers have achieved better gesture classification from surface EMG signals, model size is a problem: the microprocessors available for prosthetics have limited capability, and more powerful hardware is too expensive. Deep learning models that run well on a lab computer are therefore hard to embed in a prosthesis.

Dianchun Bai, one of the authors of the paper and a professor of electrical engineering at Shenyang University of Technology, said: "After all, convolutional neural networks are based on image recognition in the brain, not control of prosthetic limbs." "We need to combine CNNs with a technology that can handle the time dimension, while also ensuring the viability of the physical devices that users must wear."

So the researchers developed a model that combines a CNN with an LSTM, uniting the spatial strengths of the former with the temporal strengths of the latter. The hybrid shrinks the deep learning model while achieving high accuracy and stronger immunity to interference.
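The overall idea can be sketched end to end: a small convolutional stage turns each short sEMG frame into a spatial feature vector, an LSTM reads that feature sequence, and a softmax scores the 16 gestures. This is a hypothetical minimal illustration with random weights and assumed layer sizes, not the authors' actual Light CNN+LSTM model.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Assumed windowing: 10 frames, each 8 electrodes x 20 samples
frames = rng.standard_normal((10, 8, 20))

# "CNN" stage: 6 small filters, ReLU, global average pool per frame
kernels = rng.standard_normal((6, 3, 5)) * 0.1

def conv_features(frame):
    feats = []
    for k in kernels:
        kh, kw = k.shape
        out = np.array([[np.sum(frame[i:i + kh, j:j + kw] * k)
                         for j in range(frame.shape[1] - kw + 1)]
                        for i in range(frame.shape[0] - kh + 1)])
        feats.append(np.maximum(out, 0.0).mean())  # ReLU + pooling
    return np.array(feats)

features = np.stack([conv_features(f) for f in frames])  # (10, 6)

# "LSTM" stage over the per-frame feature sequence
n_in, n_hid = 6, 12
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in features:
    i, f, g, o = np.split(W @ np.concatenate([x_t, h]), 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)

# Classifier: final hidden state -> softmax over 16 gestures
W_out = rng.standard_normal((16, n_hid)) * 0.1
logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.shape)  # 16 gesture probabilities
```

Keeping the convolutional stage small and pooling aggressively is one way such a hybrid stays light enough for an embedded microprocessor, at the cost of some spatial detail.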

After developing the system, they tested the hybrid approach on 10 non-amputee subjects performing 16 different movements, such as holding a phone, picking up a pen, pointing, pinching, and grabbing a glass of water. The results show that its recognition performance is far superior to a CNN alone or to other traditional machine learning methods, with recognition accuracy above 80%.


Schematic diagram of the 16 gestures used in the experiment | Reference [1]

However, the hybrid method had difficulty accurately distinguishing two pinching gestures: one with the middle finger and one with the index finger. In future work, the researchers hope to further optimize the algorithm while keeping the model small, improving its accuracy for deployment in prosthetic hardware. They also want to determine what makes the pinch gestures hard to recognize, and to extend the experiment to more subjects.

Eventually, the researchers hope to develop a prosthetic hand that is as flexible and reliable as the user's original limb.

References

[1] https://spj.sciencemag.org/journals/cbsystems/2021/9794610/

[2] https://www.eurekalert.org/multimedia/817173

Compile: Oasis

Edit: Crispy fish

Typography: Yin Ningliu

Research team

Corresponding author Tie Liu: Department of Mechanical Engineering and Intelligent Systems, University of Electro-Communications, Tokyo, Japan. He received a bachelor's degree in automation from Shenyang University of Technology in 2014 and then pursued a doctorate at its School of Electrical Engineering. His main research interests are surface EMG signal analysis and human upper-limb modeling.

Homepage of the research group

https://dqxy.sut.edu.cn/info/1171/1562.htm

First author Dianchun Bai: Associate professor; Ph.D. in electrical engineering from Shenyang University of Technology in 2011. He has since been a lecturer at the university's School of Electrical Engineering and, from 2019, a distinguished researcher at the University of Electro-Communications in Japan. His research interests include deep learning, human-machine interfaces, and intelligent prosthetics.

Paper information

Journal: Cyborg and Bionic Systems

Published: November 8, 2021

Paper title: Application Research on Optimization Algorithm of sEMG Gesture Recognition Based on Light CNN+LSTM Model

(DOI:https://doi.org/10.34133/2021/9794610)

Fields: deep learning, human prosthetics, robotics, experimental research
