Figure 3. Arterial-pulse measurement with the nanofiber-based pressure-sensor unit (NFPSU): a) Photograph of the NFPSU on a tester’s wrist. b, c) Measured arterial-pulse waveforms of a male and a female tester before and after exercise. d) Schematic of the nine-point grid with an area of 1×1 cm² on a tester’s wrist. e) Measured arterial-pulse waveforms at different locations (red dots) in the grid. f) Typical pulse waveform showing the P, T, and D waves.
To characterize its response to tiny mechanical stimuli, the NFPSU was used to monitor wrist pulses in real time (Figure 3a). Owing to its high sensitivity, it readily captured wrist-pulse waveforms with high resolution; the two distinguishable peaks and the late systolic augmentation shoulder agree very well with the expected shape of a noninvasive radial-artery pressure wave.[40] The trace clearly shows that the pulse frequency before exercise was ~70 beats/min and that the pulse shape was regular and repeatable. After exercise, the pulse frequency increased to ~90 beats/min, and the shape and intensity became irregular (Figure 3b,c).
To prove that the NFPSU response was position independent, we drew a nine-point grid with an area of 1×1 cm² on the wrist skin, centered on the wrist artery (Figure 3d). The NFPSU was positioned point by point on the grid to obtain pulse signals. Figure 3e presents the arterial-pulse signals recorded by the NFPSU at various positions in the grid (indicated by red dots in the figure).
Because the whole experiment lasted more than one hour, the heart rate fluctuated slightly within the range of 60-70 beats/min. Each of the nine pulse waveforms in Figure 3e resembles the typical pulse waveform (Figure 3f), which consists of a percussion wave (P-wave), a tidal wave (T-wave), and a diastolic wave (D-wave). Thus, high-fidelity pulse signals can be obtained anywhere within a maximum distance of √(5² + 5²) ≈ 7.07 mm from the arterial pulse. This position-independent sensing ability of the NFPSU is crucial when the GRW is worn for a long time.
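The 7.07 mm bound follows from the grid geometry: the nine points of the 1×1 cm grid are spaced 5 mm apart, so the corner points, which lie farthest from the central point over the artery, are offset by 5 mm along both axes,

$$ d_{\max} = \sqrt{(5\,\mathrm{mm})^{2} + (5\,\mathrm{mm})^{2}} = 5\sqrt{2}\,\mathrm{mm} \approx 7.07\,\mathrm{mm}. $$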
Figure 4. Processing procedure for the data collected in the gesture-recognition experiment (ML: machine learning; SVM: support-vector machine).
For gesture recognition, the proposed GRW with three NFPSUs was placed on the tester’s wrist to obtain mechanical signals related to hand gestures. Data-processing methods were employed to calculate the characteristics of each gesture signal, as shown in Figure 4. Light was launched into the three sensors and collected using a CMOS camera. The time-varying output data, in the form of CMOS images (1280×720 pixels), were then transferred to and processed by a computer. By extracting the change in the gray level of the CMOS images over time, we obtained the time-domain output light intensities of all three sensor channels as they varied with different gestures. (The time-domain signals were collected automatically using a change-point-finding algorithm or a threshold-setting method.)
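As an illustration of this readout step, the following is a minimal Python sketch of how the gray level of the three sensor spots could be extracted frame by frame and segmented with a simple threshold. The ROI coordinates, file name, and function names are hypothetical; this is not the acquisition code used in this work.

```python
import cv2
import numpy as np

# Hypothetical regions of interest (x, y, w, h) where the output spots of the
# three NFPSUs appear in each 1280x720 frame; the real coordinates depend on
# how the fiber outputs are imaged onto the CMOS sensor.
ROIS = [(200, 300, 60, 60), (610, 300, 60, 60), (1020, 300, 60, 60)]

def extract_channel_intensities(video_path):
    """Return an (n_frames, 3) array of mean gray levels, one column per NFPSU."""
    cap = cv2.VideoCapture(video_path)
    intensities = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        intensities.append([gray[y:y + h, x:x + w].mean() for (x, y, w, h) in ROIS])
    cap.release()
    return np.asarray(intensities)

def segment_by_threshold(signal, baseline, k=3.0):
    """Flag frames whose deviation from a resting baseline exceeds k standard
    deviations (the threshold-setting option mentioned in the text; the
    change-point alternative is not shown here)."""
    deviation = np.abs(signal - baseline.mean(axis=0)).max(axis=1)
    return deviation > k * baseline.std()
```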
To avoid the influence of ambient light, the CMOS camera was carefully packaged and shaded, and an algorithm was used to remove the background noise. Subsequently, data consolidation was achieved by an end-to-end merge of the time-domain data obtained from the three sensors. The consolidated data were then appended to a gesture label and collected in a dataset of integrated time-domain signals containing all gestures and their corresponding gesture labels. Because some degree of random motion is inevitable when a person wears a GRW, a machine-learning algorithm for support-vector classification (SVC) was introduced to relearn the tester’s gestures every time the wearing condition changed. In this study, an SVM classification model was trained on the consolidated database and was subsequently used as a classifier to detect gestures. Once the real-time gesture-related data were collected, the trained classifier was used to predict the gesture from the real-time data.
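A minimal sketch of this consolidation and classification step, using scikit-learn’s support-vector classifier; the fixed resampling length and the SVM hyperparameters are assumptions, not the settings used in this work.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_POINTS = 200  # assumed fixed length per channel after resampling

def consolidate(ch1, ch2, ch3):
    """End-to-end merge of the three time-domain channels into one feature vector."""
    resample = lambda s: np.interp(np.linspace(0, 1, N_POINTS),
                                   np.linspace(0, 1, len(s)), np.asarray(s, float))
    return np.concatenate([resample(ch1), resample(ch2), resample(ch3)])

def train_gesture_classifier(samples, labels):
    """Train an SVM classifier on consolidated samples (each sample is a
    (ch1, ch2, ch3) tuple); retrained whenever the wearing condition changes."""
    X = np.vstack([consolidate(*s) for s in samples])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, labels)
    return clf

# Real-time use: predicted_label = clf.predict([consolidate(ch1, ch2, ch3)])[0]
```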
Figure 5a shows the mechanical signals obtained by our GRW for twelve fundamental gestures. The effects of the different gestures on the output intensity are readily visible. The cross section of the wrist was altered by the gesture-related movement of even a single tendon, and the wearing conditions of the GRW changed accordingly. Even for similar gestures (e.g., Gestures 1 and 2), notable differences were observed in the corresponding time-varying output of NFPSU 2, which can be attributed to the high sensitivity of the NFPSUs. Although the machine-learning algorithm effectively addresses the inevitable problem of random wristband motion, disturbances in the output of the GRW sensors caused by sliding between the sensors and the skin surface may occur during long-term wear, reducing the recognition accuracy. However, the position-independent response of the NFPSU means that the effect of such sliding on the results is insignificant. Consequently, a stable output of the NFPSUs during long-term wear is achieved even with very few sensors.
Figure 5. Results of the gesture-recognition experiment: a) Gestures with corresponding time-domain signals measured by the three nanofiber-based pressure-sensor units (NFPSUs). b) Hands of four testers with different physiques. c-f) Classifiers obtained from different testers.
To characterize the adaptability and gesture-recognition accuracy of the GRW, four testers with different physiques were recruited (Figure 5b). Each tester performed each gesture ten times to update the corresponding databases. The corresponding classifiers were obtained by training new SVM classification models on the updated databases and their gesture labels.
Figure 5c shows the classifier obtained from Tester 1. Twelve gestures were successfully recognized with an accuracy of 93.2%, which is comparable to or slightly higher than that reported for GRWs with more than five electrical sensors.[16-18] To prove the adaptability of the wristband and to save the testers’ time, only five representative gestures were included for the other testers. The classifiers obtained from these testers are shown in Figure 5d-f. The slight fluctuations in recognition accuracy may be attributed to the different physiques. Specifically, the subcutaneous fat of the chubby tester (Tester 2) reduced the degree of finger-movement-related deformation, which slightly decreased the recognition accuracy. Nevertheless, the excellent adaptability of the proposed GRW is evident from the high recognition accuracy (92%-94%) for all testers, regardless of physique.
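For reference, a minimal sketch of how such a per-tester classifier could be evaluated with a cross-validated confusion matrix and overall accuracy; the five-fold split is an assumption rather than the evaluation protocol used here.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_tester(X, y, cv=5):
    """Cross-validated confusion matrix and overall accuracy for one tester's
    dataset (e.g., ten repetitions per gesture)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    y_pred = cross_val_predict(clf, X, y, cv=cv)
    cm = confusion_matrix(y, y_pred)
    return cm, np.trace(cm) / cm.sum()
```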
In addition, calibration is required before each wear; it is performed by repeating each gesture ten times and takes about ten minutes.
Figure 6. Remote control of robotic hand via proposed gesture-recognition wristband (GRW): a) Images of a robotic hand performing gestures based on the results obtained from a tester wearing the GRW. b) Images of a tester playing rock-paper-scissors with a robotic hand.
Robotic hands are widely used in modern industry, serving as an efficient means of improving productivity and working conditions. Humanoid robotic hands, which can perform more complicated tasks involving various gestures, have wide application prospects, e.g., remote surgical operation, sign-language translation, and virtual/augmented-reality interactions. To further demonstrate the use of the proposed GRW as an HMI device, a robotic hand was used to perform specific movements based on the gesture-recognition results, as shown in Figure 6a. A schematic of the entire experiment is shown in Figure S4 in the Supporting Information. All four testers, wearing GRWs, controlled the robotic hand through gestures almost in real time (Videos S1-S4).
Specifically, there is a time delay of ~1 s between the tester’s action and the response of the robotic hand in the videos. We attribute this delay mainly to the robotic hand, which is driven by steering engines, because the response times of the NFPSU and the CMOS-based signal-collection system are only 12 ms and 150 ms, respectively. Additionally, each tester played rock-paper-scissors with the robotic hand (Video S5). Because the robotic hand recognized the tester’s gesture before executing its own move, it always won, as shown in Figure 6b.
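A minimal sketch of the rock-paper-scissors logic, assuming the recognized gesture is forwarded to the STM32-driven hand over a serial link; the command bytes and port name are hypothetical and do not reproduce the actual uHand 2.0 protocol.

```python
import serial  # pyserial

# The hand plays the move that beats the recognized gesture, which is why it
# always wins in Figure 6b.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# Hypothetical one-byte command codes; the real controller expects per-finger
# servo commands, which are not reproduced here.
COMMANDS = {"rock": b"\x01", "paper": b"\x02", "scissors": b"\x03"}

def play_counter_move(recognized_gesture, port="/dev/ttyUSB0"):
    """Send the winning counter-gesture to the robotic hand over serial."""
    move = COUNTER[recognized_gesture]
    with serial.Serial(port, baudrate=115200, timeout=1) as link:
        link.write(COMMANDS[move])
    return move

# Example: play_counter_move("rock") makes the hand show "paper".
```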
3. Conclusion
In this study, we proposed and demonstrated an optical-nanofiber-enabled GRW for HMI with the assistance of machine learning. In order to overcome the position-dependent response of a filmy optical nanofiber sensor, we used a soft liquid sac to transfer pressure stimuli to the optical-nanofiber sensor.
The pressure sensor exhibited position independence within a distance of 7.07 mm, together with a good linear pressure response in the range 6.4-31.8 kPa. With the assistance of the SVM machine-learning algorithm, the three-sensor GRW could successfully recognize twelve hand gestures with a maximum accuracy of 94%. Moreover, the GRW was able to control and play games with a robotic hand, demonstrating its significant potential for use as an immersive HMI terminal.
To recognize more gestures with higher accuracy, one can include more NFPSUs in a GRW or optimize the SVM model. With the assistance of micro/nanofiber angle sensors,[30-31] it should also be possible for the GRW to recognize wrist-related gestures. The proposed GRW appears to offer a promising solution for gesture-recognition applications in virtual/augmented reality and the metaverse, e.g., remote control of machines or sign-language translation.
4. Experimental Methods
Materials: PDMS (Sylgard 184 silicone elastomer) was purchased from Dow Corning. The optical nanofiber was fabricated from a standard silica SMF (G.652, cladding diameter: 125 μm, core diameter: 9 μm; Corning Inc.).
Fabrication: First, the nanofiber was fabricated by heating and stretching the SMF. The uniform diameter and length of the waist region were 800 nm and approximately 1.5 cm, respectively. By accurately measuring the time intervals between drops in the transmitted laser intensity during the fiber-pulling process, we could precisely determine when to stop the heating and pulling for a target diameter; both the accuracy and the standard deviation of the diameter can be kept below 5 nm for target micro/nanofiber diameters ranging from 800 nm to 1300 nm.[41] The stretched nanofiber was connected to unstretched SMF at both ends through conical tapered transition regions. Subsequently, the PDMS monomer and curing agent were mixed in a ratio of 10:1. After degassing for 30 minutes, the uncured PDMS was cast on a glass slide and spin-coated (500 rpm, 5 minutes). The PDMS-coated glass slide was heated at 80 °C for 30 minutes to form a PDMS membrane with a thickness of 200 μm; the thickness can be readily adjusted by controlling the spin-coating speed.
The optical nanofiber was then formed into a U-shape and embedded between the PDMS membranes. Therefore, a membrane-nanofiber-membrane sandwiched optical-nanofiber sensor was realized. To form the soft liquid sac, we cast the degassed PDMS precursor into a custom-made mold. The region of the sac contacting the skin surface, referred to as the sac base, was designed to have a thickness of ~200 μm. After solidification, the sac was demolded and filled with glycerol solution to ensure its non-toxicity and chemical stability. Subsequently, the liquid-filled sac was sealed using a filmy optical-nanofiber sensor. Finally, the sealed optical-nanofiber sensor with the soft liquid sac was embedded and fixed in a rigid 3D-printed resin shell using ultraviolet solidification glue to complete the assembly of the NFPSU.
The detailed process of NFPSU fabrication is shown in Video S6.
Characterization and measurement: A motorized force tester (ESM 303, Mark-10 Inc.) was used to control the applied pressure and maintain the operating cycle. A tungsten light source (SLS201L/M, Thorlabs Inc.) and a spectrometer (USB2000+, Ocean Optics) were used to inject and collect the light signals for the characterization of the NFPSU. In the gesture-recognition experiments, an LED (GCI-0604, Daheng Optics) was used as the light source, and a CMOS camera (OV5640, OmniVision Technologies) was used to collect the output signals. A robotic hand (uHand 2.0, Hiwonder Inc.) controlled by an STM32 microcontroller was used to perform the recognized gestures.
Experiments involving human subjects were performed with the full, informed consent of the volunteers, who are also the authors of the manuscript.
Supporting Information
Supporting information is available from the Wiley Online Library or from the corresponding author.
Acknowledgements
This study was supported by the National Natural Science Foundation of China (61975173, 62105299, 92148205), the Major Scientific Research Project of Zhejiang Lab (No. 2019MC0AD01), and the Key Research and Development Project of Zhejiang Province (No. 2021C05003, 2022C03103).
References
[2] M. Wang, T. Wang, Y. Luo, K. He, L. Pan, Z. Li, Z. Cui, Z. Liu, J. Tu, X. Chen, Fusing Stretchable Sensing Technology with Machine Learning for Human-Machine Interfaces, Adv. Funct. Mater. 2021, 31, 2008807.
[3] G. Gao, F. Yang, F. Zhou, J. He, W. Lu, P. Xiao, H. Yan, C. Pan, T. Chen, Z. L. Wang, Bioinspired Self-Healing Human-Machine Interactive Touch Pad with Pressure-Sensitive Adhesiveness on Targeted Substrates, Adv. Mater. 2020, 32, 2004290.
[5] J. Yang, S. Liu, Y. Meng, W. Xu, S. Liu, L. Jia, G. Chen, Y. Qin, M. Han, X. Li, Self-Powered Tactile Sensor for Gesture Recognition Using Deep Learning Algorithms, ACS Appl. Mater. Interfaces 2022, 14, 25629.
[8] A. Moin, A. Zhou, A. Rahimi, A. Menon, S. Benatti, G. Alexandrov, S. Tamakloe, J. Ting, N. Yamamoto, Y. Khan, F. Burghardt, L. Benini, A. C. Arias, J. M. Rabaey, A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition, Nat. Electron. 2020, 4, 54.
[10] Z. Zhou, K. Chen, X. Li, S. Zhang, Y. Wu, Y. Zhou, K. Meng, C. Sun, Q. He, W. Fan, E. Fan, Z. Lin, X. Tan, W. Deng, J. Yang, J. Chen, Sign-to-speech translation using machine-learning-assisted stretchable sensor arrays, Nat. Electron. 2020, 3, 571.
[11] F. Wen, Z. Zhang, T. He, C. Lee, AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove, Nat. Commun. 2021, 12, 5378.
[12] M. Zhu, Z. Sun, Z. Zhang, Q. Shi, T. He, H. Liu, T. Chen, C. Lee, Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications, Sci. Adv. 2020, 6, eaaz8693.
[13] S. Shin, H. Yoon, B. Yoo, Hand Gesture Recognition Using EGaIn-Silicone Soft Sensors, Sensors 2021, 21, 3204.
[15] M. Wang, Z. Yan, T. Wang, P. Cai, S. Gao, Y. Zeng, C. Wan, H. Wang, L. Pan, J. Yu, S. Pan, K. He, J. Lu, X. Chen, Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors, Nat. Electron. 2020, 3, 563.
[16] P. Tan, X. Han, Y. Zou, X. Qu, J. Xue, T. Li, Y. Wang, R. Luo, X. Cui, Y. Xi, L. Wu, B. Xue, D. Luo, Y. Fan, X. Chen, Z. Li, Z. L. Wang, Self-powered gesture recognition wristband enabled by machine learning for full keyboard and multi-command input, Adv. Mater. 2022, e2200793.
[19] L. Zhang, J. Pan, Z. Zhang, H. Wu, N. Yao, D. Cai, Y. Xu, J. Zhang, G. Sun, L. Wang, W. Geng, W. Jin, W. Fang, D. Di, L. Tong, Ultrasensitive skin-like wearable optical sensors based on glass micro/nanofibers, Opto-Electron. Adv. 2020, 3, 19002201.
[20] S. Wang, X. Ni, L. Li, J. Wang, Q. Liu, Z. Yan, L. Zhang, Q. Sun, Noninvasive Monitoring of Vital Signs Based on Highly Sensitive Fiber Optic Mattress, IEEE Sens. J. 2020, 20, 6182.
[21] A. Leber, B. Cholst, J. Sandt, N. Vogel, M. Kolle, Stretchable Thermoplastic Elastomer Optical Fibers for Sensing of Extreme Deformations, Adv. Funct. Mater. 2018, 29, 1802629.
[25] S. Ma, X. Wang, P. Li, N. Yao, J. Xiao, H. Liu, Z. Zhang, L. Yu, G. Tao, X. Li, L. Tong, L. Zhang, Optical Micro/Nano Fibers Enabled Smart Textiles for Human-Machine Interface, Adv. Fiber Mater. 2022, 4, 1108.
[28] L. Y. Li, Y. F. Liu, C. Y. Song, S. F. Sheng, L. Y. Yang, Z. J. Yan, D. J. J. Hu, Q. Z. Sun, Wearable Alignment-Free Microfiber-Based Sensor Chip for Precise Vital Signs Monitoring and Cardiovascular Assessment, Adv. Fiber Mater. 2022, 4, 475.
[29] W. Yu, N. Yao, J. Pan, W. Fang, X. Li, L. Tong, L. Zhang, Highly sensitive and fast response strain sensor based on evanescently coupled micro/nanofibers, Opto-Electron. Adv. 2022, 5, 210101.
[31] Z. Zhang, Y. Kang, N. Yao, J. Pan, W. Yu, Y. Tang, Y. Xu, L. Wang, L. Zhang, L. Tong, A Multifunctional Airflow Sensor Enabled by Optical Micro/nanofiber, Adv. Fiber Mater. 2021, 3, 359.
[32] Y. Li, S. Tan, L. Yang, L. Li, F. Fang, Q. Sun, Optical Microfiber Neuron for Finger Motion Perception, Adv. Fiber Mater. 2022, 4, 226.
[33] L. Zhao, B. Wu, Y. Niu, S. Zhu, Y. Chen, H. Chen, J. H. Chen, Soft Optoelectronic Sensors with Deep Learning for Gesture Recognition, Adv. Mater. Technol. 2022, 7, 2101698.
[35] X. Fan, Y. Huang, X. Ding, N. Luo, C. Li, N. Zhao, S.-C. Chen, Alignment-Free Liquid-Capsule Pressure Sensor for Cardiovascular Monitoring, Adv. Funct. Mater. 2018, 28, 1805045.
[36] N. Yao, X. Wang, S. Ma, X. Song, S. Wang, Z. Shi, J. Pan, S. Wang, J. Xiao, H. Liu, L. Yu, Y. Tang, Z. Zhang, X. Li, W. Fang, L. Zhang, L. Tong, Single optical microfiber enabled tactile sensor for simultaneous temperature and pressure measurement, Photonics Res. 2022, 10, 2040.
[37] Y. Tang, L. Yu, J. Pan, N. Yao, W. Geng, X. Li, L. Tong, L. Zhang, Z. Zhang, A. Song, Optical nanofiber skins for multifunctional humanoid tactility, Adv. Intell. Syst. 2023, 2200203.
[38] N. Yao, S. Linghu, Y. Xu, R. Zhu, N. Zhou, F. Gu, L. Zhang, W. Fang, W. Ding, L. Tong, Ultra-Long Subwavelength Micro/Nanofibers With Low Loss, IEEE Photon. Technol. Lett. 2020, 32, 1069.
Supporting Information
Optical-Nanofiber-Enabled Gesture-Recognition Wristband for Human-Machine Interaction with the Assistance of Machine Learning
Shipeng Wang, Xiaoyu Wang, Shan Wang, Wen Yu, Longteng Yu, Lei Hou, Yao Tang, Zhang Zhang, Ni Yao, Chuan Cao, Hao Dong, Lei Zhang, and Hujun Bao
The model relating the nanofiber deformation to the optical intensity