Opto-Electronic Advances, Volume 8, Issue 2, 240152 (2025)

Multi-photon neuron embedded bionic skin for high-precision complex texture and object reconstruction perception research

Hongyu Zhou1,2,†, Chao Zhang1,2,†, Hengchang Nong1,2,†, Junjie Weng3, Dongying Wang4, Yang Yu1,*, Jianfa Zhang5, Chaofan Zhang6, Jinran Yu6, Zhaojian Zhang1, Huan Chen1, Zhenrong Zhang2, and Junbo Yang1
Author Affiliations
  • 1College of Science, National University of Defense Technology, Changsha 410073, China
  • 2Key Laboratory of Multimedia Communication and Network Technology in Guangxi, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
  • 3College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
  • 4College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410073, China
  • 5Hunan Provincial Key Laboratory of Novel Nano-Optoelectronic, Information Materials and Devices, National University of Defense Technology, Changsha 410073, China
  • 6College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha 410073, China

    Attributable to the complex distribution of tactile vesicles under the skin and the ability of the brain to process specific tactile parameters (shape, hardness, and surface texture), human skin has the capacity for tactile spatial reconstruction and visualization of complex object geometry and surface texture. However, current haptic sensor technologies are predominantly point sensors, which do not have an interlaced distribution structure similar to that of haptic vesicles, limiting their potential in human-computer interaction applications. Here, we report an optical microfiber array skin (OMAS) imitating tactile vesicle interlaced structures for tactile visualization and object reconstruction sensing. This device is characterized by high sensitivity (−0.83 V/N) and a fast response time (38 ms). We demonstrate that combining the signals collected by the OMAS with appropriate artificial intelligence algorithms enables the recognition of objects with different hardnesses and shapes with 100% accuracy. It also allows for the classification of fabrics with different surface textures with 98.5% accuracy and Braille patterns with 99% accuracy. As a proof-of-concept, we integrated OMAS into a robot arm to select a mahjong tile among six common objects and successfully recognize its suit by touch, which provides a new solution for tactile sensory processing in human-computer interaction.

    Introduction

    Mahjong is a traditional Chinese game that contains a wealth of statistical principles. From "circle, character, bamboo" to "red, green, white dragon", from seasonal winds to daily necessities, each element holds profound cultural significance, and the game is regarded as a quintessence of Chinese culture. Masters of this game do not need to rely on vision: they can recognize the mahjong suits in their hands simply by the pressure and friction sensed by their skin. This reflects not only the human skin's sophisticated tactile sensing mechanism for external physical stimuli such as hardness, shape, and surface texture, but also the brain's rapid analysis and reconstruction of the morphology of the contacted object based on these tactile parameters. Taking the human palm skin as an example, it contains more than 20,000 tactile vesicles underneath, which can be categorized by function: static-force-sensitive Merkel-neurite vesicles, tensile-force-sensitive Ruffini vesicles, edge-shape-sensitive Meissner vesicles, and vibration-sensitive Pacinian vesicles1,2. Depending on their depth in the skin, activation threshold, and triggering mode, they acquire different types of tactile signals in cross synergy and clearly distinguish features such as shape, hardness, and surface texture in real time.

    Inspired by this tactile sensing mechanism, tactile sensors that simulate the function of human skin have attracted extensive attention. With the emergence of inkjet thin-film printing and micro-nano processing technologies, tactile sensors with high sensitivity, wide detection range, and high robustness have been developed. So far, electrical sensors based on the principles of resistance3,4, piezoelectricity5, and triboelectricity6 have been able to mimic the tactile nerve in collecting and processing physical information, including strain3, vibration5, and material6. By monitoring changes in the sensor's output electrical signals during contact, various sensing systems have been developed for applications such as motion monitoring3, smart prosthetics5,7, and human-computer interaction4,6. However, electrical tactile sensors have some inherent drawbacks, such as safety issues with electrical components and circuits, the possibility of containing hazardous metal components, low resistance to corrosion by human secretions, and susceptibility to electromagnetic interference. In contrast, using photons rather than electrons as information carriers is an ideal strategy for tactile sensors, as demonstrated by fiber-optic sensors8,9. Fiber-optic tactile sensors typically combine optical fibers with flexible materials10 to measure physical quantities such as temperature, pressure, and material properties by monitoring changes in the characteristics of the output light, such as phase11, intensity12,13, and central wavelength14. Typical fiber-optic tactile sensing structures include the optical microfiber (OM)11,13, fiber Bragg grating (FBG)14, single-mode fiber (SMF)15, and Fabry-Perot interferometer (FPI)16, as shown in Supplementary Table S1. Although these have led to great progress in recognition technology, there are some limitations.
Nowadays, most optical tactile sensors are point sensors, i.e., a single sensing point is used to sense the characteristics of the touched object, and this sensing method is prone to losing detailed information about the touched surface. The few array-based sensing efforts17,18 use structures that do not mimic the cross-distributed arrangement of tactile vesicles under the skin and thus cannot capture the vector angles of tactile parameters such as stress and surface texture; such limited signals hamper the restoration of tactile signals and their application in spatial reconstruction. Therefore, there is an urgent need for a high-precision, high-sensitivity tactile-like sensing technique that can be extended to a two-dimensional detection range. This would allow comprehensive acquisition of the features of the contacted surface and aid spatial reconstruction perception.

    Due to its wavelength or sub-wavelength scale diameter, large refractive index difference between the core and cladding, and geometrical homogeneity of atomic precision, optical microfiber exhibits strong optical field confinement and propagation characteristics with a large evanescent field19. This makes it extremely suitable for the detection of tiny tactile signals, and it has been demonstrated that such fibers can measure the temperature20, pressure18,20-22, vibration12,23,24, material25, and other tactile parameters. In our previous work25, we proposed that a photonic sensing system, consisting of OM as a mechanoreceptor coupled with an artificial intelligence algorithm (which simulates the human brain to train and extract signal features from touching objects), is able to mimic the participation of a single human tactile neuron voxel in haptic sensing. Based on the above theory, a grid-distributed array of OMs, in which the position of each OM determines its characteristic response to the contact force, can be built to capture the rich vectorial information of the force in space, which can be used to mimic the synergistic sensing effect of spatially distributed tactile sensing voxels under the human subcutaneous space. In addition, unlike previous array-based work6,18, the cross-distributed OM sensing array signals are not completely independent of each other, but are correlated with respect to the stress angle, which enriches the dimensionality of the signals due to the 'anisotropic response' of the single waveguide core to stress26.

    Considering these factors, we integrated a 2 × 2 OM array with polydimethylsiloxane (PDMS) to propose a micro-nano multiphotonic neuron-embodied tactile skin (OMAS) for object shape recognition in human-computer interaction. A four-way longitudinal and transverse micro-nano structure is utilized to achieve multimodal tactile sensing of objects, simulating the synergistic effect of human multi-fingers or subcutaneous multi-tactile receptors in various tactile modalities (Fig. 1(a)). In addition, in order to simulate the human multimodal tactile sensing system, we connect the OMAS to a signal processing module on the PC side, and use artificial intelligence algorithms (Fully Connected Neural Networks-FCNN etc.) to mimic the brain's function of processing bioelectrical signals. This enables multivariate sensing and spatial reconstruction mechanism of the object's shape, hardness, and surface texture (Fig. 1(b)). Through experiments, we demonstrate that OMAS can be used as a bionic flexible tactile skin for robots. By analyzing the static pressure data, OMAS can detect the softness, hardness and shape of the contacting objects with 100% accuracy for recognizing three levels of hardness across three 3D geometries and six common objects. By analyzing the characteristics of the tactile signals of the dynamic pressure, OMAS is able to accurately recognize the material and surface texture of the contacting objects with a recognition accuracy of 98.5% for ten fabrics and a 99% success rate for recognizing numbers 0–9 in Braille. As a proof-of-concept, we integrated OMAS into a robotic hand to accurately locate the position of a mahjong among six objects with different spatial distributions on a plane and correctly recognize its suit. This micro-nano-multiphotonic neuron embodied tactile skin is ahead of similar research efforts in terms of key parameters such as recognition accuracy, detectable modality, and detectable range (Supplementary Fig. S1). 
The experimental results show that based on this spatial angle correlation sensing structure design, the developed multiphoton bionic skin receptors are equipped with vector sensing capabilities of normal/shear force and axial strain. Combined with the preferred machine learning algorithm and interaction processing module, the multiphoton bionic skin is capable of realizing object texture and spatial structure reconstruction sensing capabilities that match those of mahjong masters (i.e., the process of combining tactile features of an object to form a spatial image reflection, enabling purely tactile-based spatial reconstruction.).


    Figure 1.Design and fabrication of a multiphoton neuron tactile skin. (a) The design concept and spatial reconstruction workflow of the multiphotonic neuron haptic skin for simulating the tactile perception and spatial reconstruction process of human subcutaneous multitactile cell fusion. (b) Flowchart of the interaction of each module in the spatial reconstruction process of mahjong by multiphotonic neuron haptic skin. (c) The structure of the multiphotonic neuron tactile skin, which consists of three layers: a silicone contact layer, an OM array embedded PDMS sensing layer, and a glass substrate layer.

    Results and discussion

    Design and fabrication of tactile skin with multiphoton neurons

    The overall concept of the multiphoton neuron haptic skin is illustrated in Fig. 1(c). The OMAS bionic structure can be characterized as a three-layer structure of contact, sensing, and substrate layers. A 3D printer (X1-Carbon Combo, Bambulab) was used to prepare a PET plastic inverted mold of the microstructural layer simulating the skin texture; transparent silicone (Posilicone) was then applied to the mold, cured, and demolded to form a microstructured annular ridge mimicking the skin texture. According to the principle of volume conservation, the characteristic dimensions of the annular ridges are as follows: overall diameter of 30 mm, ridge height of 0.2 mm, ridge width of 0.5 mm, ridge spacing of 0.5 mm, and thickness of 1 mm, which are comparable to the dimensional characteristics of the skin texture of the adult hand27. This structure not only cushions some of the stresses, improves the sensor toughness, and protects the sensing layer, but its annular ridge structure also increases the friction coefficient, enlarges the actual contact surface area, and amplifies the tactile vibration caused by friction28. The core sensing unit shown in the figure is an array of four optical microfibers fabricated by the improved flame-brushing taper-drawing method29 with identical processes and parameters (the waist region has a diameter of 3 μm and a length of 6 mm; Supplementary Fig. S2 shows high-quality microscope images). The four OMs are embedded in the PDMS in a crossed longitudinal-transverse arrangement, which greatly increases the sensing area of the sensor, increases the number of texture-acquisition signals (from one signal to four signals), and makes it less likely that specific 2D texture features of the contacted surface will be lost, increasing the reliability of the sensor. Micro- and nanofibers have wavelength- or sub-wavelength-scale diameters, large refractive index differences between core and cladding, and geometric homogeneity with atomic precision, presenting high mechanical strength (> 5 GPa30) and possessing strong light-field confinement and the propagation characteristics of large evanescent fields19. When the OM is subjected to slight bending caused by a tactile stimulus (static pressure or dynamic vibration), its confined symmetric transmission mode is converted to an asymmetric radiation mode, which produces energy leakage, leading to a decrease in the optical power at the output port; its bending loss is shown in Eq. (1)31:

    $$L_\alpha = 10\log\left\{1-\frac{\sqrt{\pi}\,k_0^2\left(n_1^2-n_{\mathrm{neff}}^2\right)}{2W^{3/2}V^2\sqrt{R_1}\,K_0(Wa)K_2(Wa)}\exp\left[-\frac{2k_0\left(n_{\mathrm{neff}}^2-n_2^2\right)^{3/2}}{3n_{\mathrm{neff}}^2}R_1\right]\right\},$$

    where k0 is the free propagation constant, n1 and n2 are the refractive indices of the OM core and cladding, respectively, nneff is the effective refractive index of the propagating modes in the OM, a is the radius of the OM, V is the normalized frequency of the OM, W is the radial propagation constant of the cladding, R1 is the bending radius of the micro- and nano-optic fibers, and K0 and K2 are the zeroth-order and second-order modified Hankel functions, respectively.
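To make Eq. (1) concrete, the sketch below evaluates the bending loss numerically. All parameter values (wavelength, indices, OM radius) are illustrative assumptions rather than the paper's calibrated values, and `bending_loss_dB` is a hypothetical helper name; the modified Bessel functions K0 and K2 come from `scipy.special.kn`.

```python
import numpy as np
from scipy.special import kn  # kn(n, x): integer-order modified Bessel function K_n

def bending_loss_dB(R1, wavelength=1.55e-6, n1=1.45, n2=1.40, n_eff=1.42, a=1.5e-6):
    """Bending loss L_alpha of Eq. (1), in dB, for bend radius R1 (m).

    Illustrative parameters (assumptions): n1/n2 are the OM core and
    PDMS cladding indices, n_eff the modal effective index, a the OM radius.
    """
    k0 = 2 * np.pi / wavelength           # free-space propagation constant
    V = k0 * a * np.sqrt(n1**2 - n2**2)   # normalized frequency
    W = k0 * np.sqrt(n_eff**2 - n2**2)    # radial propagation constant of the cladding
    num = np.sqrt(np.pi) * k0**2 * (n1**2 - n_eff**2) * np.exp(
        -2.0 * k0 * (n_eff**2 - n2**2) ** 1.5 / (3.0 * n_eff**2) * R1)
    den = 2.0 * W**1.5 * V**2 * np.sqrt(R1) * kn(0, W * a) * kn(2, W * a)
    return 10.0 * np.log10(1.0 - num / den)

# A tighter bend leaks more light, so the loss is more negative.
loss_tight, loss_gentle = bending_loss_dB(1e-3), bending_loss_dB(2e-3)
```

Under these assumed parameters the loss magnitude grows steeply as the bend radius shrinks, consistent with the leakage mechanism described above.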

    However, because of its large evanescent-field transmission property (part of the optical field energy is transmitted outside the physical boundary of the OM), it is highly susceptible to external environmental factors32. If the waist region is exposed to the air, its evanescent field energy will be absorbed by the air, causing loss. In addition, a region exposed to the air is highly susceptible to the attachment of pollutants, causing additional losses from absorption and scattering13. The introduced PDMS can effectively encapsulate the evanescent field leaking from the waist region. Moreover, PDMS is a transparent and flexible material with high biocompatibility, high flexibility, and good optical modulation properties33. The two PDMS layers we used (base fluid to curing fluid ratio of 10:1, thickness of about 500 μm) encapsulate the longitudinally and transversely distributed OMs, and the elasto-optic effect34 effectively transforms tactile stimuli into OM deformation with high fidelity, improving the stability of the sensor while protecting the waist region of the microfiber. The fractional power outside the OM in this state is 38.04% (Supplementary Section 1). The final substrate layer, consisting of a glass substrate, mimics the bones inside the skin; it supports and protects the upper structure of the sensor from damage and improves the robustness of the sensor. The rationale for the choice of sensor fabrication parameters is discussed in Supplementary Section 2.

    To demonstrate the rationality of the longitudinal and transverse distribution of the waist structure, and to investigate the effect of sensor deformation on the optical transmission loss in the waist region of the optical microfiber under forces in different directions, we performed finite element simulations in COMSOL based on the "Solid Mechanics" and "Ray Optics" modules. First, we classified the possible forms of force into three: vertical normal force, radial shear force, and axial shear force [Fig. 2(a)], assuming that each of the three forces acts on the waist region of the microfiber to produce a deformation of 0.22 μm in the central region. After reaching steady state, we conducted path tracing for 2791 rays with a total power of 1 W emitted from the fiber core in the waist region, as shown in Fig. 2(b, c). We found that the bending of the fiber core caused by the vertical normal force and the radial shear force makes the transmitted light leak strongly along the plane in which the force acts, resulting in power loss, whereas the deformation caused by the axial shear force is more complex and subtle, changing only a small portion of the light paths. This indicates that forces in different directions bring different deformations to the sensitive waist region, resulting in different light propagation characteristics.
The action of a given force can be decomposed into vertical normal, radial shear, and axial shear components, which produce different effects in the horizontal and vertical sensing regions. This vectorial force-sensitive sensing structure is able to capture the angular information of the patterns and textures of the contacted surfaces, unlike single-fiber neurons12, parallel-distributed arrays of micro- and nanofibers18, or other waist-region overlapping approaches (Supplementary Section 3). In addition, the structure expands the usable directions of the sensor, allowing the friction signal acquisition action to be extended from a specific one-dimensional direction to an arbitrary direction in two dimensions. On this basis, we can use signal processing tools such as the Fast Fourier Transform (FFT) and deep learning algorithms to identify and even rebuild the surface texture of an object by analyzing the features of the four signals.
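As a minimal sketch of this signal chain, the snippet below extracts the dominant oscillation frequency and amplitude of each channel with an FFT. The waveforms, sampling rate, and the `spectral_features` helper are illustrative assumptions, not the paper's actual processing pipeline.

```python
import numpy as np

def spectral_features(signals, fs):
    """Return [peak_frequency_Hz, peak_amplitude] for each channel.

    signals: array of shape (channels, samples); fs: sampling rate in Hz.
    """
    signals = np.asarray(signals, dtype=float)
    n = signals.shape[1]
    # Subtract the mean so the static optical power level does not
    # mask the small friction-induced oscillations.
    detrended = signals - signals.mean(axis=1, keepdims=True)
    spectrum = np.abs(np.fft.rfft(detrended, axis=1)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    peak = spectrum[:, 1:].argmax(axis=1) + 1  # skip the DC bin
    return np.stack([freqs[peak], spectrum[np.arange(len(peak)), peak]], axis=1)

# Synthetic 4-channel example: the two texture-normal channels oscillate,
# the two texture-tangential channels stay nearly flat (values made up).
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 4, 1 / fs)
sig = np.vstack([
    1.0 + 0.05 * np.sin(2 * np.pi * 3.3 * t),
    1.0 + 0.04 * np.sin(2 * np.pi * 3.3 * t),
    1.0 + 0.001 * rng.standard_normal(t.size),
    1.0 + 0.001 * rng.standard_normal(t.size),
])
feats = spectral_features(sig, fs)
```

In this toy example the oscillating channels yield a clear spectral peak near the driving frequency, while the flat channels show only noise-level amplitudes, mimicking the normal/tangential contrast described above.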


    Figure 2.Validation of shape and hardness recognition ability of multiphotonic tactile neurons. (a) Schematic of the force applied to a single sensing unit of a multiphotonic neuron. (b) Changes in the path of light passing through the waist region under three forces. (c) Stress and deformation diagrams (indicated by color bars) of the waist region of a single sensing unit under three forces. (d) Optical power response plots of a single sensing unit under 0 to 3 N normal contact force in steps of 0.2 N, with each force transformation held for 5 s. (e) Normal force sensitivity from 0 to 3 N, with error bars indicating slight variations in optical power from the response time. (f) Individual sensing unit optical power response when pressure (1 N) is repeatedly applied more than 5000 times. The inset shows the optical power response in one of the two time domains after zooming in. (g) Pressure recognition based on object hardness and shape for six species using FCNN machine learning algorithm. (h) Visualization of clustered data clusters. (i) Confusion matrix of the six measured objects with 100% recognition accuracy.

    Validation of OMAS's shape and hardness recognition capability

    SA receptors in human skin generate voltage spikes at different rates and in different patterns according to the intensity of a continuous stimulus (such as static pressure), forming the static portion of tactile sensory information, which is the main way for the skin to recognize the shape and hardness of an object35,36. The microfiber array in the OMAS multiphoton tactile receptor consists of four microfibers with the same parameters, fabricated in the same experimental environment using the same taper-drawing process and promptly packaged and preserved after fabrication; it can therefore be assumed that they are consistent in morphology and properties31, and the optical paths of the four microfibers are not coupled to and do not affect each other. We therefore chose the sensing region of one of the four OMAS microfibers as the object of study in this section to investigate the static tactile response of the OMAS sensor to contact force, and conducted the experiments using a PC-controlled electrodynamic dynamometer (Supplementary Fig. S3). Figure 2(d) plots the optical power response of a microfiber sensing region with a uniform waist diameter of 3 μm and a length of 6 mm under a normal contact force from 0 to 3 N in steps of 0.2 N, with each force step held for 5 s. It can be observed that the output intensity decreases with increasing pressure. This is because the bending deformation of the OM waist region increases as the pressure increases, causing light originally transmitted in a symmetric mode of the OM to be converted to an asymmetric leakage mode; the bending loss of the OM becomes larger, leading to a decrease in the optical power guided in the OM37.
The drop in optical power that occurs at the same pressure in the range of 40–100 s in the figure is the result of the dynamic autonomous pressure-correction function of the precision manometer. Extracting the information in the figure to establish the relationship between optical power and applied stress [Fig. 2(e)], we use the slope to indicate the stress sensitivity as:

    $$S_F = \frac{dI}{dF},$$

    where I is the optical power and F is the magnitude of the corresponding applied force. The contact force sensitivity in the ranges of 0.6–1.2 N, 1.2–2.2 N, and 2.2–3 N is about −0.83 V/N, −0.53 V/N, and −0.36 V/N, respectively. The change in sensitivity across the force ranges can be attributed to the following: before the initial application of force, the dynamometer's indenter is not yet in sufficient contact with the OMAS, so the force conducted to the waist region of the OM is small and does not cause significant deformation of the waist region. As the applied force increases, the contact layer drives the waist region to produce deeper microbending strain and the stress sensitivity increases; the change in sensitivity in the range of 1.2 N and beyond is due to the limitation of the linear elasticity range of the PDMS thin-film structure38 and the protective effect of the glass substrate layer. It is known from our previous report25 that the sensitivity of the OM sensing cell is inversely proportional to its waist diameter. This is because the energy share of the evanescent field of the OM increases as the diameter of the uniform waist region decreases, and at the same time a smaller waist diameter makes the sensing region more susceptible to the deformation limits of the OM and the PDMS under the same force. Likewise, the smaller the thickness of the PDMS layer wrapped around the waist region, the higher the sensitivity, but the weaker its ability to protect the sensing layer, which affects the reusability of the OMAS. To meet the specific requirements of practical applications and to ensure the high sensitivity, wide detection range, and high stability of the OMAS, trade-offs must be made.
Considering the influence of the above parameters on sensor performance, we chose an OMAS consisting of an OM array with a PDMS thickness of 500 μm, a waist diameter of 3 μm, and a waist length of 6 mm for the subsequent study. The effect of another important sensing parameter, temperature, on the optical power output of the sensor is discussed in Supplementary Section 4. To investigate the stability and reusability of the OMAS, we repeatedly applied and released a contact force of 2 N on the sensor panel. The experimental results in Fig. 2(f) show that the baseline of the sensor's output waveform intensity remains stable over more than 5000 "apply-release" cycles, maintaining a high degree of consistency. The excellent repeatability and stability of the OMAS sensor are thus verified, which ensures the feasibility of the subsequent dynamic response experiments.
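The piecewise sensitivity extraction described above can be sketched as linear fits over the three force sub-ranges. The force-power data below are synthetic, constructed only to mirror the reported slopes, and `range_sensitivity` is a hypothetical helper, not the paper's analysis code.

```python
import numpy as np

def range_sensitivity(force, power, lo, hi):
    """Least-squares slope dI/dF over the force sub-range [lo, hi]."""
    m = (force >= lo) & (force <= hi)
    slope, _ = np.polyfit(force[m], power[m], 1)
    return slope

# Synthetic force/optical-power curve with three linear segments whose
# slopes loosely mimic the reported sensitivities (values are made up).
F = np.arange(0.6, 3.01, 0.2)
I = np.piecewise(
    F,
    [F <= 1.2, (F > 1.2) & (F <= 2.2), F > 2.2],
    [lambda f: 5.0 - 0.83 * f,
     lambda f: (5.0 - 0.83 * 1.2) - 0.53 * (f - 1.2),
     lambda f: (5.0 - 0.83 * 1.2 - 0.53 * 1.0) - 0.36 * (f - 2.2)],
)
s1 = range_sensitivity(F, I, 0.6, 1.2)  # expect about -0.83
s2 = range_sensitivity(F, I, 1.2, 2.2)  # expect about -0.53
s3 = range_sensitivity(F, I, 2.2, 3.0)  # expect about -0.36
```

Fitting each sub-range separately, rather than one global line, captures the saturation of sensitivity at higher forces noted above.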

    With the development of thin-film electronic printing and mask printing technologies, some array-type electrical tactile sensors for hardness and shape recognition have been reported39,40. However, the ability of an optical tactile array to recognize hardness and shape has not been discussed. In the previous paragraph, we showed that OMAS can map the magnitude of a static force. To further verify its static tactile sensing function, analogous to that of human SA receptors, we made three groups of simple silicone geometries (sphere, cylinder, cube) of increasing hardness as test objects, measured with a durometer to be 12, 23, and 35 HA, respectively, as shown in Supplementary Fig. S4(a). The experimental facilities are shown in Supplementary Fig. S4(b). We fixed the OMAS at the bottom of the electric dynamometer, applied the same pressing distance to each silicone geometry to simulate the pressing action of skin on an object, and recorded the four optical power outputs shown in Supplementary Fig. S5. The results show that for geometries with the same shape but different hardness, the harder the geometry, the faster the sensor's optical power decreases and the lower the output optical power after the "pressing" process. For geometries with the same hardness but different shapes, the decreasing tendencies of the four optical power channels and the stabilized optical power after pressurization differ significantly. This is because, on the one hand, when the sensor presses geometric samples of the same shape but different hardness with the same pressing action, the softer sample produces a uniform and continuous deformation; the deformation it transmits to the sensing layer is small, so the optical power declines slowly and its final value is large, while a harder object behaves in just the opposite way. On the other hand, when the sensor presses geometric samples of different shapes but the same hardness with the same pressing action, although the applied pressure is the same, the differences in the contact area between objects of different shapes and the sensor, and in the contact position relative to the OM array, change the pressure and force applied to the sensing layer, resulting in changes in the output optical power. Our finite element simulation results also support this view (Supplementary Fig. S6).

    By repeating the pressing action, we collected data for six objects of different shapes and hardnesses: cans, plastic bottles, stones, walnuts, apples, and oranges (Fig. 2(g), Supplementary Fig. S7). Here we use the four-channel optical power to directly characterize the samples, including the magnitude of the optical power, its rate of descent, and its latency, which correspond to the contact object's corners, contact area, and hardness and together define the pressed object's identity. Each object was pressed 200 times during data collection to ensure the reliability of the dataset. Since machine learning has been shown to be an effective tool for automatically extracting features from and classifying haptic time-domain data41, we can learn to amplify and present differences between labeled samples with the help of neural network algorithms. The composite light intensity data collected by the OMAS array are one-dimensional and multi-channel, so we used a Fully Connected Neural Network (FCNN) for pressed-object classification. The network consists of a spreading (flatten) layer, three hidden layers, and an output layer. Each hidden layer consists of a Dense (fully connected) layer and a ReLU activation function, while the output layer consists of a Dense layer and a softmax activation function. The input data X comprises sets of 4-channel waveform data with 20,000 points per channel. The spreading layer flattens the input data X from a two-dimensional shape of (4, 20000) to a one-dimensional shape of (80000). The three Dense layers in the hidden part have 128, 64, and 32 neurons, respectively; this three-layer structure with layer-by-layer reduction of neurons is designed to reduce the amount of computation while gradually aggregating the high-density data and deeply extracting features. The output layer's Dense layer with six neurons classifies the features and then outputs the final predicted classification over the six categories using softmax probability normalization; the visualization results are shown in Fig. 2(h). The validation results show that the total recognition accuracy is 100% (Fig. 2(i)). The static force response characteristics of OMAS are thus in line with the human behavior of identifying the shape and hardness of a target through SA receptors, and the shape and hardness of the target object can be recognized by analyzing the change characteristics of the light intensity.
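The layer shapes described above can be made concrete with a dependency-free NumPy forward pass. The paper presumably used a standard deep learning framework for training; this sketch uses random, untrained weights (a He-style initialization, our assumption) only to illustrate the flatten step, the 128-64-32 hidden stack, and the softmax normalization over six classes.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=-1, keepdims=True)

# Layer sizes from the text: flatten (4, 20000) -> 80000, then
# Dense layers of 128 -> 64 -> 32 hidden units, Dense 6 + softmax output.
sizes = [4 * 20000, 128, 64, 32, 6]
params = [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def fcnn_forward(x):
    """Forward pass for one 4-channel waveform sample of shape (4, 20000)."""
    h = x.reshape(-1)                 # spreading (flatten) layer
    for w, b in params[:-1]:
        h = relu(dense(h, w, b))      # hidden Dense + ReLU
    w, b = params[-1]
    return softmax(dense(h, w, b))    # output Dense + softmax

probs = fcnn_forward(rng.standard_normal((4, 20000)))
```

The output is a six-element probability vector, one entry per object class; training would adjust `params` by backpropagation, which is omitted here.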

    Verification of OMAS's material and surface texture recognition ability

    FA receptors in human skin generate voltage spikes during the initial contact and final release of physical stimuli, forming the dynamic part of tactile perception that allows us to sense the material and the specific surface-texture details that characterize skin friction. Response speed is a key parameter determining a sensor's ability to detect high-frequency dynamic pressures; existing electrically based sensors typically exhibit response-unloading times on the order of hundreds of milliseconds, which limits their detection of vibration signals42,43. Our sensor exhibits a fast response time of 13 ms on loading, a relaxation time of 25 ms on unloading, and a total loading-unloading duration of 38 ms (Supplementary Fig. S8); this fast response-relaxation process enables the OMAS to respond effectively to small vibrations. We further investigated the basic ability of the OMAS sensor to recognize regular textures [Supplementary Fig. S9(a)]. Three linearly textured metal plates with different surface periods were prepared, with periods of 0.6 mm (ridge width 0.3 mm, spacing 0.3 mm), 1.2 mm (ridge width 0.6 mm, spacing 0.6 mm), and 2 mm (ridge width 1 mm, spacing 1 mm); each was tested with the texture direction perpendicular to the longitudinal line of the OMAS sensing array and fixed to the circular probe of the force gauge. After the sample was brought into contact at a constant height with the OMAS mounted on the motorized displacement stage, the PC controlled the stage to produce frictional contact between the two at a preset speed [Supplementary Fig. S9(b)]. When the OMAS on the CNC displacement stage rubbed the three texture samples at 2 mm/s, the two output optical powers in the texture-normal direction exhibited continuous oscillations with a specific period, whereas the two output optical powers in the texture-tangential direction remained almost unchanged.
This is because periodic texture contact bends the OM waist regions aligned with the texture-normal direction, changing the transmitted optical intensity, while the OM waist regions in the tangential direction do not bend appreciably as the texture rises and falls, so their bending loss remains small. To further analyze the vibrotactile properties, we converted the oscillatory time-domain signals of the four light channels into frequency-domain signals using the Fast Fourier Transform (FFT) and found that the amplitude of the two normal-direction channels was much higher than that of the two tangential-direction channels (the tangential amplitudes were close to 0); the amplitude difference between the two channels of the same orientation is attributed to the waist-area parallel-angle error introduced during array fabrication [Supplementary Fig. S9(c)]. All four signals exhibited the same main frequency peak and harmonics44, which increased as the period of the grating area of the texture plate decreased, and their main peaks satisfied the relationship between scanning speed and texture frequency25. To investigate whether the longitudinally distributed multi-photon sensing structure is sensitive to pattern angle, we repeated the friction experiments (Supplementary Fig. S10) with linear textures of 1 mm ridge width at different angles to the sensor's longitudinal line. The four texture signals at different angles differed significantly in the time domain (correlation) and the frequency domain (amplitude and frequency) and were deeply coupled with the angle information, which may arise from time-domain differences in the force acting on the two parallel transverse OM waist regions and from the deformation of the longitudinal OM waist regions as the texture moves. This shows that OMAS can extract the angle information of a surface texture while recognizing surface roughness, which not only improves on the tactile recognition ability of single-fiber neurons in terms of the number of signals but also expands the dimensionality of tactile recognition. In summary, OMAS can quantify surface texture, distinguish surfaces with different texture features, and sense the angular information of surface texture, providing the necessary conditions for constructing the subsequent deep learning neural networks.
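    The FFT analysis above (extracting the main peak and checking it against the relation between scanning speed and texture period, f = v/Λ) can be sketched as follows. The 2 mm/s speed and 1.2 mm period are from the experiment; the 1 kHz sample rate and the synthetic noisy trace are assumptions for illustration only.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest non-DC peak in the amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))   # remove DC, take FFT
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic check: rubbing a 1.2 mm period texture at v = 2 mm/s should place
# the main peak near f = v / period (speed and period are from the experiment).
fs, v, period = 1000.0, 2.0, 1.2            # sample rate Hz (assumed), mm/s, mm
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(1)
trace = 0.5 * np.sin(2 * np.pi * (v / period) * t) + 0.02 * rng.normal(size=t.size)
print(dominant_frequency(trace, fs))        # close to 1.67 Hz
```

The same routine applied to each of the four channels would recover the shared main peak the text describes, with the normal-direction channels showing much larger peak amplitudes than the tangential ones.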

    Our previous work25 demonstrated the ability of a single microfiber to recognize complex textures. However, in practical applications most surface textures are two-dimensional and highly random. A single microfiber sensor not only restricts the friction direction but also tends to miss specific texture features of the contact surface, reducing recognition accuracy. OMAS, with four longitudinally and transversely distributed OMs, addresses this problem by both expanding the data volume and increasing the number of friction directions, and combined with deep learning techniques it can analyze the relevant texture signal features well. Using an experimental method consistent with the dynamic-response tests and the same downward displacement of the force gauge, we conducted friction experiments on ten fabrics (uniform speed 2 mm/s) from four directions: up, down, left, and right (Supplementary Fig. S11). We collected 75 samples per direction for each fabric, 3000 samples in total, and randomly divided them into a training set (2400 samples) and a test set (600 samples). Because the collected signals are voluminous and the friction signals are complex and random, we processed the optical-power signals with the same neural network structure used for pressed-object classification (with the output layer changed from 6 to 10 neurons); Fig. 3(a) shows the network structure and some of the collected signals. The rapid decrease in optical power at the start of friction can be attributed to the large surface roughness of a particular fabric, its high friction coefficient, and the static friction generated between the layers in contact with the OMAS.
Once relative motion of the object surface begins, the friction becomes kinetic and its magnitude drops to a lower level45, so the optical power gradually levels off. Supplementary Fig. S12 and Fig. 3(b) show the loss and accuracy during training: after about 100 iterations, the accuracies of the training and test sets converge stably to 99% and 98.5%, respectively. We tested the predictions on the validation set; the results and confusion matrix are shown in Fig. 3(c). The values on the diagonal of the confusion matrix represent correctly predicted labels, corresponding to an accuracy of 98.5%, higher than that of comparable work. By contrast, the recognition accuracies of the four signals processed individually in the same way are 85.8%, 85.5%, 89.1%, and 68.8%, all lower than that of the composite signal (Supplementary Fig. S13). This indicates that the dynamic response characteristics of our sensor array, combined with the neural network algorithm, can more accurately realize human-like learning and recognition of materials by touch. Based on this technique, we constructed a smart material-recognition system comparable to human tactile perception; the process is shown in detail in Fig. 3(d), Supplementary Fig. S14, and Movies S1 and S2.
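    The diagonal-of-the-confusion-matrix accuracy described above can be computed with a minimal sketch like the following; the toy labels here are illustrative, not the paper's fabric data.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example with one misclassified sample out of eight.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 0, 1, 2, 2, 2, 2, 1])
cm = confusion_matrix(y_true, y_pred, n_classes=3)
accuracy = np.trace(cm) / cm.sum()       # correct (diagonal) over total
print(accuracy)                          # 0.875
```

For the fabric experiment this would be a 10 × 10 matrix over the 600 test samples, with the reported 98.5% corresponding to the normalized trace.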

    The introduction of a longitudinally and transversely distributed microfiber array expands the haptic sensing region from a single one-dimensional line to the two-dimensional region enclosed by the OM waist areas. To validate the OMAS sensor's ability to recognize patterns on an object's surface, we integrated the OMAS into a dynamometer for Braille (numbers 0–9) recognition (Fig. 3(e)). The Braille samples were produced by CAD modeling and a 3D printer (X1-Carbon Combo, Bambulab), with a dot pitch of 2.5 mm, a dot height of 0.3 mm, and a cell pitch of 3.6 mm, in accordance with international standard dimensions and proportions46. First, the multiplexed touch signals of the 10 Braille digits were collected through dynamic friction experiments. The optical-power change signals generated during friction were extracted by high-pass filtering (Supplementary Fig. S15); because these digital Braille signals are highly similar and hard to discriminate visually, we introduced deep learning to extract and classify features from the data. Owing to the strong nonlinear fitting ability of the neural network, the model reaches a 99% average classification accuracy on the test set; the loss and accuracy curves are shown in Supplementary Fig. S16(a, b). To verify its performance in real applications, we integrated the sensor into an intelligent robotic arm (RM-65B, Realman) for decoding an 11-digit phone number in real time [Fig. 3(f)]. We programmed the arm to touch these numbers in sequence and fed the signals to a PC for real-time demodulation and on-screen display [Fig. 3(g), Movie S3], demonstrating that the OMAS has a strong ability to recognize surface texture within its sensing area.
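    The paper does not specify which high-pass filter was applied to the Braille traces; as a stand-in, a simple moving-average detrend in NumPy illustrates the idea of removing the slow optical-power drift while keeping the dot-induced ripples.

```python
import numpy as np

def highpass(signal, window):
    """Crude high-pass: subtract a centered moving-average baseline."""
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")
    return signal - baseline

# Synthetic trace: slow baseline drift plus fast Braille-dot ripples
# (both components are assumptions for illustration).
t = np.linspace(0, 1, 2000)
drift = 0.5 * t                                   # slow optical-power drift
ripple = 0.05 * np.sin(2 * np.pi * 40 * t)        # dot-induced oscillation
out = highpass(drift + ripple, window=201)
print(abs(out[500:1500].mean()) < 0.01)           # drift removed in the interior
```

A proper implementation would use a designed filter (e.g. a Butterworth high-pass) with a cutoff chosen from the rubbing speed and dot pitch, but the detrending principle is the same.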


    Figure 3.Validation of the material and surface texture recognition ability of micronized multiphoton neurons. (a) Neural network structure used for fabric recognition experiments with some signal acquisition results. (b) Accuracy change graphs of the training set and test set during the neural network training process, with the final accuracy stabilized at 99% for the training set and 98.5% for the test set. (c) Confusion matrix of ten fabric classification results in the validation set with 98.5% accuracy. (d) OMAS successfully realizes model training and online recognition of fabrics with the assistance of artificial intelligence algorithms. (e) Schematic diagram of 0-9 digit Braille. (f) Experimental process of real-time signal feedback and recognition of Braille phone numbers. (g) Real-time signal acquisition and recognition of Braille phone numbers displayed on the user graphical interface.

    Recognizing mahjong and its suits based on haptic parameters combined with robotic arm spatial reconstruction

    As a proof of concept, we constructed a closed-loop system in which a robotic arm integrated with OMAS reconstructs and recognizes mahjong tiles from hardness, shape, and surface texture and displays the suit. The OMAS multi-photon neurons accurately record signals containing detailed information about an object's tactile properties, which are transmitted to a computer via a photoelectric converter for deep learning, and the results are displayed on the screen. Our cross-array distributed OM sensors capture spatial angular and vectorial information from multiple sensing points, including force and stress distributions at different angles and locations. Such complex multi-channel data require advanced algorithms for processing and interpretation. Neural networks are particularly adept at handling high-dimensional, multi-channel data: they automatically learn the correlations and patterns between sensing points, extract effective features, and achieve a comprehensive understanding and accurate recognition of tactile signals. Five common objects (cans, plastic bottles, stones, apples, oranges) with different hardnesses, shapes, and surface textures, plus mahjong tiles, were selected for validation. We pre-collected the shape and hardness information of these objects and the surface-texture information of ten typical mahjong patterns to pre-train the neural network model. The recognition process consists of two steps (Fig. 4(a–f)). First, using the static-force sensing function of OMAS, the robotic arm's sensing head touches the material twice in succession under suitable conditions (pressing the object by 1 mm); the acquired signals are classified to determine the shape and hardness of the contacted objects and thus identify which object is the mahjong tile.
The robotic arm then rubs the surface of the mahjong tile from bottom to top, and the dynamic-force perception function of OMAS analyzes the surface texture (pattern) of the tile to complete the spatial reconstruction of its morphology, which is output on the PC display. The display interface of the spatial-reconstruction system is shown in Fig. 4(g, h): the left half outputs each channel's signal in real time, and the right half displays the object type and the mahjong suit. The spatial-reconstruction process of the simulated human skin based on tactile signals is shown in detail in Movie S4.
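    The two-step press-then-rub decision flow can be summarized in a small sketch; the classifier stubs below are hypothetical placeholders standing in for the trained FCNN models, and the threshold logic is illustrative only.

```python
def recognize(press_signal, rub_signal, object_model, suit_model):
    """Step 1: classify the object from the static pressing signal.
    Step 2: only if it is a mahjong tile, classify its suit from the
    dynamic rubbing signal."""
    obj = object_model(press_signal)
    if obj != "mahjong":
        return obj, None
    return obj, suit_model(rub_signal)

# Hypothetical stand-ins for the trained networks:
object_model = lambda s: "mahjong" if max(s) > 0.5 else "can"
suit_model = lambda s: "bamboo"

print(recognize([0.9, 0.7], [0.1, 0.2], object_model, suit_model))
```

In the actual system, `object_model` corresponds to the six-class static-pressure FCNN and `suit_model` to the ten-class texture FCNN described earlier.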


    Figure 4. Recognition of mahjong and its suits based on haptic parameters combined with spatial reconstruction of robotic arm. (a–c) With the help of AI algorithms, the multi-photon neuron haptic skin successfully recognizes different objects through differences in hardness and shape, and the results are displayed on the PC screen. (d–f) The multi-photon neuron haptic skin successfully recognizes mahjong suits by differences in surface texture, and the results are displayed on the PC screen. (g, h) Computer interface for hardness and shape recognition and surface texture recognition.

    Conclusions

    In summary, we propose a multi-photon neuron tactile skin that mimics the ability of the FA and SA receptors of human skin to sense static pressure and vibration, and characterizes three important appearance features of objects: shape, hardness, and surface texture. The design comprises a three-layer structure: transparent silicone serves as the vibration amplifier and protective surface layer, PDMS encapsulates the vertically and horizontally distributed optical microfiber array, and the microfiber waist areas sense tactile stimuli with vectorial characteristics. Intelligent tactile sensing is achieved by monitoring the optical power at the four optical outlets. For static tactile sensing, OMAS effectively senses static pressure and identifies the hardness and shape of the contact object with high sensitivity, a wide detection range, and high stability. For dynamic haptic sensing, OMAS analyzes the vibration signals sensed by the longitudinally and transversely distributed waist areas; by quantifying the time- and frequency-domain signal details and feeding them into the deep learning network for training, it can also capture the texture and angle information of two-dimensional patterns. To explore its reliability in practical applications, we captured the static-pressure features of three basic geometries and used the FCNN algorithm to recognize six common objects of different shapes and hardnesses with a success rate of 100%, demonstrating the sensor's shape- and hardness-recognition capability. For the dynamic pressure response, the friction signals of ten fabrics were classified and recognized online with a neural network at an accuracy of 98.5%, and the surface patterns of the Braille digits 0–9 were recognized with an accuracy of 99%.
Combining these sensing capabilities, we integrated the OMAS into a robotic arm to pick out a mahjong tile among six common objects on a desktop and successfully recognized its suit by touch. This micro-nano multi-photon embodied humanoid haptic skin faithfully simulates the touch-perception mechanism of human skin, which will significantly enhance embodied haptic sensing technology and provide a new solution for the shape-perception function of intelligent human-computer interaction.

    Methods

    Production of OMAS

    First, mix polydimethylsiloxane (PDMS, Sylgard 184 silicone elastomer) base and curing agent in a 10:1 ratio and stir thoroughly; then use an air pump to create a vacuum and remove air bubbles. Next, spin-coat a 300 μm layer of PDMS onto a cruciform glass substrate (120 mm long, 30 mm wide, and 2 mm thick) and cure it on a hot plate at 80 °C for 20 minutes to form an adhesive layer. Using an improved flame-brushing method, melt and taper single OMs into shape. By precisely controlling parameters such as waist radius, cone angle, and length with a homemade programmable precision displacement stage and flame brushing in the same and opposite directions, produce four OMs with a waist diameter of 3 μm and a waist length of 6 mm. Arrange them on the adhesive layer with a PDMS prefabricated rod to form a square-like structure in which the waist regions connect end to end. Then cover the resulting structure with a 200 μm layer of PDMS, using the principle of volume conservation, and repeat the same curing steps to obtain the sensing layer. The contact layer is made of transparent silicone rubber (Posilicone) by 3D printing and demolding. First, prepare a PET plastic negative mold of the simulated skin-texture microstructure layer with a 3D printer (X1-Carbon Combo, Bambulab) and coat it with transparent silicone rubber. After degassing in vacuum, cure at room temperature for 2 hours and separate the mold from the solidified silicone to obtain the contact fingerprint layer, with a diameter of 10 mm, a ridge height of 0.2 mm, a ridge width of 0.5 mm, and a ridge spacing of 0.5 mm, matching the size characteristics of adult hand skin texture. Finally, use the same transparent silicone rubber to bond the contact layer tightly to the sensing layer, ensuring that the contact layer sits directly above the waist regions. After curing with the same steps, the OMAS with its biomimetic structure is obtained.

    Experimental environment construction

    As shown in Supplementary Fig. S17, to detect the transmitted optical power of the sensor under different pressures, an amplified spontaneous emission (ASE) broadband light source (1520–1620 nm) emits a broadband beam into a 1 × 4 splitter (FC/APC, GLNVI), which splits it into four beams injected into ports 1–4 of the OMAS. After passing through the waist sensing area, the beams exit from ports 5–8 to the detection equipment. We connect the output ports of the sensor in sequence to an optical power detector (PD, 800–1700 nm, PDB435C, THORLABS), a data acquisition card (DAQ, OLP3203, OLYPER TECHNOLOGY), and a personal computer (PC) data-processing terminal to monitor changes in optical power. This setup allows us to perceive static pressure and dynamic vibration, simulating the processing of tactile signals by the FA and SA receptors of human skin. We analyze the time-frequency domain characteristics of the signals to identify the vibration, hardness, and material of objects. For static-pressure and sliding-sensing experiments, the OMAS is mounted on an electric numerical-control displacement stage (GX80, MANMEIRUI), and contact-force testing is performed with a precision electric force gauge (ZQ-990LB, ZHIQU). The experimental process is controlled by the PC, which adjusts the parameters of both devices.

    COMSOL multiphysics finite element simulation setup

    We conducted simulations in COMSOL Multiphysics to investigate the anisotropic response of a single OMAS sensing unit to forces in different directions and the distribution of forces acting on the contact layer for different silicone-gel shapes, using two modules: Solid Mechanics and Ray Optics. For the first simulation, within the Solid Mechanics module under a stationary study step, we simulated the effects of forces in different directions and extracted the resulting displacement field. We then applied the displacement-field conditions in the Ray Optics module and completed the optical finite-element simulation during the ray-tracing step; by setting conditions for the "ray counter", we obtained the number of rays remaining in the optical path. For the second simulation, the Solid Mechanics module alone sufficed: we set up appropriate contact pairs and displacement distances and calculated the deformation and force distribution resulting from the displacement contact in a stationary study step. In the Solid Mechanics physics interface, we defined the core of the OMAS as a linear elastic material (silicon dioxide) with a Young's modulus of 72 GPa and a Poisson's ratio of 0.22, and the PDMS cladding as a hyperelastic material with a Young's modulus of 3 MPa and a Poisson's ratio of 0.49. The contacting material was silicone gel with a Young's modulus of 10 MPa and a Poisson's ratio of 0.5. In the Geometrical Optics physics interface, we set the refractive index of the core to 1.46 and that of the cladding to 1.42. A total of 2791 rays with a wavelength of 1550 nm were released from a hexagonal grid along the X-direction, with a total power of 1 W. To balance convergence, computational efficiency, and accuracy, we meshed the geometry with the "finer" preset tetrahedral parameters, and for two cross-sections requiring particular attention we used the "very fine" preset to ensure computational accuracy.

    Construction of BP neural network

    For the fully connected neural network (FCNN) we designed, each hidden layer consists of a Dense layer and ReLU (Rectified Linear Unit) activation function. The Dense layer abstracts and recombines features from the previous layer, enabling the network to learn complex relationships and patterns among features. ReLU acts to sparsify the network, reduce interdependence of parameters, and alleviate overfitting. The output of a single neuron in a hidden layer can be calculated using the following formula:

    $z = \mathrm{ReLU}\!\left(\sum_{i=1}^{N} w_i x_i + b\right),$

    $\mathrm{ReLU}(x) = \max(0, x).$

    In this equation, each input node $x_i$ ($i = 1, 2, ..., N$) is connected to the output neuron $z$ of that layer. Each input node has a weight $w_i$, and each output node has a bias term $b$ and a ReLU non-linear activation function. The activation function of the output layer is Softmax, and the prediction is the class with the maximum probability. To measure the difference between the model's predictions and the true labels, the cross-entropy loss function is used. The Adam optimizer is chosen for gradient and learning-rate optimization, with a batch size of 32, 100 epochs, and an initial learning rate of 0.001. The computations are performed on a standard x86 computer, using Python 3.8 as the programming language and TensorFlow 2.10 as the deep learning framework.
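    The loss and optimizer settings above can be illustrated with a minimal NumPy version. The learning rate of 0.001 is from the text; the beta and epsilon defaults are assumptions matching common Adam implementations, and the scalar update is a simplification of the per-parameter vector update.

```python
import numpy as np

def cross_entropy(p_pred, y_onehot, eps=1e-12):
    """Categorical cross-entropy between predicted probabilities and one-hot labels."""
    return -np.mean(np.sum(y_onehot * np.log(p_pred + eps), axis=1))

class Adam:
    """Minimal scalar Adam update (lr = 0.001 as in the text; beta/epsilon
    defaults are assumptions, not values stated in the paper)."""
    def __init__(self, lr=0.001, b1=0.9, b2=0.999, eps=1e-7):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.t, self.m, self.v = 0, 0.0, 0.0

    def step(self, w, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad        # 1st moment
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2   # 2nd moment
        m_hat = self.m / (1 - self.b1 ** self.t)                # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

loss_ok = cross_entropy(np.array([[0.0, 1.0]]), np.array([[0.0, 1.0]]))  # ~0 for a perfect prediction
w_new = Adam(lr=0.001).step(w=1.0, grad=2.0)                             # first step moves w by ~lr
print(loss_ok, w_new)
```

Note the characteristic Adam behavior visible in the example: after bias correction, the first update has magnitude approximately equal to the learning rate regardless of the raw gradient scale.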

    Description of KNN algorithm

    We use the K-Nearest Neighbors (KNN) algorithm to process the OMAS signals. KNN was chosen for its simplicity, effectiveness, and suitability for small sample sets: it requires no complex parameter tuning or model training. By using KNN for feature classification, the ability of different sensing structures (different included angles and numbers of pathways) to capture tactile signals can be verified quickly, laying the foundation for the subsequent use of more complex machine learning algorithms.

    KNN classifies by computing the distance between an input sample and the training samples. For a new input sample, the K closest training samples (neighbors) are found, and the label of the new sample is predicted from the labels of these neighbors. In our experiments we use Scikit-learn 0.24.2 as the machine learning framework with K = 5, the Euclidean distance ('euclidean') as the distance metric, distance weighting ('distance') to improve classification accuracy, the algorithm ('algorithm') set to 'auto', and a leaf size (leaf_size) of 30; these parameter settings were validated empirically and experimentally to provide good performance and balance in tactile-signal classification tasks.
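    A minimal NumPy version of this weighted-KNN rule is sketched below; the toy two-class data are illustrative only, and the actual work used Scikit-learn's classifier rather than a hand-rolled one.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """K nearest neighbors with Euclidean metric and inverse-distance
    weighting, mirroring the settings in the text (k=5, 'euclidean',
    'distance')."""
    d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
    idx = np.argsort(d)[:k]                       # k closest neighbors
    weights = 1.0 / (d[idx] + 1e-12)              # 'distance' weighting
    votes = {}
    for label, w in zip(y_train[idx], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)              # highest weighted vote wins

# Toy two-class example: points clustered near (0, 0) and (1, 1).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.05]), k=5))   # → 0
```

The Scikit-learn equivalent, with the parameters given in the text, is `KNeighborsClassifier(n_neighbors=5, metric='euclidean', weights='distance', algorithm='auto', leaf_size=30)`.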

    [1] N Sharma, K Flaherty, K Lezgiyeva et al. The emergence of transcriptional identity in somatosensory neurons. Nature, 577, 392-398(2020).

    [2] FY Liu, S Deswal, A Christou et al. Neuro-inspired electronic skin for robots. Sci Robot, 7, eabl7344(2022).

    [3] D Kang, PV Pikhitsa, YW Choi et al. Ultrasensitive mechanical crack-based sensor inspired by the spider sensory system. Nature, 516, 222-226(2014).

    [4] S Wang, ZX Zhang, B Yang et al. High sensitivity tactile sensors with ultrabroad linear range based on gradient hybrid structure for gesture recognition and precise grasping. Chem Eng J, 457, 141136(2023).

    [5] CJ Wei, HW Zhou, BH Zheng et al. Fully flexible and mechanically robust tactile sensors containing core-shell structured fibrous piezoelectric mat as sensitive layer. Chem Eng J, 476, 146654(2023).

    [6] SH Wang, L Lin, ZL Wang. Nanoscale triboelectric-effect-enabled energy conversion for sustainably powering portable electronics. Nano Lett, 12, 6339-6346(2012).

    [7] GY Gu, NB Zhang, HP Xu et al. A soft neuroprosthetic hand providing simultaneous myoelectric control and tactile feedback. Nat Biomed Eng, 7, 589-598(2023).

    [8] Q Wang, DY Zhang, YZ Qian et al. Research on fiber optic surface Plasmon resonance biosensors: a review. Photonic Sens, 14, 240201(2024).

    [9] MD Lu, C Wang, RZ Fan et al. Review of fiber-optic localized surface Plasmon resonance sensors: geometries, fabrication technologies, and bio-applications. Photonic Sens, 14, 240202(2024).

    [10] L Zhang, YQ Zhen, LM Tong. Optical micro/nanofiber enabled tactile sensors and soft actuators: a review. Opto-Electron Sci, 3, 240005(2024).

    [11] N Yao, XY Wang, SQ Ma et al. Single optical microfiber enabled tactile sensor for simultaneous temperature and pressure measurement. Photonics Res, 10, 2040-2046(2022).

    [12] JJ Weng, Y Yu, JF Zhang et al. A biomimetic optical skin for multimodal tactile perception based on optical microfiber coupler neuron. J Lightwave Technol, 41, 1874-1883(2023).

    [13] L Zhang, J Pan, Z Zhang et al. Ultrasensitive skin-like wearable optical sensors based on glass micro/nanofibers. Opto-Electron Adv, 3, 190022(2020).

    [14] TL Li, YF Su, FY Chen et al. Bioinspired stretchable fiber-based sensor toward intelligent human-machine interactions. ACS Appl Mater Interfaces, 14, 22666-22677(2022).

    [15] R Ahmadi, M Packirisamy, J Dargahi et al. Discretely loaded beam-type optical fiber tactile sensor for tissue manipulation and palpation in minimally invasive robotic surgery. IEEE Sens J, 12, 22-32(2012).

    [16] S Keser, Ş Hayber. Fiber optic tactile sensor for surface roughness recognition by machine learning algorithms. Sens Actuators A Phys, 332, 113071(2021).

    [17] KP Feng, H Dang, WY Zhou et al. A capillary-induced self-assembly method under external constraint for fabrication of high-aspect-ratio and square array of optical fibers. J Manuf Process, 85, 645-657(2023).

    Hongyu Zhou, Chao Zhang, Hengchang Nong, Junjie Weng, Dongying Wang, Yang Yu, Jianfa Zhang, Chaofan Zhang, Jinran Yu, Zhaojian Zhang, Huan Chen, Zhenrong Zhang, Junbo Yang. Multi-photon neuron embedded bionic skin for high-precision complex texture and object reconstruction perception research[J]. Opto-Electronic Advances, 2025, 8(2): 240152

    Paper Information

    Category: Research Articles

    Received: Jun. 21, 2024

    Accepted: Nov. 19, 2024

    Published Online: Apr. 27, 2025

    The Author Email: Yang Yu (YYu)

DOI: 10.29026/oea.2025.240152