Cai, Shaoyu (School of Creative Media, City University of Hong Kong)
Zhao, Lu (Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology)
Ban, Yuki (Graduate School of Frontier Sciences, The University of Tokyo)
Narumi, Takuji (Graduate School of Information Science and Technology, The University of Tokyo)
Liu, Yue (Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology)
Zhu, Kening (School of Creative Media, City University of Hong Kong)
Abstract
The electrovibration tactile display can render the tactile sensation of different textured surfaces by generating frictional forces through voltage modulation: as users slide their fingers across the display surface, they feel the rendered frictional texture. However, preparing and fine-tuning appropriate frictional signals for haptic design and texture simulation is not trivial. In this paper, we present a deep-learning-based framework that generates frictional signals from texture images of fabric materials; the generated signals can then be used for tactile rendering on an electrovibration tactile display. Leveraging Generative Adversarial Networks (GANs), our system generates displacement-based friction-coefficient data with which the display simulates the tactile feedback of different fabric materials. Our experimental results show that the proposed generative model produces friction-coefficient signals that are visually and statistically close to the ground-truth signals. Follow-up user studies on fabric-texture simulation show that users could not discriminate between the generated and the ground-truth frictional signals rendered on the electrovibration tactile display, suggesting the effectiveness of our deep frictional-signal-generation model.

Highlights
- A deep-learning-based image-to-friction generation framework for tactile simulation of fabric materials.
- An augmented visual-to-frictional database based on HapTex for image-to-friction generation.
- A technical experiment on friction-coefficient signal generation evidencing the performance of the proposed generative model.
- A user-perception experiment validating the effectiveness of the generated signals for tactile simulation of fabrics on the electrovibration tactile display.

Graphical abstract: [figure omitted]
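The pipeline summarized in the abstract maps a fabric-texture image to a displacement-indexed friction-coefficient signal that drives the electrovibration display. The sketch below is only an illustrative stand-in for that interface: the function `toy_image_to_friction`, the signal length of 1024 samples, and the friction-coefficient range of 0.2 to 0.9 are all assumptions made here for demonstration; they are not the paper's trained GAN generator or its measured friction data.

```python
import numpy as np

def toy_image_to_friction(img, signal_len=1024, mu_range=(0.2, 0.9)):
    """Toy stand-in for the image-to-friction mapping (NOT the paper's
    trained GAN): turn a 2-D texture image into a 1-D, displacement-indexed
    friction-coefficient signal."""
    # Collapse the 2-D texture to a 1-D intensity profile along the
    # (assumed) finger-sliding direction.
    profile = img.mean(axis=0).astype(float)
    # Resample the profile to the desired number of displacement samples.
    x_old = np.linspace(0.0, 1.0, profile.size)
    x_new = np.linspace(0.0, 1.0, signal_len)
    profile = np.interp(x_new, x_old, profile)
    # Normalize into an assumed plausible friction-coefficient range.
    lo, hi = profile.min(), profile.max()
    span = hi - lo if hi > lo else 1.0
    mu_lo, mu_hi = mu_range
    return mu_lo + (profile - lo) / span * (mu_hi - mu_lo)
```

In the actual framework, this stand-in would be replaced by the trained conditional-GAN generator, which learns the image-to-friction mapping from the HapTex-based visual-to-frictional database; only the input/output shape of the interface is illustrated here.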