GuestXR USE CASE
Promoting hearing-impaired individuals’ inclusion in VR/AR
Overview
This use case aims to promote the social inclusion of hearing-impaired individuals in VR and AR environments, and to help individuals with typical hearing communicate effectively with individuals with hearing loss.
The GuestXR project pursues this goal by combining voice-analysis tools, multisensory devices, and displays with multisensory training protocols.
USE CASE DEVELOPMENT
Developing audio and multisensory technologies to improve the comprehension of hearing-impaired individuals in 3D spatial settings
Hardware and algorithms have been developed to translate audio and spatial information into touch-based feedback, enriching the auditory experience. Further work uses tactile vibration to enhance speech perception.
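As a minimal sketch of how such an audio-to-touch translation might work (a hypothetical illustration, not the project's actual implementation), the interaural level difference of a stereo audio frame can be mapped to the intensities of two vibration motors:

```python
import math

def ild_to_motor_levels(left, right, eps=1e-12):
    """Map a stereo audio frame to left/right vibration-motor intensities
    in [0, 1] using the interaural level difference (ILD).
    Hypothetical sketch; GuestXR's actual mapping may differ."""
    rms_l = math.sqrt(sum(s * s for s in left) / len(left))
    rms_r = math.sqrt(sum(s * s for s in right) / len(right))
    # Positive ILD (dB) means the source is louder on the left.
    ild_db = 20.0 * math.log10((rms_l + eps) / (rms_r + eps))
    # Clamp to +/-10 dB and normalize to a pan position in [-1, 1].
    pan = min(max(ild_db / 10.0, -1.0), 1.0)
    # Overall vibration strength follows the louder channel (crude scaling).
    strength = min(1.0, 4.0 * max(rms_l, rms_r))
    left_motor = strength * (1.0 + pan) / 2.0
    right_motor = strength * (1.0 - pan) / 2.0
    return left_motor, right_motor
```

A frame that is clearly louder on the left then drives mainly the left motor, giving the wearer a directional cue without any sound.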
In audio processing, personalized hearing corrections that amplify specific frequencies were implemented, with generic high-frequency amplification offered to users with mild, undiagnosed hearing loss.
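Generic high-frequency amplification of this kind can be sketched as a standard high-shelf biquad filter, using the coefficient formulas from the well-known Audio EQ Cookbook; the +12 dB gain and 2 kHz corner used below are illustrative values, not the project's actual settings:

```python
import math

def high_shelf_coeffs(fs, f0, gain_db, q=0.707):
    """High-shelf biquad coefficients (Audio EQ Cookbook formulas).
    Boosts frequencies above f0 by roughly gain_db decibels."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    b0 = A * ((A + 1) + (A - 1) * cosw + 2 * math.sqrt(A) * alpha)
    b1 = -2 * A * ((A - 1) + (A + 1) * cosw)
    b2 = A * ((A + 1) + (A - 1) * cosw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) - (A - 1) * cosw + 2 * math.sqrt(A) * alpha
    a1 = 2 * ((A - 1) - (A + 1) * cosw)
    a2 = (A + 1) - (A - 1) * cosw - 2 * math.sqrt(A) * alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(x, b, a):
    """Run a biquad filter over samples x (direct form I)."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

A 6 kHz tone passed through this filter comes out roughly four times louder, while a 200 Hz tone is left nearly unchanged.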
Moreover, partners have implemented virtual environments that simulate social conversations, raising awareness of hearing challenges and of the benefits of hearing aids. These environments have also been used to compare conventional and deep neural network (DNN)-based speech enhancement methods.
Finally, partners integrated Head-Related Transfer Function (HRTF) analysis and machine learning to improve environmental noise reduction and enhance hearing assistive devices.
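For intuition, binaural spatialization can be sketched as convolving a mono signal with a left/right impulse-response pair. The toy HRIR below models only interaural time and level differences; real HRTFs are measured, listener-specific filters, so this stands in for illustration only and is not GuestXR's HRTF analysis:

```python
import math

def toy_hrir(azimuth_deg, fs=44100, max_itd_s=6.6e-4):
    """Toy left/right impulse responses encoding only interaural time
    and level differences (ITD/ILD). Illustrative stand-in for a
    measured HRTF pair."""
    frac = math.sin(math.radians(azimuth_deg))  # -1 = hard left, +1 = hard right
    delay = int(round(abs(frac) * max_itd_s * fs))  # far-ear delay in samples
    near_gain, far_gain = 1.0, 1.0 - 0.4 * abs(frac)
    near = [0.0] * (delay + 1)
    far = [0.0] * (delay + 1)
    near[0] = near_gain
    far[delay] = far_gain
    # Source on the left: left ear is the near ear, and vice versa.
    return (near, far) if frac <= 0 else (far, near)

def spatialize(mono, azimuth_deg, fs=44100):
    """Render a mono signal to (left, right) via direct convolution."""
    left_ir, right_ir = toy_hrir(azimuth_deg, fs)
    def conv(x, h):
        y = [0.0] * (len(x) + len(h) - 1)
        for i, xi in enumerate(x):
            for j, hj in enumerate(h):
                y[i + j] += xi * hj
        return y
    return conv(mono, left_ir), conv(mono, right_ir)
```

Rendering a click from hard left produces a left-ear signal that is both louder and earlier than the right-ear signal, which is the cue the machine-learning components can exploit.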
Step by step
Advanced audio testing technology
Hearing loss and hearing aid simulation
Multisensory speech and comprehension enhancement in VR
Testing with hearing-impaired users
Early results
Initial findings show that congenitally hearing-impaired individuals quickly learn to use the tactile devices, which significantly improves their auditory spatial understanding.
Meanwhile, hearing individuals can use the spatial tactile input to localize sound sources with accuracy comparable to auditory localization. This integration can improve performance on spatial auditory tasks. Moreover, researchers found improved speech perception in noise within a simulated hearing-impairment paradigm (Cieśla et al., 2022).
Furthermore, results show that communication and social interaction depend strongly on acoustic conditions, and that DNN-based speech enhancement improves speech communication more than conventional signal-processing algorithms do.