GuestXR USE CASE

Promoting hearing-impaired individuals’ inclusion in VR/AR

Overview

This use case aims to promote the social inclusion of hearing-impaired individuals in VR and AR environments, and to help individuals with typical hearing communicate effectively with individuals with hearing loss.

The GuestXR project pursues this goal by combining voice analysis tools, multisensory devices, and displays with multisensory training protocols.

USE CASE DEVELOPMENT

Developing audio and multisensory technologies to improve the comprehension of hearing-impaired individuals in 3D spatial settings

Hardware and algorithms have been produced to translate audio and spatial information into touch-based feedback, enriching the auditory experience. Additional development involves using tactile vibration for speech enhancement.
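
The project's hardware pipeline is not detailed here; the sketch below illustrates one common audio-to-tactile mapping, in which the low-frequency amplitude envelope of the audio modulates a vibrotactile carrier near the skin's peak sensitivity (roughly 200-300 Hz). All names and parameter values are illustrative assumptions, not the project's implementation:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def audio_to_tactile(audio: np.ndarray, sr: int,
                     carrier_hz: float = 250.0,
                     env_cutoff_hz: float = 50.0) -> np.ndarray:
    """Map audio to a vibrotactile drive signal: the smoothed amplitude
    envelope of the input modulates a low-frequency carrier."""
    # Amplitude envelope via the analytic signal, then low-pass smoothing
    envelope = np.abs(hilbert(audio))
    sos = butter(4, env_cutoff_hz, btype="low", fs=sr, output="sos")
    envelope = sosfilt(sos, envelope)

    # Modulate the tactile carrier with the smoothed envelope
    t = np.arange(len(audio)) / sr
    drive = envelope * np.sin(2 * np.pi * carrier_hz * t)

    # Normalize to the actuator's input range [-1, 1]
    peak = np.max(np.abs(drive))
    return drive / peak if peak > 0 else drive
```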

In audio processing, personalized hearing corrections that amplify specific frequencies were implemented, along with generic high-frequency amplification for users with mild, undiagnosed hearing loss.
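
As an illustration of this kind of frequency-specific correction, the sketch below applies gain interpolated from a hypothetical audiogram using the classic "half-gain rule"; real fittings use validated prescription formulas (e.g. NAL-R), and every value here is illustrative:

```python
import numpy as np

# Hypothetical audiogram: hearing loss in dB HL at standard test frequencies
AUDIOGRAM_HZ = np.array([250, 500, 1000, 2000, 4000, 8000])
LOSS_DB = np.array([10, 10, 15, 30, 45, 50])  # mild high-frequency loss

def personalized_amplification(audio: np.ndarray, sr: int,
                               gain_fraction: float = 0.5) -> np.ndarray:
    """Boost each frequency by a fixed fraction of the measured loss
    (the 'half-gain rule' when gain_fraction is 0.5)."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1 / sr)

    # Interpolate the audiogram onto the FFT bins; convert dB to linear gain
    loss_db = np.interp(freqs, AUDIOGRAM_HZ, LOSS_DB)
    gain = 10 ** (gain_fraction * loss_db / 20)

    return np.fft.irfft(spectrum * gain, n=len(audio))
```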

Moreover, virtual environments that simulate social conversations have been implemented to raise awareness of hearing challenges and of the benefits of hearing aids. These environments have also been used to compare conventional and deep neural network (DNN)-based speech enhancement methods.

Finally, partners integrated Head-Related Transfer Function (HRTF) analysis and machine learning to improve environmental noise reduction and enhance hearing assistive devices.

Step by step

1. Advanced audio testing technology

Use of state-of-the-art Higher-Order Ambisonics (HOA) facilities to develop and test audio technologies, ensuring research accuracy for hearing-impaired populations. Implementation of real-time binaural synthesis, using HRTF convolutions to spatialize audio sources in virtual reality.
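
A minimal sketch of this binaural synthesis step, assuming a mono source signal and a head-related impulse response (HRIR) pair already measured or selected for the desired direction (all names are illustrative, not the project's code):

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_synthesis(mono: np.ndarray,
                       hrir_left: np.ndarray,
                       hrir_right: np.ndarray) -> np.ndarray:
    """Spatialize a mono source by convolving it with the HRIR pair
    for the desired direction. Returns an (N, 2) stereo array."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)
```

In a real-time system, the convolution is typically performed block-wise (overlap-add or overlap-save), with crossfading between HRIR pairs as the source or listener moves.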

2. Hearing loss and hearing aid simulation

Incorporation of the research-based 3D Tune-In simulator to create authentic auditory experiences for users with hearing impairments. Integration of wide dynamic range compression (WDRC) and amplification of audio using profiles derived from audiograms to simulate personalized hearing aids.
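
The project's fitting details are not specified here; the sketch below shows a generic static WDRC input/output rule, in which soft sounds receive more amplification than loud ones. The threshold, ratio, and maximum gain are illustrative values, not a clinical prescription:

```python
import numpy as np

def wdrc_gain_db(input_db: np.ndarray,
                 threshold_db: float = 45.0,
                 ratio: float = 3.0,
                 max_gain_db: float = 30.0) -> np.ndarray:
    """Static WDRC rule: full gain below the compression threshold,
    progressively less gain above it."""
    above = np.maximum(input_db - threshold_db, 0.0)
    gain = max_gain_db - above * (1.0 - 1.0 / ratio)
    return np.clip(gain, 0.0, max_gain_db)

# A 30 dB SPL soft sound receives the full 30 dB of gain; a 90 dB SPL
# loud sound receives 0 dB at a 3:1 ratio.
print(wdrc_gain_db(np.array([30.0, 60.0, 90.0])))  # [30. 20. 0.]
```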

3. Multisensory speech and comprehension enhancement in VR

Application of state-of-the-art DNN-based and conventional algorithms for speech enhancement within virtual reality environments. Use of tactile-based hardware and in-house low-latency conversion algorithms to evaluate spatial accuracy and sensory enhancement devices.
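
The project's specific algorithms are not described here; as a point of reference for the conventional side of that comparison, the sketch below implements spectral subtraction, a classic single-channel baseline. DNN-based enhancers differ in that they learn a time-frequency mask from data rather than relying on a stationary noise estimate. All parameters are illustrative:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy: np.ndarray, sr: int,
                         noise_seconds: float = 0.5) -> np.ndarray:
    """Classic baseline: estimate the noise spectrum from a leading
    noise-only segment and subtract it from every frame."""
    f, t, spec = stft(noisy, fs=sr, nperseg=512)  # hop = 256 samples
    n_noise_frames = max(1, int(noise_seconds * sr / 256))
    noise_mag = np.mean(np.abs(spec[:, :n_noise_frames]), axis=1,
                        keepdims=True)

    # Subtract the noise magnitude, keep the noisy phase,
    # and floor each bin at 5 % of its original magnitude
    mag = np.maximum(np.abs(spec) - noise_mag, 0.05 * np.abs(spec))
    enhanced = mag * np.exp(1j * np.angle(spec))

    _, out = istft(enhanced, fs=sr, nperseg=512)
    return out
```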

4. Testing with hearing-impaired users

Experiments with cochlear-implant and hearing-aid users comparing 3D sound localization abilities with and without tactile support. Studies also assessed how tactile enhancements affect music enjoyment and comprehension of complex auditory environments for hearing-impaired populations.
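
As an illustration of how such localization comparisons are typically scored (a standard metric, not necessarily the project's exact analysis), the angular error between the true and reported source directions can be computed as follows:

```python
import numpy as np

def angular_error_deg(true_dir: np.ndarray, reported_dir: np.ndarray) -> float:
    """Great-circle angle (degrees) between true and reported source
    directions, given as 3D unit vectors."""
    cos_angle = np.clip(np.dot(true_dir, reported_dir), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

# Example: a frontal source vs. a response 10 degrees to the right
front = np.array([0.0, 1.0, 0.0])
resp = np.array([np.sin(np.radians(10)), np.cos(np.radians(10)), 0.0])
print(angular_error_deg(front, resp))  # ~10.0
```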

Early results

Initial findings show that congenitally hearing-impaired individuals rapidly learn to use the tactile devices, significantly improving their auditory spatial understanding.

Meanwhile, hearing individuals can use spatial tactile input to perform localization in a manner comparable to auditory localization, and integrating the two senses can improve performance on spatial auditory tasks. Moreover, researchers found improved speech perception in noise within a simulated hearing-impairment paradigm (Cieśla et al., 2022).

Furthermore, results show that communication and social interaction depend strongly on the acoustic conditions, and that DNN-based speech enhancement can improve speech communication more than conventional signal processing algorithms.