GuestXR USE CASE

Promoting hearing-impaired individuals’ inclusion in VR/AR

Overview

This use case aims to promote the social inclusion of hearing-impaired individuals in VR and AR environments, and to help individuals with typical hearing communicate more effectively with individuals with hearing loss.

The GuestXR project pursues this goal by combining voice analysis tools, multisensory devices and displays with multisensory training protocols.

USE CASE DEVELOPMENT

Developing audio and multisensory technologies to improve the comprehension of hearing-impaired individuals in 3D spatial settings

Hardware and algorithms have been produced to translate audio and spatial information into touch-based feedback, enriching the auditory experience. Additional development involves using tactile vibration for speech enhancement.
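
The sketch below illustrates the general idea of translating an audio signal into touch-based feedback: the signal's amplitude envelope modulates a vibrotactile carrier. Function names and parameters are illustrative assumptions, not the project's actual hardware interface.

```python
import numpy as np

def audio_to_vibration(audio, sr, carrier_hz=250.0, frame_ms=10):
    """Extract the amplitude envelope of `audio` and use it to modulate a
    low-frequency carrier suitable for a vibrotactile actuator."""
    frame = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame
    # Frame-wise RMS envelope: a coarse loudness contour of the signal.
    env = np.sqrt(np.mean(audio[:n_frames * frame].reshape(-1, frame) ** 2, axis=1))
    env = np.repeat(env, frame)              # back to sample rate
    env = env / (env.max() + 1e-12)          # normalise to 0..1
    t = np.arange(len(env)) / sr
    # ~250 Hz carrier, near the peak sensitivity of skin mechanoreceptors.
    return env * np.sin(2 * np.pi * carrier_hz * t)
```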

In audio processing, personalised hearing corrections that amplify specific frequencies were made available, along with a generic high-frequency boost for users with mild, undiagnosed hearing loss.
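
As an illustration of the generic high-frequency amplification, the sketch below applies a fixed gain above a cutoff frequency via an FFT filter; the cutoff and gain values are assumptions, not the project's calibrated settings.

```python
import numpy as np

def high_frequency_boost(audio, sr, cutoff_hz=2000.0, gain_db=12.0):
    """Apply a flat gain to all frequency components above `cutoff_hz`."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    gains = np.ones_like(freqs)
    gains[freqs >= cutoff_hz] = 10 ** (gain_db / 20)   # dB to linear gain
    return np.fft.irfft(spectrum * gains, n=len(audio))
```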

Moreover, virtual environments simulating social conversations to raise awareness of hearing challenges and the benefits of hearing aids have been implemented. These environments have also been used to compare conventional and deep neural network (DNN)-based speech enhancement methods.

Finally, partners integrated Head-Related Transfer Function (HRTF) analysis and machine learning to improve environmental noise reduction and enhance hearing assistive devices.

Step by step

1. Advanced audio testing technology

Use of state-of-the-art Higher-Order Ambisonics (HOA) facilities to develop and test audio technologies, ensuring research accuracy for hearing-impaired populations. Implementation of real-time binaural synthesis, using HRTF convolutions to spatialize audio sources in virtual reality.
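
A minimal sketch of binaural synthesis by HRTF convolution: a mono source is convolved with the left- and right-ear head-related impulse responses (HRIRs) measured for its direction. The HRIR inputs are placeholders for a measured dataset; a real-time implementation would use block-wise (partitioned) convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialise a mono source by convolving it with the HRIRs of the
    desired source direction; returns a 2 x N binaural signal."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=0)
```
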
2. Hearing loss and hearing aid simulation

Incorporation of the research-based 3D-TuneIn simulator to create authentic auditory experiences for users with hearing impairments. Integration of wide-dynamic range compression and amplification of audio using profiles derived from audiograms to simulate personalised hearing aids.
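
The sketch below illustrates a wide-dynamic-range compression (WDRC) gain rule for a single frequency band, with the base gain derived from the audiogram threshold via a simple half-gain rule; the knee point, ratio and gain rule are assumptions, not the 3D-TuneIn simulator's actual fitting parameters.

```python
def wdrc_gain_db(input_level_db, hearing_loss_db, knee_db=45.0, ratio=2.0):
    """Band gain in dB: constant below the compression knee, compressive above.
    Base gain follows a simple half-gain rule on the audiogram threshold."""
    base_gain = 0.5 * hearing_loss_db
    if input_level_db <= knee_db:
        return base_gain
    # Above the knee, output level grows at 1/ratio of the input growth.
    return base_gain - (input_level_db - knee_db) * (1.0 - 1.0 / ratio)
```
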
3. Multisensory speech and comprehension enhancement in VR

Application of state-of-the-art DNN-based and conventional algorithms for speech enhancement within simulated virtual reality environments. Use of tactile-based hardware and in-house low-latency conversion algorithms to evaluate spatial accuracy and sensory enhancement devices.
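
The general pattern behind both the conventional and the DNN-based speech enhancement compared here is mask-based filtering: a gain is estimated for each time-frequency bin of the noisy signal and applied to its STFT. The sketch below assumes a generic `model` callable and is not the project's specific architecture.

```python
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, sr, model, nperseg=512):
    """Apply a time-frequency mask predicted from the noisy magnitude spectrogram."""
    _, _, spec = stft(noisy, fs=sr, nperseg=nperseg)
    mask = model(np.abs(spec))            # per-bin gains in [0, 1]
    _, clean = istft(spec * mask, fs=sr, nperseg=nperseg)
    return clean
```
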
4. Testing with hearing-impaired users

Performance of experiments with cochlear-implant and hearing-aid users, comparing 3D sound localization abilities with and without tactile support. The studies also assessed how tactile enhancements affect music enjoyment and comprehension of complex auditory environments for hearing-impaired populations.
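
As a hedged sketch of the kind of comparison such experiments involve, the snippet below computes mean absolute azimuth error per condition (audio only vs. audio plus tactile support); the data layout and variable names are illustrative only.

```python
import numpy as np

def mean_azimuth_error(responses_deg, targets_deg):
    """Mean absolute angular error, wrapping differences into [-180, 180] degrees."""
    diff = (np.asarray(responses_deg) - np.asarray(targets_deg) + 180) % 360 - 180
    return np.mean(np.abs(diff))

# Hypothetical usage: compare the two experimental conditions.
# err_audio_only    = mean_azimuth_error(resp_audio, targets)
# err_audio_tactile = mean_azimuth_error(resp_audio_tactile, targets)
```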

Results

Researchers found that congenitally hearing-impaired individuals rapidly learn to use the touch-motion algorithm (TMA), which represents spatial positions in 3D through haptics.

The results show significantly improved auditory spatial understanding: hearing-impaired individuals can use the TMA to perform spatial localization in a manner comparable to typically hearing individuals (Snir et al., 2024a). Audio-tactile integration can also improve performance on spatial auditory tasks for typically hearing individuals in noisy environments (Snir et al., 2024b).
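
As an illustration only (not the published TMA), one plausible haptic encoding of source azimuth maps the horizontal position of a source to the relative intensity of two actuators, for example on the left and right wrist.

```python
import numpy as np

def azimuth_to_actuators(azimuth_deg):
    """Map azimuth (-90 = far left, +90 = far right) to two intensities in [0, 1]."""
    pan = float(np.clip(azimuth_deg / 90.0, -1.0, 1.0))
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0   # (left, right) drive levels
```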

Researchers found an improvement in speech perception in noise within a simulated hearing-impairment paradigm (Ciesla et al., 2022). Providing speech-derived tactile cues to the fingertips induced training-related reorganization of resting-state functional connectivity across visual, sensorimotor and multisensory networks. The results suggest that tactile speech benefits rely on plasticity in pre-existing audio-visual and audio-tactile integration circuits, with potential relevance for hearing-impaired rehabilitation (Ciesla et al., 2025).

Furthermore, results show that communication and social interaction depend strongly on the acoustic conditions (Gusó et al., 2025), and that DNN-based speech enhancement can improve speech communication more than conventional signal processing algorithms (Gusó et al., 2025).

The technologies developed within the project are used not only to assist hearing-impaired individuals, but also to raise awareness through experiences that help individuals understand what it is like to be hearing impaired and to use a hearing aid device (Luberazdka et al., 2025). Ongoing studies are investigating whether these technologies and methods could be deployed on off-the-shelf consumer devices to help people with hearing loss.