Eurecat presents advances in signal processing for immersive environments at WASPAA

Eurecat researchers have presented new advances in signal processing and machine learning for immersive environments at the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), a leading international forum for the scientific community working in audio and acoustics research.

Enric Gusó Muñoz, a researcher at Eurecat’s Audiovisual Technologies Unit, showcased two new studies that explore how advanced signal processing and deep learning can enhance spatial and interactive experiences in extended reality (XR) environments. These contributions stem directly from the research conducted within the GuestXR project, which aims to improve social interaction and realism in shared virtual spaces.

Through these studies, Eurecat contributes to GuestXR’s objective of creating more natural and immersive audio experiences in virtual and mixed-reality scenarios. The work focuses on generating realistic acoustic responses and improving sound coherence across shared XR spaces, thereby enhancing users’ sense of presence and social connection.

By participating in WASPAA, Eurecat and the GuestXR consortium strengthen their presence in top-tier international scientific forums, sharing progress with the research community and fostering new collaborations in the fields of immersive technologies, audio signal processing, and human–machine interaction.