D4.2 Multisensory Displays - initial report - Executive summary
In this initial report, we present the current state of the multisensory display techniques designed and available within the GuestXR consortium, which combine visual, auditory, and haptic cues to support social interactions in XR environments.
We first explored improving auditory communication channels through deep learning. First, deep learning-based sound separation and enhancement is integrated into complex auditory scenes within virtual environments to improve communication degraded by the loss of spatial cues and by competing sound events. Second, end-to-end acoustic style transfer increases each user's presence and immersion by transforming the audio of other users so that it shares the acoustic characteristics of the listener's own environment.
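To make the sound-separation idea concrete, the following is a minimal sketch that splits a two-talker mixture with a pretrained Conv-TasNet from torchaudio. The model, bundle, and file names are illustrative assumptions, not the pipeline actually deployed in GuestXR.

```python
# Minimal sketch: separating two overlapping talkers with a pretrained
# Conv-TasNet (torchaudio's CONVTASNET_BASE_LIBRI2MIX bundle). Illustrative
# only; this is not the consortium's model or data.
import torch
import torchaudio

bundle = torchaudio.pipelines.CONVTASNET_BASE_LIBRI2MIX
model = bundle.get_model()                         # Conv-TasNet trained on Libri2Mix (8 kHz)

waveform, sr = torchaudio.load("two_talkers_mix.wav")      # hypothetical mixture file
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)
mixture = waveform.mean(dim=0, keepdim=True).unsqueeze(0)  # (batch=1, channel=1, time)

with torch.inference_mode():
    sources = model(mixture)                       # (1, num_sources=2, time)

for i, src in enumerate(sources.squeeze(0)):
    torchaudio.save(f"talker_{i}.wav", src.unsqueeze(0), int(bundle.sample_rate))
```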
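The style-transfer goal can be illustrated with a classical stand-in: convolving a remote user's dry voice with a room impulse response (RIR) of the listener's space imposes matching reverberation. The consortium's method is end-to-end and learned; the signals and file names below are hypothetical.

```python
# Minimal sketch of the goal behind acoustic style transfer: impose the
# reverberation of the listener's own space onto another user's voice by
# convolving it with a room impulse response. File names are hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

voice, sr = sf.read("remote_user_voice.wav")       # dry speech from another user
rir, sr_rir = sf.read("listener_room_rir.wav")     # RIR of the listener's space
assert sr == sr_rir, "resample one signal so the sample rates match"
if voice.ndim > 1:
    voice = voice.mean(axis=1)                     # mono
if rir.ndim > 1:
    rir = rir.mean(axis=1)

wet = fftconvolve(voice, rir)[: len(voice)]        # apply the room's acoustics
wet /= np.max(np.abs(wet)) + 1e-9                  # normalize to avoid clipping
sf.write("remote_user_voice_matched.wav", wet, sr)
```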
We then explored integrating haptics to enrich the visual and auditory signals of XR environments. Our haptic displays include a haptic belt that exerts pressure on the user's waist and can deliver false biofeedback, increase empathy towards virtual agents, or help manage anxiety. We have also investigated vibration techniques that reinforce verbal communication between XR users and have shown that they can increase the persuasiveness of users' speech. Finally, this report includes our current research directions based on thermal feedback: we have begun exploring thermal displays for enhancing verbal communication and for reducing user anxiety in stressful situations.
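As an illustration of false biofeedback, the sketch below plays a heartbeat-like pressure pattern at a rate deliberately offset from the user's measured heart rate. The belt's serial protocol, port, and intensity byte are purely hypothetical; the report does not specify the device interface here.

```python
# Minimal sketch of false biofeedback with a haptic belt: play a heartbeat-like
# pressure pattern faster than the user's real heart rate. The single-byte
# intensity protocol and serial port are hypothetical placeholders.
import time
import serial  # pyserial

belt = serial.Serial("/dev/ttyUSB0", 115200)       # hypothetical belt port

def heartbeat(bpm: float, beats: int) -> None:
    period = 60.0 / bpm
    for _ in range(beats):
        belt.write(bytes([200]))                   # strong pulse ("lub")
        time.sleep(0.1)
        belt.write(bytes([120]))                   # weaker pulse ("dub")
        time.sleep(0.1)
        belt.write(bytes([0]))                     # release pressure
        time.sleep(max(period - 0.2, 0.0))

measured_hr = 70                                   # e.g., from a wearable sensor
heartbeat(bpm=measured_hr + 15, beats=30)          # faster-than-real feedback
```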
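One plausible way to couple vibration to speech, sketched below, is to map a slow RMS envelope of the talker's voice to an actuator intensity stream; the frame rate, mapping, and output format are assumptions for illustration, not the report's design.

```python
# Minimal sketch of speech-reinforcing vibration: derive a 20 ms RMS envelope
# from the talker's speech and map it to vibration intensities (0..255) that a
# wearable actuator could play back. Mapping and file name are assumptions.
import numpy as np
import soundfile as sf

speech, sr = sf.read("talker.wav")                 # hypothetical input
if speech.ndim > 1:
    speech = speech.mean(axis=1)                   # mono

frame = int(0.02 * sr)                             # 20 ms frames (~50 Hz update)
n = len(speech) // frame
rms = np.sqrt((speech[: n * frame].reshape(n, frame) ** 2).mean(axis=1))

rms /= rms.max() + 1e-9                            # normalize to 0..1
intensity = (np.sqrt(rms) * 255).astype(np.uint8)  # sqrt boosts quiet syllables

# intensity[i] would be streamed to the actuator every 20 ms
print(intensity[:10])
```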