Artificial Intelligence, Neuroscience, Haptics, and Ethics Converge to Transform Extended Reality Environments
In the realm of Extended Reality (XR), the GuestXR project is pioneering a transformative journey by integrating an advanced Artificial Intelligence (AI) agent that facilitates interaction between participants and helps them achieve their goals. This article summarises the project’s milestones and the main research activities carried out during the first two years of the GuestXR journey in the areas of Artificial Intelligence, neuroscience, haptics and ethics.
The project has leveraged Reinforcement Learning to create an AI agent capable of navigating complex social scenarios in AR and VR, fostering pro-social behaviour and harmony. Researchers have integrated Large Language Models, such as ChatGPT, into XR environments; combined with the results of neuroscience and social psychology experimental studies, these allow the AI agent to interpret participant behaviour and intervene effectively.
Research beyond the state of the art also extended to haptic and acoustic advancements, with work on affective haptic feedback and the development of algorithms optimising VR acoustics. Ethics is at the centre of GuestXR’s developments, with an Ethics-by-Design approach embedded into the project to ensure responsible innovation.
Deploying an Artificial Intelligence agent into VR and AR environments
The final GuestXR solution will feature a machine learning agent, the Guest, designed to unlock interactions among participants and help them achieve their goals, while also helping users address conflicts in virtual environments. Work in this area is led by the Advanced Reality Lab (ARL) of Reichman University and began with the development of algorithms and simulations based on Reinforcement Learning (RL) methods, showing that an Artificial Intelligence agent can learn by trial and error.
Several simulations were developed during the first year of the project, including simple social dilemma games such as the prisoner’s dilemma, spatial harvesting simulations, and a Guest-specific simulation of a discussion, developed in collaboration with the University of Warsaw. These simulations served to begin testing the functionality of the agent, which will gradually be transferred to increasingly complex social scenarios in AR and VR, where it will act as a Guest to foster pro-social behaviour and increase harmony.
“We are using a Reinforcement Learning method called reward learning, just requiring humans to specify high-level goals, instead of precise reward functions as most Reinforcement Learning approaches require,” says Doron Friedman, Professor and Head of the Advanced Reality Lab at Reichman University.
Additionally, progress was made by integrating Large Language Models (LLMs), such as ChatGPT, into XR environments. An example of this work was presented during the 23rd ACM International Conference on Intelligent Virtual Agents, which featured a virtual Albert Einstein as a panellist alongside three international experts in a live conference panel discussion. The VR discussion was broadcast live on stage, and a moderator was able to communicate with the live audience, the virtual-world participants, and the virtual agent.
The achievements in this area continued with the release of an LLM-based model that detects unproductive interactions within conversations, and another in which the agent specifically asks the quietest participant in the room for their opinion on the topic under discussion, to ensure equal participation.
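The turn-balancing idea can be illustrated with a minimal sketch. All names and the prompt template are hypothetical; in the actual system the wording would come from an LLM rather than a fixed string:

```python
class TurnBalancer:
    """Tracks cumulative speaking time per participant and nominates the
    quietest one for the agent to address (illustrative sketch)."""

    def __init__(self, participants):
        self.speaking_time = {p: 0.0 for p in participants}

    def record(self, participant, seconds):
        """Accumulate speaking time reported by the XR system."""
        self.speaking_time[participant] += seconds

    def quietest(self):
        # Participant with the least accumulated speaking time so far.
        return min(self.speaking_time, key=self.speaking_time.get)

    def prompt(self, topic):
        # A template of what the agent might say; in practice this would be
        # handed to an LLM to phrase naturally.
        name = self.quietest()
        return f"{name}, we have not heard much from you yet. What is your view on {topic}?"
```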
Work on deploying the AI agent has been integrated into a shared XR system that allows remote participants from around the globe to meet in shared 3D virtual or augmented reality spaces. The system incorporates an interface through which AI and simulation systems can observe meetings and intervene by altering the environment. “This ranges from controlling conversational humanoid agents tightly linked with LLMs, changing the appearance of the environment, lighting, fog, playing sounds, altering participants’ avatar representation or even swapping bodies with humanoid AI agents,” states Bernhard Spanlang, Co-founder and CTO at Virtual Bodyworks. By the end of GuestXR’s second year, the system had been piloted with 250 participants in workplace training experiences.
Moreover, a Python interface has been developed, enabling the integration of Machine Learning and simulation libraries with the shared extended reality system. Several experiences have been built on it, including a self-conversation system linked to an LLM-controlled humanoid agent, a study of racial biases in XR, and, in collaboration with the University of Warsaw, a tragedy-of-the-commons game to raise awareness of the impact of competition and collaboration on climate change.
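As a sketch of how a module behind such an interface might decide to intervene, the rule below proposes an environment change (softer lighting, a calming sound) when the running sentiment of a meeting turns negative. All names, thresholds, and asset files are hypothetical, and the sentiment scores are assumed to be computed upstream:

```python
from dataclasses import dataclass

@dataclass
class MeetingEvent:
    speaker: str
    sentiment: float  # -1.0 (hostile) .. +1.0 (friendly), assumed pre-computed

def choose_intervention(recent_events, threshold=-0.3):
    """Return an environment change when the mean sentiment of recent
    events drops below the threshold, otherwise None."""
    if not recent_events:
        return None
    mean = sum(e.sentiment for e in recent_events) / len(recent_events)
    if mean < threshold:
        # Soften the lighting and play a calming ambient sound.
        return {"lighting": "warm_dim", "sound": "calm_ambient.ogg"}
    return None
```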
The next steps include demonstrating the agent in more advanced social simulations and human studies, demonstrating LLM-based agents in XR meetings, and deploying experimental studies to further develop the merging of virtual and remote places in VR.
The integration of social psychology and neuroscience
The AI agent to be implemented in XR environments needs to be equipped with knowledge that allows it to examine participants’ individual and group behaviour. Drawing on existing theoretical models from neuroscience and social psychology, the University of Warsaw and the University of Maastricht have been identifying relevant rules of social behaviour. These will be used by the GuestXR agent not only to interpret the actions and feelings of participants in the Virtual Reality environment but also to intervene effectively and contribute to interactions.
“Using computational, psychological, and neuroscientific methods, we are exploring body language and how we can best identify what aspects of an action are most relevant for understanding human behaviour, actions, and emotions,” explains Vojta Smekal, PhD Candidate at the University of Maastricht. “This knowledge is being validated through several experimental studies on group dynamics and social interaction to further understand the underlying mechanisms,” Smekal adds.
Work in this area is complemented by the development of electrophysiological devices and environments to run experiments smoothly and facilitate the collection of the necessary data. In that regard, g.tec and the University of Warsaw have worked together on an experiment investigating synchronisation in dyadic interactions under conditions of cooperation and competition.
This research will lead to the development of an automatic decoding system for recognising emotions from whole-body expressions in realistic situations.
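A minimal synchrony index of the kind such dyadic experiments rely on is a windowed Pearson correlation between the two participants’ physiological signals. The sketch below is an illustrative baseline, not the project’s actual analysis pipeline:

```python
import numpy as np

def synchrony_index(sig_a, sig_b, window=128):
    """Windowed Pearson correlation between two physiological signals;
    returns one correlation value per non-overlapping window."""
    a = np.array(sig_a, dtype=float)
    b = np.array(sig_b, dtype=float)
    n = min(len(a), len(b)) // window * window  # trim to whole windows
    a = a[:n].reshape(-1, window)
    b = b[:n].reshape(-1, window)
    # Remove each window's mean before correlating.
    a -= a.mean(axis=1, keepdims=True)
    b -= b.mean(axis=1, keepdims=True)
    denom = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return (a * b).sum(axis=1) / denom
```

Identical signals score 1 in every window, anti-correlated signals score -1, and unrelated signals hover near 0, giving a simple per-window measure of how closely a dyad moves together.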
Tactile and auditory enhancement in VR for optimised user experience
The Guest is designed to perform a range of multisensory actions, using, for instance, visual or auditory cues to create calmer states of mind and a more relaxed environment when it identifies a conflict amongst participants. With the goal of optimising user experience in VR, research led by Inria focuses on developing haptic technologies that deliver tactile sensory input and on advancing the auditory field beyond the state of the art to promote enhanced sensing in XR environments.
During the first year and a half of the GuestXR project, active research on haptic prototypes has produced a package of software and hardware multisensory technologies to be integrated into the GuestXR architecture. The haptic technologies developed range from a compression belt that promotes empathy with other users by transmitting their anxiety and stress while they present in a virtual scene, to a vibrotactile device providing feedback synchronised with speech, which improves the co-presence and perceived leadership of agents for listeners, while speaking participants perceive their own speech as more persuasive.
“We propose the integration of haptic feedback to reinforce the connection with other users by transmitting their physiological state and promote positive social interactions in VR,” explains Anatole Lecuyer, Research Director at Inria centre at Rennes University.
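The mapping from a physiological signal to a haptic output can be as simple as a clamped linear transfer function. The sketch below converts a presenter’s heart rate into a normalised belt intensity; the function, its name, and the calibration values are illustrative, not the project’s actual design:

```python
def heart_rate_to_intensity(heart_rate_bpm, resting=70.0, peak=180.0):
    """Map heart rate (beats per minute) to a haptic intensity in [0, 1].
    Rates at or below the resting rate give 0; at or above the peak, 1.
    The resting/peak values here are illustrative, not calibrated."""
    level = (heart_rate_bpm - resting) / (peak - resting)
    return min(max(level, 0.0), 1.0)
```

In a real device, the returned level would drive an actuator command, typically after smoothing to avoid abrupt changes in pressure.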
In parallel, researchers at Eurecat have developed new algorithms and tools to optimise acoustics in Virtual Reality environments. These solutions, based on deep-learning audio enhancement techniques, significantly reduce reverberation, background noise and other disturbances captured by the microphones integrated into VR headsets, which can otherwise make it very difficult to understand speech and locate sound sources in the space.
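The project’s enhancement models are deep-learning based; as a much simpler classical baseline for the same problem, spectral subtraction estimates the noise spectrum from noise-only frames and removes it from each signal frame:

```python
import numpy as np

def spectral_subtract(frames, noise_frames):
    """Classical spectral subtraction (a simple baseline, not the
    deep-learning approach used in the project): subtract the mean noise
    magnitude spectrum from each frame, keeping the original phase."""
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=-1)).mean(axis=0)
    spec = np.fft.rfft(frames, axis=-1)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor negative bins at zero
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frames.shape[-1], axis=-1)
```

This works reasonably for stationary background noise but leaves musical-noise artefacts and cannot handle reverberation, which is precisely why learned enhancement models are used instead.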
Ethics at the heart of GuestXR project developments
Ethics-by-Design entails considering the potential ethical issues that may arise when using Artificial Intelligence, such as respect for human autonomy, prevention of harm, fairness, and explicability. This principle has been “embedded into the project by means of interviews, discussions and workshops held by project partners, as well as by participating in the day-to-day activities of the University of Barcelona Event Lab in collaboration as a responsible innovation researcher,” explains Dani Shanley, post-doctoral researcher at the University of Maastricht.
As a first step, during the first year of the project, a literature review of the ethical issues that emerge when technologies like VR, AI, and haptics converge was produced. The review has been adapted into an article and submitted to the Journal of Responsible Technology. Moreover, a guideline for facilitating the adoption of the Ethics-by-Design methodology in design and innovation processes has been produced.
Over the next year, partners will focus on developing an algorithmic ethical risk assessment protocol to evaluate possible areas of impact and the attendant risks during the GuestXR experimental studies and the development of novel technology, as well as on organising follow-up interviews with project partners based on the findings and outcomes of the workshops held.