Researcher Spotlight: Adrielle Nazar Moraes on VR audio for auditory testing

12 January 2021

When we think about Virtual Reality (VR), entertainment in the form of games or immersive visual experiences most often comes to mind, but have you ever imagined being strapped into a VR setup at the doctor’s office? Advances in hardware and graphics are making high-quality VR accessible even through smartphones. With this growing availability, VR technology is expanding into fields beyond games and entertainment, including medicine. Adrielle Nazar Moraes is an ADAPT researcher who is exploring the world of VR through surround sound, or spatial audio, and its implications for e-health.

Alongside visual perception of the environment, audio plays an important role in VR applications. In particular, the sense of knowing where a sound is coming from in space is one of the key elements in keeping the user immersed in the virtual world.

The term spatial audio refers to audio perceived as coming from more than one direction. Currently, the most common form of spatial audio is stereo, which makes it possible to hear sounds coming from either the left or the right. With advances in audio recording and rendering technologies, however, it is now possible to reproduce sound with full 360-degree coverage [1]. Spatial audio can therefore be made responsive to the visual field, creating the perception of sound arriving from any point in space.
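
To make the difference concrete, here is a minimal Python sketch (purely illustrative, not from the research) of constant-power stereo panning, which places a sound only on the left-right axis. Full 360-degree spatial audio generalises this idea, typically by filtering the signal with head-related transfer functions so sounds can appear above, behind, or anywhere around the listener.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal into stereo.

    pan runs from -1.0 (full left) to 1.0 (full right). Constant-power
    panning keeps perceived loudness stable as the source moves.
    """
    angle = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)

# A one-second 440 Hz tone placed halfway to the right.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = constant_power_pan(tone, pan=0.5)   # shape: (44100, 2)
```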

Spatial audio is extremely important in the entertainment industry. Games such as CS:GO, PUBG, and Resident Evil use audio cues to reveal an enemy’s position and help the player react more quickly. Outside the VR entertainment realm it plays an important role as well, for example in immersive VR e-health applications.

Spatial audio can be used in conjunction with VR environments to develop auditory training applications for people with autism spectrum disorder (ASD), brain injury, Alzheimer’s disease, and auditory processing disorders [2]. To process audio, the auditory cortex uses spatial cues encoded in the sound to infer the perceived distance, intensity, and direction of the sound source.
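
These cues can be approximated with simple physics. The sketch below, a hypothetical illustration rather than anything from the cited work, estimates the interaural time difference (the gap between a sound’s arrival at each ear, a key direction cue) with the classic Woodworth spherical-head model, and the level drop over distance from spherical spreading.

```python
import numpy as np

HEAD_RADIUS_M = 0.0875    # typical average head radius (assumed value)
SPEED_OF_SOUND = 343.0    # m/s in air at room temperature

def interaural_time_difference(azimuth_deg):
    """Woodworth spherical-head estimate of the ITD in seconds.

    azimuth_deg: source angle; 0 is straight ahead, 90 is to the far right.
    """
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + np.sin(theta))

def distance_attenuation_db(distance_m, reference_m=1.0):
    """Level drop from spherical spreading: roughly 6 dB per doubling."""
    return 20.0 * np.log10(distance_m / reference_m)

print(f"ITD at 90 degrees: {interaural_time_difference(90) * 1e6:.0f} us")  # ~660 us
print(f"Drop at 4 m vs 1 m: {distance_attenuation_db(4.0):.1f} dB")         # ~12 dB
```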

However, people affected by central auditory processing disorder (CAPD) are not able to perceive sound correctly in space [3]. For people with this disorder, it is difficult to tell which direction a given sound is coming from. Moreover, they struggle to differentiate distinct sounds in space, since doing so requires perceiving each sound as coming from a separate source.

One way to diagnose this condition is through the Listening in Spatialised Noise test [3]. The test presents three sound sources to the listener, only one of which is the target they should attend to. In a virtual environment, the same test can be reproduced while becoming more immersive and intuitive. The test may even be improved in VR, since it is simple to remove potential visual distractors, such as the objects in a physician’s room, and, when auditory training is performed, replace them with objects matching the audio sources.
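
The study’s implementation is not published here, but one trial of such a spatialised-listening test might be structured along these lines; the azimuths, clip names, and response tolerance are all hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class SoundSource:
    azimuth_deg: float    # where the source sits around the listener
    clip: str             # audio clip played from that position (placeholder)
    is_target: bool       # the one source the listener should attend to

def build_trial():
    """One trial: three simultaneous sources, exactly one target."""
    azimuths = [-60.0, 0.0, 60.0]             # illustrative positions
    target = random.randrange(len(azimuths))
    return [
        SoundSource(az, f"clip_{i}.wav", is_target=(i == target))
        for i, az in enumerate(azimuths)
    ]

def score_response(trial, chosen_azimuth_deg, tolerance_deg=15.0):
    """Correct if the listener pointed close enough to the target source."""
    target = next(s for s in trial if s.is_target)
    return abs(chosen_azimuth_deg - target.azimuth_deg) <= tolerance_deg
```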

As mentioned above, given the flexibility of VR applications, it is possible to develop auditory training applications, since our brain is able to learn how to process sounds while immersed in the virtual environment. This gives VR the ability to simulate the many listening situations that are challenging for people with CAPD, such as driving, with the noise of traffic, the engine, and the radio, or talking with other people at a party over other voices and background music.
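
As a sketch of how such scenes could be made progressively harder, the helper below mixes a target signal with background noise at a chosen signal-to-noise ratio; the function and its thresholds are illustrative assumptions, not taken from the research.

```python
import numpy as np

def mix_at_snr(target, noise, snr_db):
    """Mix a target signal with background noise at a chosen SNR.

    Lowering snr_db makes the scene harder: stepping from a quiet
    room (e.g. +15 dB) toward a busy party or traffic (0 dB or below).
    """
    noise = noise[: len(target)]              # trim noise to the target length
    target_power = np.mean(target ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(target_power / noise_power) == snr_db.
    gain = np.sqrt(target_power / (noise_power * 10 ** (snr_db / 10)))
    return target + gain * noise
```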

Preliminary results have already been published; the paper aims to understand how users perceive sound in virtual reality environments. To achieve this, a VR environment was built with multiple sound sources surrounding the user, whose task is to select the correct sound source. The task becomes more challenging when distractors are presented together with the target sound. The work also implements a sensor system that captures relevant metrics from the user, giving insight into workload and behaviour. As expected, the level of immersion increased over the duration of the test, and users learned to localise the correct sound source more effectively over time.
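
The sensor system itself is not described in detail here; as a purely hypothetical sketch, per-trial behavioural metrics of the kind mentioned (response time, accuracy, head movement) could be captured like this:

```python
import time

class TrialLogger:
    """Illustrative per-trial metric capture; not the study's actual system."""

    def __init__(self):
        self.records = []

    def start_trial(self, trial_id):
        self._trial_id = trial_id
        self._t0 = time.monotonic()
        self._head_yaw = []

    def log_head_yaw(self, yaw_deg):
        # Sampled from the headset every frame; wide, repeated sweeps can
        # indicate greater search effort, one possible proxy for workload.
        self._head_yaw.append(yaw_deg)

    def end_trial(self, correct):
        self.records.append({
            "trial": self._trial_id,
            "time_to_answer_s": time.monotonic() - self._t0,
            "correct": correct,
            "head_yaw_samples": self._head_yaw,
        })
```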

Future work in this field will include increasing the complexity of the virtual environment by simulating user interaction with multiple objects, bringing the test closer to the complexities we encounter in real-world scenarios.

Adrielle Nazar Moraes is a researcher in the area of Software Engineering, supervised by Dr Niall Murray, Dr Andrew Hines, and Dr Ronan Flynn. She received her BSc in Biomedical Engineering in 2018. She is currently based at the Athlone Institute of Technology and is funded by Science Foundation Ireland and the ADAPT Centre. Her research focuses on the Quality of Experience of immersive multimedia applications, with an emphasis on spatial audio and how users perceive sounds in VR.

[1] E. R. Hoeg et al., “Binaural sound reduces reaction time in a virtual reality search task,” in Proc. IEEE 3rd VR Workshop on Sonic Interactions for Virtual Environments (SIVE), 2017.

[2] J. Wade et al., “Design of a virtual reality driving environment to assess performance of teenagers with ASD,” in Universal Access in Human-Computer Interaction: Universal Access to Information and Knowledge, LNCS vol. 8514, 2014, pp. 466–474.

[3] S. Cameron and H. Dillon, “Development of the Listening in Spatialized Noise-Sentences Test (LISN-S),” Ear and Hearing, vol. 28, no. 2, pp. 196–211, 2007.
