Limited support for spatial audio in headphones

Spatial audio enhances the immersive audio experience by adding direction and depth to sound. However, many headphones do not support spatial audio, limiting the accessibility of this enhanced auditory experience.
Employ an ML tracking algorithm that uses camera input to monitor the position of the user’s head, then use the head coordinates to create an illusion of spatial audio, irrespective of the headphones the user wears.
We used Google’s FaceMesh model, a robust tool for tracking facial landmarks, focusing primarily on the landmarks around the eyes to estimate the position and orientation of the head. The model was hosted on a dedicated server, enabling head-pose data to be streamed to the game in real time.
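The write-up does not specify the transport between the server and the game, so the following is a minimal Unity-side (C#) sketch assuming the FaceMesh server streams small JSON packets over UDP. The port, the packet layout, and the HeadPose/HeadPoseReceiver names are illustrative assumptions, not the project's actual protocol.

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Hypothetical packet layout; the real server's format may differ.
[System.Serializable]
public class HeadPose
{
    public float x;    // normalized horizontal head position (0..1)
    public float y;    // normalized vertical head position (0..1)
    public float yaw;  // head rotation around the vertical axis, in degrees
}

public class HeadPoseReceiver : MonoBehaviour
{
    [SerializeField] private int port = 5005;  // assumed port for the FaceMesh server

    public HeadPose Latest { get; private set; } = new HeadPose { x = 0.5f, y = 0.5f };

    private UdpClient client;

    void Start()
    {
        client = new UdpClient(port);
    }

    void Update()
    {
        // Drain any head-pose packets that arrived since the last frame.
        while (client.Available > 0)
        {
            IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] data = client.Receive(ref remote);
            Latest = JsonUtility.FromJson<HeadPose>(Encoding.UTF8.GetString(data));
        }
    }

    void OnDestroy()
    {
        client?.Close();
    }
}
```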
In Unity, we developed scripts that dynamically alter the perspective of the in-scene camera based on the head pose reported by the FaceMesh model, so the user’s head position directly influences the camera view.
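A camera script of this kind might look like the sketch below, which builds on the hypothetical HeadPoseReceiver above. The offset ranges, smoothing factor, and the decision to mirror yaw onto the camera are illustrative choices, not the project's actual tuning.

```csharp
using UnityEngine;

// Shifts the in-scene camera in response to head movement, creating a
// simple parallax / perspective effect driven by the tracked head pose.
public class HeadTrackedCamera : MonoBehaviour
{
    [SerializeField] private HeadPoseReceiver receiver;     // hypothetical component from the previous sketch
    [SerializeField] private float horizontalRange = 0.5f;  // metres of camera travel across the full head range
    [SerializeField] private float verticalRange = 0.3f;
    [SerializeField] private float smoothing = 8f;

    private Vector3 basePosition;

    void Start()
    {
        basePosition = transform.localPosition;
    }

    void LateUpdate()
    {
        HeadPose pose = receiver.Latest;

        // Map normalized head coordinates (0..1, centred at 0.5) to a local offset.
        Vector3 offset = new Vector3(
            (pose.x - 0.5f) * horizontalRange,
            (0.5f - pose.y) * verticalRange,   // screen y grows downward, world y grows upward
            0f);

        // Smoothly approach the target position so tracking jitter is not visible.
        Vector3 target = basePosition + offset;
        transform.localPosition = Vector3.Lerp(
            transform.localPosition, target, smoothing * Time.deltaTime);

        // Optionally mirror the head's yaw onto the camera for a stronger effect.
        transform.localRotation = Quaternion.Slerp(
            transform.localRotation,
            Quaternion.Euler(0f, pose.yaw, 0f),
            smoothing * Time.deltaTime);
    }
}
```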
Alongside this, we created spatial audio scripts specifically designed to alter the depth and direction of in-game sounds. These scripts combine several effects:
1) The Doppler effect: shifts the pitch of a sound based on the relative motion between the source and the listener.
2) Stereo panning/spread: distributes sound across the stereo field, creating a sense of directionality.
3) Volume rolloff: simulates the decrease in volume as a sound source moves farther from the listener.
4) Audio occlusion: mimics sounds being muffled or blocked by in-game objects.
These scripts analyze the position of each audio source relative to the listener in the 3D game space, adding realism and immersion.
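In Unity, the first three effects map directly onto AudioSource settings, and occlusion can be approximated with a line cast plus a low-pass filter. The sketch below is a minimal illustration under those assumptions; the component name, cutoff frequencies, and distance values are placeholders rather than the project's actual scripts.

```csharp
using UnityEngine;

// Configures an AudioSource for the four effects listed above and adds a
// simple occlusion check. Values here are illustrative defaults.
[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(AudioLowPassFilter))]
public class SpatialSoundEmitter : MonoBehaviour
{
    [SerializeField] private Transform listener;            // typically the head-tracked camera
    [SerializeField] private LayerMask occluderMask = ~0;   // geometry that can block sound
    [SerializeField] private float openCutoff = 22000f;     // Hz, effectively unfiltered
    [SerializeField] private float occludedCutoff = 1200f;  // Hz, muffled

    private AudioSource source;
    private AudioLowPassFilter lowPass;

    void Start()
    {
        source = GetComponent<AudioSource>();
        lowPass = GetComponent<AudioLowPassFilter>();

        source.spatialBlend = 1f;                            // fully 3D-positioned sound
        source.dopplerLevel = 1f;                            // 1) Doppler effect
        source.spread = 60f;                                 // 2) stereo spread, in degrees
        source.rolloffMode = AudioRolloffMode.Logarithmic;   // 3) distance-based volume rolloff
        source.minDistance = 1f;
        source.maxDistance = 30f;
    }

    void Update()
    {
        if (listener == null) return;

        // 4) Occlusion: if level geometry sits between the source and the
        // listener, muffle the sound with the low-pass filter.
        bool blocked = Physics.Linecast(transform.position, listener.position, occluderMask);
        float targetCutoff = blocked ? occludedCutoff : openCutoff;
        lowPass.cutoffFrequency = Mathf.Lerp(lowPass.cutoffFrequency, targetCutoff, 5f * Time.deltaTime);
    }
}
```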