Google today released a new spatial audio software development kit called ‘Resonance Audio’, a cross-platform tool based on technology from its existing VR Audio SDK. Resonance Audio aims to make VR and AR development easier across mobile and desktop platforms.

Google’s spatial audio support for VR is well-established: the company introduced the technology to the Cardboard SDK in January 2016 and brought its audio rendering engine to the main Google VR SDK in May 2016, which saw several improvements in the Daydream 2.0 update earlier this year.

Google’s existing VR SDK audio engine already supported multiple platforms, but with platform-specific documentation on how to implement its features. In February, a post on Google’s official blog recognised the “confusing and time-consuming” battle of working with various audio tools, and described the development of streamlined FMOD and Wwise plugins for multiple platforms on both Unity and Unreal Engine. The new Resonance Audio SDK consolidates these efforts, working ‘at scale’ across mobile and desktop platforms, which should simplify development workflows for spatial audio in any VR/AR game or experience.

According to the press release provided to Road to VR, the new SDK supports “the most popular game engines, audio engines, and digital audio workstations” running on Android, iOS, Windows, macOS, and Linux. Google is providing integrations for “Unity, Unreal Engine, FMOD, Wwise, and DAWs,” along with “native APIs for C/C++, Java, Objective-C, and the web.”

To achieve this on mobile, where CPU resources are often very limited for audio, Resonance Audio features scalable performance, using “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality.” A new feature in Unity for precomputing reverb effects for a given environment also ‘significantly reduces’ CPU usage during playback. This broader cross-platform support means developers can implement one sound design for their experience that should perform consistently on both mobile and desktop platforms.

Much like the existing VR Audio SDK, Resonance Audio can model complex sound environments, allowing control over the direction of acoustic wave propagation from individual sound sources. The width of each source can be specified, from a single point to a wall of sound.

The SDK will also automatically render near-field effects for sound sources within arm’s reach of the user. Near-field rendering takes into account the acoustic diffraction that occurs as sound waves travel across the head; by using precise HRTFs, the accuracy of close sound source positioning can be increased.

The team has also released an ‘Ambisonic recording tool’ to spatially capture sound design directly within Unity, which can be saved to a file for use elsewhere, such as in game engines or YouTube videos.
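The “higher order Ambisonics” spatialization quoted above works by projecting each mono source onto a set of spherical-harmonic channels, which can then be decoded for any speaker layout or binaural output. As a rough illustration only (not Resonance Audio’s actual code), here is a minimal first-order encoder in the AmbiX convention (ACN channel order, SN3D normalization) that YouTube’s spatial audio also uses; the function name and degree-based angles are illustrative choices:

```python
import math

def encode_first_order_ambix(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into first-order Ambisonics (AmbiX: ACN order, SN3D norm).

    Returned channel order is W, Y, Z, X.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                  # omnidirectional component
    y = sample * math.sin(az) * math.cos(el)    # left-right axis
    z = sample * math.sin(el)                   # up-down axis
    x = sample * math.cos(az) * math.cos(el)    # front-back axis
    return [w, y, z, x]

# A source directly in front (azimuth 0, elevation 0) puts all energy in W and X.
print(encode_first_order_ambix(1.0, 0.0, 0.0))  # → [1.0, 0.0, 0.0, 1.0]
```

Higher orders simply add more spherical-harmonic channels per source, which is why hundreds of sources can share one fixed-cost decode stage.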
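The control over “the direction of acoustic wave propagation” from a source is typically expressed as a directivity pattern. A common two-parameter cardioid family, sketched below as an assumption rather than a formula taken from the article, blends an omnidirectional and a dipole term and raises the result to a sharpness exponent:

```python
import math

def directivity_gain(theta_deg, alpha=0.5, sharpness=1.0):
    """Gain of a cardioid-family directivity pattern.

    theta_deg is the angle between the source's forward axis and the listener.
    alpha=0   -> omnidirectional (radiates equally in all directions)
    alpha=0.5 -> cardioid (quiet directly behind the source)
    alpha=1   -> figure-of-eight (dipole)
    sharpness > 1 narrows the forward lobe.
    """
    theta = math.radians(theta_deg)
    return abs((1.0 - alpha) + alpha * math.cos(theta)) ** sharpness

print(directivity_gain(0.0))    # on-axis cardioid → 1.0
print(directivity_gain(180.0))  # directly behind a cardioid → 0.0
```

Sweeping alpha toward 0 approximates the “single point to a wall of sound” range described above, since a wide, diffuse source radiates with little directional preference.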
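Rendering near-field effects “within arm’s reach of the user” implies some minimum-distance handling in the distance rolloff, since a naive inverse-distance law diverges as the source approaches the head. A minimal sketch of that common convention, with illustrative parameter names not taken from the SDK, might look like:

```python
def distance_gain(distance_m, min_distance_m=0.5, max_distance_m=50.0):
    """Inverse-distance rolloff, clamped so sources inside min_distance
    (roughly arm's reach) hold a fixed gain instead of diverging."""
    d = max(min(distance_m, max_distance_m), min_distance_m)
    return min_distance_m / d

print(distance_gain(0.1))  # inside the near-field clamp → 1.0
print(distance_gain(5.0))  # ten times min_distance → 0.1
```

The diffraction and HRTF effects described above are applied on top of this gain stage, not replaced by it; the clamp only keeps the level finite while those filters shape the close-range cues.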