Wwise 2017.2 is now live!


We’re excited to announce that Wwise 2017.2 is now live!

Below is a short list of what’s new in Wwise 2017.2.

 

Workflow and Feature Improvements  

Improvements with States

Most RTPC properties now available to States

All audio properties that support multiple RTPC curves, as well as most properties from the Wwise plug-ins, can now be controlled by States. To add new State properties to an audio object or plug-in, simply open the State Properties view and select the desired properties, as seen below. This selection is made on a per-object or per-plug-in basis, so you don’t waste space on every possible property in your project.

 

[Image: NewStateProps.png]

A new State icon has been added next to the link/unlink and RTPC icons to help you identify which properties can be modified by States. 

 

[Image: State icon next to the link/unlink and RTPC icons]

Mixing Desk Workflow Enhancements

It’s now possible to quickly change which State Group the Mixing Desk listens to for State changes, which makes faders on motorized controllers move as States change. It’s also possible to expand and collapse State Groups (and other main categories, such as Monitoring, Positioning, and Effects), which is convenient when Mixing Desk sessions contain multiple State Groups.

 

Wwise Spatial Audio improvements

Spatial Audio: Sound Propagation, Rooms, and Portals 

Development in Spatial Audio continued during 2017.2 with a focus on expanding the existing features by enhancing usability, runtime efficiency, and flexibility. Here’s an overview of the main elements:

Sound Propagation

  • The path of sound from an emitter to a listener can now traverse one or more Portals.
  • Virtual positions are attributed to sounds as if they were coming from the Portal closest to the listener.
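The virtual-position rule can be sketched very simply. This is a minimal illustration in Python with hypothetical names; the actual computation lives inside the Spatial Audio runtime:

```python
import math

def apparent_source(portal_positions, listener_pos):
    """Sketch of the virtual-position rule: when the listener is in a
    different Room than the emitter, the sound is rendered as if it
    came from the Portal closest to the listener."""
    return min(portal_positions, key=lambda p: math.dist(p, listener_pos))
```

In practice the runtime works with Portal extents and full propagation paths, not just point positions, but the nearest-Portal idea is the core of it.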

Smooth Transitions Between Rooms

  • Portals are now volumes instead of being point sources, which allows for smoother and more accurate panning and spread when going through Portals.
  • Room reverbs now smoothly and automatically crossfade over distance.
  • Spatialized Portal Game Objects can also send to the listener’s Room.

Diffraction Modeling

  • Diffraction is represented as an angle ranging from 0° (no diffraction) to 180°. It can be driven using obstruction and/or the new Diffraction built-in parameters.
  • The emitter dry path(s) and a Room's wet path have different angles. The dry diffraction angle is the deviation from the straight-line path, and the wet diffraction angle is the angle from the normal of the Portal.

 

[Image: Sound Propagation - What's new in 2017.2]

Basic Transmission Modeling

  • When no paths through Portals are found from the emitter to the listener, the ‘sound transmission’ path goes through the walls.
  • Rooms are tagged with ‘wall occlusion’ values, which are used to set the Wwise occlusion value on the emitter.
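The two bullet points above amount to a very small rule, sketched here with invented names for illustration:

```python
def emitter_occlusion(has_portal_path, room_wall_occlusion):
    """Basic transmission model: if at least one path through Portals
    exists, sound propagates through them and no transmission occlusion
    is applied; otherwise the Room's 'wall occlusion' value is set as
    the Wwise occlusion value on the emitter."""
    return 0.0 if has_portal_path else room_wall_occlusion
```

The real runtime tracks paths per emitter-listener pair; this only captures the fallback behavior when no Portal path is found.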

Portal Obstruction

  • The Spatial Audio API provides means to set game-driven obstruction values on the Portal objects.

Profiler Improvements

  • LPF and HPF values now complement the volume values in the Voices Graph tab of the Advanced Profiler view.

[Image: Voices Graph tab showing LPF and HPF values]

Efficient Game Object Usage

  • Multiple positions are leveraged at the Portals when the listener is not in the Room, which results in a significant performance improvement over 2017.1, as only one game object is now instantiated per Room.
  • A single position (without orientation) following the listener is used when the listener is located inside a Room.

Integrations are provided in UE4 and Unity, and an SDK example is exposed in the Integration Demo.

Filters in (and on) Busses

Built-in Low-pass and High-pass filters have been reworked in the sound engine's audio graph to better model filter values coming from different features such as Wwise user parameters, Attenuation, Occlusion and Obstruction. Previously, Wwise had a single filter on the voice output, and competing parameter values for this filter would have to be logically combined into a single value; only the minimum value was used. Wwise 2017.2 features individual filters on each unique output, including outputs from busses, so that values pertaining to different rays or output busses no longer have to be combined.

An example of how this is useful can be seen when using multiple listener scenarios. A game object that has multiple listeners will have different Attenuation curve evaluations for each listener, since they may be at different distances. The curve defines a Low-pass filter value; however, in previous versions of Wwise, only one Low-pass filter value could be used per voice. Now, upon mixing the single voice into each listener's output bus instance, the correct filter value (as determined by the curve evaluation) will be applied for each output bus.
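The per-listener behavior can be illustrated with a toy curve evaluation. This is a simplified sketch: the curve shape and values are invented, and Wwise's actual curve evaluation is internal to the sound engine.

```python
import bisect

def eval_curve(points, distance):
    """Piecewise-linear evaluation of an attenuation curve given as
    (distance, value) pairs sorted by distance; clamps outside the range."""
    xs = [p[0] for p in points]
    if distance <= xs[0]:
        return points[0][1]
    if distance >= xs[-1]:
        return points[-1][1]
    i = bisect.bisect_right(xs, distance)
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    return y0 + (y1 - y0) * (distance - x0) / (x1 - x0)

# Toy LPF curve: no filtering up close, heavy filtering at max distance.
curve = [(0.0, 0.0), (100.0, 100.0)]
listener_distances = [10.0, 80.0]

# Before 2017.2: a single filter per voice, so competing values were
# collapsed; only the minimum was applied for every listener.
legacy_lpf = min(eval_curve(curve, d) for d in listener_distances)

# 2017.2: each listener's output bus instance applies its own value,
# so the distant listener hears the more heavily filtered signal.
per_output_lpf = [eval_curve(curve, d) for d in listener_distances]
```

With this curve, the old model would give both listeners an LPF of 10, while the new per-output model gives the near listener 10 and the far listener 80.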

Busses have three new controls: Output Bus Volume, Output Bus LPF, and Output Bus HPF, allowing one to filter the output of mix busses. Also, low-pass and high-pass filtering of user-defined sends is now possible via RTPC.

New Built-In Parameters: Listener Cone & Diffraction

  • Listener cone represents the angle between the listener’s front vector (gaze) and the emitter position. It can be used to implement a listener cone via RTPC. This can be useful, for example, to reduce focus on emitters outside the front cone of the listener or to simulate microphone polar patterns.
  • The diffraction angle between emitter and listener operates in tandem with Portals. The Diffraction built-in parameter ranges from 0° (no diffraction) to 180° and can be used, for example, to shape different Attenuation curves for Rooms’ dry and wet paths.
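The listener-cone value is essentially the angle between the gaze vector and the direction to the emitter. A minimal self-contained sketch (hypothetical function name; the built-in parameter itself is computed by the sound engine):

```python
import math

def listener_cone_angle(listener_pos, listener_front, emitter_pos):
    """Angle in degrees between the listener's front (gaze) vector and
    the direction to the emitter: 0 is dead ahead, 180 is directly behind."""
    to_emitter = [e - l for e, l in zip(emitter_pos, listener_pos)]
    dot = sum(a * b for a, b in zip(listener_front, to_emitter))
    mag = math.hypot(*listener_front) * math.hypot(*to_emitter)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
```

Mapping this angle to a gain or filter RTPC is what lets you de-emphasize emitters behind the listener or approximate a microphone polar pattern.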

Ambisonics IR Now Packaged with Wwise Convolution Reverb

Packaged with the plug-in, all ShareSets and stereo impulse responses included with the original Wwise Convolution Reverb now come with their ambisonics equivalents. Projects that have already licensed the Convolution Reverb can access the ambisonics versions of the impulse responses from the "Import Factory Assets..." menu in Wwise (you may need to download them from the Wwise Launcher first).

Audio Output Management and Motion Refactor

Audio output management and motion have been refactored to offer greater flexibility, and represent the foundation for future improvements for output management and support of haptic devices.

Audio Output Management

  • The management of audio output is now mostly done in Wwise Authoring by assigning Audio Device ShareSets to master busses.
  • It is now possible to create any number of master busses in the Master-Mixer Hierarchy and assign specific audio devices to them.
  • Independent audio device ShareSets are created to output specific audio content, like voice chat or user music, to specific audio devices such as game controllers or alternative physical outputs.
  • On master busses, different output devices can be assigned for authoring and runtime, which, among other things, greatly simplifies auditioning during the development of complex sound installations.

Wwise Motion Refactor

The motion system used by Wwise Motion to support rumble on game controllers has been refactored. Instead of using a specific code path, it now uses the same feature set and API as the audio. This simplified model allows support for third-party haptic plug-ins for devices such as VR kits or mobile platforms.

 

Wwise Authoring API improvements 

Wwise Authoring API New Features

A series of feature requests from early adopters of WAAPI have been added to 2017.2. Here are a few examples!

  • To ease transfer across computers, it’s now possible to import audio files from base64 without the need to write files on disk.
  • Switch Container associations with audio objects can now be gathered or edited from WAAPI.
  • WAAPI can be queried to list all available functions. For each function, the information is returned with its JSON schema.
  • WAAPI can bring Wwise to the foreground and expose its Process ID.
  • The Wwise search can be used from the Command Line Interface.
  • Applications using WAAPI can subscribe to transport activity notifications.
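As an illustration of the base64 import mentioned above, here is a sketch of building the arguments for `ak.wwise.core.audio.import`. The exact field names (notably `audioFileBase64` and the `default`/`imports` layout) are assumptions to verify against the WAAPI reference for your Wwise version:

```python
import base64

def import_args(object_path, wav_bytes):
    """Build a hypothetical argument payload for ak.wwise.core.audio.import
    that embeds the audio as base64 instead of referencing a file on disk."""
    return {
        "importOperation": "useExisting",
        "default": {"importLanguage": "SFX"},
        "imports": [{
            "objectPath": object_path,
            "audioFileBase64": base64.b64encode(wav_bytes).decode("ascii"),
        }],
    }

# Example payload for an invented object path and a stand-in for WAV data.
payload = import_args("\\Actor-Mixer Hierarchy\\Default Work Unit\\BigGun", b"RIFF")
```

The resulting dictionary would then be passed to the import call over a WAAPI connection, for example with the waapi-client Python package.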

 

Game Engine Integrations - Unity

Wwise Spatial Audio

The Spatial Audio suite is now fully integrated in the Unity integration. There’s also a step-by-step tutorial to help you discover its functionality! 

 

[Image: Unity Rooms and Portals tutorial scene]

Simple Obstruction and Occlusion 

The Ak Emitter Obstruction Occlusion component is applied to emitters and offers a basic ray-casting system to occlude or obstruct sounds. The presence or lack of the Ak Room component within the scene determines whether occlusion or obstruction should be used:  

  • When an Ak Room component is added to a scene, the Ak Emitter Obstruction Occlusion component uses obstruction.
  • When there is no Ak Room component present in the scene, the Ak Emitter Obstruction Occlusion component uses occlusion instead.

While this system might be too elementary for certain games, it should be useful to many projects that need a simple and straightforward mechanism to manage occlusion and obstruction.

 

[Image: Ak Emitter Obstruction Occlusion component]

 

Timeline & Audio Scrubbing

It’s now possible to play back from anywhere in the timeline to allow, for example, more control when editing in-game cinematics. Audio scrubbing is also supported, which can be helpful when syncing audio to video. 

[Image: Timeline RTPC keyframe context menu]

[Image: Timeline RTPC keyframe editing]

Automatic SoundBank Management

A manual copy of SoundBanks into the StreamingAssets folder is no longer required. A pre-build processing step now generates and copies SoundBanks to their appropriate location for the Unity build pipeline.

New C# Scripts

MIDI Events can now be posted to Wwise via C# scripts. Further, the Wwise Audio Input source plug-in is now accessible via C# scripts.

Preview in Editor

It is now possible to preview sounds from the Inspector view without entering Play Mode.

 

Game Engine Integrations - Unreal

DAW-Like Workflow in Sequencer

There are significant improvements in the Unreal Sequencer to support audio scrubbing, seeking inside tracks, and waveform display. These improvements should be particularly useful when editing in-game cinematics and linear or interactive VR experiences. 

 

[Image: Sequencer example with retrigger enablement]

WAAPI Integration in UE4

  • UMG Widget Library: Using WAAPI, you can control Wwise directly from Unreal. With this new widget library, you can build your own custom UI in Unreal to optimize your team's workflow.

 

[Image: UMG widget library example]
  • Blueprint: WAAPI is now accessible from Blueprint, allowing you to use built-in UMG widgets to control Wwise.  
  • Wwise Picker: A new WAAPI-enabled Wwise Picker has been added to the UE4 integration, which allows you to complete a number of operations (such as selecting audio objects, modifying volume, and playing/stopping) directly in the Unreal Editor.

 

[Image: WAAPI-enabled Wwise Picker]

Improvements with Listeners

AkComponents can now support more than one listener. Further, a listener that follows the focused viewport’s camera position has been added to the Unreal Editor (when not in Play In Editor mode). It can be used, for example, to preview sounds and distance attenuation directly from the Animation Editor.

 

 

Audiokinetic

Audiokinetic is the leading provider of cross-platform audio solutions for interactive media and gaming, and sets new standards in interactive audio production for location-based entertainment, automotive, consumer electronics, and training simulation. A trusted and strategic partner to the world’s largest interactive media developers and OEMs, Audiokinetic has a long-established ecosystem of allies within the audio industry and amongst platform manufacturers. The company’s middleware solutions include the award-winning Wwise, as well as Wwise Automotive and Strata. Audiokinetic, a Sony Group Company, is headquartered in Montréal, Canada, has subsidiaries in Tokyo, Japan, Shanghai, China, Hilversum, Netherlands, as well as Product Experts in the USA.

