This glossary is intended to define technical terms used throughout the Help documentation. It is divided into General terminology, which defines sound design and programming vocabulary, and Wwise-specific terminology, which covers Wwise-specific (and sometimes Audiokinetic-patented) objects and concepts.
The delivery of audio in an intermediate format that includes information above and below the listener plane, which is then converted by the endpoint into either a binaural mix for headphones or a channel-based mix with height channels depending on the listener's setup. This allows for greater spatial precision than non-3D audio.
An audio compression coding standard. For more information, see the document detailing this Digital Audio Compression Standard.
An audio file conversion encoding method that quantizes the difference between a sound signal and a prediction that has been made of that sound signal. The ADPCM quantization step is adaptive, which differs from PCM encoding, where the signals are quantized directly. Overall, ADPCM offers significant savings in storage and CPU usage at the cost of sound quality. As such, it is typically used on mobile platforms.
A surround sound technique covering the horizontal plane as well as regions above and below the listener. By way of its B-format sound field representation, it works independently of speaker setups.
The shape of an envelope as defined by the given values of a sound's attack time (and curve, in Wwise), decay time, sustain level, and release time.
The number of bits used to describe each sample within a digital audio file. In PCM audio, the bit depth determines the maximum possible dynamic range of the signal.
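As a rule of thumb, each bit of depth adds about 6 dB to the theoretical dynamic range of a linear PCM signal:

$$\mathrm{DR} \approx 20\log_{10}\left(2^{N}\right) \approx 6.02\,N\ \mathrm{dB}$$

For example, 16-bit audio yields a theoretical dynamic range of roughly 96 dB, and 24-bit audio roughly 144 dB.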
The amount of data, specifically bits, transmitted or received per second. The higher the bit rate, the more file data is processed and usually the higher the resolution.
A unit of pitch equal to 1/100 of a semitone. An octave consists of 1200 cents.
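The interval in cents between two frequencies f1 and f2 follows directly from this definition:

$$c = 1200 \cdot \log_{2}\left(\frac{f_{2}}{f_{1}}\right)$$

For example, a frequency ratio of 2:1 (an octave) gives 1200 cents, and a ratio of 2^(1/12) (a semitone) gives exactly 100 cents.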
In the context of audio filters, a filter's frequency response is often desired to be a flat line. Deviations such as peaks and notches add "coloration" to the filter's characteristics and, therefore, to the output signal.
The direct current (DC) level represents the center of a sound's waveform. The DC offset represents how far, as a percentage, the center of the waveform lies from the 0.0 point.
(Decibels relative to full scale) Decibel amplitude levels in digital systems that have a maximum available level, such as PCM encoding.
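The level of a signal with amplitude |x|, relative to the maximum representable amplitude x_max, is:

$$L = 20\log_{10}\left(\frac{|x|}{x_{\max}}\right)\ \mathrm{dBFS}$$

A full-scale signal therefore peaks at 0 dBFS, and all lower levels are negative; half of full scale is about -6 dBFS.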
A filter that scrambles the original input signal but maintains most of the original sound characteristics. You can use Decorrelation Filters to reduce the number of phasing artifacts that are created when two or more similar signals are mixed, as in the Reflect plug-in, for example.
A processing unit that delays a signal by a given number of samples.
A central repository of files on the Perforce source control server. It contains all versions of all files ever submitted to the server.
The noise added to a signal prior to quantization which reduces the distortion and noise modulation resulting from the quantization process. Although there is a slight increase in the noise level, spectrally shaped dither can minimize the apparent increase. The noise is less objectionable than the distortion, and allows low-level signals to be heard more clearly.
A process whereby all channels within a multi-channel source are mixed into a compatible version with fewer channels, such as stereo or mono.
The base voice signal path, which goes straight from the sound to a bus.
Output that consists entirely of the original, unprocessed signal.
The number of echoes per second produced by the reverberation algorithm.
The number of cycles per unit of time.
A change in the power or amplitude of a signal.
Multiples of the fundamental frequency. For example, if the frequency is 50 Hz, the second harmonic is 100 Hz, the third harmonic is 150 Hz, and so on.
The level difference (in dB) between normal operating level and clipping level in an amplifier or audio device.
A recursive filter that attenuates frequencies lower than the cut-off frequency. The units for this filter represent the percentage of high pass filtering effect that has been applied, where 0 means no high pass filtering (signal unaffected) and 100 means maximal attenuation.
A device developed for use with virtual reality and designed to provide visual and audio input based on the user's head movement.
A technique to design a mix using level values spanning across a very high dynamic range as occurs in nature. HDR is also a run-time system that dynamically maps this wide range of levels to a range that is more suited to your sound system's digital output.
The way in which sound is received through different physical media from a point in space to the ear. Paired HRTFs allow the synthesis of a binaural sound that seems to originate at a particular point in space.
A fractional offset between the pitch of a partial and the pitch of the true harmonic (a whole number multiple of the fundamental frequency).
A lossy audio coding technique that removes the phase alignment information of higher frequency content to merge it into a mono signal, while keeping intensity information to reconstruct the stereo signal.
Time delays inherent in internal processing or generation of audio signals within a computer.
Low Frequency Effect. The name of the audio channel specifically intended for deep, low-pitched sounds ranging from 10 to 120 Hz. The LFE channel is a totally independent channel that must be imported into Wwise as an x.1 media file.
An extreme form of compression where the input/output relationship becomes very flat (10:1 or higher). This places a hard limit on the signal level.
A recursive filter that attenuates frequencies higher than the cut-off frequency. The units for this filter represent the percentage of low pass filtering effect that has been applied, where 0 means no low pass filtering (signal unaffected) and 100 means maximal attenuation.
Modes are the peaks in the frequency domain representation of an audio signal. Increasing the modal density improves the realism of the reverberation when simulating most acoustic spaces. Decreased modal density can cause ringing sounds.
A technique that helps to minimize the audible artifacts caused by the bit-depth reduction of a digital signal.
The highest audio frequency that can be accurately sampled, equivalent to one-half of the sampling rate. The Nyquist sampling theorem states that the sampling rate must be at least twice the highest frequency present in the signal in order to accurately reconstruct the original signal.
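Equivalently:

$$f_{\mathrm{Nyquist}} = \frac{f_{s}}{2}$$

For example, at a 48 kHz sample rate, frequencies up to 24 kHz can be represented; content above that must be filtered out before sampling to avoid aliasing.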
A low-latency audio codec, optimized for both voice and general-purpose audio, which outperforms other codecs for compression without compromising on sound quality. Consult Conversion tips and best practices for details.
The final section of a sound, as opposed to the intro.
The band of frequencies that passes through a filter with essentially no attenuation.
A method for encoding audio files where distinct binary representations or pulse codes are chosen. These are quantized by measuring values between two encoded points, selecting the value associated with the nearest point.
A PCM frame is composed of samples for all channels at a given time. Each frame represents 1 / (sample rate) seconds of audio.
Often an effect of mixing two very similar audio signals. Phased sounds are often described as hollow, metallic and/or synthesized.
The playback speed of the object.
A method of modeling a speech signal that can be used as a lossy audio compression technique, based on the predictable nature of speech signals.
The process whereby the range of values of an audio file is divided into subranges, each of which is represented by an assigned value.
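As an illustration (a minimal sketch, not Wwise code; the function name is hypothetical), uniform quantization maps every floating-point sample falling within a given subrange of [-1.0, 1.0] to a single 16-bit PCM integer code:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Uniformly quantize a sample in [-1.0, 1.0] to a 16-bit PCM code.
    // Every value falling within a given subrange maps to the same integer.
    int16_t QuantizeTo16Bit(float sample)
    {
        // Clamp to the representable range, then scale and round to the
        // nearest quantization step.
        const float clamped = std::clamp(sample, -1.0f, 1.0f);
        return static_cast<int16_t>(std::lround(clamped * 32767.0f));
    }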
A filter that uses previously-calculated output values in addition to the current input values to calculate the latest output. Also known as feedback filters.
The difference between the maximum and minimum attenuation within a passband.
Root Mean Square. This is a measure of a signal's average amplitude that, in most cases, provides a better approximation of a signal's power than peak amplitude. This value is obtained by averaging the squares of the samples over a given time window and taking the square root of the result.
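For a window of N samples x1, ..., xN:

$$x_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} x_{n}^{2}}$$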
Monitoring the level of an audio signal and using the audio information to manipulate another audio signal in real-time. This technique is used to give more emphasis to the important sounds in the final mix by automatically controlling the volume of the less important sounds.
The management of changes to files (such as the Wwise Project, Work Units, and Audio files) where each revision of the file is tagged with a revision identification number and associated with a timestamp and the name of the person who modified it. In most version control systems, revisions can be compared, restored, and merged.
A channel configuration made up of standard, discrete channels, such as stereo, 4.0, 5.1, 7.1, and 7.1.4. Ambisonics is not a standard configuration.
A special kind of plain text file where the information is arranged into columns separated by tabs.
The point from which a signal is extracted from a Delay Line.
A common algorithm used for accurate 3D positioning of one or more virtual sources emanating from different directions using a setup of multiple loudspeakers.
The amplitude or level of intensity of the audio output.
A perceptual encoding method that permits encoding of audio files at various bit rates while maintaining a very good perceived sound quality. The balance between data compression efficiency and perceived sound quality is controlled using the Quality Factor setting or by specifying the maximum, minimum, and average bit rates per channel.
Audiokinetic's special implementation of Vorbis is highly optimized for all platforms.
A structure that defines the format of waveform-audio data for formats having more than two channels. To preserve the specific multi-channel configuration of a WAV file on input, you must define the channel information in the WAV file header as part of the channel mask of the WAVEFORMATEXTENSIBLE structure.
A voice signal path which branches off from the base dry path via an Auxiliary Bus, wherein it typically has Effects (such as Reverb) and other changes applied to it before it continues its path to a bus.
Output that consists entirely of processed sound.
An application used to create the virtual environment of your game.
Properties, such as positioning and playback priority, that are usually defined at the top-level parent object and automatically passed down to each of the parent's child objects. You can override the top-level parent properties by defining these properties at a different level within the hierarchy.
A hierarchical structure of one or more sounds, motion objects, containers, and/or Actor-Mixers. You can use an Actor-Mixer to control properties for all objects below it.
A Wwise property that can be changed multiple times by different settings, such as RTPCs and States. The end value is a combination of all the property offsets applied.
The Audio Input is a sample source plug-in that allows audio content generated by the game to be sent through the Wwise pipeline and processed by the sound engine.
An audio buffer accompanied by Metadata that, if all conditions are met, could be passed to an endpoint to be rendered with spatialized effects.
The decrease in volume of a sound, music, or motion object, as it moves away from the source emitter.
Attenuation settings related to the volume of a sound based on its distance from the listener. These settings have been saved as a ShareSet and can be shared between objects.
An optional bus that can be grouped (with other Audio and Auxiliary Busses) under a master Audio Bus to help in the organization and delivery of your sound mix. These busses can be renamed, moved, and deleted; and you can also apply Effects to them.
A separate abstraction layer between the audio file and the sound object. The audio source remains linked to the audio file that you imported into your project so that you can reference it at any time.
Any object within the Master-Mixer, Actor-Mixer, or Interactive Music Hierarchies in the Audio tab of the Project Explorer.
The action of lowering the volume level of one audio signal in order for another simultaneous audio signal to have more prominence.
A special type of Audio Bus that is generally used to apply Effects such as Reverbs and Delays for simulating environmental Effects or to allow dynamic mixing (side-chaining).
An audio signal routing technique used to send an audio signal to an Auxiliary Bus. An Auxiliary Send can be controlled either per sound object, or per game object when using the SDK API.
A group of one or more objects and/or containers that are played back simultaneously. The objects within this container can be grouped into blend tracks where properties are mapped to Game Parameter values using RTPCs. Crossfades can also be applied between the objects within a blend track based on the value of a Game Parameter.
A project folder that contains all the converted files for the platforms for which you are developing. By default, this folder is stored locally, although you can modify its location. Multiple users should not access the cache folder simultaneously.
An object within a hierarchical structure that lies within a higher level or parent object.
A Music object that represents an audio source. Clips are arranged in Music Tracks.
An audio Effect plug-in that reduces the dynamic range of a signal by weakening any part of the input signal that is above a pre-defined threshold value.
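As an illustration (a minimal sketch, not the plug-in's actual implementation; the function name is hypothetical), the static input/output curve of a downward compressor can be expressed in decibels as follows:

    // Static gain curve of a downward compressor (illustrative only).
    // Levels are in dB; 'ratio' is the compression ratio, e.g. 4.0 for 4:1.
    float CompressLevelDb(float inputDb, float thresholdDb, float ratio)
    {
        if (inputDb <= thresholdDb)
            return inputDb; // Below the threshold, the signal is untouched.

        // Above the threshold, the excess level is divided by the ratio,
        // weakening the loud portion of the signal.
        return thresholdDb + (inputDb - thresholdDb) / ratio;
    }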
A group of one or more objects, including sounds, motion objects, and/or containers that are played according to a certain defined behavior. In Wwise, there are several different types of containers, including Random Container, Sequence Container, Switch Container, Blend Container, Music Switch Container, and Music Playlist Container.
A saved project element that defines how Wwise interacts with external control surface devices using the MIDI or Mackie protocol. When bindings are defined for a compatible connected device, the Control Surface Session can be opened to control Wwise features directly through the device.
A group of audio file parameters - which includes sample rate, audio format, and number of channels - that defines the overall quality, memory, and CPU usage of the audio files for each platform.
An audio Effect that uses IRs (Impulse Responses) to simulate the acoustics of real spaces, such as concert halls, buildings, streets, vehicle interiors, rooms, fields, forests, and others.
A marker appended to Music Segments to indicate a key point, such as its entry or exit point.
A user-created marker appended to Music Segments to indicate a key sync point. The Entry and Exit cues are not considered custom cues.
An option to display the graph view Y axis in logarithmic scaling when representing properties that are measured in decibels.
XML files that contain all the information in your project related to the specific element for which they were created. For example, the Default Work Unit.wwu file for Events contains all the information related to Events, the Default Work Unit.wwu file for States contains all the information related to States, and so on. Default Work Units are created when the project is created.
Text files that list all the Events in a game, classified by SoundBank.
An audio Effect plug-in that adds echoes by delaying an audio signal for a specified period of time.
A method to trigger audio in game using a set of rules or conditions expressed in arguments and argument values that match the possible conditions in the game. These argument values are arranged into argument paths and then assigned to an object in Wwise. When a Dialogue Event is called by the game, the game matches its current situation with those defined in the Dialogue Event and the appropriate piece of dialogue is played.
For Wwise, diffraction is a built-in game parameter, computed by Spatial Audio, providing the angle from the Shadow Boundary, where the approximation of the bending of sound waves around corners and other obstacles begins.
A series of Effects that have been applied to an object or bus in a specific order.
A customized set of Effect properties that can be saved and applied to other objects or Busses. Effect instance properties can also be shared across objects.
Audio effect settings that can be used to enhance the audio in your game. These settings have been saved as a ShareSet and can be shared between objects.
An Event that doesn't contain any actions or objects.
An Effect that alters the property set of the sounds generated by a game object depending on the position of that object in a game's geometry.
A method to trigger audio in game using an Action or series of Actions, such as Play, Mute, and Pause, that have been applied to one or more Wwise objects.
The Expander plug-in effect expands the dynamic range of a signal by weakening any part of the input signal that is below a pre-defined threshold value. When the signal is soft and below the threshold, the Expander begins to reduce the signal's gain. When the signal is at or louder than the threshold value, no gain reduction is applied to the signal.
A source plug-in that associates a sound object with an audio file at runtime. It allows the management of large numbers of dialogue lines that could otherwise require a great deal of overhead. It helps to save some runtime memory and simplifies the replacement of audio files that is sometimes required when generating DLC content for a game.
A dialog that displays information about project files and original imported source files, as well as manages many source control plug-in functions, where applicable.
An audio effect that mixes two identical signals together, where one of the signals is time-delayed by a small and gradually changing amount producing a swept comb filter effect.
A view that is docked into a layout.
The percentage value used to condense the virtual emitters generated by the spread value. At a focus of 0%, the virtual emitters remain unchanged; at higher values, each virtual point is moved closer to the source channel's origin.
High level structure in the Actor-Mixer Hierarchy, used to manage other structures of actor-mixers, containers, and so on.
Entity in a game to which elements such as an interface, a Trigger, or a Sound can be attached.
A parameter from the game, such as speed and RPMs in a car racing game for example, that can be mapped to Wwise property values using RTPCs.
A group of Wwise elements that includes States, Switches, RTPCs, Triggers, and Arguments which are called by conditions in the game and modify the audio and motion accordingly.
The basic unit of length used in calculating game geometry. For example, a stealth FPS might use meters as a game unit, while a space conquest game might use light-years.
An audio effect that alters the shape of the audio waveform and introduces frequency components not present in the original signal. The Guitar Distortion effect mimics the behavior of commonly used distortion 'stomp boxes' to obtain typical guitar distortion sounds.
An audio effect that adds one or two pitched voices to the incoming signal.
The hidden project .cache folder containing copies of audio files imported into the project that have undergone a special import conversion process.
The mirrored position of an emitter, normal to a room wall, used in Reflect calculations according to the image source technique.
A specific icon in the Wwise interface that indicates the status of a particular property value. For example, the RTPC indicator shows whether a property value has an associated RTPC.
A special type of bank that contains all the general information of a project, including information on the bus hierarchy, and information on states, switches, and RTPCs. There is only one Initialization bank per project and it is named “Init.bnk” by default. The Initialization bank must be the first bank loaded when starting a game. If it is not loaded first, subsequent SoundBanks may fail to load.
The complete range of values that can be entered for a property - as opposed to Slider Range.
Report generated in Wwise that displays errors or issues in a project along with suggested fixes.
A music composition and arrangement method to create musical scores that are both modular and responsive to in-game actions.
Events that have been deleted from your project, but are still included in a SoundBank.
An audio file resulting from the measurement of real acoustic characteristics of a location, such as a concert hall. Impulse Responses are used in the AK Convolution Reverb Effect to enable the acoustic characteristics of a particular location to be applied to the incoming signal.
A series of views grouped together to facilitate the work involved for a particular task or job.
A virtual microphone or motion sensor in the game that helps assign the sounds to particular speakers or motion to particular motors to simulate a 3D environment.
In streaming, this refers to the time reserved for the sound engine to seek the streaming data.
A bus or series of busses at the top of your project hierarchy that allow you to group many different sound, music, and motion structures according to the main categories within a game. For example, you can group all the voice or music structures under one Audio Bus, all the sound effects under another Audio Bus, and so on.
A master Audio Bus is found at the top level of nested work units and virtual folders, as long as there's no other audio bus above it in the hierarchy. The final audio output, through a potentially extensive network of sub-busses, is ultimately decided by settings specified in the master Audio Busses and Effects applied to them.
The term "master secondary bus" refers to any bus found at the top level of nested work units and virtual folders, where the final output is a secondary one, such as a game controller, as long as there's no other audio bus above it in the hierarchy.
A unique reverb effect optimized for game production that balances quality with performance and includes real-time editing and RTPC mapping functionalities.
A collection of properties associated with an Audio Object and intended for use by an endpoint or Object Processor to create spatialized effects. Typical examples of Metadata include 3D position, azimuth, elevation, focus, and spread.
An audio effect that measures the level of a signal without modifying it, and optionally outputs this level as a Game Parameter. It is most useful for achieving side-chaining, where the measured level of a bus drives the volume of another bus through RTPC.
A series of meters that display the level of the audio signal for each channel. While Audio and Auxiliary Busses show the output signal, dynamic Effects (such as compressors and limiters) generally display the audio input, audio output, and the applied gain reduction.
A flexible and powerful mixing console that groups a variety of bus and object properties into a single view, used to fine-tune the audio mix of your game in real-time.
A set of Wwise objects of your choice used within the Mixing Desk that can be saved and re-used at any time.
A technique to adjust Waveforms in MIDI or normal playback, which uses a predefined ADSR shape (the envelope).
A technique to adjust Waveforms - or their properties - over time in MIDI or normal playback, which uses an LFO (Low Frequency Oscillator).
A representation in Wwise of the individual motion assets that you have created for your project. Generally, these objects control the rumble feature of the console's game controllers. Each motion object can contain one or more sources, which defines the actual motion that will be generated in-game.
The basic component of a Music Track displayed as a rectangular area representing a single WAV file.
A group of one or more Music objects and/or Containers that are played back in a random or sequential order.
A multi-track Music object that is the basic unit of the Interactive Music Hierarchy.
A group of one or more Music objects and/or Containers that are played back according to the Switch or State that is called.
A Music object that contains arrangements of individual music clips that are displayed in waveform so that you can visually align them in a Music Segment.
In Wwise, there are several different types of Music Tracks, including Random Music Track, Sequence Music Track, and Switch Music Track.
An object that lies within another object.
A Work Unit nested inside another Work Unit. It allows for a finer granularity in the project files, reducing potential conflicts in a team environment when merging files under source control.
An effect created in the Expander effect plug-in that removes sounds from the output signal almost completely. A noise gate is created by setting a high expander ratio (over 10:1) that closes the gate for sounds whose gain has been reduced to this extent.
Elements in Wwise, such as the sounds, motion, Actor-Mixers, and containers that are used to contain, group, and define sounds and voices, motion, and music within the project hierarchy.
A condition that occurs when an object in the game, such as a pillar, partially blocks the space between a sound object and a listener.
A condition that occurs when an object in the game, such as a wall, completely blocks the space between a sound object and a listener.
The folder containing untouched copies of the audio files imported into the project. This folder is usually stored under source control.
Audio files no longer associated with a sound, motion, or music object. These files are not automatically deleted when you delete a sound object. To delete these files, clear the audio .cache folder.
Latency introduced during audio playback determined by the number of output buffers used by Wwise.
An audio effect plug-in that allows you to apply a variety of filters to shape the spectrum of your sound.
An object within a hierarchical structure that contains child objects.
An audio effect that controls the dynamic range of audio signals. It does this by weakening parts of the audio signal that briefly exceed a pre-defined threshold value as calculated with peak-based detection.
A directory on your hard disk, under your Wwise project root, that can contain other physical folders or Work Units used in your project. Physical folders cannot be child objects for containers, motion, or sounds.
The physical environment of the game where audio and motion are played and processed by the sound engine. When volume levels become very low, sound and motion objects can move into a virtual environment where they are managed and monitored by the sound engine, but no audio processing is performed.
An audio effect that changes the pitch without affecting the duration of the resulting audio signal.
A source that emits sound or motion as if from a single point.
The segment area after the exit cue that can be used for transitions in interactive music.
The segment area before the entry cue that can be used for transitions in interactive music.
In streaming, this refers to a small buffer that covers the latency time required to fetch the rest of the file data.
A customized set of properties for objects, effects, and sound propagation that can be saved and re-used at any time.
A specific set of search criteria used to find a particular object or project element.
A group of one or more sounds, motion objects, and/or containers that are played back in a random order.
A special effect in Wwise, applied to a property value, that allows you to define a range of possible values to be used randomly each time an object is played.
A Music Track that plays back its sub-tracks in random order each time its parent Music Segment is played.
Properties, such as volume and pitch, that can be defined at each level within your project hierarchy. These property values are cumulative, which means that a parent's property values are added to those of the child.
An audio effect plug-in that simulates the acoustics of a particular room or space.
A Wwise Spatial Audio concept describing how sound, shaped by the acoustical properties of the room in which it is emitted, propagates as a diffuse field to an adjoined room through Portals (openings) and walls (accounting for different levels of obstruction and occlusion).
A versatile, high-quality reverb effect plug-in that simulates the acoustics of a particular room or space.
Real Time Parameter Controls. An interactive method used to drive the audio in game by mapping Game Parameter values to properties in Wwise. RTPCs can also be used to drive Switch changes by mapping Game Parameters to Switch Groups.
The number of samples taken per second when an audio signal is converted to a digital signal, or when a digital signal is converted to another digital format.
The level or amplitude of the audio signal sent to an Auxiliary Bus.
A group of one or more sounds, motion objects, and/or containers that are played back according to a specific playlist.
A Music Track that plays back its sub-tracks in sequential order each time its parent Music Segment is played.
The line separating the View Region, an area where sound waves pass unaltered, from the Shadow Region, an area where sound waves are altered by diffraction.
A set of properties that can be shared between objects to define attributes such as Effects or Attenuation.
When an object is added in a Wwise project, an entry in the related Work Unit file is created and given a 'Globally Unique Identifier' (or GUID), which is a 128-bit number. ShortIDs are either the FNV hash of the object's GUID or the FNV hash of the object's name, resulting in a 32-bit identifier. The creation of the ShortID depends on the type of object:
For Events, Game Syncs, and SoundBanks, the hash is implicit and is not declared in the XML Work Unit file. It is the 32-bit FNV hash of the name of the object in the Wwise project.
For all other objects (WorkUnit, Sound, Container, Bus, etc.), the ShortID is a 30-bit FNV hash of the GUID bytes.
You can find our C++ implementation of the FNV hash algorithm in the SDK: \SDK\include\AK\Tools\Common\AkFNVHash.h
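For illustration, here is a minimal sketch of the 32-bit FNV-1 algorithm described above (function names are hypothetical; the authoritative implementation is the AkFNVHash.h header referenced above, including the lowercasing convention assumed here):

    #include <cctype>
    #include <cstddef>
    #include <cstdint>
    #include <string>

    // 32-bit FNV-1 hash: multiply by the FNV prime, then XOR in each byte.
    uint32_t FNVHash32(const void* data, size_t numBytes)
    {
        const uint8_t* bytes = static_cast<const uint8_t*>(data);
        uint32_t hash = 2166136261u;   // FNV offset basis
        for (size_t i = 0; i < numBytes; ++i)
        {
            hash *= 16777619u;         // FNV prime
            hash ^= bytes[i];
        }
        return hash;
    }

    // ShortID of an Event, Game Sync, or SoundBank. Name hashing in Wwise
    // is case-insensitive, so the name is lowercased before hashing.
    uint32_t ShortIDFromName(std::string name)
    {
        for (char& c : name)
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        return FNVHash32(name.data(), name.size());
    }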
A plug-in source of a specified duration that generates no sound or motion.
The default range of values that can be entered for a property using the slider - as opposed to the range that can be manually entered in the field, which changes the Slider Range.
A group of Event data, Sound, Music, and Motion structure data, and/or media files that will be loaded into the game's platform memory at a particular point in a game.
The view in Wwise that provides transport controls, auditioning, and real-time mixing in game or in Wwise, for objects or Events that can be inserted and removed as needed.
A saved project element that contains the Wwise objects and Events used in a specific simulation created in the Soundcaster.
Audiokinetic technologies that provide a framework for communicating with Wwise. WAAPI is the latest SoundFrame technology. See Using the Wwise Authoring API (WAAPI) for details.
An Audio Object that is sent to the system's 3D Audio endpoint.
Audio Objects are promoted to System Audio Objects when they reach the Audio Device at the end of the bus pipeline. An Audio Object can only be promoted if it meets all these requirements:
It has 3D Spatialization.
Its Speaker Panning / 3D Spatialization Mix is set to 100%.
It has a standard channel configuration that does not have any height channels.
A programming interface that enables external applications to seamlessly communicate with Wwise.
A representation in Wwise of the individual audio assets that you have created for your project. Each Sound object contains one or more sources, which defines the actual audio content that will be played in-game. Note that a capitalized "Sound" used in the Wwise documentation is referring to a Sound object, which could be either a Sound SFX or a Sound Voice.
A Sound object within the Actor-Mixer Hierarchy that contains sound effects, music, and ambiences.
A sound object that contains dialogue or game voice over.
An audio or motion source that is created by a plug-in coming from outside of Wwise.
A function in Wwise that determines the actual location or position of the sound or music object within the 3D environment of the game.
The amount or percentage of audio that is spread to neighboring speakers, allowing sounds to change over distance from a point source at low values to completely diffuse propagation at high values. For multi-channel sounds, each channel is spread separately.
A global offset or adjustment to the game audio properties that relates to changes in the physical and environmental conditions in the game.
A collection of related States that have been grouped together to help manage the global changes that occur within the game environment.
An audio effect that provides a dual channel delay with a built-in filter. It has feedback and crossfeed controls to send delayed signals from one channel to another to create stereo effects.
A brief musical phrase that is superimposed and mixed over the currently playing music.
A Switch represents the alternatives that exist for a particular element in game, and is used to help manage the corresponding objects for these alternatives. For example, if a character is running on a concrete surface and then moves onto grass, the sound of the footsteps in a Switch Container should change to match the change of surface.
A series of Switches or States, each of which contains a group of sounds, motion objects, or containers, that correspond to particular changes in the environment of the game. For example, a Switch Container for a character's footsteps might contain Switches for grass, concrete, wood and any other surface that a character can walk on in the game.
A collection of related Switches that have been grouped together to help manage the different alternatives that exist for a given element within the game.
A Music Track that plays back its sub-tracks according to the associated Switch Group.
An audio effect that changes the duration without affecting the pitch of the resulting audio signal.
In Interactive Music, a transition is meant to be a smooth bridge between a source and destination Music Segment.
Time period used to transition from one state to another within the same State Group. During the transition, an interpolation of the two state properties occurs.
A Wwise Spatial Audio concept covering the proportion of energy that passes through an obstacle.
An audio effect that modulates the amplitude of the input signal with a unipolar carrier signal.
A Game Sync that responds to a spontaneous occurrence in the game and launches a Stinger.
An organizational object, displayed as a folder and contained within a Work Unit or one of its child objects, in which you can place other objects, such as Virtual Folders, Actor-Mixers, containers, motion objects, and sounds. Virtual folders cannot be child objects for Containers, Motion, or Sounds and do not have a corresponding directory on your hard disk.
A virtual environment where sounds and motion are managed and monitored by the sound engine, but no processing is performed. Objects move into the virtual voice when their volume levels fall below the volume threshold.
A separate or discrete playback instance, either audio or motion.
A type of error message that is displayed in the Capture Log when the sound engine cannot provide audio data to the platform hardware buffer in a timely manner. This type of problem occurs when there is excessive use of the host CPU, for example, when the platform CPU is trying to mix too many sources or use too many audio effects simultaneously.
A certain volume level below which the behavior of sound, music, and motion objects can be specifically determined. For example, voices that fall below the volume threshold can either continue to play, be killed, or sent to the virtual voice list. These behaviors are defined in the Advanced category of the object's Property Editor.
A distinct XML file that contains information related to a particular section or element within your project.