In previous recipes, we acknowledged that JavaScript timers do not have the fidelity required for scripting audio. Web Audio circumvents this limitation through automation support. Automation allows applications to schedule predefined audio behaviors ahead of time, making audio events independent of code execution timing.
Most AudioNode member attributes are AudioParam instances, which support automation. Using this automation support, it's easy to implement sophisticated audio effects such as ducking, where the sound level of an audio signal is reduced in the presence of another audio signal. In this recipe, we'll use automation to duck the music whenever the applause sound effect plays.
The complete source code for this recipe is available in the code bundle at recipes/Recipe5_2.
Start with a clean copy of the base framework template Version 2. The template bundle is located at tools/RecipeFrameworkV2 in the code bundle. Open index.html with a text editor. We'll start by declaring our application controls in the HTML section:
<div id="appwindow">
    <h2>Automating Audio Parameters</h2>
    <form>
        <div>
            <h3>Music</h3>
            <input type="checkbox" id="piano" />
            <label for="piano">Piano Loop</label>
            <span>VOLUME</span>
            <span id="pianovol" style="display: inline-block; width: 300px;"></span>
        </div>
        <div>
            <h3>Sound Effects</h3>
            <a id="applause" href="javascript:void(0);">Applause</a>
        </div>
    </form>
</div>
To implement the ducking support, we'll need to modify the AudioLayer class. In the AudioLayer constructor, we'll instantiate another GainNode instance to act as the ducker volume control:

function AudioLayer( audioContext ) {
    this.audioContext = audioContext;

    // Create the ducker GainNode
    this.duckNode = audioContext.createGain();

    // Create the volume GainNode
    this.volNode = audioContext.createGain();

    // Expose the gain control
    this.gain = this.volNode.gain;

    // Connect the volume control to the ducker
    this.volNode.connect( this.duckNode );

    // Connect the ducker to the speakers
    this.duckNode.connect( this.audioContext.destination );
}
We'll add a new function, setDuck(), to AudioLayer to activate the ducking behavior:

AudioLayer.prototype.setDuck = function( duration ) {
    var TRANSITIONIN_SECS  = 1;
    var TRANSITIONOUT_SECS = 2;
    var DUCK_VOLUME        = 0.3;

    var duckGain  = this.duckNode.gain;
    var eventSecs = this.audioContext.currentTime;

    // Cancel all future events
    duckGain.cancelScheduledValues( eventSecs );

    // Schedule the volume ramp down
    duckGain.linearRampToValueAtTime(
        DUCK_VOLUME,
        eventSecs + TRANSITIONIN_SECS );

    // Add a set value event to mark ramp up start
    duckGain.setValueAtTime( DUCK_VOLUME, eventSecs + duration );

    // Schedule the volume ramp up
    duckGain.linearRampToValueAtTime(
        1,
        eventSecs + duration + TRANSITIONOUT_SECS );
};
Next, we'll add the function WebAudioApp.initMusic() for initializing and controlling the music playback:

WebAudioApp.prototype.initMusic = function( elemId, audioSrc, elemVolId ) {
    // Initialize the button and disable it by default
    var jqButton = $( elemId ).button({ disabled: true });

    // Load the audio
    var audioBuffer;
    this.loadAudio( audioSrc, function( audioBufferIn ) {
        // Cache the audio buffer
        audioBuffer = audioBufferIn;

        // Enable the button once the audio is ready to go
        jqButton.button( "option", "disabled", false );
    }, this );

    var musicLayer = this.musicLayer;

    // Register a click event listener to trigger playback
    var activeNode;
    jqButton.click(function( event ) {
        // Stop the active source node
        if( activeNode != null ) {
            activeNode.stop( 0 );
            activeNode = null;
            consoleout( "Stopped music loop '" + audioSrc + "'" );
        }

        // Start a new sound on button activation
        if( $(this).is(':checked') ) {
            // Start the loop playback
            activeNode = musicLayer.playAudioBuffer( audioBuffer, 0, true );
            consoleout( "Played music loop '" + audioSrc + "'" );
        }
    });

    // Create the volume control
    $( elemVolId ).slider({
        min:   musicLayer.gain.minValue,
        max:   musicLayer.gain.maxValue,
        step:  0.01,
        value: musicLayer.gain.value,
        // Add a callback function when the user moves the slider
        slide: function( event, ui ) {
            // Set the volume directly
            musicLayer.gain.value = ui.value;
            consoleout( "Adjusted music volume: " + ui.value );
        }
    });
};
We'll add the function WebAudioApp.initSfx() for initializing and controlling the sound effects playback. The sound effects controls use the AudioLayer ducking functionality to duck the music every time a sound effect is active:

WebAudioApp.prototype.initSfx = function( elemId, audioSrc ) {
    // Initialize the button and disable it by default
    var jqButton = $( elemId ).button({ disabled: true });

    // Load the audio
    var audioBuffer;
    this.loadAudio( audioSrc, function( audioBufferIn ) {
        // Cache the audio buffer
        audioBuffer = audioBufferIn;

        // Enable the button once the audio is ready to go
        jqButton.button( "option", "disabled", false );
    }, this );

    // Register a click event listener to trigger playback
    var me = this;
    jqButton.click(function( event ) {
        me.sfxLayer.playAudioBuffer( audioBuffer, 0 );

        // Duck the music layer for the duration of the sound effect
        me.musicLayer.setDuck( audioBuffer.duration );
        consoleout( "Ducking music for SFX '" + audioSrc + "'" );
    });
};
In WebAudioApp.start(), we initialize Web Audio, the audio layers, and the application controls:

WebAudioApp.prototype.start = function() {
    if( !this.initWebAudio() ) {
        consoleout( "Browser does not support WebAudio" );
        return;
    }

    // Create the audio layers
    this.musicLayer = new AudioLayer( this.audioContext );
    this.sfxLayer   = new AudioLayer( this.audioContext );

    // Set up the UI
    this.initMusic(
        "#piano",
        "assets/looperman-morpheusd-dreamworld-fullpiano-120-bpm.wav",
        "#pianovol" );
    this.initSfx( "#applause", "assets/applause.mp3" );
};
Launch the application test URL in a web browser (http://localhost/myaudiomixer) to see the end result. The following is a screenshot of what we should see in the browser:

As previously mentioned, the AudioParam interface has automation support, which allows applications to build sophisticated automated behaviors. Let's take a look at the AudioParam automation methods:
The setValueAtTime() method sets the audio parameter value to value at the time startTime:

function setValueAtTime( value:Number, startTime:Number );
The following is a diagram illustrating its behavior:
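For instance, here is a minimal sketch (my own illustration, assuming an existing AudioContext instance named audioContext and a GainNode instance named gainNode) that schedules a hard mute two seconds from now:

var now = audioContext.currentTime;

// Pin the current gain value at the current time...
gainNode.gain.setValueAtTime( gainNode.gain.value, now );

// ...then snap the gain to silence two seconds later
gainNode.gain.setValueAtTime( 0, now + 2 );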
The linearRampToValueAtTime() method linearly ramps the audio parameter value from the previously set value to the given value, value, at the time endTime:

function linearRampToValueAtTime( value:Number, endTime:Number );
The following diagrams illustrate the behavior when ramping up or down to the target value, respectively:
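As a sketch (using the same hypothetical audioContext and gainNode as above), a three-second linear fade-out could look like this:

var now = audioContext.currentTime;

// Anchor the ramp's starting value at the current time
gainNode.gain.setValueAtTime( 1, now );

// Ramp linearly down to silence over the next 3 seconds
gainNode.gain.linearRampToValueAtTime( 0, now + 3 );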
The exponentialRampToValueAtTime() method exponentially ramps the audio parameter value from the previously set value to the given value, value, at the time endTime:

function exponentialRampToValueAtTime( value:Number, endTime:Number );
The following diagrams illustrate its behavior:
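Note that exponential ramps cannot start from or end at a value of 0 (the values involved must be nonzero and of the same sign), so an exponential fade-out targets a small positive value instead. A sketch, under the same audioContext and gainNode assumptions as before:

var now = audioContext.currentTime;

// Anchor the ramp's starting value at the current time
gainNode.gain.setValueAtTime( 1, now );

// Exponential ramps cannot reach 0 exactly, so we ramp to a
// near-silent value over 3 seconds
gainNode.gain.exponentialRampToValueAtTime( 0.001, now + 3 );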
The setTargetAtTime() method ramps the audio parameter so that it exponentially approaches the target value, target, starting at the time startTime. The timeConstant parameter controls the rate at which the value approaches the target:

function setTargetAtTime( target:Number, startTime:Number, timeConstant:Number );
The following diagrams illustrate its behavior:
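A sketch, under the same assumptions as the previous examples. The parameter covers roughly 63% of the remaining distance to the target during each timeConstant interval, so smaller constants produce faster approaches:

var now = audioContext.currentTime;

// Decay toward silence starting now; with a 0.5-second time
// constant, the gain is within about 1% of the target after
// roughly five time constants (2.5 seconds)
gainNode.gain.setTargetAtTime( 0, now, 0.5 );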
The setValueCurveAtTime() method applies an array of arbitrary values to the audio parameter. The array values are distributed evenly throughout the automation duration, and the applied value is calculated using linear interpolation:

function setValueCurveAtTime( values:Array.<Number>, startTime:Number, duration:Number );
The following diagram illustrates its behavior:
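A sketch, under the same assumptions as above (note that, in practice, implementations expect the values to be supplied in a Float32Array):

// Build a simple swell-and-fade curve at runtime
var curve = new Float32Array([ 0, 0.5, 1, 0.5, 0 ]);

var now = audioContext.currentTime;

// Distribute the curve's 5 values evenly across 2 seconds
gainNode.gain.setValueCurveAtTime( curve, now, 2 );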
The cancelScheduledValues() method cancels all the scheduled parameter changes starting at the time startTime or later:

function cancelScheduledValues( startTime:Number );
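A sketch, under the same assumptions as above:

var now = audioContext.currentTime;

// Discard all automation scheduled from this moment onward;
// the parameter retains the value it held at cancellation time
gainNode.gain.cancelScheduledValues( now );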
Like the playback automation methods we discussed in the previous recipe, all time parameters are in seconds and are relative to the audio context's time coordinate system.
Note
Wondering how to specify the start time for automation methods such as linearRampToValueAtTime() and exponentialRampToValueAtTime()? When an automation method does not have a start time parameter, its behavior starts at the nearest previous automation event or the audio context's current time, whichever is later.
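This is why the sketches above pin down a ramp's starting point with an explicit setValueAtTime() event:

var now = audioContext.currentTime;

// Without this anchor event, the ramp below would begin at the
// last scheduled event, which may lie far in the past
gainNode.gain.setValueAtTime( gainNode.gain.value, now );
gainNode.gain.linearRampToValueAtTime( 0.5, now + 1 );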
There are several key rules regarding the scheduling of automation events:
If an event is added at a time when there is already an event of the exact same type, the new event replaces the old one.
If an event is added at a time when there is already an event of a different type, it is scheduled to occur immediately after it.
Events may not overlap. Some events occur over time, such as the linearRampToValueAtTime() automation behavior; no other events may be scheduled during the period when such an event is active, otherwise Web Audio will throw a runtime exception.
We leverage the AudioParam automation support to implement ducking. The following is an overview of the ducking logic implemented in the AudioLayer class:
We add a GainNode instance into the node graph as the duck controller.
When a sound effect is played, we script the duck controller's gain audio parameter to reduce the audio output gain level for the duration of the sound effect.
If ducking is reactivated while it is still active, we revise the scheduled ducking events so that they end at the appropriate time.
The following is the node graph diagram produced by the code:

Tip
Why use two GainNode instances instead of one?
It's a good idea to split independent scripted audio gain behaviors into separate GainNode instances. Serially connected GainNode instances multiply their gains, so each behavior can be automated on its own AudioParam without its scheduled events conflicting with the other's, ensuring that the scripted behaviors interact properly.
Now, let's take a look at AudioLayer.setDuck(), which implements the ducking behavior:
The AudioLayer.setDuck() method takes a duration (in seconds) indicating how long the duck behavior should be applied:

AudioLayer.prototype.setDuck = function( duration ) {
We cache the duck controller's gain audio parameter in duckGain:

    var TRANSITIONIN_SECS  = 1;
    var TRANSITIONOUT_SECS = 2;
    var DUCK_VOLUME        = 0.3;

    var duckGain = this.duckNode.gain;
We cancel any leftover scheduled duck behaviors, thereby allowing us to start with a clean slate:

    var eventSecs = this.audioContext.currentTime;
    duckGain.cancelScheduledValues( eventSecs );
We employ the linearRampToValueAtTime() automation behavior to schedule the transition in: the audio parameter is scripted to linearly ramp from the existing volume to the duck volume, DUCK_VOLUME, over TRANSITIONIN_SECS seconds. Because there are no future events scheduled, the behavior starts at the current audio context time:

    duckGain.linearRampToValueAtTime(
        DUCK_VOLUME,
        eventSecs + TRANSITIONIN_SECS );
We add an automation event to mark the start of the TRANSITIONOUT section. We do this by scheduling a setValueAtTime() automation behavior:

    duckGain.setValueAtTime( DUCK_VOLUME, eventSecs + duration );
Finally, we set up the TRANSITIONOUT section using a linearRampToValueAtTime() automation behavior. We arrange for the transition to occur over TRANSITIONOUT_SECS seconds by scheduling its end time TRANSITIONOUT_SECS after the previous setValueAtTime() event:

    // Schedule the volume ramp up
    duckGain.linearRampToValueAtTime(
        1,
        eventSecs + duration + TRANSITIONOUT_SECS );
};
The following is a graph illustrating the automation we've applied to duckGain, the duck controller's gain audio parameter:

In order to have the sound effects activation duck the music volume, the sound effects and music have to be played on separate audio layers. That's why this recipe instantiates two AudioLayer instances: one for music playback and the other for sound effect playback.
The dedicated music AudioLayer instance is cached in the WebAudioApp attribute musicLayer, and the dedicated sound effects AudioLayer instance is cached in the WebAudioApp attribute sfxLayer:

WebAudioApp.prototype.start = function() {
    ...
    this.musicLayer = new AudioLayer( this.audioContext );
    this.sfxLayer   = new AudioLayer( this.audioContext );
    ...
};
Whenever a sound effects button is clicked, we play the sound and simultaneously activate the duck behavior on the music layer. This logic is implemented as part of the sound effect's click event handler in WebAudioApp.initSfx():

jqButton.click(function( event ) {
    me.sfxLayer.playAudioBuffer( audioBuffer, 0 );
    me.musicLayer.setDuck( audioBuffer.duration );
We activate ducking on webAudioApp.musicLayer, the music's AudioLayer instance. The ducking duration is set to the sound effect's duration, which we read from its AudioBuffer instance.
The ducking behavior is just one demonstration of the power of automation. The possibilities are endless given the breadth of automation-friendly audio parameters available in Web Audio. Other possible effects that are achievable through automation include fades, tempo matching, and cyclic panning effects.
Please refer to the latest online W3C Web Audio documentation at http://www.w3.org/TR/webaudio/ for a complete list of available audio parameters.
Web Audio allows the output from an AudioNode instance to drive an audio parameter. This is accomplished by connecting an AudioNode instance to an AudioParam instance:

interface AudioNode {
    function connect( destinationNode:AudioParam, outputIndex:Number? );
};
The previous code shows the connect() overload for connecting an AudioNode instance to a target AudioParam instance: destinationNode is the target AudioParam instance, and outputIndex is the AudioNode output to connect to it.
This functionality allows applications to automate audio parameters using controller data from data files: the controller data is loaded into an AudioBuffer instance and injected into the node graph using an AudioBufferSourceNode instance.
The following node graph illustrates this approach for controlling the output volume using controller data from a file:

The automation data can even be generated at runtime using JavaScript. The following node graph employs this method to automate a sound sample's output volume:

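As a starting point, here is a minimal sketch of the runtime-generation approach (my own illustration, not part of the recipe framework; it assumes an existing AudioContext instance named audioContext and a playing AudioBufferSourceNode named sourceNode for the sound sample):

// Generate 1 second of controller data: a 4 Hz tremolo curve
// oscillating between gain values of 0.25 and 0.75
var sampleRate    = audioContext.sampleRate;
var controlBuffer = audioContext.createBuffer( 1, sampleRate, sampleRate );
var data          = controlBuffer.getChannelData( 0 );
for( var i = 0; i < data.length; i++ )
    data[i] = 0.5 + 0.25 * Math.sin( 2 * Math.PI * 4 * i / sampleRate );

// Route the sample through a GainNode whose gain parameter
// we want to automate
var gainNode = audioContext.createGain();
sourceNode.connect( gainNode );
gainNode.connect( audioContext.destination );

// Inject the controller data into the node graph. A signal
// connected to an AudioParam is summed with the parameter's
// base value, so we zero the base value first
gainNode.gain.value = 0;
var controlNode = audioContext.createBufferSource();
controlNode.buffer = controlBuffer;
controlNode.loop   = true;
controlNode.connect( gainNode.gain );
controlNode.start( 0 );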
Unfortunately, the implementation details for accomplishing these effects are beyond the scope of this book. Therefore, I leave the task of producing working examples of these cases to you, the readers.