Instant Audio Processing with Web Audio

By: Chris Khoo

Overview of this book

Web Audio is an emerging industry standard for audio processing on the web. Using the API, developers today can build web games and applications with real-time audio effects that rival their desktop counterparts. Instant Audio Processing with Web Audio is your hands-on guide to the Web Audio API. Through a series of practical, step-by-step exercises, this book guides you from the basics of playing audio all the way to building a 5-band audio equalizer. Along the way, we'll learn how to use Web Audio's scripting functionality to produce real-time audio effects such as audio stitching and audio ducking, and then apply this knowledge to build, step by step, a basic audio layer for use in our web applications and games. With its in-depth coverage of the Web Audio API and practical advice on a range of audio implementation scenarios, Instant Audio Processing with Web Audio How-to is your ultimate guide to Web Audio.
Automating the audio parameters (Intermediate)


In previous recipes, we acknowledged that JavaScript timers lack the fidelity required for scripting audio. Web Audio circumvents this limitation with automation support: applications can schedule predefined audio behaviors ahead of time, so audio events fire independently of code execution timing.

Most AudioNode member attributes are AudioParam instances, which support automation. With this support, it's easy to implement sophisticated audio effects such as ducking, in which the level of one audio signal is reduced by the presence of another. In this recipe, we'll use automation to duck the music whenever the applause sound effect plays.

Getting ready

The complete source code for this recipe is available in the code bundle at recipes/Recipe5_2.

How to do it...

  1. Start with a clean copy of the base framework template Version 2. The template bundle is located at tools/RecipeFrameworkV2 in the code bundle.

  2. Open index.html with a text editor.

  3. We'll start by declaring our application controls in the HTML section:

    <div id="appwindow">
        <h2>Automating Audio Parameters</h2>
        <form>
            <div>
                <h3>Music</h3>
                <input type="checkbox" id="piano" />
                <label for="piano">Piano Loop</label>
                <span>VOLUME</span>
                <span id="pianovol" style="display: inline-block; width: 300px;"></span>
            </div>
            <div>
                <h3>Sound Effects</h3>
                <a id="applause" href="javascript:void(0);">Applause</a>
            </div>
        </form>
    </div>
  4. To implement ducking support, we'll need to modify the AudioLayer class. In the AudioLayer constructor, we instantiate a second GainNode instance to act as the ducker volume control:

    function AudioLayer( audioContext ) {
        this.audioContext = audioContext;
    
        // Create the ducker GainNode
        this.duckNode = audioContext.createGain();
    
        // Create the volume GainNode
        this.volNode = audioContext.createGain();
    
        // Expose the gain control
        this.gain = this.volNode.gain;
    
        // Connect the volume control to the ducker
        this.volNode.connect( this.duckNode );
    
        // Connect the ducker to the speakers
        this.duckNode.connect( this.audioContext.destination );
    }
  5. We'll add a new function setDuck() to AudioLayer to activate the ducking behavior:

    AudioLayer.prototype.setDuck = function( duration ) {
        var TRANSITIONIN_SECS   = 1;
        var TRANSITIONOUT_SECS  = 2;
        var DUCK_VOLUME         = 0.3;
    
        var duckGain  = this.duckNode.gain;
        var eventSecs = this.audioContext.currentTime;
    
        // Cancel all future events
        duckGain.cancelScheduledValues( eventSecs );
    
        // Schedule the volume ramp down
        duckGain.linearRampToValueAtTime( 
            DUCK_VOLUME, 
            eventSecs + TRANSITIONIN_SECS );
    
        // Add a set value event to mark ramp up start
        duckGain.setValueAtTime( 
            DUCK_VOLUME, 
            eventSecs + duration );
    
        // Schedule the volume ramp up
        duckGain.linearRampToValueAtTime( 
            1, 
            eventSecs + duration + TRANSITIONOUT_SECS );
    };
  6. Next, we'll add the function WebAudioApp.initMusic() for initializing and controlling music playback:

    WebAudioApp.prototype.initMusic = function( elemId, 
                                                audioSrc, 
                                                elemVolId ) {
        // Initialize the button and disable it by default
        var jqButton = $( elemId ).button({ disabled: true });
        // Load the audio
        var audioBuffer;
        this.loadAudio( audioSrc, function( audioBufferIn ) {
            // Cache the audio buffer
            audioBuffer = audioBufferIn;
    
            // Enable the button once the audio is ready to go
            jqButton.button( "option", "disabled", false );
        }, this );
    
        var musicLayer = this.musicLayer;
    
        // Register a click event listener to trigger playback
        var activeNode;
        jqButton.click(function( event ) {
    
            // Stop the active source node
            if( activeNode != null ) {
                activeNode.stop( 0 );
                activeNode = null;
    
                consoleout( "Stopped music loop '"
                        + audioSrc + "'" );
            }
    
            // Start a new sound on button activation
            if($(this).is(':checked')) {
                // Start the loop playback
                activeNode = musicLayer.playAudioBuffer(
                        audioBuffer, 0, true );
    
                consoleout( "Played music loop '"
                        + audioSrc + "'" );
            }
        });
    
        // Create the volume control
        $( elemVolId ).slider({
            min: musicLayer.gain.minValue,
            max: musicLayer.gain.maxValue,
            step: 0.01,
    
            value: musicLayer.gain.value,
            // Add a callback function when the user
            // moves the slider
            slide: function( event, ui ) {
                // Set the volume directly
                musicLayer.gain.value = ui.value;
    
                consoleout( "Adjusted music volume: "
                            + ui.value );
            }
        });
    };
  7. We'll add the function WebAudioApp.initSfx() for initializing and controlling sound effect playback. The sound effect control uses the AudioLayer ducking functionality to duck the music whenever a sound effect is active:

    WebAudioApp.prototype.initSfx = function( elemId, 
                                              audioSrc ) {
        // Initialize the button and disable it by default
        var jqButton = $( elemId ).button({ disabled: true });
    
        // Load the audio
        var audioBuffer;
        this.loadAudio( audioSrc, function( audioBufferIn ) {
            // Cache the audio buffer
            audioBuffer = audioBufferIn;
    
            // Enable the button once the audio is ready to go
            jqButton.button( "option", "disabled", false );
        }, this );
    
        // Register a click event listener to trigger playback
        var me = this;
        jqButton.click(function( event ) {
            me.sfxLayer.playAudioBuffer( audioBuffer, 0 );
    
            // Duck the music layer for the duration of the
            // sound effect
            me.musicLayer.setDuck( audioBuffer.duration );
    
            consoleout( "Ducking music for SFX '"
                        + audioSrc + "'" );
        });
    };
  8. In WebAudioApp.start(), we initialize Web Audio, the audio layers, and the application controls:

    WebAudioApp.prototype.start = function() {
        if( !this.initWebAudio() ) {
            consoleout( "Browser does not support WebAudio" );
            return;
        }
    
        // Create the audio layers
        this.musicLayer = new AudioLayer( this.audioContext );
        this.sfxLayer = new AudioLayer( this.audioContext );
    
        // Set up the UI
        this.initMusic( "#piano", "assets/looperman-morpheusd-dreamworld-fullpiano-120-bpm.wav", "#pianovol" );
        this.initSfx ( "#applause", "assets/applause.mp3" );
    };

Launch the application test URL in a web browser (http://localhost/myaudiomixer) to see the end result.

How it works...

As previously mentioned, the AudioParam interface has automation support, which allows applications to build sophisticated automated behaviors. Let's take a look at the AudioParam automation methods:

  1. The setValueAtTime() method sets the audio parameter value to value at the time startTime:

        function setValueAtTime( value:Number,
                                 startTime:Number);

  2. The linearRampToValueAtTime() method linearly ramps the audio parameter value from the previously set value to the given value, value, at the time endTime:

        function linearRampToValueAtTime( value:Number,
                                          endTime:Number);

  3. The exponentialRampToValueAtTime() method exponentially ramps the audio parameter value from the previously set value to the given value, value, at the time endTime:

        function exponentialRampToValueAtTime( value:Number,
                                               endTime:Number);

  4. The setTargetAtTime() method ramps the audio parameter so that it exponentially approaches the target value, target, starting at the time startTime. The timeConstant parameter controls the approach rate: after one time constant, the parameter has covered roughly 63% of the distance to the target:

        function setTargetAtTime( target:Number,
                                  startTime:Number,
                                  timeConstant:Number);

  5. The setValueCurveAtTime() method applies an array of arbitrary values to the audio parameter. The array values are distributed evenly throughout the automation duration, and the applied value is calculated using linear interpolation:

        function setValueCurveAtTime( values:Array.<Number>,
                                      startTime:Number,
                                      duration:Number );

  6. The cancelScheduledValues() method cancels all the scheduled parameter changes starting at the time startTime or later:

        function cancelScheduledValues( startTime:Number );

Like the playback automation methods we discussed in the previous recipe, all time parameters are in seconds and are relative to the audio context's time coordinate system.
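
To make the scheduling concrete, the following is a hedged sketch that chains several of these methods on a GainNode's gain parameter (the gainNode variable, times, and values are illustrative, not part of this recipe):

var t    = audioContext.currentTime;
var gain = gainNode.gain;

// Clear any pending automation events
gain.cancelScheduledValues( t );

// Anchor the starting value, then ramp down linearly,
// then exponentially, then drift back up towards 1
gain.setValueAtTime( 1, t );
gain.linearRampToValueAtTime( 0.5, t + 1 );
gain.exponentialRampToValueAtTime( 0.01, t + 2 );
gain.setTargetAtTime( 1, t + 3, 0.5 );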

Note

Wondering how to specify the start time for some automation methods such as linearRampToValueAtTime() and exponentialRampToValueAtTime()?

When an automation method does not have a start time parameter, its behavior starts at the nearest previous automation event or the audio context current time, whichever is later.
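
For example, to make a ramp start at a specific future time rather than at the last scheduled event, anchor it with a set-value event first (a hedged sketch; gainNode and the times are illustrative):

var now = audioContext.currentTime;

// The set-value event pins the ramp's start to now + 0.5;
// without it, the ramp would start at the nearest previous
// automation event or the current time
gainNode.gain.setValueAtTime( 1, now + 0.5 );
gainNode.gain.linearRampToValueAtTime( 0, now + 2 );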

There are several key rules in regards to scheduling the automation events:

  • If an event is added at a time when there is already an event of the exact same type, the new event replaces the old one.

  • If an event is added at a time when there is already an event of a different type, it is scheduled to occur immediately after it.

  • Events may not overlap. Some events occur over a span of time, such as the linearRampToValueAtTime() automation behavior; scheduling another event while such an event is active causes Web Audio to throw a runtime exception, as illustrated in the sketch after this list.
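
The following hedged sketch illustrates the first and third rules (param, times, and values are illustrative):

var now = audioContext.currentTime;

// Rule 1: both are set value events scheduled at the same time,
// so the second event replaces the first
param.setValueAtTime( 0.2, now + 1 );
param.setValueAtTime( 0.8, now + 1 );

// Rule 3: this ramp is active from now + 1 until now + 3; per the
// overlap rule above, scheduling another event inside that window
// would trigger a runtime exception
param.linearRampToValueAtTime( 0, now + 3 );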

We leverage the AudioParam automation support to implement ducking. The following is an overview of the ducking logic implemented in the AudioLayer class:

  1. We add a GainNode instance into the node graph as the duck controller.

  2. When a sound effect is played, we script the duck controller's gain audio parameter to reduce the audio output gain level for the duration of the sound effect.

  3. If ducking is reactivated while it is still active, we revise the scheduled ducking events so that they end at the appropriate time.

The code produces the following node graph: the source nodes connect to the volume GainNode (volNode), which connects to the ducker GainNode (duckNode), which in turn connects to the destination.

Tip

Why use two GainNode instances instead of one?

It's a good idea to split independent scripted gain behaviors into separate GainNode instances. Because each GainNode has its own gain parameter and automation schedule, the scripted behaviors combine cleanly instead of overwriting one another's scheduled events.

Now, let's take a look at AudioLayer.setDuck() which implements the ducking behavior:

  1. The AudioLayer.setDuck() method takes a duration (in seconds) indicating how long the duck behavior should be applied:

    AudioLayer.prototype.setDuck = function( duration ) {
  2. We cache the duck controller's gain audio parameter in duckGain:

        var TRANSITIONIN_SECS   = 1;
        var TRANSITIONOUT_SECS  = 2;
        var DUCK_VOLUME         = 0.3;
    
        var duckGain  = this.duckNode.gain;
  3. We cancel any existing leftover scheduled duck behaviors, thereby allowing us to start with a clean slate:

        var eventSecs = this.audioContext.currentTime;
    
        duckGain.cancelScheduledValues( eventSecs );
  4. We employ the linearRampToValueAtTime() automation behavior to schedule the transition in: the audio parameter is scripted to linearly ramp from the existing volume down to the duck volume, DUCK_VOLUME, over TRANSITIONIN_SECS seconds. Because there are no future events scheduled, the ramp starts at the current audio context time:

        duckGain.linearRampToValueAtTime(
            DUCK_VOLUME,
            eventSecs + TRANSITIONIN_SECS );

    Note

    If the volume is already at DUCK_VOLUME, the transition has no effect, thereby creating the effect of extending the ducking behavior.

  5. We add an automation event to mark the start of the TRANSITIONOUT section. We do this by scheduling a setValueAtTime() automation behavior:

        duckGain.setValueAtTime(
            DUCK_VOLUME,
            eventSecs + duration );
  6. Finally, we set up the TRANSITIONOUT section using a linearRampToValueAtTime() automation behavior. We arrange for the transition to take TRANSITIONOUT_SECS seconds by scheduling its end time TRANSITIONOUT_SECS after the preceding setValueAtTime() event:

        // Schedule the volume ramp up
        duckGain.linearRampToValueAtTime(
            1,
            eventSecs + duration + TRANSITIONOUT_SECS );
    };

The net automation applied to duckGain, the duck controller's gain audio parameter, is as follows: the gain ramps down from its current value to DUCK_VOLUME over TRANSITIONIN_SECS, holds at DUCK_VOLUME until duration seconds have elapsed, and then ramps back up to 1 over TRANSITIONOUT_SECS.

In order to have the sound effects activation duck the music volume, the sound effects and music have to be played on separate audio layers. That's why this recipe instantiates two AudioLayer instances—one for music playback and the other for sound effect playback.

The dedicated music AudioLayer instance is cached in the WebAudioApp attribute musicLayer, and the dedicated sound effects AudioLayer instance in the attribute sfxLayer:

WebAudioApp.prototype.start = function() {
    ...

    this.musicLayer = new AudioLayer( this.audioContext );
    this.sfxLayer = new AudioLayer( this.audioContext );

    ...
};

Whenever a sound effects button is clicked, we play the sound and simultaneously activate the duck behavior on the music layer. This logic is implemented as part of the behavior of the sound effect's click event handler in WebAudioApp.initSfx():

jqButton.click(function( event ) {
    me.sfxLayer.playAudioBuffer( audioBuffer, 0 );
    me.musicLayer.setDuck( audioBuffer.duration );
    ...
});

We activate ducking on webAudioApp.musicLayer, the music's AudioLayer instance. The ducking duration is set to the sound effect's duration, which we read from its AudioBuffer instance.

The ducking behavior is just one demonstration of the power of automation. The possibilities are endless given the breadth of automation-friendly audio parameters available in Web Audio. Other effects achievable through automation include fades, tempo matching, and cyclic panning.
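
For instance, a fade-out is just a single scheduled ramp. The following is a hedged sketch (gainNode and FADE_SECS are illustrative):

var FADE_SECS = 3;
var now = audioContext.currentTime;

// Anchor the current volume, then ramp down to silence
gainNode.gain.setValueAtTime( gainNode.gain.value, now );
gainNode.gain.linearRampToValueAtTime( 0, now + FADE_SECS );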

Please refer to the latest online W3C Web Audio documentation at http://www.w3.org/TR/webaudio/ for a complete list of available audio parameters.

Advanced automation techniques

Web Audio allows the output from an AudioNode instance to drive an audio parameter. This is accomplished by connecting an AudioNode instance to an AudioParam instance:

interface AudioNode {
    function connect( destinationParam:AudioParam,
                      outputIndex:Number? );
};

The previous code connects an AudioNode instance to a target AudioParam instance: destinationParam is the target AudioParam instance, and outputIndex is the index of the AudioNode output to connect to it.

This functionality allows applications to automate audio parameters using controller data from data files—the controller data is loaded into an AudioBuffer instance, and is injected into the node graph using an AudioBufferSourceNode instance.

In this approach, an AudioBufferSourceNode plays the sound through a GainNode to the destination, while a second AudioBufferSourceNode carrying the controller data connects directly to the GainNode's gain parameter, driving the output volume.
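
The following is a minimal sketch of this approach. It assumes an audioContext plus two decoded AudioBuffer instances, musicBuffer and controllerBuffer, holding the sound and the gain curve respectively (all names are illustrative):

var gainNode = audioContext.createGain();

// Zero the intrinsic gain so the controller data alone drives it;
// a connected node's output is summed with the parameter's value
gainNode.gain.value = 0;

// Play the sound through the gain control
var source = audioContext.createBufferSource();
source.buffer = musicBuffer;
source.connect( gainNode );
gainNode.connect( audioContext.destination );

// Inject the controller data into the gain parameter
var controller = audioContext.createBufferSource();
controller.buffer = controllerBuffer;
controller.connect( gainNode.gain );

source.start( 0 );
controller.start( 0 );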

The automation data can even be generated at runtime using JavaScript: we fill an AudioBuffer with computed values and play it into the target parameter through an AudioBufferSourceNode, automating a sound sample's output volume.
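
The following hedged sketch generates a slow volume wobble at runtime (it reuses audioContext and gainNode from the previous sketch; the curve length and shape are illustrative):

var CURVE_SECS = 4;
var sampleRate = audioContext.sampleRate;
var curve      = audioContext.createBuffer(
        1, CURVE_SECS * sampleRate, sampleRate );
var data       = curve.getChannelData( 0 );

// Fill the buffer with a 1 Hz sine wave oscillating between 0 and 1
for( var i = 0; i < data.length; i++ )
    data[ i ] = 0.5 + 0.5 * Math.sin( 2 * Math.PI * i / sampleRate );

// Play the generated curve into the gain parameter
var controller = audioContext.createBufferSource();
controller.buffer = curve;
controller.connect( gainNode.gain );
controller.start( 0 );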

Unfortunately, a full treatment of these effects is beyond the scope of this book. Therefore, I leave the task of developing the sketches above into complete, working examples to you, the readers.