
Mapping and Visualization with SuperCollider

By: Marinos Koutsomichalis

Overview of this book

SuperCollider is an environment and programming language used by musicians, scientists, and artists who work with audio. SuperCollider has built-in graphical features which are used in conjunction with the sound synthesis server to create audio-visual mappings and sound visualizations. If you wish to create data visualizations by acquiring data from audio and visual sources, then this book is for you.

Digital sound artists need to analyze, manipulate, map, and visualize data when working on a scientific or an artistic project. By means of its numerous code examples, this book will provide you with the necessary knowledge of SuperCollider's practical applications, so that you can extract meaningful information from audio files and master its visualization techniques. It will help you to prototype and implement sophisticated visualizers, sonifiers, and complex mappings of your data.

This book takes a closer look at SuperCollider features such as plotting and metering functionality to dispel the mysterious aura surrounding the more advanced mappings and animation strategies, and it takes you through a number of examples that help you to create intelligent mapping and visualization systems. Throughout the course of the book, you will synthesize and optimize waveforms and spectra for scoping, as well as extract information from audio signals. The later sections focus on advanced topics such as emulating physical forces, designing kinematic structures, and using neural networks, enabling you to develop visualizations with natural motion, structures that respect anatomy, and intelligent encoding mechanisms. This book will teach you everything you need to work with intelligent audio-visual systems that extract and visualize audio-visual data.
Table of Contents (16 chapters)

Plotting audio, numerical datasets, and functions

Before discussing how we can scope audio signals in real time, it is worth reviewing the various ways in which we can create static graphs and charts out of arbitrary numerical datasets or signals.

Using plot and plotGraph

SuperCollider provides us with a very handy plot method. We can use this method in different situations to create graphs on the fly from instances of Function, ArrayedCollection, Env, Buffer, SoundFile, Wavetable, and from a series of other objects (also depending on what extensions we have installed). An example of this is shown in the following code:

{ }.plot(0.1);        // plot 0.1 seconds of a sinewave
[5,10,100, 50, 60].plot;                 // plot a numerical dataset
Env([0,1,0],[1,1],[-10,2]).plot;         // plot an envelope
Signal[0,1,0.5,1,0].plot;                // plot a signal
Wavetable.chebyFill(513,[1]).plot;       // plot a wavetable

( // plot the contents of a sound file
Server.default.waitForBoot({ // wait for the server to boot, Platform.resourceDir +/+ "sounds/a11wlk01.wav").plot;
});
)


In all cases, the resulting graphs are automatically normalized with respect to the kind of data plotted, so that each dimension's display range is determined by the minimum and maximum quantities it has to represent; that is to say, the plot's graph is content-dependent. Additionally, its meaning depends upon the receiver (that is, the kind of object plotted): for instances of Array, Wavetable, or Signal, the graph represents value per index; for UGen graphs, amplitude per unit time; for instances of Env, value per unit time; and for instances of Buffer, amplitude per frame. Since its behavior is different for different kinds of objects, plot is said to be polymorphic. We should always consider the implicit consequences of these two properties. For example, the following two waveforms could easily be mistaken as identical, even if they are not:

(  // plot two sinusoids of different amplitude
// (the amplitude values here are illustrative; the graphs look
// identical because each plot is normalized independently)
{ }.plot(0.01);
{, 0, 0.1) }.plot(0.01);
)

To compensate for such a phenomenon, we need to explicitly set the minima (minval) and maxima (maxval) arguments. Interestingly enough, we can also plot abstract functions, as long as they take exactly one argument and return some arithmetic value. We can do this with the plotGraph method, as follows:

{arg x; tan(x**2);}.plotGraph(100,-pi,pi); // graph out of a function

Here, the interpreter calculates the output of the given function for 100 different values in the range of ± π and populates the graph with the results; the horizontal axis representing node indexes and the vertical axis representing the function's output.


Buffer objects have a finite capacity measured in frames; each frame holds exactly one sample per channel; therefore, a frame is the container of a sample.
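As a quick illustration of this note (a sketch, not part of the original text; the buffer sizes are arbitrary), the relation between frames, channels, and total sample capacity can be checked directly on a Buffer:

```supercollider
(  // sketch: frames vs. samples in a Buffer
Server.default.waitForBoot({
	// allocate a stereo buffer of 1000 frames
	var b = Buffer.alloc(Server.default, 1000, 2);
	// each frame holds one sample per channel, so the total
	// sample capacity is numFrames * numChannels = 2000
	("frames: " ++ b.numFrames ++ ", channels: " ++ b.numChannels).postln;
});
)
```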

Polymorphism in Computer Science refers to the ability in programming to present the same interface for different underlying forms.

Using Plotter

Both plot and plotGraph are convenient methods, which ostensibly are just abstractions of a series of tasks. Whenever they are invoked, a parent Window is created containing an instance of Plotter whose specifications are configured accordingly. Explicitly creating and using Plotter allows sophisticated control over the way our data is plotted. The following code exemplifies a number of features of the Plotter object:

(  // data visualization using custom plotters
// the parent window
var window ="Plotter Example", Rect(0,0,640,480)).front;

// the datasets to visualize 
var datasetA = Array.fill(1000,{rrand(-1.0,1.0)});// random floats
var datasetB = [ // a 2-dimensional array of random floats
  Array.fill(1000,{rrand(-1.0,1.0)}),
  Array.fill(1000,{rrand(-1.0,1.0)})
];

// the plotters
var plotterA ="PlotterA", parent: window, bounds: Rect(5,5,630,235));
var plotterB ="PlotterB", parent: window, bounds: Rect(5,240,630,235));

// setup and customize plotterA
plotterA.value_(datasetA);       // load dataset
plotterA.setProperties(          // customize appearance
  \plotColor,,           // plot color
  \backgroundColor,,   // background color
  \gridColorX, Color.white,      // gridX color
  \gridColorY, Color.yellow)     // gridY color
.editMode_(true)   // allow editing with the cursor
.editFunc_({ // this function is evaluated whenever data is edited
  arg plotter,plotIndex,index,val,x,y;
  ("Value: " ++ val ++ " inserted at index: " ++ index).postln;
});

// setup and customize plotterB
plotterB.value_(datasetB);   // load datasetB
plotterB.superpose_(true)       // allow channels overlay
.setProperties(                 // customize appearance
  \plotColor, [,], // plot colors
  \backgroundColor, Color.grey, // background color
  \gridOnX, false,              // no horizontal grid
  \gridOnY, false)              // no vertical grid
.plotMode_(\steps);             // use step interpolation
)

The result is illustrated in the following screenshot:

The comments pretty much explain everything. The first Plotter object is editable, which means that we can alter the graph by clicking and dragging on it with the mouse. Whenever we do so, editFunc will be evaluated with the following passed as arguments:

  • The Plotter object.

  • The plot index (which is only meaningful if there is more than one graph, such as for multichannel signals, of course).

  • The index position (horizontal axis value).

  • The value of the vertical dimension.

  • The x and the y positioning of the cursor.

In this case, while clicking or dragging with the mouse, a simple message is printed in the console.

The second Plotter object that operates on a multichannel dataset will create ramps out of every individual channel and superimpose them on the same graph using different colors. Using plotMode, we can select between the following alternative data representation modes, namely, \linear (linear interpolation), \points (data points only), \plines (both lines and points), \levels (horizontal lines), and \steps (ramps).
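To see how these modes differ, we can render the same dataset once per mode, each in its own window (a sketch, not from the original text; the dataset size and window names are arbitrary):

```supercollider
(  // sketch: one plot per mode, same data in each
var data = Array.fill(20, { rrand(-1.0, 1.0) });
[\linear, \points, \plines, \levels, \steps].do{ arg mode;
	// plot returns a Plotter, whose mode we then set
	data.plot(mode.asString).plotMode_(mode);
};
)
```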

Using SoundFileView

In a visualization context, we may encounter situations wherein we need to plot the contents of some audio file. We could do so with Buffer and Plotter, yet there does exist a dedicated class for such cases, namely, SoundFileView as shown in the following code:

(  // display the contents of a soundfile
// create the view
var window ="A SoundFileView Example", 640@480).front;
var view =, 640@480);

// load a soundfile in the view using a SoundFile
var file =;   // create a new SoundFile
file.openRead(Platform.resourceDir +/+ "sounds/a11wlk01.wav");  
// read a file
view.soundfile_(file);           // set the soundfile, file.numFrames);        // read the entire soundfile
// (for big soundfiles use .readWithTask instead)
file.close;     // we no longer need the SoundFile

// configure appearance
view.timeCursorOn_(false);         // no time cursor
view.gridOn_(false);               // no grid
view.background_(;      // background color
view.waveColors_([]);  // waveform color (it has to be an array)
)

Again the code is pretty straightforward; the only implication being that we need to open and read the actual file with a SoundFile object before we can read its contents into the SoundFileView object. When large sound files are involved, we will have to use readWithTask instead to avoid overloading our computer's memory. Then, if needed, we can use the zoom (or zoomToFrac) and scrollTo methods to only display portions of the file or to animate its contents. For example, the previous code could continue as shown in the following code:

( // animate the contents of the file
fork{ { arg counter;
		{ // every time we put some GUI-related operation in a Routine,
		  // we need to defer it so that it is scheduled in the AppClock instead
			view.zoomToFrac(counter/100); // the total zooming range is 0-1
			view.scrollTo(counter/100);   // the total scrolling range is 0-1
		}.defer;
		0.1.wait; // speed of animation
	};
};
)

Note that SuperCollider will refuse to schedule any GUI-related operation on the SystemClock; hence, we have to use defer whenever such operations are involved, so that they are implicitly scheduled on the AppClock instead.
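The same pattern applies to any GUI update driven from a Routine. A minimal sketch (the window name, label text, and timing are arbitrary choices, not from the original text):

```supercollider
(  // sketch: updating a GUI element from a Routine via defer
var window ="defer example", Rect(100, 100, 240, 60)).front;
fork{ { arg i;
		// the GUI call is wrapped in a function and deferred to the AppClock
		{"tick: " ++ i) }.defer;
		0.5.wait; // wait itself runs on the SystemClock as usual
	};
};
)
```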