Prototyping – First iteration of audio visualizer

Over the previous months I have learned to use Processing to visualize data, using techniques such as for loops, conditionals, and integer variables to create visual output. I have also learned to use Minim, the audio library included with Processing, at a basic level.

Building on my analysis and what I have learned of Processing so far, I have created the first iteration of what I intend to be my final project – an audio visualizer built with Processing's Minim library.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x = 6;

void setup() {
  size(1280, 620, P3D);
  background(0);
  smooth();
  frameRate(30); // slowed from the default 60fps so the wave is easier to follow
  minim = new Minim(this);
  player = minim.loadFile("time.mp3");
  player.play();
}

void draw() {
  // fade effect: a translucent black overlay gradually dims previously drawn lines
  noStroke();
  fill(0, 10);
  rect(0, 0, width, height);

  for (int i = 0; i < player.bufferSize() - 1; i++) {
    // map each channel's amplitude to a colour value, resetting once it saturates
    if (col1 < 255) {
      col1 = 75 + int(player.right.get(i) * 255);
    } else {
      col1 = 0;
    }
    if (col2 < 255) {
      col2 = 75 + int(player.left.get(i) * 255);
    } else {
      col2 = 0;
    }

    // line heights from the current sample of each channel
    int y1 = int(player.left.get(i) * 305);
    int y2 = 620 - int(player.right.get(i) * 305);

    if (i == 0) {
      // left channel drawn from the top of the screen, right channel from the bottom
      stroke(col2, 100, 255);
      line(x, 0, x, y1);
      stroke(255, 100, col1);
      line(x, 620, x, y2);
    }
  }

  // advance the draw position, wrapping back to the left edge
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
}

void mousePressed() {
  background(0);
}

My final project will be interactive, with the audio values based on user input; for now, however, I am using Minim's audio player to take input from a preset audio track rather than a microphone. This allows me to develop iterations without requiring microphone testing each time.

First iteration: the height and colors of the lines are determined by the sound.

My main goal with this iteration was to get the visual elements created by Processing successfully responding to the audio input. The integers y1, y2, col1, and col2 are determined by the player.left.get and player.right.get functions of the Minim library, which return amplitude values for whatever audio input has been provided – an audio track or ongoing live input – for the left and right channels of stereo audio.
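To check this mapping outside the sketch, the colour arithmetic can be isolated in plain Java (the class and method names below are my own, hypothetical ones): a sample in the range -1.0 to 1.0, as returned by player.left.get and player.right.get, is scaled by 255 and offset by 75, resetting to 0 once the channel value has saturated.

```java
public class AmplitudeColor {
    // Map one stereo sample (range -1.0 to 1.0, as returned by Minim's
    // player.left.get / player.right.get) to a colour channel value,
    // mirroring the sketch's 75 + sample * 255 formula.
    static int sampleToColor(float sample, int previous) {
        if (previous < 255) {
            return 75 + (int) (sample * 255);
        }
        return 0; // reset to black once the channel saturates
    }

    public static void main(String[] args) {
        System.out.println(sampleToColor(0.5f, 100));  // 202: mid amplitude
        System.out.println(sampleToColor(-0.2f, 100)); // negative samples darken
        System.out.println(sampleToColor(0.9f, 300));  // 0: saturated, so reset
    }
}
```

Note that negative samples push the result below 75, so the colour brightens and darkens with the waveform rather than tracking loudness alone.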

The visual elements are a series of lines drawn across the screen using a for loop, displaying the audio input's values chronologically like a wave travelling across the screen – the top half of the screen draws lines based on the left channel, and the bottom half on the right channel. This results in an aesthetic similar to a spectrogram, though it displays amplitude over time rather than by frequency. As the lines wrap back around to the start of the screen, I also put in a fade effect to gradually fade out previously drawn lines, which is also visually pleasing. I also lowered the frame rate to 30fps, as the sketch felt too fast paced at the full 60fps, not giving much time to see how the graph reacts to the sound input.
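The scrolling behaviour described above – advancing the draw position by six pixels each frame and wrapping back to the left edge once it passes the screen width – can be sketched in plain Java (class and method names here are hypothetical):

```java
public class ScrollingX {
    static final int WIDTH = 1280; // sketch width from size(1280, 620, P3D)
    static final int STEP = 6;     // pixels the wave advances per frame

    // Advance the draw position one step per frame, wrapping back to the
    // left edge once the line reaches the right of the screen, as in the
    // sketch's x update at the end of draw().
    static int nextX(int x) {
        return (x < WIDTH) ? x + STEP : STEP;
    }

    public static void main(String[] args) {
        int x = STEP;
        for (int frame = 0; frame < 4; frame++) {
            System.out.println(x); // 6, 12, 18, 24 ...
            x = nextX(x);
        }
    }
}
```

At 30fps and a 6-pixel step, one full sweep of the 1280-pixel screen takes a little over seven seconds before the wave starts fading over its own trail.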

My next iterations will need to examine in greater detail how my code responds to the audio input: curiously, quieter passages still produce full-length lines on screen (though their colors remain dull), so the visualizer needs refining to react more accurately to what the audience will actually perceive from the audio. The values drawn from player.left.get and player.right.get may not reflect the perceived loudness of the sound.
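One possible refinement, sketched below in plain Java under my own hypothetical names, is to average energy over the whole buffer with a root-mean-square measure rather than reading individual samples, since a single sample can spike towards ±1 even in an otherwise quiet passage – which would explain the full-length lines. I believe Minim's AudioBuffer also provides a level() method returning this RMS amplitude directly.

```java
public class RmsLevel {
    // Root-mean-square of a buffer of samples: a steadier measure of
    // perceived loudness than any single sample, since individual samples
    // can briefly spike to +/-1 even in quiet passages.
    static float rms(float[] samples) {
        float sum = 0;
        for (float s : samples) {
            sum += s * s;
        }
        return (float) Math.sqrt(sum / samples.length);
    }

    public static void main(String[] args) {
        float[] quiet = {0.05f, -0.04f, 0.06f, -0.05f};
        float[] loud  = {0.8f, -0.7f, 0.9f, -0.85f};
        System.out.println(rms(quiet)); // small value, around 0.05
        System.out.println(rms(loud));  // much larger value, around 0.8
    }
}
```

Scaling the line heights by a value like this, instead of by raw samples, should make quiet audio draw visibly shorter lines.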


Minim documentation –

Analysis – Audio Visualization

Based on my understanding of the space and its audience, and interactive displays, I have decided to focus on audio visualization, and synergy between audio and visual media, as the concept for my final piece.

As the space is in a university, the primary demographic moving through the Weymouth House foyer is media students, likely to take an interest in digital art and electronic music themes. My goal is to create a piece which will draw their attention and is not overly complex, but still allows users to interact with a visual output through audio.

Kosara (2008) describes key characteristics of visualization: “the visual must be the primary means of communication”, and “the most important criteria is that the visualization must provide a way to learn something about the data … there must be at least some relevant aspects of the data that can be read. The visualization must also be recognizable as one and not pretend to be something else”. These will be key factors to consider when developing my project: a strong relationship between the audio and visual elements of my piece, and an interaction between the two that is clearly recognizable to the user.

Music visualization is often made vivid and artistic in response to the audio input. Several examples gathered on the blog Visual Complexity (2010) are shown below:

“The Shape of Song” by Martin Wattenberg
“Narratives 2.0” by Matthias Dittrich

My work will likely not reach the above level of complexity, but they are examples of the beautiful results that can be achieved by visualizing audio. With the right setup and design, even a simple audio input such as speech should produce a pleasing effect, made doubly satisfying by the synergy between the audio and visual elements of the piece – audiovisual media is effective because the two sensory outputs complement and enhance each other. The video below is an example of a physical audiovisual installation by the media group AntiVJ, and exemplifies why I am interested in this concept:

The key to making my project successful will be to combine this concept with interactivity, giving users direct control over the audio input and therefore over the visuals on the screen. The synergy between audible and visual elements is already satisfying; giving the user control over it as well should make for an engaging interactive display.



Kosara, R., 2008. What is Visualization? A definition. eagereyes [online]. Available from:

Lima, M., 2010. Music Visualization: Beautiful Tools to “See” Sound. Visual Complexity [online]. Available from:

Wattenberg, M., The Shape of Song [online]. Available from:

Dittrich, M., Narratives 2.0 [online]. Available from:

AntiVJ, 2013. The Ark. AntiVJ [online]. Available from: