Brief Evaluation of Design Iterations

Overall, I feel the interactive visualization piece I created with Processing worked well in the media space, with testers commenting that they liked the visual effect produced by the input of their own voice.

Using an iterative approach to design in this project allowed me to quickly frame and solve problems as they arose, from the early stages of examining the space to coding the visualizer later in the process. Looking at the broader process and approach I took, there is a clear progression through stages from requirements gathering to prototyping and testing. Within each of these stages, however, I also iterated on a smaller scale, from redesigning our posters for studying visual media in the space to prototyping my project in Processing. Each iteration revealed new problems, which I explored and solved in the following version, testing outside of the space using Minim's audio player function.

While the audio visualizer I designed is relatively simple, exploring Processing and Minim further could lead me to more complex ways to visualize audio, and get different kinds of data from an input – for example, instead of charting amplitude over time, getting frequency values instead could allow me to create a proper spectrogram within Processing. It could also be applied to a wide variety of audio inputs, from microphones to music tracks, or as part of an app allowing a user to create their own musical audio input and then visualizing it.
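As a rough illustration of the spectrogram idea, the frequency content of one buffer of samples can be recovered with a discrete Fourier transform. Minim provides an FFT class for exactly this, but the underlying calculation can be sketched in plain Java (the class and method names here are my own, for illustration only):

```java
// A minimal sketch of recovering frequency data from a sample buffer via a
// discrete Fourier transform - the kind of values a spectrogram column shows.
public class DftSketch {
    // Magnitude of DFT bin k for a buffer of n samples, normalized by n.
    static double binMagnitude(double[] samples, int k) {
        double re = 0, im = 0;
        int n = samples.length;
        for (int t = 0; t < n; t++) {
            double angle = 2 * Math.PI * k * t / n;
            re += samples[t] * Math.cos(angle);
            im -= samples[t] * Math.sin(angle);
        }
        return Math.sqrt(re * re + im * im) / n;
    }

    public static void main(String[] args) {
        int n = 64;
        double[] buf = new double[n];
        // A pure tone completing exactly 5 cycles over the buffer...
        for (int t = 0; t < n; t++) {
            buf[t] = Math.sin(2 * Math.PI * 5 * t / n);
        }
        // ...shows up as energy concentrated in bin 5.
        System.out.println(binMagnitude(buf, 5) > binMagnitude(buf, 3));
    }
}
```

Plotting bin magnitudes like these per frame, stacked over time, is essentially what a spectrogram does; in practice Minim's FFT class would compute them far more efficiently than this naive loop.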

I have only scratched the surface, but audio visualization has many potential uses, such as enhancing the interplay between music and lights or screens at nightclubs and electronic music events. Digitally, audio visualization is also used to provide visual accompaniment to music uploaded to video sites such as YouTube when no music video exists. As technology progresses and more innovative forms of digital media and user input develop, this will stay applicable, as it ties directly into the senses of sight and hearing through which a media piece is perceived.

Testing in the Space

As previously described in my posts exploring the space, my Processing work was tested on publicly visible screens in the Weymouth House foyer. There was a university open day that day, so the area was quite busy with a considerable amount of background noise; however, thanks to the microphone I used for testing, this did not pose much of an issue for the functionality of the visualizer.

I put my work on display for a time, allowing some of my fellow designers to test and use it.

Overall, the testing phase of my project proved very successful: the visualizer translated well to input from a microphone in the space and displayed perfectly on the screen. However, it may not have been readily apparent that the piece was interactive – the audience mainly realized it was by noticing me and my coursemates testing it. A camera-based work would have made its interactivity more obvious; as a visualization of audio, it was less clear at first glance that the piece was responding to sound input.

Audio Visualisation test 1 from Joe White on Vimeo.

Above is the first test of my work, as my coursemate Joel demonstrates it responding to a simple “Hello” into a microphone.

Audio Visualisation test 2 from Joe White on Vimeo.

This video shows how the sketch appears while idle: a small amount of background noise is picked up and results in a slight shimmer, but it does not interfere with talking directly into the mic.

Audio Visualisation test 3 from Joe White on Vimeo.

Tristan tests out my work here, giving a more continuous stream of speech for the visualiser to respond to.

Users commented that they liked the aesthetic style of the piece (such as the red-blue contrast and the fade) and its responsiveness, which made them feel they had proper control over what was displayed on the screen through their speech – an outcome I am pleased with. Moving the project from testing on my local PC with audio tracks to a microphone on the public screen was seamless and didn't require any tweaks or fixes to the code.


Prototyping – Fourth & Final Iteration

This iteration addresses what I feel are the last of my issues with my audio visualizer, and I now have a clean, fully functional version of it.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);  
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{
  translate(0, 310);
  
  for (int i = 0; i < player.bufferSize () - 1; i++) {
    
    if (col1 < 255) {
      col1 = 100 + int(((1 + player.right.get(i+1)) / 2) * 155);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 100 + int(((1 + player.left.get(i+1)) / 2) * 155);
    } else {
      col2 = 0;
    }
  
    // scale the raw sample values into pixel heights
    int y1 = int(player.left.get(i+1) * 305);
    int y2 = int(player.right.get(i+1) * 305);
  
    // turn the raw values into actual amplitude by flipping negatives positive
    if (y1 < 0) {
      y1 = -y1;
    }

    if (y2 < 0) {
      y2 = -y2;
    }
    
    if (i == 0) {
      
      strokeWeight(5);
      
      //top line
      stroke(col1,0,0);
      line(x,0,x,0-y1-10);
      
      //bottom line
      stroke(0,0,col2);
      line(x,0,x,10+y2);
      
      //separator
      strokeWeight(2);
      stroke(0);
      line(0,0,1280,0);
    
    }
  }
  
  //Fade lines into background
  fill(0,15);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

This iteration has only one notable change from the previous version: the two new "if" conditions that flip any negative values of y1 or y2 to their positive counterparts. While this means that -0.5 returns the same value as 0.5, this reflects the nature of amplitude – the peak of a wave will be extremely similar in value, if not identical, to its trough. By treating positive and negative values identically, the visualizer now displays the actual amplitude of the audio input.

Final iteration.

Due to this change I also removed the addition of 1 in the y1 and y2 calculations, as the rectified values now fall on a 0 to 1 scale anyway. The result is a visualizer that scales accurately with the volume and intensity of the audio input, and while it may not literally plot the player.left/right.get values, the audio and visual elements appear more synchronized from an audience's point of view.
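The mapping from raw sample to bar height can be sketched on its own in plain Java (the method name is mine; 305 is the half-height scale factor from the sketch):

```java
// A minimal sketch of the amplitude mapping: a raw sample in the -1 to 1
// range is scaled to pixels, then flipped positive, so a peak and a trough
// of equal size produce the same bar height.
public class AmplitudeSketch {
    static int barHeight(float sample) {
        int y = (int) (sample * 305); // scale to half of the 620px window
        if (y < 0) {
            y = -y;                   // rectify negative values
        }
        return y;
    }
}
```

So barHeight(-0.5f) and barHeight(0.5f) both return 152, while a silent sample returns 0 and a full-scale sample fills the whole half of the screen.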

Now that the visualizer is fully functional, I have prepared my final, testing-ready prototype by changing the form of audio input from Minim's audio player to its audio input (to take sound from a microphone). The final code, to be used in my testing, is shown below:

import ddf.minim.*;

Minim minim;
AudioInput in;

int col1 = 100;
int col2 = 100;
int x;

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);
  in = minim.getLineIn();
}

void draw()
{
  translate(0, 310);
  
  for (int i = 0; i < in.bufferSize() -1; i++) {
        
    if (col1 < 255) {
      col1 = 75 + int(in.right.get(i) * 255);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 75 + int(in.left.get(i) * 255);
    } else {
      col2 = 0;
    }
  
    // scale the raw sample values into pixel heights
    int y1 = int(in.left.get(i) * 305);
    int y2 = int(in.right.get(i) * 305);

    // turn the raw values into actual amplitude by flipping negatives positive
    if (y1 < 0) {
      y1 = -y1;
    }

    if (y2 < 0) {
      y2 = -y2;
    }
    
  if (i == 0) {
      
      strokeWeight(5);
      
      //top line
      stroke(col1,75,75);
      line(x,0,x,0-y1);
      
      //bottom line
      stroke(col2,75,75);
      line(x,0,x,y2);
      
      //separator
      strokeWeight(2);
      stroke(0);
      line(0,0,1280,0);
    
    }
  }
  
  //Fade lines into background
  fill(0,15);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

More detailed pictures and videos of my project in the space will be taken in the testing phase to show how the visualizer actually reacts to audio input – recording video output through Processing has so far caused frame rate issues, as each frame is saved as a .tiff file.

Prototyping – Third Iteration

The third version of my visualizer addresses the previously mentioned issue, that the top and bottom lines were overlapping into the wrong sides of the screen.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{
  translate(0, 310);
  
  for (int i = 0; i < player.bufferSize () - 1; i++) {
    
    if (col1 < 255) {
      col1 = 100 + int(((1 + player.right.get(i+1)) / 2) * 155);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 100 + int(((1 + player.left.get(i+1)) / 2) * 155);
    } else {
      col2 = 0;
    }
  
  int y1 = int(((1+(player.left.get(i+1)))/2)*305);
  int y2 = int(((1+(player.right.get(i+1)))/2)*305);
    
    if (i == 0) {
      
      strokeWeight(5);
      
      //top line
      stroke(col1,0,0);
      line(x,0,x,0-y1);
      
      //bottom line
      stroke(0,0,col2);
      line(x,0,x,y2);
      
      //separator
      strokeWeight(2);
      stroke(0);
      line(0,0,1280,0);
    
    }
  }
  
  //Fade lines into background
  fill(0,15);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

The changes in this iteration keep the left and right audio values to their own sides of the screen. As the lines were previously drawn using the same colors, I changed the top lines to be predominantly red and the bottom to be blue. The shade and brightness were still determined by the opposite side's value, but keeping the two sides' colors separate made it clearer during testing whether the overlapping issue was solved – additionally, I felt the vivid red and dark blue gave a nicer contrasting aesthetic.

When working on this iteration I realized that the values returned by player.left.get and player.right.get were on a scale of -1 to 1. The returned value was not technically the amplitude but a sample of the actual audio wave, where a silent input would be 0; the amplitude, strictly speaking, is the difference between the peak and trough of the wave. To account for this, I tweaked the way col1, col2, y1 and y2 are calculated. Adding 1 to each player.left/right.get value shifted the scale from -1 to 1 up to 0 to 2, and dividing by 2 then brought the values down to a 0 to 1 range.
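The remapping step can be sketched on its own in plain Java (the method name is mine, for illustration):

```java
// A minimal sketch of shifting a raw -1..1 sample into the 0..1 range:
// add 1 (giving 0..2), then divide by 2.
public class RemapSketch {
    static float remap(float sample) {
        return (1 + sample) / 2;
    }
}
```

A full negative swing maps to 0, silence maps to 0.5, and a full positive swing maps to 1.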

third iteration
A new aesthetic shows the top and bottom lines keep to their own sides.

This iteration has resulted in a much more refined, functional audio visualizer with, in my opinion, a more aesthetically appealing design. The top and bottom halves of the screen now draw their own amplitude waves over time without crossing over. However, since the remapped player.left/right.get values now run from 0 to 1, the baseline for a quiet audio wave is 0.5 rather than 0, with formerly negative samples occupying the 0 to 0.5 range. Converting these values into a true representation of amplitude is something to address in the next iteration.

Prototyping – Second Iteration

I have now developed the second iteration of my audio visualizer, with changes made to the way the lines draw input from the audio and reflect the sound that plays.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);  
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{

  translate(0, 310);
  
  for (int i = 0; i < player.bufferSize () - 1; i++) {
    
    
    if (col1 < 255) {
      col1 = 75 + int(player.right.get(i) * 255);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 75 + int(player.left.get(i) * 255);
    } else {
      col2 = 0;
    }
  
  int y1 = int(player.left.get(i)*305);
  int y2 = int(player.right.get(i)*305);
    
    if (i == 0) {
      
      strokeWeight(5);
      stroke(col1,75,75);
      line(x,0,x,0-y1);
      stroke(col2,75,75);
      line(x,0,x,y2);
      strokeWeight(4);
      stroke(10,25,75);
      line(0,0,1280,0);
    
    }
  }
  
  fill(0,10);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

This iteration gets the values y1 and y2 in a different manner. I have translated the sketch's origin point to halfway down the Y axis, and the values for y1 and y2 are calculated identically; note, however, that when the lines are drawn, the first line's end Y value is 0-y1. As the origin is now halfway down the screen, one set of lines can be drawn with positive values (going down from 0, 0) while the other is drawn with negative Y values (sending them up from 0, 0).
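The mirrored-line geometry can be sketched in plain Java (the names and the 310 mid-point are mine, matching the 620px-tall sketch):

```java
// A minimal sketch of the mirrored-line geometry: with the origin translated
// to mid-screen, the top line's end Y is 0 - y1 (rising above the centre)
// and the bottom line's end Y is y2 (falling below it).
public class MirrorSketch {
    static final int MID = 310; // half of the 620px sketch height

    // screen-space Y of the top line's endpoint
    static int topScreenY(int y1) {
        return MID + (0 - y1);
    }

    // screen-space Y of the bottom line's endpoint
    static int bottomScreenY(int y2) {
        return MID + y2;
    }
}
```

For equal heights the two endpoints sit symmetrically about the centre line: topScreenY(100) is 210 and bottomScreenY(100) is 410.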

The lines drawn start from the center and grow with amplitude.

The result solves a problem framed by my previous iteration – that even the quietest amplitudes displayed full-length bars. With this new method of determining the lines' heights, they now start small and grow with louder amplitudes, which more accurately reflects the audio input from a user's perspective: as the track gets louder, the lines grow and light up more.

Closer inspection of this iteration's output, however, shows some lines from the top and bottom halves of the sketch overlapping. This suggests that y1 and y2 are taking negative values, which poses a problem, as my intention is for the top half of the screen to represent one side of the stereo audio and the bottom half the other. Still, this version shows a marked improvement in appearing to visualize audio to an audience, as the visual activity and vibrancy now match the volume and complexity of the audio input.