Brief Evaluation of Design Iterations

Overall, I feel the interactive visualization piece I created with Processing worked well in the media space, with testers commenting that they liked the visual effect produced by the input of their own voices.

Taking an iterative approach to design in this project allowed me to quickly frame and solve problems as they arose, from the early stages of examining the space to coding the visualizer later in the process. Looking at the broader process and approach I took, there is a clear progression through stages from requirements gathering to prototyping and testing. Within each of these stages, however, I iterated on a smaller scale, from redesigning our posters for studying visual media in the space to prototyping my project in Processing. Each iteration revealed new problems, which I explored solutions to in the following version, then solved and tested outside of the space using Minim’s audio player function.

While the audio visualizer I designed is relatively simple, exploring Processing and Minim further could lead me to more complex ways of visualizing audio and to extracting different kinds of data from an input – for example, charting frequency values instead of amplitude over time could allow me to create a proper spectrogram within Processing. The technique could also be applied to a wide variety of audio inputs, from microphones to music tracks, or as part of an app that lets a user create their own musical audio input and then visualizes it.
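As a sketch of that frequency idea – written in plain Java rather than as a Processing sketch, with all names my own – the magnitudes of a discrete Fourier transform turn a buffer of time-domain samples into per-frequency values, which is exactly the data a spectrogram column needs. Minim offers this through its FFT class (in ddf.minim.analysis); the naive DFT below is only meant to illustrate what that transform produces.

```java
// Naive DFT magnitude spectrum: illustrates the per-frequency values a
// spectrogram column is built from. Minim's FFT class computes the same
// thing far more efficiently.
public class DftSketch {
    // Magnitude of frequency bin k for a real-valued signal
    static double magnitude(double[] signal, int k) {
        double re = 0, im = 0;
        int n = signal.length;
        for (int t = 0; t < n; t++) {
            double angle = 2 * Math.PI * k * t / n;
            re += signal[t] * Math.cos(angle);
            im -= signal[t] * Math.sin(angle);
        }
        return Math.sqrt(re * re + im * im);
    }

    public static void main(String[] args) {
        int n = 64;
        double[] signal = new double[n];
        // Pure tone at bin 5: its energy should land in that bin
        for (int t = 0; t < n; t++) signal[t] = Math.sin(2 * Math.PI * 5 * t / n);
        int peak = 0;
        for (int k = 1; k < n / 2; k++)
            if (magnitude(signal, k) > magnitude(signal, peak)) peak = k;
        System.out.println("strongest bin: " + peak); // bin 5 for this tone
    }
}
```

In a Minim sketch this would correspond roughly to calling fft.forward(player.mix) each frame and reading fft.getBand(k) for each bin, rather than computing the transform by hand.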

I have only scratched the surface, but audio visualization has many potential uses, such as in nightclubs or at electronic music events to enhance the interplay between music and lights or screens. Digitally, audio visualization is also used to provide visual accompaniment for music uploaded to video sites such as YouTube when tracks don’t have music videos. As technology progresses and more innovative forms of digital media and user input develop, audio visualization will remain applicable, as it bridges sight and hearing – the two key senses through which a media piece is perceived.


Testing in the Space

As previously described in my posts exploring the space, my Processing work was tested on publicly visible screens in the Weymouth House foyer. Testing coincided with a university open day, so the area was quite busy with a considerable amount of background noise; however, thanks to the microphone I used for testing, this did not pose much of an issue for the functionality of the visualizer.

I put my work on display for a time, allowing some of my fellow designers to test and use it.

Overall, the testing phase of my project proved very successful: the visualizer translated well to input from a microphone in the space and displayed perfectly on the screen. However, it may not have been readily apparent that the piece was interactive – the audience mainly realised it was interactive by watching me and my coursemates test it. A camera-based work would have made it more obvious that this was a piece of interactive media; because it was a visualization of audio, it was less clear at first that it was responding to sound input.

Audio Visualisation test 1 from Joe White on Vimeo.

Above is the first test of my work, as my coursemate Joel demonstrates it responding to a simple “Hello” into a microphone.

Audio Visualisation test 2 from Joe White on Vimeo.

This video shows how the sketch appears while idle: a small amount of background noise is picked up and produces a slight shimmer, but it does not interfere with talking directly into the mic.

Audio Visualisation test 3 from Joe White on Vimeo.

Tristan tests out my work here, giving the visualiser a more continuous stream of speech to respond to.

Users commented that they liked the aesthetic style of the piece (such as the red-blue contrast and the fade) and its responsiveness, which made them feel they had proper control over what was displayed on the screen through their speech – an outcome I am pleased with. Moving the project from testing on my local PC with audio tracks to a microphone on the public screen was seamless, requiring no tweaks or fixes to the code.

IMAG0004   IMAG0007   IMAG0003

Prototyping – Fourth & Final Iteration

This iteration addresses what I feel are the last of my issues with my audio visualizer, and I now have a clean, fully functional version of it.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;  // current horizontal drawing position

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);  
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{
  translate(0, 310);
  
  for (int i = 0; i < player.bufferSize () - 1; i++) {
    
    if (col1 < 255) {
      col1 = 100+int(((1+(player.right.get(i+1)))/2)*155);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 100+int(((1+(player.left.get(i+1)))/2)*155);
    } else {
      col2 = 0;
    }
  
  // scale sample values to pixel heights
  int y1 = int((player.left.get(i+1))*305);
  int y2 = int((player.right.get(i+1))*305);
  
  // turn get values into actual amplitude by preventing negative values
  if (y1 < 0) {
    y1 = -y1;
  }
  
  if (y2 < 0) {
    y2 = -y2;
  }
    
    if (i == 0) {
      
      strokeWeight(5);
      
      //top line
      stroke(col1,0,0);
      line(x,0,x,0-y1-10);
      
      //bottom line
      stroke(0,0,col2);
      line(x,0,x,10+y2);
      
      //separator
      strokeWeight(2);
      stroke(0);
      line(0,0,1280,0);
    
    }
  }
  
  //Fade lines into background
  fill(0,15);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

This iteration has only one notable change from the previous version: the addition of two “if” conditions that flip any negative values of y1 or y2 to their positive counterparts. While this means that -0.5 returns the same value as 0.5, this reflects the nature of amplitude – the peak of a wave will be extremely similar in value, if not identical, to its trough – so by treating positive and negative samples identically, the visualizer now displays the actual amplitude of the audio input.
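The rectification step can be sketched on its own (plain Java rather than the Processing sketch, helper name mine). Note that Java's int cast truncates toward zero, which is why ±0.5 land on exactly the same height:

```java
public class Rectify {
    // Mirror negative sample values so -0.5 and 0.5 both map to the
    // same displayed height: amplitude is a magnitude
    static int toHeight(float sample, int scale) {
        int y = (int) (sample * scale);
        if (y < 0) y = -y;
        return y;
    }

    public static void main(String[] args) {
        System.out.println(toHeight(0.5f, 305));  // 152
        System.out.println(toHeight(-0.5f, 305)); // 152 — same height
        System.out.println(toHeight(0.0f, 305));  // 0 — silence draws nothing
    }
}
```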

Final iteration.

Due to this change I also removed the addition of 1 in the y1 and y2 calculations, as all values now fall on a scale of 0 to 1 anyway. The result is a visualizer that scales accurately with the volume and intensity of the audio input, and while it may not truly represent the raw player.left/right.get values, the audio and visual elements appear more synchronized from an audience’s point of view.

Now that the visualizer is fully functional, I have prepared my final, testing-ready prototype by changing the form of audio input from Minim’s audio player to its audio input (to take sound from a microphone). The final code, to be used in my testing, is shown below:

import ddf.minim.*;

Minim minim;
AudioInput in;

int col1 = 100;
int col2 = 100;
int x;  // current horizontal drawing position

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);
  in = minim.getLineIn();
}

void draw()
{
  translate(0, 310);
  
  for (int i = 0; i < in.bufferSize() -1; i++) {
        
    if (col1 < 255) {
      col1 = 100+int(((1+(in.right.get(i+1)))/2)*155);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 100+int(((1+(in.left.get(i+1)))/2)*155);
    } else {
      col2 = 0;
    }

    // scale sample values to pixel heights
    int y1 = int((in.left.get(i+1))*305);
    int y2 = int((in.right.get(i+1))*305);

    // turn get values into actual amplitude by preventing negative values
    if (y1 < 0) {
      y1 = -y1;
    }

    if (y2 < 0) {
      y2 = -y2;
    }

    if (i == 0) {

      strokeWeight(5);

      //top line
      stroke(col1,0,0);
      line(x,0,x,0-y1-10);

      //bottom line
      stroke(0,0,col2);
      line(x,0,x,10+y2);

      //separator
      strokeWeight(2);
      stroke(0);
      line(0,0,1280,0);
    
    }
  }
  
  //Fade lines into background
  fill(0,15);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

More detailed pictures and videos of my project in the space will be taken during the testing phase to show how the visualizer actually reacts to audio input – recording video output through Processing itself has so far caused frame rate issues, as each frame is saved as a .tiff file.

Prototyping – Third Iteration

The third version of my visualizer addresses the previously mentioned issue, that the top and bottom lines were overlapping into the wrong sides of the screen.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;  // current horizontal drawing position

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{
  translate(0, 310);
  
  for (int i = 0; i < player.bufferSize () - 1; i++) {
    
    if (col1 < 255) {
      col1 = 100+int(((1+(player.right.get(i+1)))/2)*155);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 100+int(((1+(player.left.get(i+1)))/2)*155);
    } else {
      col2 = 0;
    }
  
  // map -1..1 sample values onto 0..1, then scale to pixel heights
  int y1 = int(((1+(player.left.get(i+1)))/2)*305);
  int y2 = int(((1+(player.right.get(i+1)))/2)*305);
    
    if (i == 0) {
      
      strokeWeight(5);
      
      //top line
      stroke(col1,0,0);
      line(x,0,x,0-y1);
      
      //bottom line
      stroke(0,0,col2);
      line(x,0,x,y2);
      
      //separator
      strokeWeight(2);
      stroke(0);
      line(0,0,1280,0);
    
    }
  }
  
  //Fade lines into background
  fill(0,15);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

The changes in this iteration keep the left and right audio values to their own side of the screen. As the lines were previously drawn using the same colors, I changed the top lines to be predominantly red and the bottom ones blue. The shade and brightness are still determined by the opposite side’s value, but keeping the two sides’ colors separate made it clearer during testing whether the overlapping issue was solved – additionally, I felt the vivid red against dark blue had a nicer contrasting aesthetic.

When working on this iteration I realized that the values returned by player.left.get and player.right.get are on a scale of -1 to 1. The value returned is not technically the amplitude but a point on the actual audio wave, with a silent input returning 0; the amplitude, strictly speaking, is the difference between the peak and trough of the wave. To account for this, I tweaked the way col1, col2, y1 and y2 are calculated. In this version, adding 1 to each player.left/right.get value shifts the scale from -1..1 to 0..2, and dividing by 2 then brings the values onto a scale of 0 to 1.
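Pulled out of the sketch, the remapping arithmetic looks like this (plain Java, helper name mine):

```java
public class Remap {
    // Shift a sample from Minim's -1..1 range onto 0..1:
    // add 1 (giving 0..2), then halve
    static float toUnitRange(float sample) {
        return (1 + sample) / 2;
    }

    public static void main(String[] args) {
        System.out.println(toUnitRange(-1f)); // 0.0 — full trough
        System.out.println(toUnitRange(0f));  // 0.5 — silence sits mid-scale
        System.out.println(toUnitRange(1f));  // 1.0 — full peak
    }
}
```

Note that silence (a sample of 0) lands at 0.5 on this scale rather than at 0.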

third iteration
A new aesthetic shows the top and bottom lines keep to their own sides.

This iteration has resulted in a much more refined, functional audio visualizer with, in my opinion, a more aesthetically appealing design. The top and bottom halves of the screen now draw their own amplitude waves over time without crossing over. However, with the player.left/right.get values now mapped onto 0 to 1, the baseline for a quiet audio wave sits at 0.5 rather than 0, as negative samples fall between 0 and 0.5. Converting these values into a true representation of amplitude is something to address in the next iteration.

Prototyping – Second Iteration

I have now developed the second iteration of my audio visualizer, with changes to the way the lines draw input from the audio and reflect the sound being played.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;  // current horizontal drawing position

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);  
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{

  translate(0, 310);
  
  for (int i = 0; i < player.bufferSize () - 1; i++) {
    
    
    if (col1 < 255) {
      col1 = 75+int(player.right.get(i)*255);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 75+int(player.left.get(i)*255);
    } else {
      col2 = 0;
    }
  
  int y1 = int(player.left.get(i)*305);
  int y2 = int(player.right.get(i)*305);
    
    if (i == 0) {
      
      strokeWeight(5);
      stroke(col1,75,75);
      line(x,0,x,0-y1);
      stroke(col2,75,75);
      line(x,0,x,y2);
      strokeWeight(4);
      stroke(10,25,75);
      line(0,0,1280,0);
    
    }
  }
  
  fill(0,10);
  noStroke();
  rect(0,-310,width,height);
  
  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }
  
}

void mousePressed (){
  background (0);
}

This iteration gets the values y1 and y2 in a different manner: I translated the sketch’s origin point halfway down the Y axis with a translate call at the start of draw, and calculated y1 and y2 identically, but when the lines are drawn, the first line’s end Y value is 0-y1. As the origin is now halfway down the screen, one set of lines can be drawn with positive values (going down from 0, 0) and the other set with negative Y values (sending them up from 0, 0).
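The mirrored-endpoint arithmetic can be isolated like this (plain Java, names mine): equal heights extend symmetrically above and below the translated origin.

```java
public class Mirror {
    // With the origin translated to mid-screen, the top line's end Y is
    // negated (0 - y1) so the same height extends upward instead of down
    static int topEndY(int y1)    { return 0 - y1; }
    static int bottomEndY(int y2) { return y2; }

    public static void main(String[] args) {
        System.out.println(topEndY(150));    // -150 — drawn upward from the centre line
        System.out.println(bottomEndY(150)); // 150 — drawn downward
    }
}
```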

The lines drawn start from the center and grow with amplitude.

The result appears to solve a problem framed by my previous iteration – that the quietest amplitudes displayed full-length bars. With this new method of determining the lines’ heights, they now start small and grow at louder amplitudes, which more accurately reflects the audio input from a user’s perspective: as the track gets louder, the lines grow and light up more.

Closer inspection of this iteration’s output, however, shows some lines from the top and bottom halves of the sketch overlapping. This suggests that y1 and y2 can still take negative values, which poses a problem, as my intention is for the top half of the screen to represent one side of the stereo audio and the bottom half the other. Still, this version is markedly better at appearing to visualize audio to an audience, as the visual activity and vibrancy now match the volume and complexity of the audio input.

Prototyping – First iteration of audio visualizer

Over the previous months I have learned to use Processing to visualize data, using techniques such as for loops, conditionals, and integers to create visual output. I have also learned to use Minim, the audio library included with Processing, at a basic level.

Building on my analysis and what I have learned of Processing thus far, I have created the first iteration of what I intend to be my final project – an audio visualizer built with Processing’s Minim Library.

import ddf.minim.*;

Minim minim;
AudioPlayer player;

int col1 = 100;
int col2 = 100;
int x;  // current horizontal drawing position

void setup()
{
  frameRate(30);
  size(1280, 620, P3D);
  background(0);
  smooth();
  x = 6;
  minim = new Minim(this);
  player = minim.loadFile("time.mp3");
  player.loop();
}

void draw()
{

  for (int i = 0; i < player.bufferSize () - 1; i++) {

    if (col1 < 255) {
      col1 = 75+int(player.right.get(i)*255);
    } else {
      col1 = 0;
    }

    if (col2 < 255) {
      col2 = 75+int(player.left.get(i)*255);
    } else {
      col2 = 0;
    }

  int y1 = (int(player.left.get(i)*305));
  int y2 = 620-(int(player.right.get(i)*305));

    if (i == 0) {
      stroke(col1,75,75);
      line(x,308,x,y1);
      stroke(col2,75,75);
      line(x,312,x,y2);

    }
  }

  strokeWeight(5);

  fill(0,10);
  noStroke();
  rect(0,0,width,height);

  if (x < width) {
    x = x + 6;
  } else {
    x = 6;
  }

}

void mousePressed (){
  background (0);
}

My final project will be interactive, as the audio values will be based on user input; for now, however, I am using Minim’s audio player function to take input from a preset audio track rather than a microphone. This lets me develop iterations without requiring microphone testing each time.

first iteration
The height and colors of the lines are determined by sound.

My main goal with this iteration was to get the visual elements created by Processing responding to the audio input, creating integers y1, y2, col1 and col2 whose values are determined by the player.left.get and player.right.get functions of the Minim library. These two functions return values for the amplitude of whatever audio input has been provided, be it an audio track or live input, and respond to stereo audio – left and right channels.
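To make the colour mapping concrete, here is that arithmetic on its own (plain Java, helper name mine). One thing worth noting: negative samples push the result below zero, and as far as I know Processing constrains out-of-range colour components into 0–255 when stroke() is called, so those values simply render as black.

```java
public class ColorMap {
    // First-iteration colour arithmetic: channel value = 75 + sample*255
    static int brightness(float sample) {
        return 75 + (int) (sample * 255);
    }

    public static void main(String[] args) {
        System.out.println(brightness(0f));    // 75 — dull colour at silence
        System.out.println(brightness(0.7f));  // 253 — near full brightness when loud
        System.out.println(brightness(-0.5f)); // -52 — out of range; drawn as 0
    }
}
```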

The visual elements are a series of lines drawn across the screen using a for loop, displaying the audio input’s values chronologically like a wave travelling across the screen – the top half of the screen draws lines based on the left audio and the bottom half on the right. This results in an aesthetic similar to a spectrogram, though displaying amplitude over time rather than by frequency. As the lines repeat from the start of the screen, I also put in a fade effect – a translucent rect drawn over everything each frame – to gradually fade out previously drawn lines, which is also visually pleasing. Finally, I lowered the frame rate to 30fps, as the sketch felt too fast-paced at the full 60fps, not giving much time to see how the graph reacts to the sound input.

My next iterations will need to examine in greater detail how my code responds to the audio input – curiously, quieter audio still displays full-length lines (though their colors remain dull) – and refine the visualizer to react more accurately to what the audience perceives from the audio. The values drawn from player.left.get and player.right.get may not reflect the actual sound.
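Plugging sample values into this first iteration's endpoint arithmetic shows where those full-length lines come from (plain Java, helper name mine):

```java
public class LineLength {
    // First-iteration endpoint maths for the bottom half of the 620px-tall
    // sketch: the line runs from y=312 down to y2 = 620 - sample*305
    static int bottomLength(float sample) {
        int y2 = 620 - (int) (sample * 305);
        return y2 - 312;
    }

    public static void main(String[] args) {
        System.out.println(bottomLength(0f));   // 308 — silence spans the whole half
        System.out.println(bottomLength(0.9f)); // 34 — loud samples draw SHORTER lines
    }
}
```

At silence the line covers its entire half of the screen, and louder samples actually shorten it – the inverted behaviour that the origin-translation approach of the second iteration goes on to fix.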

References

Minim documentation – http://code.compartmental.net/minim/index.html

Analysis – Audio Visualization

Based on my understanding of the space and its audience, and interactive displays, I have decided to focus on audio visualization, and synergy between audio and visual media, as the concept for my final piece.

As the space is in a university, the primary demographic moving through the Weymouth house foyer is media students, likely to take an interest in digital art and electronic music themes. My goal is to create a piece which will draw their attention and is not overly complex, but still allows users to have interaction with a visual output through audio.

Kosara (2008) describes key characteristics of visualization: “the visual must be the primary means of communication”, and “the most important criteria is that the visualization must provide a way to learn something about the data … there must be at least some relevant aspects of the data that can be read. The visualization must also be recognizable as one and not pretend to be something else”. These will be factors to consider when developing my project, a strong relationship between the audio and the visual elements of my piece, and that the interaction between the two is clearly recognizable to the user.

Music visualization is often made vivid and artistic in correlation to the audio input. Several examples gathered on the blog Visual Complexity (2010) are shown below:

“The Shape of Song” by Martin Wattenberg
“Narratives 2.0” by Matthias Dittrich

My work will likely not reach the above level of complexity, but these are examples of the beautiful results that visualizing audio can achieve. With the right setup and design, even a simple audio input such as speech should produce a pleasing effect, made doubly satisfying by the synergy between the audio and visual elements of the piece – audiovisual media is effective because the two sensory outputs complement and enhance each other. The video below is an example of a physical audiovisual installation by media group AntiVJ, and exemplifies why I am interested in this concept:

The key to making my project successful will be to combine this concept with interactivity, allowing users direct control over the audio input and therefore the visuals on the screen. Synergy between audible and visual elements is already satisfying; giving a user control over it too should amount to an engaging interactive display.

 

References

Kosara, R., 2008. What is Visualization? A definition. eagereyes [online]. Available from: https://eagereyes.org/criticism/definition-of-visualization

Lima, M., 2010. Music Visualization: Beautiful Tools to “See” Sound. Visual Complexity [online]. Available from: http://www.visualcomplexity.com/vc/blog/?p=811

Wattenberg, M., n.d. The Shape of Song [online]. Available from: http://www.turbulence.org/Works/song/

Dittrich, M., n.d. Narratives 2.0 [online]. Available from: http://www.matthiasdittrich.com/projekte/narratives/visualisation/index.html

AntiVJ, 2013. The Ark. AntiVJ [online]. Available from: http://antivj.com/theark/