As previously described in my posts exploring the space, my Processing work was tested on the publicly visible screens in the Weymouth House foyer. A university open day was running that day, so the area was busy and there was considerable background noise; however, the microphone I used for testing meant this did not pose much of an issue for the visualizer's functionality.
I put my work on display for a time, allowing some of my fellow designers to test and use it.
Overall, the testing phase of my project proved very successful: the visualizer translated well to live microphone input in the space and displayed perfectly on the screen. However, it may not have been readily apparent that the piece was interactive – the audience mainly realised it was by watching me and my coursemates testing it. A camera-based work would have made its interactivity more obvious; because this piece visualizes audio, it was less immediately clear that it was responding to sound input.
Above is the first test of my work, in which my coursemate Joel demonstrates it responding to a simple “Hello” spoken into the microphone.
This video shows how the sketch appears while idle: a small amount of background noise is picked up and produces a slight shimmer, but this does not interfere with talking directly into the mic.
Tristan tests out my work here, giving a longer, continuous stream of speech for the visualizer to respond to.
Users commented that they liked the aesthetic style of the piece (such as the red–blue contrast and the fade) and its responsiveness, saying it made them feel they had genuine control over what was displayed on the screen through their speech, which is an outcome I am pleased with. Moving the project from testing on my local PC with audio tracks to a microphone on the public screen was seamless, requiring no tweaks or fixes to the code.
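The seamless move from file playback to live microphone input makes sense if the sketch only consumes an amplitude level, since both sources deliver the same kind of sample buffers. As a hypothetical illustration (this is my own plain-Java sketch, not the project's actual code; the class name, noise floor, and smoothing values are invented), the idle "shimmer" behaviour described above could come from damping readings below a noise floor before smoothing:

```java
// Hypothetical amplitude follower: both an audio file and a live mic
// produce buffers of samples, so the same analysis drives the visuals
// regardless of source. Thresholds here are illustrative only.
public class AmplitudeFollower {
    private double level = 0.0;      // smoothed amplitude in [0, 1]
    private final double noiseFloor; // ambient sound below this only shimmers
    private final double smoothing;  // in [0, 1]; higher = smoother response

    public AmplitudeFollower(double noiseFloor, double smoothing) {
        this.noiseFloor = noiseFloor;
        this.smoothing = smoothing;
    }

    /** Root-mean-square loudness of one buffer of samples in [-1, 1]. */
    static double rms(double[] samples) {
        double sum = 0.0;
        for (double s : samples) sum += s * s;
        return Math.sqrt(sum / samples.length);
    }

    /** Feed one buffer; returns the smoothed level driving the visuals. */
    public double update(double[] samples) {
        double raw = rms(samples);
        // Heavily attenuate readings under the noise floor, so idle
        // background noise yields only a slight shimmer, while speech
        // directly into the mic passes through at full strength.
        double shaped = (raw < noiseFloor) ? raw * 0.2 : raw;
        level = smoothing * level + (1.0 - smoothing) * shaped;
        return level;
    }
}
```

With this structure, swapping the input source changes only where the sample buffers come from, which would explain why no code changes were needed when moving to the public screen.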