Tuesday, December 12, 2006

Final Blog : Headbanger's Ball

This blog will cover the conception, development, and completion of my final physical computing project, entitled 'Headbanger's Ball'.


Concept

The concept behind 'Headbanger's Ball' is a stress relief device in the form of a boxing pad attached to the wall (in a public or semi-public space) that, when hit (preferably with the head), will insult the user with a variety of pre-recorded audio samples. The user presumably hits the device until they find the insults funny, or until their rage has subsided.

The user then has the option of recording an insult or joke of their own into the array of audio samples, so that future users get the benefit of their wisdom, and so the initial user has an incentive to come back and see their contribution in practice.

The idea was conceived while I was struggling with a particular part of a project; so much so that I wanted to bang my head against the wall - and therein lies the epiphany.


Design

The physical structure of 'Headbanger's Ball' is relatively simple; two thin pieces of plywood, one with four buttons at the corners, are inserted into a boxing practice pad. This way, a hit anywhere on the surface of the pad will trigger at least one of the buttons, all of which are connected to the same digital input pin on the microcontroller (thus, a hit triggering one of the buttons is as good as a hit triggering all four).
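Electrically, wiring all four buttons in parallel to one input pin just means the pin sees the logical OR of the four switches. A trivial model of that idea in C (the function name is mine, not something from the actual circuit):

```c
#include <stdbool.h>

/* Four corner switches wired in parallel to one input pin:
   the pin reads HIGH if any one of the contacts is closed. */
bool padHit(bool b1, bool b2, bool b3, bool b4) {
    return b1 || b2 || b3 || b4;
}
```

So the microcontroller code never has to care which corner got hit - one digital read covers the whole pad.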

The pad is sheltered in a wooden frame (made lovingly by myself). The frame is meant to limit the pad's movement along the X and Y axes, and give some extra support along the Z axis. The frame also has hooks on the top to facilitate the mounting process.

The wires from the buttons come through a hole cut in the back of the frame, so as to minimize the stress on the wiring. They go from the back of the frame into the case containing the breadboard, microcontroller, and the mechanism for recording your own vocal additions.

Click on the link below for a simple diagram of the Headbanger's Ball:


http://itp.nyu.edu/~blm272/PComp/HBB_Diagram.pdf
(copy and paste the above if the other one doesn't appear)


The programming, in theory, wasn't that difficult, though I personally had a few stumbling blocks along the way, which made this the most challenging part of the project, next to refining the hit response.

In sum, I created an ArrayList of audio samples using the Sonia library. I then prerecorded and preloaded about 20 samples to populate the array. The tricky part was writing code that would record a new sample based on a change in the incoming serial data and put it in the correct spot in the array. Obviously, the problem was eventually solved.

Here's a copy of the Processing code, just for kicks (and in the name of open source):

//Headbanger's Ball
//Ben Leduc-Mills
//Final Project, Fall 2006

import processing.serial.*;
import pitaru.sonia_v2_9.*; // import necessary libraries
import java.util.ArrayList;

ArrayList allSamples = new ArrayList(); //declare array list


int whichSample; //var for sample that is playing
int inByte; // var to read bytes from serial port
int sampleTime = 5; //max length of live sample in seconds
int preLoadedSamples = 23; //one more than the # of preloaded samples

Serial myPort;
Sample mySample;
Sample myLiveSample;


void setup(){

size(512,200);

Sonia.start(this); // Start Sonia engine.
LiveInput.start(); // start Live Input engine

println(Serial.list());
myPort = new Serial(this, Serial.list()[0], 9600);

//loop through and load all the preloaded samples into the array list
for (int i = 1; i < preLoadedSamples; i++) {
Sample mySample = new Sample("file" + i + ".wav");
allSamples.add(mySample);
}

myPort.write(65); //send an "A" out the serial port
}

void draw(){

//read the most recent byte from the serial port
while (myPort.available() > 0) {
inByte = myPort.read();
println(inByte);
}
delay (100);

//play the next sample in the array on a hit
if (inByte == 49){
println("play");
Sample currentSample = (Sample) allSamples.get(whichSample);
currentSample.play();
delay (1000);
whichSample++;
if (whichSample >= allSamples.size()){
whichSample = 0;
}
inByte = 0; //clear the byte so the sample plays once per hit
}
//record a new sample when the button is pushed
if (inByte == 51){
println("record");
myLiveSample = new Sample (44100 * sampleTime);
LiveInput.startRec(myLiveSample);
delay (5000); //record for five seconds
LiveInput.stopRec(myLiveSample);
myLiveSample.setVolume(7); //adjust volume
myLiveSample.play();
allSamples.add(myLiveSample);
inByte = 0; //clear the byte
}

}

/*void keyPressed(){
println("play");
//on mouseclick, play current sample in arraylist, wait 1 second, cue next sample, wait for next click
Sample currentSample = (Sample) allSamples.get(whichSample);
currentSample.play();
delay(1000);
whichSample++;
//if we get to the end, start over
if (whichSample >= allSamples.size()){ //allSample.size (returns # of things in array)
whichSample = 0;
}
}*/

/*void mousePressed(){
println ("recording");
myLiveSample = new Sample (44100 * sampleTime); //five seconds worth of sample @ 44100 rate
LiveInput.startRec(myLiveSample); // start recording
delay (1000); //wait
}

void mouseReleased(){
println ("stopped");
LiveInput.stopRec(myLiveSample); //stop recording on key (button) released
delay(200);
myLiveSample.play();
allSamples.add(myLiveSample);

}*/




// Safely close the sound engine upon Browser shutdown.
public void stop(){
Sonia.stop();
super.stop();
}


Conclusions

I was quite happy overall with the way this project turned out. Apart from the frame being less sturdy than I had hoped, the actual circuit interface took a lot of abuse and still works, an aspect integral to the cathartic success of the project.

I am interested in pursuing some further work on this project, specifically in two main areas: refining the response actions, and refining the structural interface.
Ideally, I think a variety of responses based on things like how hard the hit was, the rapidity of the hits, and even proximity sensing might enhance the user experience tremendously. In addition, I am interested in looking into alternate (or at least more stable) housing for the project. Perhaps a correctly weighted stand could allow the pad to be disconnected from the wall. There is also a myriad of possibilities concerning the actual look of the pad: covering it with a mask, encasing it in a life-sized dummy, embedding or hiding it in a wall, and turning the pad back into a handheld portable device are all ideas that seem to have some merit. All in all, this was a very rewarding process and I look forward to continuing it.

Thursday, October 26, 2006

Midterm (Parts I-VIII, Abridged)

Kazoodles : Networked Interactive Kazoo Trio


I-III. The process (Brainstorming to Forming)

Let me first say, even in hindsight, that it was wonderful working with Chris and Meredith; we were able to discuss, delegate, and disagree without much friction or wasted time. We settled on a general area of interest almost right off the bat: updating some kind of musical instrument into the digital age. We were also able to decide with some conviction on the kazoo, as it was possibly the most un-modern, un-digital instrument we could think of. Plus, kazoos are fun.

It took us considerably longer to refine our ideas into something that was both coherent in vision and limited enough in scope that we could at least conceive of a process to complete it. Thus, we settled on the idea of a multiple-kazoo orchestra that would somehow interact with a patch in MAX/MSP to make sounds.

From there we had several more questions to answer, some of which we struggled with throughout most of the development of the project, such as:
Should the sounds be pretty, or artsy?
What sort of sensors should we put in the kazoos? (What will fit, give the most dynamic but predictable readings, be relatively cheap, etc.)
What will the kazoos look like when they're done?
How do we ensure that the user will understand what they can/should/should not do?
How in the hell do we get MAX and Arduino to understand each other?

And so, we embarked.

IV-VII. Design, Develop, Debug, Repeat.

We decided to approach the development by divide and conquer. Meredith was in charge of the actual wiring, and the look and feel of the kazoos. Chris and I, being musicians, worked on getting the Arduino to talk to MAX, and on getting MAX to understand and sort the data it was receiving. As Chris was the only one with previous experience in MAX, and I felt more comfortable writing the Arduino code, we ended up more or less splitting the duties that way, although we all made substantial contributions to all facets of the project.

Our first main challenge was to get MAX and Arduino talking to each other. Although we had not yet made a final decision on which sensors we were going to use, we knew that they would be analog in nature, to give a more sensitive reflection of musical input. We decided to do most of the development testing using potentiometers, as they were ready at hand and could give fairly consistent readings over the whole of the possible value range we were looking at (i.e. 0-1023).

Using the serial object in MAX, we were able to read a stream of data coming in from the pot. However, all we were receiving was a nonsensical loop of numbers. We then (after some time) realized not only that we needed to send the data over in BYTE form instead of DEC, but that MAX would only be able to use values of 1-255. In other words, we had to scale down the output values from Arduino. Moreover, we had to find a way to help MAX recognize the split between data streams, since we were using multiple analog inputs, all with the same possible ranges, and we wanted each kazoo to control a different aspect of one main sound wave (programming the kazoos to harmonize with each other reliably would've taken a few more weeks of work).

Our solution was to scale: we cut the number of analog kazoos down to two; thus we could divide the acceptable range of values (0-255) into two distinct streams (0-127 and 128-255), which would then be easy to split in MAX. Easy to split in MAX means easy to manipulate distinct aspects of the sound.
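The scaling arithmetic, pulled out into plain C so it can be checked on its own (the function names are mine; the actual Arduino code is further down):

```c
/* Map a raw 0-1023 analog reading into kazoo 1's stream.
   Readings above 127 get compressed by 8, so the whole
   range lands in roughly the lower half of a byte. */
int scaleKazoo1(int raw) {
    if (raw > 127) return raw / 8 + 1;  /* 128-1023 -> 17-128 */
    return raw + 1;                     /* 0-127    -> 1-128  */
}

/* Map a raw reading into kazoo 2's stream, offset by 128
   so MAX can tell the two kazoos apart. */
int scaleKazoo2(int raw) {
    if (raw < 128) return raw + 128;    /* 0-127    -> 128-255 */
    return raw / 8 + 128;               /* 128-1023 -> 144-255 */
}
```

Since the two functions can never produce overlapping values (other than at the 128 boundary), a simple split object in MAX can route each stream to its own parameter.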

Since we had settled on the setup of a master kazoo (equipped with a small mic) controlling the main pitch, with the two analog kazoos changing the wave, we decided to adapt a classic vocoder patch in MAX. (A vocoder is basically a voice-distortion effect.) We were then able to hook up each analog data stream to one of the aspects of the vocoder patch. This worked technically, but resulted in totally random and mostly unlistenable noise.

So, we had to find a way to limit the possible sound variations that the sensors could produce. After a few tries, Chris came up with the idea of taking a few (three) presets that we liked, setting one as the default (i.e. no analog input), and the other two presets as the possible range of the two analog sensors. This required a lot of further setup, and our MAX patch was starting to look pretty intimidating (which I rather liked, since I kind of understood it), but we were able to do it, and so we were ready to go, after some fine tuning.

As for the Arduino side of it, we had relatively little trouble getting the code set up to test out the MAX patch with potentiometers. We had a significantly longer coding process after we switched from pots to thermistors and added PWM'ed LEDs to each kazoo.

After some debugging, both with the code and with the breadboard, the kazoos were working, at least technically. The issue was that the thermistors were not responding as we had predicted to human breath. Actually, they weren't responding predictably to anything. Our mistake for not testing the sensors before installing them in the kazoos. Fortunately, we were able to further limit the MAX patch to allow for some fairly random input from the thermistors, without descending into total chaos.

Here's the final code, in case you're wondering:

int potPin = 0; // Analog input pin that the pot (therm) is attached to
int potValue = 0;
int potPin2 = 1;
int potValue2 = 0;

int led1 = 11; // w/ analog in 0
int led2 = 10; // w/ analog in 1
int led3 = 9; // for mic led


void setup() {

// initialize serial communications at 9600 bps:
Serial.begin(9600);
}

void loop() {
potValue = analogRead(potPin); // therm 1, raw reading (0-1023)
analogWrite (led1, potValue/4); //light led 1 w/ the therm

// Serial.println (potValue, DEC); // print raw therm 1 value for debugging

//scale therm 1 into the lower stream (1-128) for MAX
if (potValue > 127) {
potValue = (potValue / 8) + 1;
}
else {
potValue = potValue + 1;
}

Serial.print (potValue, BYTE); // send therm 1 value

potValue2 = analogRead(potPin2); // therm 2, raw reading (0-1023)
analogWrite (led2, potValue2/4); //light led 2 w/ the therm

// Serial.println (potValue2, DEC); // print raw therm 2 value for debugging

//scale therm 2 into the upper stream (128-255) for MAX
if (potValue2 < 128) {
potValue2 = potValue2 + 128;
}
else {
potValue2 = (potValue2 / 8) + 128;
}

Serial.print (potValue2, BYTE); // send therm 2 value

delay (10);
}



The only remaining hurdle was to get it all working at the same time, something we had continually struggled with in one form or another throughout the project. We were especially having trouble with MAX and Arduino fighting over the serial port and crashing the Macs in the audio lab. We spent a lot of time rebooting and attempting (unsuccessfully, as noted in our presentation) to figure out a reliable protocol for uploading and playing with both a mic in and the serial port engaged.

We were lucky enough most of the time to keep going with the project, until we had a sound that we were all happy with. So, all we could do was cross our fingers.

Part VIII. Conclusions

While this was a time-intensive project all around, and I think we came a long way, especially in getting a reliable parsing of information from Arduino into MAX, there are a few things I think we should have done differently.
1. Test your sensors. Duh.
2. I think we spent too much time worrying about the actual sound, and not enough time on whether we could consistently get it to work. Our first priority once all the pieces were in place should have been to ensure that we could get a reliable result. If you take a great picture and don't develop the film, it's not such a great picture, is it?
3. Although this is relative to how much time we had, I think we should have done more educated research into what we were doing. A little digging around in the beginning would have saved us a lot of time and frustration in the end.

Having said all of that, I am happy with the way things came out. Were it not for the serial issues, we indeed achieved our goal, which, considering we had almost no clue as to how we were going to do it, is a pretty good accomplishment. I also learned quite a bit, especially since this was my first time dealing with MAX/MSP.
All in all, I'm very happy with it, and very happy to move on.

Sunday, September 24, 2006

Lab: Combination Lock & Luv Meter

I decided to combine these assignments, both as a challenge and as a time-saving device. What I came up with was a device with two photocells and two LEDs; the LEDs would switch from 'not' to 'hot' only when a certain (but different) range was returned on both photocells at the same time.

The code looks like this:

#define analogPin1 0 //photocell 1
#define analogPin2 1 //photocell 2
#define digitalPin1 3 //led 1
#define digitalPin2 4 //led 2

int analogInVar1;
int analogInVar2;

void setup (){
pinMode(digitalPin1, OUTPUT); //led pins are outputs
pinMode(digitalPin2, OUTPUT); //(the analog pins don't need a pinMode)
Serial.begin(9600);
}
void loop (){
analogInVar1 = analogRead(analogPin1);
analogInVar2 = analogRead(analogPin2);

Serial.print("Photocell 1 value: "); //get values
Serial.println(analogInVar1, DEC);
Serial.print("Photocell 2 value: ");
Serial.println(analogInVar2, DEC);

if (analogInVar1 < 30 && analogInVar2 < 800){
digitalWrite (digitalPin1, HIGH);
digitalWrite (digitalPin2, LOW);
}
else{
digitalWrite (digitalPin1, LOW);
digitalWrite (digitalPin2, HIGH);
}
}

Here are some pictures:


The Interface


The Circuit

So from the code you can see that the wiring setup is fairly easy: the photocells go to analog pins 0 and 1, and the LEDs go to digital pins 3 and 4. The if statement sets the key to the lock - photocell 1 must read < 30 and photocell 2 must read < 800 (I had wildly variant readings from the two photocells) for the LEDs to switch.
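One fix I'd try for the wildly variant readings (not something I implemented, and the calibration numbers below are made up): record each photocell's min and max under the room's lighting, then map both onto a common 0-100 scale, so the lock can use comparable thresholds for each:

```c
/* Map a raw reading onto 0-100 given that cell's observed
   min (lo) and max (hi), clamping anything out of range.
   lo and hi would come from a quick calibration pass. */
int normalizeCell(int raw, int lo, int hi) {
    if (raw < lo) raw = lo;
    if (raw > hi) raw = hi;
    return (raw - lo) * 100 / (hi - lo);
}
```

With both cells normalized, the key could be something like "cell 1 below 20 and cell 2 below 80" regardless of how different the raw ranges are.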

Monday, September 18, 2006

Observation Assignment (Parts 1 & 2)

I did both parts of the observation assignment at the Astor Place Starbucks, looking first at all visible cell phone and laptop interactions, then in depth at the cell phone interactions. Here is a general layout of the Starbucks and the places at which a cell phone, a laptop, or both were used:



Although I was there for over an hour, not a single laptop user closed up and went home, so every laptop on the diagram represents at least an hour's worth of interaction, be it more or less constant. Since I saw no opening or closing of laptops, no setup or breakdown, most of the visible interactive cues were indistinguishable from one another, falling into three simple categories:

1.) Typing (placing the hands on the horizontal extension of the machine and pressing down with some frequency)

2.) Mouse (moving of the right or left hand while clutching a palm-sized device; also involves the pressing down of digits, most noticeably the index finger of the operating hand)

3.) Optical (interaction with the vertical part of the machine, mostly achieved with the eyes, and sometimes ears; accompanied often, but not always, by one or both of the aforementioned types of interactivity)

These various categories of interaction changed and combined with such high frequency and irregularity that timing the individual actions proved impossible.

As cell phone interactions seemed to allow for a more diverse and easier-to-specify interaction set, I chose these devices to serve as my in-depth observations.

I chose to break down the cell phone interactions into several categories as well:

1.) Phone-to-ear (those interactions in which the user held the phone up to their ear)

2.) Phone-to-finger (those interactions in which the user pressed buttons on the phone)

3.) Phone-to-eye (those interactions in which no buttons were pressed, and phone was not held up to ear, but phone was clearly the focus of the user)

On the diagram below, I have separated the cell phone actions into these categories, along with the frequency of the various interactions:



I also tallied one other category, which I dubbed 'transient' interactions: those I witnessed from people either taking orders to go, or standing in line who then sat somewhere hidden from my view.

Of these, I witnessed:
PE: 4
PF: 6
PI: 9

Without going too far into the interpretation of the actions, it may be useful to comment that all of the 'transient' interactions occurred while the user was standing, whilst all other noted interactions were done while the user was sitting.

As best as I could calculate, the average duration of the various forms of interaction (transient interactions included) was thus:

PE: 8.5 Minutes
PF: 3.0 Minutes
PI: < 1 Minute

Without making too many assumptions regarding the exact usage of each type, it would seem as though the PE method was used when the greatest amount of time was needed to complete the interaction, involved the most constant interaction with the device, and thus was used when large amounts of information needed to be input to the device.

The PF method was used for a much shorter amount of time on average, implying that the amount and complexity of input was less than that of the phone-to-ear method. The frequency of user input was also much more dispersed in most cases.

Finally, the PI method involved the least and most basic input (none), as well as the shortest interaction time. We might conclude then, that the information gained or exchanged in these interactions is of a very simple variety.

As for the ease with which the device was used, it appeared that all subjects were quite familiar with the layout of their cell phone, regardless of the type of interaction that was occurring. Although no 'failed' interactions were noted, subjects of the PF variety seemed to show the most distress either before or after their interaction. PE users sometimes showed negative reactions, though no notable change in their interaction with the device occurred. PI users, perhaps due to the short span of their interactions, did not typically display any significant reaction during or after their cell phone use.

All subjects were also, quite interestingly, acutely aware of where they kept the device when not in use. A fair number of seated subjects (six) actually kept their phone on the table whether it was in use or not. Of the transient users, an additional five users kept their phone in hand throughout my observation of them (notably shorter for the transient group).

In summary, this observation has been a most interesting and enlightening way to dissect human/digital relations, and to look at how we deal with devices on a daily basis. I would have perhaps enjoyed looking at a less familiar device, so that some failed interactions might have occurred, and perhaps led to some basic criticism of the device itself. However, being able to abstract the usability from the actual use is a valuable tool for evaluating the success of a design, and will surely come in handy in the future.

Weeks 1-2 Summary

So, as I am a little bit late in creating this blog, I will just copy and paste the notes I took for the journal over the past two weeks:

Week 1:

I was very excited to get started with the lab, even though I have absolutely zero experience with physical computing, and as it turned out, the lab took me quite a while.

The first two parts of the lab actually turned out to be the most challenging for me. It seemed as though everything was infinitely more complicated in 3D than on the page. This was largely due to the fact that I had resolved to attempt to learn from schematics only, seeing as how I wouldn't always have pictures to copy from.

While this proved to be very helpful on the last three exercises, the first two required a lot of trial and error. Even after I got some parts to work, I had to go back and try to understand why.

The most difficult parts for me were understanding how to read the 'flow' of the circuit on the breadboard, picking the right resistors, and keeping track of which elements needed grounding. However, by the end of the second exercise, I had begun to feel pretty comfortable with the elements involved. I also learned how exciting and amusing it can be to make an LED light up.

Week 2:


My biggest lesson from this week was in patience. Most of the exercise I got through fairly quickly, though figuring out which port was the right one took me a while, especially since there was only one USB port, and I had to try all the options a few different times before the program successfully downloaded. But oh, how satisfying.

But somehow, I had created some sort of randomizing switch instead of a reliable one. The red and yellow LEDs would switch, both be on, both be off, and would change at their own behest, regardless of what the button was doing.

I checked the code as best I could, re-soldered the wires, checked every connection, and found no reason. Jiggling certain wires seemed to increase the frequency of change, but try as I might, I could not isolate the problem.

Finally (with some help), I realized that I had omitted the 10k ohm resistor. Since neither of the LEDs seemed to be burning out, I had mistakenly assumed that the resistance was correct.

So my lesson for the week is to always check resistance and make sure all your current is getting used in the right places.

Ben's P-Comp Blog

Ok, here we go...