Elephant in the Room

Yesterday was the first day of the Joue le Jeu – Play Along exhibition at the Gaîté Lyrique in Paris. This show has been curated by two of my Kokoromi cohorts, Heather Kelley and Cindy Poremba, along with Lynn Hughes from Concordia University in Montréal. These three amazing women have put together a show containing both “traditional” video games, which visitors can play in the gallery space, and a number of commissioned pieces created especially for the show. One of these commissions is my piece for the Gaîté’s Chambre Sonore.

The Chambre Sonore is a room custom built for the Gaîté, though not specifically for my piece, that contains a 10-channel sound system (8 speakers mounted in the walls around the room, 1 in the ceiling, and a sub behind one of the walls), 8 pressure-sensitive floor panels, and 8 color LED lights that ring the ceiling. All of this hardware can be interfaced with from a computer under the stairs next to the room.

Elephant in the Room Wall Text

High Concept

The high concept of the piece is that the room has a digital creature living in it and that visitors can interact with the creature by stepping on the different floor panels in the room. Over time, this interaction with the creature changes how it sounds, so that if someone visits more than once in a day, or comes back several days later, it should sound noticeably different than it did on their previous visit.

Implementation

The actual implementation of the piece differs slightly from the high concept, however. I had originally envisioned creating a single complex sound generating system that would have lots of different values that could be slowly changed by stepping on the various floor panels. In this way, it would have been a very slowly evolving soundscape, without a readily discernible way for visitors to tell that they were making a difference by being in the room. Ultimately, I decided that it would make more sense within the context of the exhibition as a whole if visitors were able to easily figure out that the piece was reacting to them.

So, the piece contains five different modes, one of which is active when a visitor enters. By paying close attention to the feedback provided by the lights and sounds, visitors should be able to figure out the “puzzle” of the current mode and then intentionally push it to its end, triggering a transition to the next mode. The modes have the very unpoetic names of Rainstorm, Pulsing Tones, Soothe, Chirps, and Tune Maker.

Hyrax footprints stuck to the floor of the Chambre Sonore to indicate where visitors can stand to interact with the piece.

Each mode is a very different soundscape, helping to meet the goal of visitors hearing something different if they visit the room a second time. Rainstorm sounds like digital rain with a whooshing wind, and if visitors stand in the correct place, they will anger a creature that roars at them. Pulsing Tones is a large pulsing chord that visitors can turn into high-pitched static by increasing the amplitude modulation frequency applied to each note in the chord. Soothe is a gentle environment with a low-frequency noise wash, long glassy tones taken from a harmonic series, and a soft heartbeat emanating from the wall; visitors can create marimba-like tones by stepping on floor panels. Chirps emits periodic descending or ascending tones that are patched through a modulated delay effect, and visitors can use the floor panels to increase or decrease the duration of the tones, or the speed of the delay modulation. Tune Maker is a generative dance tune that visitors can add musical elements to simply by dancing around the floor; if they dance fast enough, they are rewarded with a prerecorded dance tune with choreographed lighting.

Here’s a short video of what Rainstorm sounds and looks like.

Technology

As mentioned, there are three systems in the room that the computer interfaces with: 10-channel audio, 8 pressure-sensitive floor panels, and 8 color LED lights. The audio runs through an audio card installed in the computer and out to two amplifiers. The floor panels send MIDI messages and are plugged into a USB MIDI interface. The lights are controlled using DMX, and I chose to use the ENTTEC DMX USB PRO to control them. I was able to interface with all of this hardware using Processing.

Visual proof that Elephant in the Room was built with Processing.

For the audio, I used Minim, my sound library. Almost all of the sound is generated in real time using the UGen framework. However, to handle the 10 channels, I had to bypass using an AudioOutput object and write a custom class that could deal with the sound card on the computer. JavaSound gives access to audio devices through the Mixer class. Typically, for a multi-channel sound card, the outputs will be made available as stereo pairs, so there’s a Mixer for channels 1/2, 3/4, 5/6, and so on. Once you have a Mixer, you can ask it for a SourceDataLine that is used to write out audio data, but if a Mixer represents a stereo pair, you’re never going to be able to ask it for a 10-channel SourceDataLine. In fact, even though two of the Mixers listed for the sound card were named 5.1 and 7.1, I was not able to ask those for 6- and 8-channel outputs.

Instead, I gather up all the stereo pairs representing channels 1 through 10 and ask each of them for a stereo output. Then I generate 10 channels of audio from my root UGen and write two channels of audio to each stereo output in turn. This lets me think about generating audio in 10-channel terms and makes it easy to write UGens that can pan sound around the room, send sound to just one speaker, or expand a stereo signal across several speakers.
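In sketch form, the stereo-pair trick looks something like the class below. The mixer-name check, the audio format, and the float-to-16-bit conversion are placeholders for whatever the actual sound card and installation code use:

import javax.sound.sampled.*;

// Sketch of the stereo-pair approach: open one stereo SourceDataLine per
// channel pair and write two channels of a 10-channel buffer to each.
// The mixer-name matching and the audio format are placeholders.
public class TenChannelOutput
{
  static final int PAIRS = 5; // channels 1/2 through 9/10

  SourceDataLine[] lines  = new SourceDataLine[PAIRS];
  AudioFormat      format = new AudioFormat(44100f, 16, 2, true, false);

  public void open() throws LineUnavailableException
  {
    int found = 0;
    for ( Mixer.Info info : AudioSystem.getMixerInfo() )
    {
      // however your card happens to name its stereo pairs (placeholder check)
      if ( found < PAIRS && info.getName().contains("Speakers") )
      {
        Mixer mixer = AudioSystem.getMixer(info);
        SourceDataLine line = (SourceDataLine)mixer.getLine(
            new DataLine.Info(SourceDataLine.class, format) );
        line.open(format);
        line.start();
        lines[found++] = line;
      }
    }
  }

  // samples is interleaved 10-channel audio; pick out each pair of channels
  // and send it to the matching stereo line as 16-bit little-endian PCM.
  public void write(float[] samples, int frames)
  {
    for ( int pair = 0; pair < PAIRS; pair++ )
    {
      if ( lines[pair] == null ) continue;
      byte[] stereo = new byte[frames * 4]; // 2 channels * 2 bytes
      for ( int i = 0; i < frames; i++ )
      {
        for ( int ch = 0; ch < 2; ch++ )
        {
          int s = (int)( samples[i*10 + pair*2 + ch] * 32767f );
          stereo[i*4 + ch*2]     = (byte)( s & 0xFF );
          stereo[i*4 + ch*2 + 1] = (byte)( (s >> 8) & 0xFF );
        }
      }
      lines[pair].write(stereo, 0, stereo.length);
    }
  }
}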

Flashing disco lights during Tune Maker.

For the lights, I used a Processing library called dmxP512, which made dealing with the lights super easy. At first, I thought I would have to use MIDI and a light-cue building program that had been used with previous installations in the Chambre Sonore, but the direct control that dmxP512 gave me was much better. I wound up writing a wrapper class for it that lets me set colors using a Processing color variable and also fade colors over time. A very useful idea I came up with was to animate color and intensity separately: I can set a light to color(255,15,23), for instance, but then set the intensity to 0.2 and wind up with that color at 20% brightness. I simply multiply the RGB components by the intensity before sending them to the lights, and this allows me to do stuff like blink a light by animating the intensity while at the same time slowly shifting the color from blue to pink.
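The wrapper boils down to something like the sketch below. The dmxP512 setup values (port, baud rate, universe size) and the assumption that each light occupies three consecutive DMX channels are placeholders, not the exact installation code:

// Processing sketch: keep color and intensity separate and only multiply
// them together when sending DMX values to a light. The dmxP512 setup
// values and the three-channels-per-light layout are placeholders.
import dmxP512.*;
import processing.serial.*;

DmxP512 dmx;
Light   light;

void setup()
{
  dmx = new DmxP512(this, 128, false);
  dmx.setupDmxPro("COM4", 115000);      // placeholder port and baud rate
  light = new Light(1);                 // light on DMX channels 1, 2, 3
}

void draw()
{
  // blink by animating intensity while the color stays put
  light.setColor( color(255, 15, 23) );
  light.setIntensity( 0.5f + 0.5f * sin(millis() * 0.01f) );
  light.send();
}

class Light
{
  int   startChannel;
  color rgb;
  float intensity = 1.0f;

  Light(int startChannel) { this.startChannel = startChannel; }

  void setColor(color c)       { rgb = c; }
  void setIntensity(float amt) { intensity = constrain(amt, 0, 1); }

  void send()
  {
    // scale each component by intensity before it goes out over DMX
    dmx.set( startChannel,     int( red(rgb)   * intensity ) );
    dmx.set( startChannel + 1, int( green(rgb) * intensity ) );
    dmx.set( startChannel + 2, int( blue(rgb)  * intensity ) );
  }
}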

The floor communicates with the computer by sending MIDI messages, and I was able to receive these by writing some pretty straightforward JavaSound MIDI code. Basically, you just ask for the Transmitter object of the MIDI device that the floor is plugged into, write a class that implements the Receiver interface, and set an instance of that class as the Receiver for the Transmitter from the MIDI device. So I wrote a FloorReceiver class that I can add a FloorListener to, so that the MidiMessage parsing can live in one place and other code can simply receive notifications when a floor panel goes down or up.

I believe the floor is made by a French company called Interface-Z, and it appears to send only one kind of MIDI message: a NOTE_ON message where the note number indicates which pad sent the message and the velocity indicates whether the panel went down (64) or up (0). I did find the floor a little bit frustrating to work with because some panels respond in only a small area and others in a comparatively large one. This is the reason for the footprint stickers: to show people where they can stand to best interact with the room, and also which direction to face so they will hopefully see a connection between where they are standing and the light that is blinking at them.
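Stripped down, that plumbing looks roughly like the sketch below. The device-name check and the FloorListener interface shown here are placeholders rather than the exact installation code:

import javax.sound.midi.*;

// Sketch of the floor plumbing: find the floor's MIDI input device and
// install a Receiver that turns NOTE_ON messages into down/up callbacks.
// The device-name check and the listener interface are placeholders.
interface FloorListener
{
  void panelDown(int panel);
  void panelUp(int panel);
}

class FloorReceiver implements Receiver
{
  FloorListener listener;

  FloorReceiver(FloorListener listener) throws MidiUnavailableException
  {
    this.listener = listener;
    for ( MidiDevice.Info info : MidiSystem.getMidiDeviceInfo() )
    {
      MidiDevice device = MidiSystem.getMidiDevice(info);
      // placeholder: match on whatever name the USB MIDI interface reports
      if ( info.getName().contains("USB") && device.getMaxTransmitters() != 0 )
      {
        device.open();
        device.getTransmitter().setReceiver(this);
        break;
      }
    }
  }

  public void send(MidiMessage message, long timeStamp)
  {
    if ( message instanceof ShortMessage )
    {
      ShortMessage sm = (ShortMessage)message;
      if ( sm.getCommand() == ShortMessage.NOTE_ON )
      {
        int panel = sm.getData1();     // note number = which panel
        if ( sm.getData2() == 64 )     // velocity 64 = panel went down
        {
          listener.panelDown(panel);
        }
        else                           // velocity 0 = panel went up
        {
          listener.panelUp(panel);
        }
      }
    }
  }

  public void close() {}
}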

The Chambre Sonore lit with cool cyan lights.

Reception

From what I have seen, most people enjoy interacting with the room and seem to understand that they are having some kind of effect on the sound. But aside from several people I know personally who really made an effort to figure out the room, I haven’t seen anyone actually clue in to how to get a mode to an end point. Certainly, part of this is due to the fact that nothing explicitly states that this is possible, but I think it also has to do with the fact that the only feedback is from lights and sound. Each floor panel is associated with a particular light and speaker, and this relationship is maintained throughout all the modes, but unfortunately the layout of the room doesn’t physically reinforce the relationship.

This isn’t a problem per se, since the piece is meant to work simply as an interactive ambient soundscape, and the piece has been switching between modes despite people not quite getting it. I had hoped to do more observation of the general public interacting with the room, but if I’m in the room, it sort of changes the experience for people, so I’ve mostly stayed out of sight. All in all, I’m pretty happy with how it’s going so far.

Elephant in the Room shows at the Gaîté Lyrique in Paris as part of Joue le Jeu – Play Along until August 12th, 2012.

WaveShaper Update and Sale!

Another bug-fix update will be going out on the store later tonight or tomorrow, and to celebrate I’m dropping the price to $1.99 for one day. If you’ve been reluctant to spend the full $6 on it, grab it on January 28th!

Also coming reasonably soon will be version 2.0, which will include Dropbox integration and Audio Copy/Paste.

Auto-rotation in iOS with openFrameworks

I don’t think that I’ve mentioned this before, but WaveShaper is a mixture of openFrameworks, UIKit, and Minim for C++. One of the major difficulties I’ve had in developing the app is getting the UIKit views and controllers to play nice. It’s not hard to render UIKit widgets on top of the EAGLView: you simply add your UIKit widget as a subview of the UIWindow or the EAGLView that openFrameworks creates. But it does not work to directly use openFrameworks coordinates to set the position and size of UIKit widgets.

The reason it doesn’t work is that (I’m pretty sure) OpenGL’s coordinate system is not the same as Apple’s. Part of what openFrameworks does is translate the coordinates of incoming touches from Apple’s coordinate system to OpenGL’s before sending touch events. This lets you always think about the drawing surface in the “natural” way for OpenGL apps: the upper left corner is always (0,0), positive X is to the right, and positive Y is down. You can use this knowledge to simply transform your OpenGL coordinates to Apple coordinates when setting the position and size of UIKit widgets, but if you are using ofxiPhoneSetOrientation to reorient your OpenGL surface, the mapping between OpenGL coordinates and UIKit coordinates changes depending on the orientation. This means you have to reposition all of your UIKit objects every time the device is reoriented.

Even worse, I discovered that in one of the landscape modes, even though the UIKit widgets looked correct, I actually had the device orientation set to the opposite of what it should have been, so when I displayed popovers, they were upside down. When I presented modal view controllers, they would appear in the correct orientation, but then not rotate when the device orientation changed.

After much hair-pulling, I finally came up with a solution that I think is pretty solid. Essentially, what I have done is embed the EAGLView in a view hierarchy that is well behaved with respect to Apple’s system and I do not call ofxiPhoneSetOrientation to change the orientation of the app. What this means is that from openFrameworks’ perspective the app never changes orientation, which is true in some sense because the EAGLView is added as a subview to a vanilla UIView that is strictly responsible for handling the rotation.

Ok, onto some actual code. Before setting up any UI, I set the openFrameworks orientation to landscape:

Then I rearrange the view hierarchy that openFrameworks creates so that I can set the position and size of UIKit widgets using openFrameworks coordinates. Included in this is creating a root view controller for the window, so that I can take advantage of the auto-rotation behavior they provide (and because Apple expects your app to have a root view controller):

Here’s the LandscapeViewController class, which limits supported orientations to landscape and also turns off animations during rotation so that the orientation snaps. If you like the animated orientation change, you can simply remove those lines.

Finally, the class that registers to receive ofxiPhoneAlerts needs to handle device orientation changes like this:

And again, WaveShaper only displays in landscape, which is why those are the only orientations I handle.

Once all of this is in place, you can happily position buttons, sliders, popovers, and whatever else using the openFrameworks coordinate system and they will all look correct and rotate with the screen.

WaveShaper 1.1: bug fix update coming soon!

I’ve been seeing several reports of WaveShaper crashing a lot, and after spending some time looking into it, I’m pretty sure I’ve fixed the problem. It wasn’t a crash per se, but a lockup in the audio system, which caused the OS to kill the app. Sometimes this could happen on startup and sometimes when attempting to preview files in the file list.

The update should be available on December 30th. This is later than I’d like, but the iTunes Connect holiday shutdown began today and lasts through the 29th.

My sincerest apologies to all who have bought the app and experienced these crashes.

WaveShaper for iPad Released!

Today I released my first sound making app for iPad!

WaveShaper is an app for iPad that allows you to load up any audio file and make crazy cool sounds with it. It’s a lot like record scratching, but totally maxed out. This video will do a much better job of explaining it than words ever will:

Visit waveshaperapp.com for more info, sound samples, and screenshots. Or just go buy it right now!

Minim without Processing

I’ve been asked many times to remove the PApplet dependency from Minim, and tonight I have done it. If you’d like to try out Minim in your not-Processing Java app, you can grab the latest from the repo. I’ve put the details in the Javadoc, but I’ll lay out the basics here as well.

Processing provides two key methods that the JavaSound implementation of Minim uses when dealing with sound files. The sketchPath method is used by createRecorder to generate an absolute path from the file name provided to that method. The createInput method is used to get InputStreams for reading audio files. In order to remove the dependency on PApplet, the Minim constructor that required a PApplet as an argument has been replaced with one that takes an Object. This Object is then passed to the JavaSound implementation, which uses reflection to try to locate sketchPath and createInput methods on that Object.

What it boils down to is this: if you are building a not-Processing Java app, you simply write sketchPath and createInput methods on one of your application classes and then pass an instance of that class to the Minim constructor. The exact method signatures are:

String sketchPath(String fileName);
InputStream createInput(String fileName);
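
A bare-bones host class might look something like the sketch below; the file handling here simply resolves names against the working directory, and the file name is only an example:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import ddf.minim.AudioPlayer;
import ddf.minim.Minim;

// A minimal example of hosting Minim outside of Processing: the class passed
// to the Minim constructor just needs sketchPath and createInput methods.
public class MinimHost
{
  // resolve file names relative to the working directory
  public String sketchPath(String fileName)
  {
    return new File(fileName).getAbsolutePath();
  }

  // open files for Minim to read audio data from
  public InputStream createInput(String fileName)
  {
    try
    {
      return new FileInputStream( sketchPath(fileName) );
    }
    catch (Exception e)
    {
      return null;
    }
  }

  public static void main(String[] args)
  {
    Minim minim = new Minim( new MinimHost() );
    AudioPlayer player = minim.loadFile("groove.mp3"); // example file name
    player.play();
  }
}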