Setup and Shutdown
To start using Minim you must first instantiate a Minim
object, which you can then use to load audio files or acquire inputs and outputs. Here’s a partial program that demonstrates these things:
[snip java]
import ddf.minim.*;
Minim minim;
AudioPlayer player;
AudioInput input;
void setup()
{
size(100, 100);
minim = new Minim(this);
player = minim.loadFile("song.mp3");
input = minim.getLineIn();
}
void draw()
{
// do what you do
}
[/snip]
If you are using Minim outside of Processing, then before your program exits you must close any audio I/O classes you get from Minim and then stop your Minim instance. Audio I/O classes include AudioPlayer, AudioSample, AudioSnippet, AudioInput, and AudioOutput.
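For example, the shutdown sequence in a standalone application might look like this sketch, assuming the player and input fields from the program above (the method name shutdown is illustrative; call it from wherever your program handles exit):
[snip java]
void shutdown()
{
  // close the audio I/O classes first...
  player.close();
  input.close();
  // ...and then stop the Minim instance itself
  minim.stop();
}
[/snip]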
Playing A File
One of the main motivators behind writing Minim was that neither of the available libraries for Processing allowed stereo playback of audio files. Minim to the rescue! It is incredibly easy to play a file using Minim. Just put the file into the data folder of your sketch and then use this code:
[snip java]
import ddf.minim.*;
Minim minim;
AudioPlayer song;
void setup()
{
size(100, 100);
minim = new Minim(this);
// this loads mysong.wav from the data folder
song = minim.loadFile("mysong.wav");
song.play();
}
void draw()
{
background(0);
}
[/snip]
Minim can play all of the typical uncompressed file formats such as WAV, AIFF, and AU. It can also play MP3 files thanks to the inclusion of Javazoom’s MP3SPI package with the distribution.
If you are using Minim outside of Processing, then the constructor of Minim requires an Object that can handle two important file system operations so that it doesn’t have to worry about details of the current environment. These two methods are:
[snip java]
String sketchPath( String fileName )
InputStream createInput( String fileName )
[/snip]
These are methods that are defined in Processing, which Minim was originally designed to cleanly interface with. The sketchPath method is expected to transform a filename into an absolute path and is used when attempting to create an AudioRecorder (see below). The createInput method is used when loading files and is expected to take a filename, which is not necessarily an absolute path, and return an InputStream that can be used to read the file. For example, in Processing, the createInput method will search in the data folder, the sketch folder, handle URLs, and absolute paths. If you are using Minim outside of Processing, you can handle whatever cases are appropriate for your project.
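A minimal handler for use outside of Processing might look like the following sketch, which simply resolves names against the current working directory. The class name FileSystemHandler is illustrative, not part of Minim; you would pass an instance of it to the Minim constructor.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

class FileSystemHandler
{
  // Transform a filename into an absolute path, as Minim expects of sketchPath.
  public String sketchPath( String fileName )
  {
    return new File( fileName ).getAbsolutePath();
  }

  // Open a stream for reading a file, as Minim expects of createInput.
  // Returns null when the file cannot be found rather than throwing.
  public InputStream createInput( String fileName )
  {
    try
    {
      return new FileInputStream( sketchPath( fileName ) );
    }
    catch ( FileNotFoundException e )
    {
      return null;
    }
  }

  public static void main( String[] args )
  {
    FileSystemHandler fsh = new FileSystemHandler();
    // prints the absolute path the name resolves to
    System.out.println( fsh.sketchPath( "mysong.wav" ) );
  }
}
```

You could then construct Minim with `new Minim( new FileSystemHandler() )`; Minim locates the two methods on whatever object you hand it.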
Retrieving MetaData
Metadata is information about a file, as opposed to the actual contents of the file. You can get the metadata of a file after you have loaded it into an AudioPlayer. The most likely reason you will want to do this is to display ID3 tag information. Here’s a short sketch that does exactly that:
[snip java]
import ddf.minim.*;
Minim minim;
AudioPlayer groove;
AudioMetaData meta;
void setup()
{
size(512, 256, P3D);
minim = new Minim(this);
// groove.mp3 would be in the sketch's data folder
groove = minim.loadFile("groove.mp3");
meta = groove.getMetaData();
// serif.vlw would be in the data folder, most likely created using the PDE
textFont( loadFont("serif.vlw") );
textMode(SCREEN);
}
int ys = 15;
int yi = 15;
void draw()
{
background(0);
int y = ys;
text("File Name: " + meta.fileName(), 5, y);
text("Length (in milliseconds): " + meta.length(), 5, y+=yi);
text("Title: " + meta.title(), 5, y+=yi);
text("Author: " + meta.author(), 5, y+=yi);
text("Album: " + meta.album(), 5, y+=yi);
text("Date: " + meta.date(), 5, y+=yi);
text("Comment: " + meta.comment(), 5, y+=yi);
text("Track: " + meta.track(), 5, y+=yi);
text("Genre: " + meta.genre(), 5, y+=yi);
text("Copyright: " + meta.copyright(), 5, y+=yi);
text("Disc: " + meta.disc(), 5, y+=yi);
text("Composer: " + meta.composer(), 5, y+=yi);
text("Orchestra: " + meta.orchestra(), 5, y+=yi);
text("Publisher: " + meta.publisher(), 5, y+=yi);
text("Encoded: " + meta.encoded(), 5, y+=yi);
}
[/snip]
Drawing A Waveform
Something else you might want to do is to draw the waveform of the sound you are playing. Most of the classes in Minim that handle input and output of audio data derive from a class called AudioSource. This class defines three AudioBuffer members that are then inherited by classes that extend AudioSource. AudioPlayer is just such a class, so you can access these buffers with an AudioPlayer object. The buffers are named left, right, and mix. They contain the left channel, the right channel, and the mix of the left and right channels, respectively. Even if the audio you are playing is mono, all three buffers will be available and return values. When you are playing a mono file you will simply find that all three buffers contain the same information. So, let’s draw a waveform. We can redefine the draw function from up above like this:
[snip java]
void draw()
{
background(0);
stroke(255);
// we draw the waveform by connecting neighbor values with a line
// we multiply each of the values by 50
// because the values in the buffers are normalized
// this means that they have values between -1 and 1.
// If we don’t scale them up our waveform
// will look more or less like a straight line.
for(int i = 0; i < song.bufferSize() - 1; i++)
{
line(i, 50 + song.left.get(i)*50, i+1, 50 + song.left.get(i+1)*50);
line(i, 150 + song.right.get(i)*50, i+1, 150 + song.right.get(i+1)*50);
}
}
[/snip]
This drawing code will work regardless of what kind of input or output class song is, because they all extend AudioSource, which provides the buffers. There are two problems with the code we have so far: the window size is set to 100×100 up in setup, and we have no idea how long the buffers are (i.e. what number song.bufferSize() returns). The first problem is easy enough to fix: we can set the window dimensions to 512×200. The second problem we can fix by including the buffer length we want in the call to loadFile. Now setup should look like this:
[snip java]
void setup()
{
size(512, 200);
minim = new Minim(this);
// specify 512 for the length of the sample buffers
// the default buffer size is 1024
song = minim.loadFile("mysong.wav", 512);
song.play();
}
[/snip]
This ensures that we have the same number of values in the buffer as we have screen real-estate to display them. If you don’t provide a value for the buffer size, the buffers will be 1024 samples long.
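The scaling used in the drawing loop is worth spelling out: buffer values are normalized to [-1, 1], so each sample must be mapped to a pixel offset around a center line. This standalone helper (the names are illustrative, not part of Minim) captures that arithmetic:

```java
class WaveformMath
{
  // Map a normalized sample in [-1, 1] to a pixel y position,
  // centered on centerY, with full scale reaching amplitudePixels away.
  static float sampleToY( float sample, float centerY, float amplitudePixels )
  {
    return centerY + sample * amplitudePixels;
  }

  public static void main( String[] args )
  {
    // silence sits exactly on the center line
    System.out.println( sampleToY( 0f, 50f, 50f ) );  // 50.0
    // a full-scale sample lands 50 pixels from the center
    System.out.println( sampleToY( 1f, 50f, 50f ) );  // 100.0
  }
}
```

With an amplitude scale of 50 pixels, as in the example above, the left channel occupies the band from y = 0 to y = 100 around its center line at y = 50.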
Drawing a Frequency Spectrum
Something else you might be interested in doing is analyzing your song while it plays and drawing the frequency spectrum. You can do this by using an FFT object. If we include an FFT object in our program and draw the spectrum a little faded out behind the waveform, our program will look like this:
[snip java]
import ddf.minim.*;
import ddf.minim.analysis.*;
Minim minim;
AudioPlayer song;
FFT fft;
void setup()
{
size(512, 200);
// always start Minim first!
minim = new Minim(this);
// specify 512 for the length of the sample buffers
// the default buffer size is 1024
song = minim.loadFile("mysong.wav", 512);
song.play();
// an FFT needs to know how
// long the audio buffers it will be analyzing are
// and also needs to know
// the sample rate of the audio it is analyzing
fft = new FFT(song.bufferSize(), song.sampleRate());
}
void draw()
{
background(0);
// first perform a forward fft on one of song's buffers
// I'm using the mix buffer
// but you can use any one you like
fft.forward(song.mix);
stroke(255, 0, 0, 128);
// draw the spectrum as a series of vertical lines
// I multiply the value of getBand by 4
// so that we can see the lines better
for(int i = 0; i < fft.specSize(); i++)
{
line(i, height, i, height - fft.getBand(i)*4);
}
stroke(255);
// I draw the waveform by connecting
// neighbor values with a line. I multiply
// each of the values by 50
// because the values in the buffers are normalized
// this means that they have values between -1 and 1.
// If we don't scale them up our waveform
// will look more or less like a straight line.
for(int i = 0; i < song.left.size() - 1; i++)
{
line(i, 50 + song.left.get(i)*50, i+1, 50 + song.left.get(i+1)*50);
line(i, 150 + song.right.get(i)*50, i+1, 150 + song.right.get(i+1)*50);
}
}
[/snip]
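Each horizontal position in the spectrum corresponds to one frequency band. For a forward FFT over N time-domain samples, band i is centered at i * sampleRate / N, and the number of bands in the real spectrum (what specSize() reports in Minim) is N/2 + 1. This standalone sketch of the arithmetic uses illustrative helper names, not Minim calls:

```java
class SpectrumMath
{
  // Number of frequency bands in the real spectrum of an N-sample FFT.
  static int specSize( int timeSize )
  {
    return timeSize / 2 + 1;
  }

  // Center frequency in Hz of band i for an N-sample FFT at the given sample rate.
  static float bandFrequency( int i, int timeSize, float sampleRate )
  {
    return i * sampleRate / timeSize;
  }

  public static void main( String[] args )
  {
    // a 512-sample buffer at 44100 Hz gives 257 bands about 86 Hz apart
    System.out.println( specSize( 512 ) );
    System.out.println( bandFrequency( 1, 512, 44100f ) );
  }
}
```

This is why the 512-sample buffer works nicely with a 512-pixel-wide window: both the waveform and the spectrum fit the screen at one value per pixel.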
Synthesizing Sound
The first thing you need to do to play synthesized sound is get an AudioOutput. You do this by asking Minim for one:
[snip java]
AudioOutput out = minim.getLineOut();
[/snip]
Calling getLineOut with no arguments will return a stereo line out with a buffer size of 1024 samples that plays 16 bit audio at a 44100 Hz sample rate. It’s mostly those first two things you want to worry about. If you want a mono line out or a different buffer size, you can call getLineOut like this:
[snip java]
AudioOutput out = minim.getLineOut(Minim.MONO);
[/snip]
OR
[snip java]
AudioOutput out = minim.getLineOut(Minim.STEREO, 512);
[/snip]
where 512 is the length you want the buffers to be. Once you have an AudioOutput, you can “patch” UGens to it. Minim comes with quite a few UGens, which you can explore by checking out the documentation. Here’s an example that demonstrates the Oscil class and the different waveforms you can use with it.
[snip java]
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
AudioOutput out;
Oscil wave;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
// use the getLineOut method of the Minim object to get an AudioOutput object
out = minim.getLineOut();
// create a sine wave Oscil, set to 440 Hz, at 0.5 amplitude
wave = new Oscil( 440, 0.5f, Waves.SINE );
// patch the Oscil to the output
wave.patch( out );
}
void draw()
{
background(0);
stroke(255);
strokeWeight(1);
// draw the waveform of the output
for(int i = 0; i < out.bufferSize() - 1; i++)
{
line( i, 50 - out.left.get(i)*50, i+1, 50 - out.left.get(i+1)*50 );
line( i, 150 - out.right.get(i)*50, i+1, 150 - out.right.get(i+1)*50 );
}
// draw the waveform we are using in the oscillator
stroke( 128, 0, 0 );
strokeWeight(4);
for( int i = 0; i < width-1; ++i )
{
point( i, height/2 - (height*0.49) * wave.getWaveform().value( (float)i / width ) );
}
}
void mouseMoved()
{
// usually when setting the amplitude and frequency of an Oscil
// you will want to patch something to the amplitude and frequency inputs
// but this is a quick and easy way to turn the screen into
// an x-y control for them.
float amp = map( mouseY, 0, height, 1, 0 );
wave.setAmplitude( amp );
float freq = map( mouseX, 0, width, 110, 880 );
wave.setFrequency( freq );
}
void keyPressed()
{
switch( key )
{
case '1':
wave.setWaveform( Waves.SINE );
break;
case '2':
wave.setWaveform( Waves.TRIANGLE );
break;
case '3':
wave.setWaveform( Waves.SAW );
break;
case '4':
wave.setWaveform( Waves.SQUARE );
break;
case '5':
wave.setWaveform( Waves.QUARTERPULSE );
break;
default: break;
}
}
[/snip]
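The mouseMoved handler above relies on Processing's map function, which linearly rescales a value from one range to another. If you are working outside of Processing, it is easy to write yourself; this standalone version mirrors the same signature:

```java
class RangeMap
{
  // Linearly rescale value from the range [start1, stop1] to [start2, stop2].
  static float map( float value, float start1, float stop1, float start2, float stop2 )
  {
    return start2 + ( stop2 - start2 ) * ( ( value - start1 ) / ( stop1 - start1 ) );
  }

  public static void main( String[] args )
  {
    // the mouse at the horizontal center of a 512-pixel-wide sketch
    // maps to the midpoint of the 110-880 Hz frequency range
    System.out.println( map( 256f, 0f, 512f, 110f, 880f ) ); // 495.0
  }
}
```

Note that the amplitude mapping in the example runs its output range backwards, from 1 down to 0, so that moving the mouse toward the top of the window makes the tone louder.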
Getting An AudioInput
Minim provides the class AudioInput for monitoring the user’s current record source (this is often set in the sound card control panel), such as the microphone or the line-in; what you actually get a handle on is not entirely consistent across platforms. Investigate the setInputMixer method if you want to try to obtain a specific input. The methods for obtaining an AudioInput look almost exactly like the ones for obtaining an AudioOutput:
[snip java]
getLineIn()
getLineIn(int type)
getLineIn(int type, int bufferSize)
getLineIn(int type, int bufferSize, float sampleRate)
getLineIn(int type, int bufferSize, float sampleRate, int bitDepth)
[/snip]
type indicates whether you want a stereo or mono AudioInput. You should use either Minim.MONO or Minim.STEREO as the value for this argument. The default value for type (i.e. what will be used in the case of the method with no arguments) is Minim.STEREO. bufferSize is how large you want the sample buffers to be; the default value is 1024. sampleRate is the sample rate of the audio you will be capturing; the default value is 44100. bitDepth is the bit depth of the audio you will be capturing. Currently, the only acceptable values for bitDepth are 8 and 16, and the default is 16.
It should also be noted that these parameters describe the attributes you want your input to have, and the audio hardware of your machine may not support the particular combination you ask for, in which case these methods will return null. When this happens it doesn’t necessarily mean that you can’t get an input at all, just that you asked for an unavailable configuration. By default an AudioInput will not monitor the incoming audio, which is to say that you won’t hear that audio coming out of your speakers. This is to reduce the chances that you experience nasty feedback from running a sketch whose input can hear the speakers, such as a laptop microphone. You can easily enable monitoring should you need it, as shown in this example:
[snip java]
import ddf.minim.*;
Minim minim;
AudioInput in;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
// use the getLineIn method of the Minim object to get an AudioInput
in = minim.getLineIn();
}
void draw()
{
background(0);
stroke(255);
// draw the waveforms so we can see what we are monitoring
for(int i = 0; i < in.bufferSize() - 1; i++)
{
line( i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50 );
line( i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50 );
}
String monitoringState = in.isMonitoring() ? "enabled" : "disabled";
text( "Input monitoring is currently " + monitoringState + ".", 5, 15 );
}
void keyPressed()
{
if ( key == 'm' || key == 'M' )
{
if ( in.isMonitoring() )
{
in.disableMonitoring();
}
else
{
in.enableMonitoring();
}
}
}
[/snip]
Creating An AudioRecorder
Minim provides the class AudioRecorder for recording audio data to disk. Here is the method used to obtain an AudioRecorder:
[snip java]
createRecorder(Recordable source, String filename)
[/snip]
source is the Recordable object that you want to use as the record source. filename is the name of the file, including the extension, to save to. Most often you will be recording either an AudioInput or an AudioOutput.
[snip java]
import ddf.minim.*;
Minim minim;
AudioInput in;
AudioRecorder recorder;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
in = minim.getLineIn();
// create a recorder that will record from the input to the filename specified
// the file will be located in the sketch's root folder.
recorder = minim.createRecorder(in, "myrecording.wav");
textFont(createFont("Arial", 12));
}
void draw()
{
background(0);
stroke(255);
// draw the waveforms
// the values returned by left.get() and right.get() will be between -1 and 1,
// so we need to scale them up to see the waveform
for(int i = 0; i < in.bufferSize() - 1; i++)
{
line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
}
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
}
else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data, you must call
// beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
// as many times as you like, the audio data will be appended to the end of whatever
// has been recorded so far.
if ( recorder.isRecording() )
{
recorder.endRecord();
}
else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to the file we specified in createRecorder
// the method returns the recorded audio as an AudioRecording,
// see the example AudioRecorder >> RecordAndPlayback for more about that
recorder.save();
println("Done saving.");
}
}
[/snip]
That’s it for the Quickstart Guide! I haven’t covered absolutely everything you can do with Minim, but I have touched on some of the main features. The full API is documented with examples on the documentation site, but you can also reference the Javadoc if you prefer. Many examples are included in the full distribution of Minim, should you find yourself wanting to move beyond what’s included with Processing.