Minim: An Audio Library for Processing

It’s here, the first release of my audio library for Processing: Minim.

Here are some of the cool features:

  • Released under the GPL, source is included in the full distribution.
  • AudioFileIn: Mono and Stereo playback of WAV, AIFF, AU, SND, and MP3 files.
  • AudioFileOut: Mono and Stereo audio recording either buffered or direct to disk.
  • AudioInput: Mono and Stereo input monitoring.
  • AudioOutput: Mono and Stereo sound synthesis.
  • AudioSignal: A simple interface for writing your own sound synthesis classes.
  • Comes with all the standard waveforms, a pink noise generator and a white noise generator. Additionally, you can extend the Oscillator class for easy implementation of your own periodic waveform.
  • AudioEffect: A simple interface for writing your own audio effects.
  • Comes with low pass, high pass, band pass, and notch filters. Additionally, you can extend the IIRFilter class for easy implementation of your own IIR filters.
  • Easy to attach signals and effects to AudioInputs and AudioOutputs. All the mixing and processing is taken care of for you.
  • Provides an FFT class for doing spectrum analysis.
  • Provides a BeatDetect class for doing beat detection.
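As a taste of what extending Oscillator boils down to: a periodic waveform is just a function from phase to amplitude. A plain-Java sketch of that idea (the method names here are illustrative, not Minim's actual Oscillator API):

```java
// A periodic waveform reduced to its essence: a function mapping a
// phase in [0, 1) to an amplitude in [-1, 1]. Names are illustrative,
// not Minim's actual Oscillator API.
public class Waveforms {
    static float sawtooth(float phase) {
        return 2f * phase - 1f; // ramps from -1 up to 1 over one period
    }

    static float square(float phase) {
        return phase < 0.5f ? 1f : -1f; // high for the first half, low for the second
    }

    public static void main(String[] args) {
        // sample one period of each waveform at 8 points
        for (int i = 0; i < 8; i++) {
            float phase = i / 8f;
            System.out.printf("%.3f  %.3f%n", sawtooth(phase), square(phase));
        }
    }
}
```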

Visit the download page, then take a look at the Quickstart Guide or dive right into the Javadocs.

38 thoughts on “Minim: An Audio Library for Processing”

  1. Yes, a great library. It seems somewhat less intimidating than Ess for students to work with (and MP3 playback is included…).

    What I miss is a startOfBuffer method that relates an audio buffer to the entire stream (there is such a method in Ess). I would really like to have that available to make a player where you can visualize the waveform and the cue position.

    Furthermore, I have noticed that triggering a sample on Linux gives rather distorted sound (varying over time as well). The ‘drummachine’ example is a nice demonstration of that behaviour.

    Rein

  2. The functionality you would like is already present in the AudioFileIn class. You can use cue to position the “playhead” anywhere you like in the file. cue(0) will set the playback position to the beginning of the file. You can see how this works by taking a look at the scrubbing example.

  3. Thanks for answering so quickly!

    Maybe you could enlighten me a bit. If, in a draw() function, I look at the samples in (say) the left buffer (from 0 to 1023; by the way, on Linux I can’t set a different buffer size without introducing distortion), where in the entire stream do these samples come from?

    The position method (used in the scrubbing example) gives a position relative to the entire file. I would like to know where the buffer start is with respect to the entire file.

  4. I just took a look at the Ess docs to see what you are referencing. He’s got bufferStartTime, which “contains the time in milliseconds that the last buffer of data was sent to the sound playback engine”, and on AudioChannel he’s got getCurrentPlayFrame(), which “returns the exact sample frame being played.”

    The first value sounds like it is totally independent of the length of the file being played or the position in the file. It is simply telling you when the last buffer was received by the playback engine, I assume counting from when the sketch was launched. But I don’t know. This isn’t a number that I make note of because it never occurred to me it might be useful information.

    The second value, the current play frame, is similar to what position() returns, though he claims much higher resolution. I’m not sure how he is achieving this, particularly because audio is sent to the system one buffer at a time, so your resolution is always limited by how large the buffer you are sending is.

    AudioFileIn keeps track of how many bytes it has read from the file, and all that position() does is convert this number to milliseconds, based on the audio format of the file. This means that the start of the buffer is going to be slightly before that value, but really the amount is negligible. If you’ve got a buffer of 1024 samples and you are playing back audio recorded at 44.1 kHz, one buffer corresponds to roughly 23 milliseconds.
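To make that arithmetic concrete, here is the buffer-duration and bytes-to-milliseconds math in plain Java (just the numbers; this is not Minim's internal code):

```java
// The timing math behind position(): bytes read -> milliseconds,
// and how long one buffer of samples lasts. Plain arithmetic,
// not Minim's actual implementation.
public class BufferMath {
    // duration of one buffer of samples, in milliseconds
    static double bufferMillis(int bufferSize, float sampleRate) {
        return bufferSize / (double) sampleRate * 1000.0;
    }

    // what position() effectively computes for PCM:
    // bytesRead / frameSize = frames played, frames / sampleRate = seconds
    static double positionMillis(long bytesRead, int frameSize, float sampleRate) {
        return bytesRead / (double) frameSize / sampleRate * 1000.0;
    }

    public static void main(String[] args) {
        System.out.println(bufferMillis(1024, 44100f));          // ~23.2 ms
        System.out.println(positionMillis(176400L, 4, 44100f));  // one second of 16-bit stereo
    }
}
```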

    Thanks for the comments about the Linux issues. I don’t have a Linux box to test on, but I hope to remedy that in the near future.

  5. Thanks again! Nice to have such direct feedback.

    OK, I understand that asking which exact sample is being played is too much to ask. However, knowing which frame (buffer) is being played would be nice, because then in a draw() function I could check whether I have seen the current frame before (and then I don’t need to update the display).

    I imagine that if I make an effect filter that just makes a copy of the buffer and simply counts how many frames I have ‘seen’ already, I can do the counting myself (but that’s quite a lot of overhead, I assume).

    I’m familiar with (audio) signal processing, but not so much with the way audio is implemented in the Java sound system. I would be very grateful (and I guess a lot of other potential users would be as well) if you could give us a few pointers on how things are done. Who is in control? Who decides it is time to send a new frame to the hardware? Who asks for a new frame from the input stream? Etc.

    Thanks again.
    Rein

  6. Ah, here there seems to be some confusion regarding terminology. A sample frame is not the same as a sample buffer. Some people may refer to a large group of samples as a frame, perhaps in the context of windowing a signal before processing it, but I like to keep the two terms separate. It was a major source of confusion when I first started using Ess, because sometimes there was talk of samples (obviously single values) and sometimes talk of sample frames, which sounds like something else but was never really explained.

    When I talk about a sample frame, I’m talking about the chunk of bytes from an audio file that contains the information for a single sample. For 16-bit audio this is 2 bytes for a mono file, and twice as large for a stereo file, because one frame holds the sample for the left channel followed by the sample for the right channel.
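For uncompressed PCM, the frame size follows directly from the channel count and bit depth. A plain-Java check of that standard formula (not Minim-specific code):

```java
// Standard PCM frame-size math: bytes per sample frame is
// channels * (bits per sample / 8). Not Minim-specific code.
public class FrameSize {
    static int frameSize(int channels, int bitsPerSample) {
        return channels * (bitsPerSample / 8);
    }

    public static void main(String[] args) {
        System.out.println(frameSize(2, 16)); // 16-bit stereo: 4 bytes per frame
        System.out.println(frameSize(1, 16)); // 16-bit mono: 2 bytes per frame
        System.out.println(frameSize(1, 8));  // 8-bit mono: 1 byte per frame
    }
}
```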

    A sample buffer is just a storage space of arbitrary size that is used to read chunks of data from the file. The sending of samples read from a file to the system is handled by a LineReader in its own thread. The line reader asks its associated AudioFileIn for samples as it needs them. The control flow is like this:

    1. LineReader passes AudioFileIn its sample buffer.
    2. AudioFileIn reads samples from its file and fills both its own buffers and LineReader’s.
    3. LineReader writes its newly filled buffer to the system, which blocks until the entire buffer has been written.
    4. The process starts over at step one.

    LineReader’s thread is always running; it doesn’t stop if you pause playback of the file or if the file finishes playing, it just writes silence to the system.

    For a more complete description of everything involved you should read the Java Sound Programmer Guide and then just take a look at the source code for Minim.
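The four steps above can be simulated with stub classes (plain Java; these stand in for LineReader and AudioFileIn and are not Minim's actual source):

```java
import java.util.Arrays;

// Simulation of the LineReader / AudioFileIn control flow described
// above. FileSource stands in for AudioFileIn; the loop in main()
// stands in for LineReader's thread. Not Minim's actual source.
public class LineReaderSketch {
    static class FileSource {
        boolean playing = true;
        int buffersFilled = 0;

        // step 2: fill the caller's buffer with samples (or silence when paused)
        void read(float[] buffer) {
            if (playing) {
                Arrays.fill(buffer, 0.5f); // pretend these came from the file
                buffersFilled++;
            } else {
                Arrays.fill(buffer, 0f);   // paused: hand back silence
            }
        }
    }

    public static void main(String[] args) {
        FileSource source = new FileSource();
        float[] buffer = new float[1024];
        for (int i = 0; i < 3; i++) {
            source.read(buffer);  // steps 1+2: pass the buffer, source fills it
            // step 3: a real LineReader would write(buffer) to the system here,
            // blocking until the whole buffer has been accepted; step 4 repeats
        }
        source.playing = false;
        source.read(buffer);      // the thread keeps running and writes silence
        System.out.println(buffer[0]); // 0.0
    }
}
```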

  7. Hi ddf
    Well, this is a nice sound library. I am new to all this (programming, Processing, etc.) and I was wondering: if you want to play back an audio file but wish to stop it before it ends, so you can start another one, how do you do it, since there does not seem to be a stop() function? Ess has one, but what is the trick with Minim?

  8. You can stop the playback of a file by calling the pause method. If you call play after that, it will begin playing from where you paused it, unless you’ve called rewind or cue to reposition the playhead.

    If you are wondering how to load a new file into an existing AudioFileIn object, you can’t. You simply create a new one for each file you want to play. This shouldn’t be a problem unless you are trying to load many files. In fact, this makes me think I need to set up a way to completely dispose of a file that you are finished with, including the playback thread that is created by Minim.

  9. Great ddf, your advice got me out of trouble.
    Actually, I am working on an interactive installation with motion tracking (the JMyron library). The idea is that, depending on where the visitor stands in space, he/she will start one of the three sound files I have. So now they play one at a time depending on one’s position in space. Great. One problem remains, though: if I use the rewind method within my draw loop (it has to be within the draw loop because I am using other functions at the same time to trigger other devices) and the visitor moves a little (while remaining within the area pertaining to the same sound file), the song, obviously, constantly rewinds, which is annoying. If I just let it play, the problem is that once it gets to the end, it does not start again for the next visitor, which is also problematic. I have tried the cue function, but it can’t go in my draw loop either, and that is not good. Any extra suggestions?
    Thanks for taking the time to read all this and trying to understand each of our own needs. great work.

  10. I’m sure there is a way to set up the logic so that the behaviour is what you want. I’ll drop you an e-mail about this.
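For anyone else hitting the same issue, the usual fix is edge-triggered logic: only rewind and restart when the detected zone changes, not on every draw() pass. A plain-Java sketch of that logic (the zone detection and file names are stand-ins, not part of Minim):

```java
// Edge-triggered playback logic: react to zone *changes*, not to the
// current zone on every frame. File names and the update() contract
// are stand-ins for illustration, not Minim API.
public class ZonePlayer {
    private int currentZone = -1; // -1: no visitor seen yet
    private final String[] files = { "a.mp3", "b.mp3", "c.mp3" };

    // call once per draw() frame with the zone the visitor is in;
    // returns the file to rewind-and-play, or null if nothing changed
    String update(int zone) {
        if (zone == currentZone) {
            return null; // same zone: leave the file playing, don't rewind
        }
        currentZone = zone;
        // in a real sketch: pause the old player, then rewind() and play()
        // the player for the new zone
        return files[zone];
    }

    public static void main(String[] args) {
        ZonePlayer zp = new ZonePlayer();
        System.out.println(zp.update(0)); // a.mp3 (zone entered)
        System.out.println(zp.update(0)); // null (no change, keep playing)
        System.out.println(zp.update(2)); // c.mp3 (new zone)
    }
}
```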

  11. This is great, but I’m having trouble with MP3 playback:

    On Mac OS X, AudioFileIn.length() returns zero, whereas length() works fine with .wav and .aiff.

    What’s worse: I get a “Minim Error === AudioFileIn: error reading from file.” error when I try to play back the same mp3s on Windows. Again, .wav and .aiff work well.

    Is this a limitation of JavaSound? Have you come across these problems?

  12. Hi,

    This is a nice, easy-to-use sound library that can do things much more easily and simply than ESS and Sonia, at least for a novice in Processing and Java. Thanks for the effort.

    Since the documentation is not quite ready yet, I couldn’t find how to do a particular thing. I managed to link the incoming FFT information to visual parameters and got some kind of sound-reactive visuals. It’s quite simple right now, but I want to know how I can divide the spectrum into certain frequency bins and have separate visuals react to separate frequency bins. So, say, if there’s a kick sound in the lower registers of the frequency spectrum, I want it to activate a certain visual parameter, and if there’s a, say, hi-hat sound in the higher registers, I want it to activate another one. I guess it sounds like a kind of music visualisation.

    Which function should I use to achieve this, and how can I implement it (at least in simple terms)?

    Thanks again for your efforts,

    Emre.
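If I recall the API correctly, Minim's FFT class has freqToIndex and calcAvg(lowFreq, hiFreq) methods for exactly this kind of band averaging. The underlying bin math, in self-contained plain Java:

```java
// The bin math behind averaging a frequency band of an FFT spectrum:
// map frequencies to bin indices, then average the magnitudes in
// between. Plain Java; not Minim's actual FFT source.
public class BandEnergy {
    // FFT bin index for a frequency, given FFT size and sample rate
    static int freqToIndex(float freq, int timeSize, float sampleRate) {
        return Math.round(freq / sampleRate * timeSize);
    }

    // average magnitude of the bins covering [lowFreq, hiFreq]
    static float average(float[] spectrum, float lowFreq, float hiFreq,
                         int timeSize, float sampleRate) {
        int lo = freqToIndex(lowFreq, timeSize, sampleRate);
        int hi = freqToIndex(hiFreq, timeSize, sampleRate);
        float sum = 0;
        for (int i = lo; i <= hi; i++) sum += spectrum[i];
        return sum / (hi - lo + 1);
    }

    public static void main(String[] args) {
        // a 1024-point FFT at 44.1 kHz has bins ~43 Hz wide, so 43 Hz -> bin 1
        System.out.println(freqToIndex(43.066f, 1024, 44100f)); // 1
    }
}
```

With a kick band of roughly 40-120 Hz and a hi-hat band up around 8-16 kHz, each visual parameter would track its own band average.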

  13. Well, this seems to be a kind of customized beat detection idea, so I’ll try to check out that function.

    Cheers.

  14. Hey! It’s a very nice library indeed! I am using the Minim library for some of my projects. Perhaps a simple question for you: is there a function (or a way) to choose which speaker a sound is heard from? I mean, I want to hear the “a” sound from the left speaker and the “b” sound from the right speaker. Is that doable? I searched the manual and I didn’t find an example of that.
    Thanks!

  15. Hello, thank you for this lovely and sophisticated library. I have created an audiovisual synchronization based on a loaded MP3 file. I really like the way it detects the beats, etc. I was wondering if I could transfer the same visuals to real-time live audio input? I would like to create an interactive installation where people would play instruments connected to the computer and be able to see those visuals (created by their instrument sounds).
    Thanks a lot!
    K.

  16. Hi,

    I’ve got a strange issue with Minim on OS X. I’m using the AudioPlayer to play and pause 8 MP3s depending on the position of a potentiometer.

    The problem is that after switching between MP3s, sometimes a loud noise starts, and after a while the MP3s themselves start to go weird. Is this a limitation of the AudioPlayer or is it a bug?

    thanks!

    Jurgen

  17. Hi, great job on this! I asked a question a few days ago, but I am still trying hard without success :( I need to play (or trigger) a short audio clip (a drop) when I turn a blue LED on, for tracking purposes. The issue is I can’t find out how to (if possible) change the sample rate according to the LED position… so I need to: play a sound in full every time an ellipse is created, and change the sample rate according to the LED position.

    Thanks!

  18. Hi ddf,
    I think I have the same problem with playing multiple MP3s, say 16, at the same time.

    First, I found that there is some noise after I pause them and replay them.
    Second, sometimes they are not on-beat; they are not in sync even though I started them at the same time (I have already tried increasing the maximum memory for Processing to a fairly high level).

    Since I read in the manual that an AudioSnippet runs faster than an AudioPlayer for looping, I use snippets for the looping. But the problem still exists.

    Conditions: using 16 AudioSnippets to play 16 four-second MP3s (VBR, ~320 kbps).

    THanks !

  19. Pingback: bluecube's me2DAY
  20. Pingback: Deep Source Code
  21. Hi ddf,

    Thanks for creating a great library. I got set up within a matter of minutes. However, I get terrible distortion when playing back my audio file. I tried many formats (WAV, MP3, AIFF) of the same tune and different settings (44.1 kHz, 22.05 kHz, stereo, mono), but the result is the same. I then checked all of the online examples that use the library to make sure it wasn’t my tune, and the distortion was present in all of the online examples I saw. I am running OS X 10.5.8 and Processing 1.0.9, and Java is up to date. Please help!

  22. As a follow-up: I discovered that this is a common problem for people on OS X 10.5 running Java apps. The only suggested fix (changing the output device’s sample rate to 44.1 kHz in Audio MIDI Setup) doesn’t work for me, as it produces an awful screech as soon as I change the setting. Thanks!

  23. hey ddf,
    I have the same problem as Gwen. I’m getting values from a sensor through an Arduino, and depending on those values audio has to be played and the volume manipulated. Then it should start recording and play the recorded file, and that should happen several times. But the audio file can’t be played again once it gets to the end; it does not start again. The rewind method within the draw loop constantly rewinds, unless I use delays, but that’s problematic because it delays everything else too, and the cue function can’t be in the draw loop. Also, the saved recording needs to be loaded in that section, otherwise the previous file is played, but I cannot do minim.loadFile(…) in the draw section, of course. Any suggestions? Maybe you could tell me how you helped Gwen.

    I was also thinking about using Minim for recording and Ess or Sonia for playing, but the libraries don’t really work together, do they? I got an error trying to use Minim and Sonia.

    cheers and thank you,
    alyssa

  24. @Alyssa

    You’ll notice Gwen’s comments are from 2007, so I don’t recall what I e-mailed her. I think your issue is best discussed on the Processing forum, because it sounds more like a program-logic problem than a Minim problem per se. If you’d be so kind as to repost your comment there, we can continue the discussion.

  25. I’m not that much of an internet reader, to be honest, but your blog’s
    really great; keep it up! I’ll go ahead and bookmark your site to come back to in the future. This will help with my new audio-visual site. All the best.

  26. Hi!
    I’ve read the Quickstart Guide; I think you’ve made a pretty good audio lib!
    Hope you can help me.
    I want to draw the waveform of my audio file, but I want to draw the WHOLE waveform, not only the buffer being played.
    How can I do that?
    Thanks in advance!

  27. If you get the latest version of Processing, you should be able to use the loadFileIntoBuffer method of the Minim class to load the file into a MultiChannelBuffer. You could then draw the contents of the buffer in a similar manner to how you draw the buffer being played, using either the getChannel or getSample methods of MultiChannelBuffer. See this example to get started: https://github.com/ddf/Minim/blob/master/examples/Advanced/loadFileIntoBuffer/loadFileIntoBuffer.pde
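Once the whole file is in a buffer, drawing the full waveform is a downsampling problem: reduce the samples to one peak value per pixel column. A plain-Java sketch of that reduction (independent of Minim's MultiChannelBuffer API):

```java
// Downsampling a full audio buffer for display: one peak value per
// pixel column, so a whole song fits across a sketch window.
// Plain Java; independent of Minim's MultiChannelBuffer API.
public class WaveformThumb {
    // reduce samples to `columns` peak values, one per pixel column
    static float[] peaks(float[] samples, int columns) {
        float[] out = new float[columns];
        int chunk = samples.length / columns;
        for (int c = 0; c < columns; c++) {
            float peak = 0;
            for (int i = c * chunk; i < (c + 1) * chunk; i++) {
                peak = Math.max(peak, Math.abs(samples[i]));
            }
            out[c] = peak; // draw a vertical line of this height at column c
        }
        return out;
    }

    public static void main(String[] args) {
        float[] samples = { 0.1f, -0.9f, 0.2f, 0.3f, -0.1f, 0.5f, 0f, 0.4f };
        System.out.println(java.util.Arrays.toString(peaks(samples, 4)));
        // [0.9, 0.3, 0.5, 0.4]
    }
}
```

In a sketch, each entry of the returned array becomes the height of one vertical line, which is the classic "whole file" waveform overview.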
