Nifty Note Manipulation

Jason Ellis works at IBM Research and maintains a brilliant blog about art and games. In a recent post, he links to this demo video for Melodyne’s new Direct Note Access technology, which looks frankly astonishing.

As the video shows, exploding a sampled chord allows the kind of editing we’re already used to with MIDI sequencers, at the individual note level, now from polyphonic recordings. Wow. I’m particularly blown away by the ability to change the key of a music recording, something I’d previously not even dreamed of.

Great fun for music makers, but Jason also starts thinking about the implications for games.

Imagine the kinds of new music games that could be built, making use of music the original developers never heard or even imagined – building from software that finally understands sound as intimately as the player does.

Exciting stuff.

4 thoughts on “Nifty Note Manipulation”

  1. Wow! Yeah, melodyne is awesome, and pulling polyphonic data out of it and controlling elements of the virtual realm using it could be nuts. Also feeding data from the virtual realm to control music could be equally if not more nuts! Would love to somehow develop Parsec to include a system like this: http://parsec.wordpress.com

  2. I’m not sure what benefit this would produce for generative music or algorithmic composition over just sampling the notes separately, which is what has always been done.

    Like he says, though, for analysing and tweaking polyphonic recordings this is very cool. You could tune your guitar after recording, rather than before…

  3. daz has often talked about trying to create systems that make recorded music have that live feel, or somehow keep the music fresher. There was a system a while back that played a track but randomly selected slight variations of certain passages so it didn’t sound identical on every play, hence feeling a bit more live.
    I can see how this might work to apply (whilst playing a CD or MP3) some subtle yet interesting adjustments on the fly, injecting small imperfections to give anything a live sound.

  4. Sensational, very exciting for musicians like myself!

    I thought the interface was quite nice and seems fairly intuitive. I like the way the audio track is divided into waveforms according to pitch on a timeline, and you can edit each individual waveform to adjust pitch, length and dynamics.

    Seeing as they can arrange in pitch order and change key easily, the logical step is to convert this directly into notation. Playing a mainstream jazz / pop recording into this and churning out notation would be awesome! I’ve always dreamt of whistling or singing a tune and having a program throw back the notes to me. The midi keyboard in the demo hints at this functionality =D

    Very exciting =) Not sure exactly how it interprets the waveform and breaks it down into individual notes, but the video demo is just brilliant. Computing the Fourier transform of the sampled audio sounds plausible: since each partial has its own frequency, the signal can be represented as a mix of sinusoid waves, or partials. This page goes through some relevant Fourier technicalities:

    http://www.dspdimension.com/category/tutorial/

    But the partials can be very close together, so the analysis must be very sensitive. Wow.
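    The Fourier idea in that last comment can be sketched in a few lines of numpy. This is only an illustration of the basic principle, not how Melodyne actually works (its Direct Note Access algorithm is unpublished): synthesize a three-note chord, take a windowed FFT, and pick the strongest spectral peaks as candidate note frequencies.

    ```python
    # Sketch: detect the notes of a chord via FFT peak-picking.
    # Assumes pure sine tones; real instruments add overlapping harmonics,
    # which is exactly why polyphonic detection is hard.
    import numpy as np

    SAMPLE_RATE = 44100  # samples per second
    DURATION = 1.0       # seconds of audio (gives 1 Hz frequency resolution)

    # Synthesize a C major triad: C4, E4, G4 (equal-tempered frequencies)
    chord = [261.63, 329.63, 392.00]
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    signal = sum(np.sin(2 * np.pi * f * t) for f in chord)

    # Apply a Hann window to reduce spectral leakage, then take the
    # magnitude spectrum of the real-valued signal
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    bin_freqs = np.fft.rfftfreq(len(windowed), d=1 / SAMPLE_RATE)

    # The three strongest bins land on the three notes of the chord
    peaks = np.argsort(spectrum)[-3:]
    detected = sorted(bin_freqs[peaks])
    print([round(f, 1) for f in detected])
    ```

    With pure tones this recovers the chord to within the 1 Hz bin spacing; separating the overlapping harmonics of real instruments into per-note events is the hard part the demo makes look easy.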

Comments are closed.