ChucK Composition

Though I first looked at ChucK quite a while back, I recently decided to give it another look and try it out for more than a few minutes. After playing around with it for an evening, I ended up liking it a lot more than I had previously. The feature set has improved markedly since I last tried it, as has the consistency and thoroughness of the examples. There are still a few features I would definitely like to see added, mostly centered around integration with other audio applications. Being able to load ChucK scripts as a VST/AU/LADSPA plugin would be a nice advantage, though routing via ReWire or JACK would be about as good. If my interest in this language keeps up, I may get involved with the development and implement some of these myself.

I’ve wondered a lot about alternative scores for music performance, and while a ChucK script may not be exactly what I’ve been thinking about, it might make a lot of sense for virtual accompaniment. A properly written score could direct the human performer, playing a specific part such as guitar or voice, via visual cues; at the same time, the performer’s input could be processed by the program to effect changes in the machine’s performance. Louder RMS values on the input could be reflected in more rapid note generation by the computer, more periodic transients could lead from ambient soundscapes into rhythmic sequences, different instruments could noodle about on riffs within the current chord, and so on. Of course, all the typical MIDI and OSC control tricks still apply. All of this could surely be done with Ableton Live or any other host with sufficient plugins, but setting up some of the more complicated behaviors within that model could be roundabout, to say the least. Everything could obviously be done in C++ as well, but it would take forever and be tough to maintain across platforms. The point is that ChucK seems to be a good middle ground: while it isn’t a ready-to-go solution, it offers a direct and quick path to implementing any of the silly audio ideas floating around.
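The RMS-to-note-rate idea above is language-agnostic; here is a minimal Python sketch of the mapping (my own illustrative function names and parameter values, not code from any actual script):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def note_interval(level, quiet=2.0, loud=0.1):
    """Map an input level in [0, 1] to seconds between generated notes:
    louder input -> shorter interval -> more rapid note generation."""
    level = max(0.0, min(1.0, level))
    return quiet + (loud - quiet) * level

# A quiet input block should yield slower notes than a loud one.
soft = [0.01 * math.sin(i / 10.0) for i in range(512)]
hard = [0.9 * math.sin(i / 10.0) for i in range(512)]
assert note_interval(rms(soft)) > note_interval(rms(hard))
```

In ChucK the same logic would live in a loop that advances time by the computed interval between sporking note events.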

I’ve attached my first little experimental composition script, along with a sample performance of it. It uses the STK instruments, so it sounds a bit similar to many of the examples, but I think it’s a bit nicer. It’s short, but it certainly gave me a nice introduction and greatly increased my comfort in the language. Hopefully I’ll soon be capable of writing some longer, nicer compositions.

Reaction-diffusion experiments

I’ve been playing around with reaction-diffusion morphogenesis simulations lately, to see if I can come up with some cool art. Turing theorized this system over a half century ago, so it’s no surprise that people like Greg Turk have explored the area pretty thoroughly. It’s hard to come up with new ideas, but my recent interest in VJing led me to ponder animated patterns of reaction-diffusion.

Reaction-diffusion animation can mean simply displaying the morphogenesis simulation as it progresses, but I wanted to try something a bit different. Given that the patterns in the simulation form deterministically, I hypothesized that subtle changes in its initial state would lead to subtle changes in the stabilized pattern. My idea, then, was that by slightly varying the random substrate in a coherent manner between frames, the resulting patterns would also be coherent: continuously wriggling labyrinths and shifting spots.
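For reference, the kind of simulation involved can be sketched as a toy Gray-Scott reaction-diffusion system; the grid size and parameter values below are my own illustrative choices, not those used in the actual experiments:

```python
import random

# Toy Gray-Scott reaction-diffusion on a small periodic grid.
N = 32
Du, Dv, F, K = 0.16, 0.08, 0.035, 0.065  # illustrative, not tuned

def laplacian(grid, x, y):
    """Five-point Laplacian with wraparound (periodic) boundaries."""
    return (grid[(x - 1) % N][y] + grid[(x + 1) % N][y]
            + grid[x][(y - 1) % N] + grid[x][(y + 1) % N]
            - 4.0 * grid[x][y])

def step(u, v):
    """One explicit Euler update of the Gray-Scott equations."""
    nu = [[0.0] * N for _ in range(N)]
    nv = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            uvv = u[x][y] * v[x][y] * v[x][y]
            nu[x][y] = u[x][y] + Du * laplacian(u, x, y) - uvv + F * (1.0 - u[x][y])
            nv[x][y] = v[x][y] + Dv * laplacian(v, x, y) + uvv - (F + K) * v[x][y]
    return nu, nv

# Random substrate: u = 1 everywhere, with a noisy patch of v seeded in the middle.
# Perturbing this seed slightly between frames is the experiment described above.
random.seed(42)
u = [[1.0] * N for _ in range(N)]
v = [[0.0] * N for _ in range(N)]
for x in range(N // 2 - 4, N // 2 + 4):
    for y in range(N // 2 - 4, N // 2 + 4):
        v[x][y] = 0.5 + 0.1 * random.random()

for _ in range(100):
    u, v = step(u, v)
```

Each output frame of the animation would come from re-running such a simulation to stabilization with a slightly perturbed seed.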

Alas, my first attempts failed to produce anything artistically interesting. While my general hypothesis was correct, the animation is dominated by infrequent, abrupt popping updates rather than the slowly shifting patterns I was hoping for. I may be able to make something interesting out of this by computing differences from frame to frame and interpolating between them over a longer period in the animation. In any case, further experimentation is needed.
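The interpolation idea amounts to stretching each abrupt transition across several output frames. A minimal sketch, assuming frames are flat lists of grayscale floats (a simplification of whatever format the real renderer uses):

```python
def lerp_frames(a, b, steps):
    """Linearly interpolate between two grayscale frames (lists of floats),
    yielding `steps` intermediate frames from a (exclusive) to b (inclusive)."""
    for i in range(1, steps + 1):
        t = i / steps
        yield [pa + (pb - pa) * t for pa, pb in zip(a, b)]

# Stretch an abrupt two-frame "pop" into a gradual four-frame transition.
before = [0.0, 0.0, 1.0]
after = [1.0, 0.0, 0.0]
frames = list(lerp_frames(before, after, 4))
# frames[-1] equals `after`; the earlier frames blend the popping pixels smoothly.
```

Cross-fading like this would trade the popping for a soft dissolve, which may or may not read as the organic wriggling I was after.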