For the ears: cfml-prototype.mp3
Context Free is an excellent tool for exploring generative spaces in the domain of 2D visual art (and Structure Synth does a fantastic job in 3D), but can a language of circles, rectangles, and triangles mutated by rotates, translates, and scales be translated into the domain of music? The result is not just a rich analogy, but a fun and expressive software performance instrument.
In creating cfml, I set my goal as translating cfdg (the language for visual compositions) into the domain of live music. At the highest level this meant figuring out what sense of music I was going to map to. Context Free doesn’t do all kinds of visual art; really, it can only place many, many copies of a few primitives around the page with some interesting transformations applied to them. After that, it is up to a graphics library to render these shapes to pixels and shoot them out the display. Over in the music domain, I decided that a “single note played on a particular instrument” was a good primitive, and that common musical transformations such as pitch transposition, time-stretching, and volume control would make nice analogs to cfdg’s geometric and color transformations. These primitive musical objects are handed to the system as MIDI events and are then rendered (live!) to a nice, sampled waveform and shot out the speakers for the audience to hear.
Below is a side-by-side comparison of several concepts in cfdg and cfml. Keep in mind that cfdg has a custom, Java-like syntax while cfml inherits its syntax from Scheme.
| Concept | Visuals in cfdg | Music in cfml |
| --- | --- | --- |
| Primitive | circles, rectangles, triangles | a single note played on a particular instrument |
| Transformations | rotate, translate, scale, color adjustments | pitch transposition, time-stretching, volume control |
| Rendering | shapes drawn as pixels on the display | MIDI events rendered live to a sampled waveform |
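To make the mapping concrete, here is a sketch of what a tiny cfml fragment might look like. The combinator names `during` and `after` are the ones described below; the other identifiers (`note`, `transpose`, `volume`, `stretch`) are illustrative stand-ins, not necessarily the library’s actual names.

```scheme
;; A hypothetical cfml fragment (identifiers other than during/after
;; are illustrative): play middle C together with its fifth, then the
;; same pitch again, quieter and stretched to twice the duration.
(define (phrase)
  (after (during (note 60)
                 (transpose 7 (note 60)))
         (volume 0.5
           (stretch 2 (note 60)))))
```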
The comparison above shouldn’t be too scary (once you get past the syntax change). But what prompted a syntax change in the first place? Surely parens have no more special relation to music than they do to visual art.
Here’s where things get tricky. Visual art is static, timeless, and purely spatial. Music, on the other hand, gives up almost all of its spatial detail in exchange for rich temporal detail. Music happens over time — and time isn’t something we can talk about in cfdg.
When making cfml, the best tool I knew of for algorithmic composition of music was Impromptu, a Scheme-based livecoding environment. An essential idiom in live composition in Impromptu is “temporal recursion,” whereby a function schedules a call back to itself after a time delay. Having had such a great experience with Impromptu in the past (and being an appropriately lazy programmer), I decided to create cfml as an internal domain-specific language within Scheme, running inside Impromptu and exploiting its musical performance libraries (for abstract musical note manipulation and MIDI playback). Mad propz @ Abelson and Sussman.
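The temporal recursion idiom looks roughly like this, a minimal sketch in the style of Impromptu examples (timing constants and the instrument `inst` are assumed to be set up elsewhere, and the exact scheduling primitives may differ from what’s shown):

```scheme
;; Temporal recursion: play a note, then schedule this same function
;; to run again one beat later. The function's own future invocation
;; is its "loop body" -- there is no explicit loop construct.
(define loop
  (lambda (time)
    (play-note time inst 60 80 *beat-duration*)       ; sound now
    (callback (+ time *beat-duration*)                ; run again later
              'loop
              (+ time *beat-duration*))))

(loop (now))  ; kick it off at the current time
```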
Time manifests itself in cfml not as a wimpy delay-by-n-beats operator, but as two super-powered functional combinators called `during` and `after`. The composite `(during melody harmony)` immediately begins composing both the melody and the harmony, and assumes a total duration of the longer of the two pieces. Likewise, `(after chorus verse)` delays not only the performance, but even the composition of the verse until the chorus has finished performing; the result assumes the total duration of the sequence. Together with the literal syntax shown in the comparison (which allows simple, constant time offsets), you can create rich musical pieces from individual notes by composing (in the mathematical sense) chunks using the two combinators and a sprinkling of transformations (to pitch, volume, and duration). Okay, `choose` is a pretty powerful combinator as well, but it didn’t require nearly as much continuation-related magic to implement as the others.
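The three combinators compose freely. A sketch of a small verse-chorus form built from nothing but the combinators (here `melody`, `harmony`, `verse`, and `chorus` are assumed to be previously defined musical chunks):

```scheme
;; Parallel composition, then nondeterministic sequencing:
;; melody and harmony sound together; once the longer of the two
;; finishes, choose picks one continuation at random and only then
;; is that continuation actually composed.
(after (during melody harmony)
       (choose verse chorus))
```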
The deferred composition in cfml isn’t just a gimmick to get the music playing sooner; it is an essential element of the language’s semantics. Consider the definition
`(define (song) (after (during melody harmony) song))`. This simple rule is clearly recursive, and it is just as clearly missing a base case! If you tried to perform `song`, you’d get an infinite sequence of notes. In practice, you’d get tired after a while, modify the rule live to include a probable base case, and the song would end naturally. Deferred composition not only saves the CPU the work of deciding which notes to play later, it saves the human artist the work of deciding how to compose those notes that will come later. Livecoding is required as well, since there must be some way to affect the running program if you are ever going to tame that infinite recursion.
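One way that live modification might look, sketched in cfml’s own terms: the edited rule gains a probabilistic base case via `choose`, in the spirit of cfdg’s weighted nondeterministic rules (the silent chunk `rest` is an assumed primitive, not a confirmed part of the library):

```scheme
;; The rule as re-entered during performance: two of the three
;; branches recurse, so the song usually continues, but occasionally
;; choose picks rest and the recursion bottoms out naturally.
(define (song)
  (after (during melody harmony)
         (choose song     ; keep going...
                 song     ; ...probably...
                 rest)))  ; ...but sometimes end
```

Because composition is deferred, redefining `song` mid-performance changes what the already-running piece does next.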
My hope is that cfml (in proximity to cfdg) makes it easier to think about generative art in domain-agnostic terms. There is a whole art (or maybe science (well, it’s clearly design)) to expressively crafting recursive definitions and tending nondeterministic rules while they are in the process of executing. There are teachable tricks for refactoring processes in-flight while preserving (or mutating) certain perceivable aspects. There are new spaces of concerns that go into the design of tools for this mode of artistic programming, concerns that don’t make sense in the big-project, enterprise software engineering mindset.
People talk agile and extreme, but try programming in a limited model of computation and responding to shifting client demands on a second-by-second basis by modifying the software while it runs. You’d be surprised by how relaxing it is. Really.
Ok, now that you’ve reached the end, go click the fancy image at the start of this article for a nice, practical comparison of cfml and cfdg. If you like what you see (and have a Mac), go pick up Impromptu, download my cfml library and example from github (patches welcome) and start hacking away.
About the author: Adam is a PhD student, research scientist, software engineer, musician, artist, and hacker. He has a very special kind of respect for those elegant weapons like lisp (pronounced "scheme") and prolog, for a more civilized age.