Wednesday, August 15, 2007

How Computer Music?
Pretty much immediately after putting the previous computer music post online, I realized it didn't really provide the information Matt wanted. He shouldn't be surprised, though. People can pontificate endlessly about the validity of their "artistic" endeavours but how many of them will be willing to back up such pontifications by getting into the nitty gritty of what the hell they actually do?

There's nothing wrong with that, of course. Nobody wants to give away their trade secrets. More to the point, it's understandable that musicians often fear explaining their music to the point that they "destroy the magic". Still, having made some pretty wild claims for the potential imaginative uniqueness of computer music practice, I should probably provide some evidence.

In any case, there really is no danger that explaining how connect_icut music works will destroy any alleged "magic". For better or worse, each individual grain of sound on my albums is the end result of excruciating amounts of theoretical speculation (not to mention computer programming). The music is - at once - (a) theory incarnate and (b) very very abstract. As such, it could probably use a little explanation.

The only thing I really worry about is that explaining what I do and how it's done will reveal the inevitable fact that I don't fully walk it as I talk it. The truth is that I haven't really managed to get quite the sound I've been looking for (yet). Perhaps more relevant to the terms of this particular discussion is the fact that my Max patches are, for the most part, based around recognizable software/hardware paradigms.

There's a good reason for this though. Whereas a lot of experimental computer music is about the manipulation of "pure" sound, connect_icut music is more concerned with the manipulation of music, as such. Don't take this the wrong way - it's not so much about adding a few generically "experimental" sounds to basically conventional music, it's about taking conventional elements of melody, harmony, rhythm, instrumental texture, songwriting etc. and giving them a thorough digital going over.

Getting down to the aforementioned nitty gritty, then, connect_icut music is generally built on live improvisations, sampled, looped and fed into randomized generative processes. The idea behind the music has always been to have a strong element of repetition/stasis counterpointed by the imposition of constant change and randomization.
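For the code-literate, here's a toy sketch of that principle in Python (my own illustration, nothing to do with the actual connect_icut patches): a static loop that gets randomly nudged on every pass.

```python
import random

# A toy illustration of repetition counterpointed by randomization:
# a fixed loop of MIDI note numbers repeats, but on every pass one
# randomly chosen note is transposed by a random interval.
loop = [60, 63, 67, 70]  # the static, repeating core

for pass_number in range(8):
    print(pass_number, loop)
    i = random.randrange(len(loop))           # pick a note at random...
    loop[i] += random.choice([-2, -1, 1, 2])  # ...and nudge it
```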

Using basic music software like Reason, it was always fairly easy to simulate this theory but I never really felt that I'd truly manifested it, so to speak. I had very vivid ideas about the performance interfaces and sounds I wanted to be working with but standard music software simply couldn't realize these ideas precisely enough.

This is why I had to take the difficult step of getting stuck into Max. But it's also why I could get stuck into Max. As far as I can tell, you can only really do anything with Max if you have a really specific idea about what you want to do with it.

Basically, Max provides a graphical interface for object-based computer programming. In other words, Max lets you build cool stuff by stringing a bunch of boxes together and typing instructions into them.
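For the programmers in the audience, here's a toy Python model of the patching idea (the class and method names are my own invention, not anything from Max itself):

```python
# Each "box" has outlets that can be patched to other boxes' inlets,
# so a message sent into one box cascades through the whole network.

class Box:
    def __init__(self):
        self.connections = []  # (target box, target inlet) pairs

    def connect(self, target, inlet=0):
        self.connections.append((target, inlet))

    def send(self, value):
        for target, inlet in self.connections:
            target.receive(value, inlet)

class Multiply(Box):
    # Roughly like a [* 2] box: multiply whatever comes in, pass it on.
    def __init__(self, factor):
        super().__init__()
        self.factor = factor

    def receive(self, value, inlet):
        self.send(value * self.factor)

class Post(Box):
    # Roughly like a [print] box: post incoming messages.
    def receive(self, value, inlet):
        print(value)

# "Stringing boxes together": feed 5 into [* 2], patched to [print].
doubler = Multiply(2)
doubler.connect(Post())
doubler.receive(5, 0)  # prints 10
```

In Max, of course, you draw those connections as patch cords rather than writing them out.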
Max is specifically designed for building multi-media software and - accordingly - it can be expanded with sets of objects for processing audio (MSP) and video (Jitter). See the screen-shots provided for additional clarity (you can click on them for a much closer look) or take a look at the Cycling 74 website.
The screen-shot above shows a randomly modulated filter sub-patch from one of the patches in my Max/MSP set-up. The audio comes in through the inlet objects at the top of the patch, then goes to the svf~ (state-variable filter) objects. The basic parameters of the filter can be controlled with dials, linked to the svf~ objects via more inlets at the top. After being filtered with the svf~s, the audio is sent back out of the patch through the outlets at the bottom.

The neat thing about this patch, though, is the way the filter cut-off frequency is randomly modulated. Basically, the metro object outputs a series of "bangs" according to a tempo. Each bang is sent to two separate random objects. One of these generates a random number that sets a new tempo for the metro object (so that the bangs arrive at random intervals), while the other outputs a value that modulates the cutoff frequency of the filter.
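If it helps, here's a rough Python approximation of that control logic (Python rather than Max, with a crude one-pole lowpass standing in for svf~ and purely illustrative modulation ranges):

```python
import math
import random

SAMPLE_RATE = 44100.0

class RandomlyModulatedFilter:
    def __init__(self):
        self.samples_until_bang = 0  # countdown to the next metro bang
        self.cutoff_hz = 1000.0
        self.state = 0.0             # filter memory

    def bang(self):
        # First random object: pick a new tempo for the metro itself,
        # so the bangs arrive at random intervals.
        interval_ms = random.uniform(50.0, 1000.0)
        self.samples_until_bang = int(interval_ms * SAMPLE_RATE / 1000.0)
        # Second random object: modulate the filter cutoff.
        self.cutoff_hz = random.uniform(200.0, 8000.0)

    def process(self, sample):
        if self.samples_until_bang <= 0:
            self.bang()
        self.samples_until_bang -= 1
        # Crude one-pole lowpass standing in for svf~:
        # y[n] = (1 - a) * x[n] + a * y[n-1]
        coeff = math.exp(-2.0 * math.pi * self.cutoff_hz / SAMPLE_RATE)
        self.state = (1.0 - coeff) * sample + coeff * self.state
        return self.state
```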

For me, the power of Max/MSP is that it allows me to build fully customized performance interfaces from scratch and populate them with instruments and effects built to my exact specifications. The filter above is a good example of the kind of thing I build in Max/MSP but it doesn't really give much of an impression of how I use these things to make music.

So let's take a look at the Max set-up I posted as the accompanying image for the previous computer music post and trace the path of some audio through it. In this example, we're going to assume that the initial audio is created by an improvisation using the keyboard patcher in the bottom right-hand corner of the screen.
This keyboard is essentially a very simple three-oscillator polyphonic synth that I built entirely in Max/MSP. The synth is controlled by an external MIDI keyboard, which allows me to improvise chords, melodies, pitch-bends etc.
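In outline, one voice of such a synth looks something like this (a loose Python sketch - the sine waveforms and detune amounts are my own assumptions, not the actual patch):

```python
import math

SAMPLE_RATE = 44100.0

def midi_to_hz(note):
    # Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz).
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

class Voice:
    DETUNE = [1.0, 1.005, 0.995]  # three slightly detuned oscillators

    def __init__(self, note, velocity):
        self.freq = midi_to_hz(note)
        self.amp = velocity / 127.0
        self.phases = [0.0, 0.0, 0.0]

    def next_sample(self):
        out = 0.0
        for i, ratio in enumerate(self.DETUNE):
            out += math.sin(self.phases[i])
            self.phases[i] += 2.0 * math.pi * self.freq * ratio / SAMPLE_RATE
        return self.amp * out / 3.0  # mix the three oscillators down

# A polyphonic version keeps a dict of Voice objects keyed by MIDI note
# number: add a Voice on note-on, drop it on note-off, and sum
# next_sample() across all active voices each audio frame.
```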
The audio from the synth is sent to the input patch above it. This allows the audio to be sliced, filtered and distorted before being sent elsewhere. The elsewhere we're going to look at, in this case, is the rack patch to the left of the input. Essentially, this is a set of three samplers. Each one is able to sample five-second snatches of audio from the input patch. The samplers loop these samples and apply a range of granular slicing, filtering and distortion effects to the loops.
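Stripped right down, the core of one of these samplers might look something like this in Python (again, just an illustrative sketch - the real patch's granular slicing, filtering and distortion go far beyond the random grain-jumps shown here):

```python
import random

SAMPLE_RATE = 44100
LOOP_SECONDS = 5  # five-second snatches, as described above

class LoopingSampler:
    def __init__(self):
        self.buffer = []
        self.pos = 0

    def record(self, input_samples):
        # Grab up to five seconds of audio from the input patch.
        self.buffer = list(input_samples[:SAMPLE_RATE * LOOP_SECONDS])
        self.pos = 0

    def next_sample(self):
        if not self.buffer:
            return 0.0
        out = self.buffer[self.pos]
        self.pos = (self.pos + 1) % len(self.buffer)
        # The "granular" disruption: every so often, jump to a random
        # point in the loop, so the repetition never quite settles.
        if random.random() < 0.0005:
            self.pos = random.randrange(len(self.buffer))
        return out
```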
The audio output from the samplers is then sent either directly to the output patch at the bottom left of the screen or via the effects patcher in the bottom middle of the screen. Between them, these two patches set the global volume and effects parameters for the set-up's final audio output.
The text above is a gross simplification of what the set-up is capable of but I hope it gives some insight into the kind of thing that a person can build with Max/MSP. It may not be the best illustration of my computer music theories, because it does, to an extent, mirror things that can be achieved with standard audio software/hardware. Still, you shouldn't find it too hard to infer the kind of unconventional audio environments that one could come up with.

I also hasten to add that my programming is extremely basic by the standards of most Max hackers. I really only learn anything in Max on a need-to-know basis. In other words, if I have a musical idea, I go figure out how to achieve it in Max. Otherwise, I don't really spend any time acquiring new Max "chops".

This is probably a way of being able to feel "punk rock" about my music while still using some very complex, high-level software technology. The important point here is that my enthusiasm for computer music practice does not stem from a love of technical virtuosity per se. My philosophy is that a musician should have just enough expertise to do what they want to do.

By the same token, if a musician has no desire to create previously unimaginable worlds of sound, there's no reason why they should be pissing around with something like Max/MSP. There's nothing wrong with musical tradition and there's nothing wrong with musical genre.

Furthermore, most computer music is pretty damn generic, ain't it - considerably less alien than, say, Derek Bailey's acoustic guitar work. Using this stuff is certainly no guarantee of original/uncanny results but it does have a huge potential for unleashing the musical imagination.

For me, the opportunities created by computer music technology are just too good to pass up. With my particular musical inclinations and interests, it would simply be lazy cowardice not to get deep deep into the world of computer music making.

Unfortunately, at the time of writing, my crippling computer problem has still not been solved. I could only write this post because I just got my Mac back from the repair shop. It's going back in again tomorrow.

Another important thing to remember about computers (and I think I've said this before): they don't reduce inconvenience, they just move it around.

2 comments:

Brady Cranfield! said...

great work. thanks sam.

Biggie Samuels said...

Cheers. Thanks for posting.

Interestingly, a few people already commented on this post but they all chose to do so through the private medium of email. Would it be outlandish to suggest that people are embarrassed to discuss this topic in public?

Anyway, the emails I got raised some interesting questions, a couple of which I'll respond to here, without naming-and-shaming those who asked.

First of all, don't take my comment about computers only moving inconvenience around to mean that I'm going to switch sides and become a "gearhead". Let's not forget that this whole thing started when Woebot asked me if I was interested in moving away from the computer in favor of gear and I told him that I had no interest in doing that whatsoever.

Second, I don't think my "punk rock" attitude to Max programming contradicts my arguments about the unlimited possibilities of computer music. Yes, I think software presents musicians with potentially unlimited opportunities. How you respond to this depends on the strength of your creative vision. If you know what you want to do and why you want to do it, then software allows you to do it and you'll develop natural limits to how you use it. If you're just dicking around, then it'll lead either to complete creative deadlock or to an over-reliance on presets. Theoretically.