Experimental Chamber Music for Waterphone, Synthesizer And Two Delay Lines

Here is a little experiment to explore an interesting observation I made. I would like to believe that this development is quite original.

Experimental Chamber Music for Waterphone, Synthesizer And Two Delay Lines
(5:30, 5.1 MByte, .m4a/AAC “iPod” format)

What my Synthesizer and the two delay lines are doing is entirely the result of my programming in Max/MSP, leaving me free to play a Waterphone along with it. It is algorithmic rather than sequenced, continuously making up musical motifs for two players according to a set of rules and changing the settings of the two delay units. Depending on the programming, algorithmic music can be intricate, pleasant, intense, easy listening … but after listening for a while, it often becomes clear that it lacks some kind of intelligence, that it is on the intellectual level of trivial small talk.

For some weeks now I have been running such algorithmic music and watching other musicians jam along with it, and I have observed that the human interaction sometimes creates musical conversations which sound similar to group improvisations. It doesn’t sound mindless at all any more. I made this piece to explore a little further what is happening and how it happens.

It began as a project to develop a hardware and software combination to help me to continue making music while dealing with my progressive disability.

The “nerdy stuff” starts here; feel free to skip the next few paragraphs (the grey text) if the technology does not interest you …

Instead of a Digital Audio Workstation, I use AudioMulch as a software hub for putting everything together (“com-posing”) and recording it on multiple tracks while performing live. It suits my non-linear and layered approach to composition much better than the sequencer and multitrack recorder DAW metaphor.

A little anecdote: When I enquired about certain features which would have helped me to speed up the development, Ross Bencina, the programmer, told me that it was the wrong software for me. He is enthusiastic about his work, both as a programmer and as a performer, and it seems to me that he felt I was somehow corrupting the spirit of it (mainly as a tool for live performance) by trying to use it for MIDI sequencing.

Anyway, improvised live performance is my main goal, but I do have special needs which don’t always make sense to others … and I am almost certain that Mr Bencina will “forgive” me once he sees what I am trying to do. His “telling me off” has pushed me to speed up my efforts to learn programming in Max/MSP.

Max/MSP, AudioMulch, and my Modular Synthesizer are now tightly integrated with each other. It works like this:

I use MIDI for sending information between the different components of my set-up. AudioMulch supplies the MIDI clock, and everything that uses timing synchronizes to it. If I speed up AudioMulch, everything else, including the Modular Synthesizer, follows.
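For anybody curious what “following the clock” means in practice: MIDI clock is just a stream of 24 tick messages per quarter note, and each device that follows it derives its tempo from the spacing of those ticks. The sketch below is not part of my setup at all; it is only an illustration in Python using the mido library, with a made-up port name.

    import time
    import mido

    CLOCK_PORT = "From AudioMulch"   # made-up name; the real port depends on the MIDI routing

    def follow_clock(port_name=CLOCK_PORT):
        """Estimate the tempo from an incoming MIDI clock (24 ticks per quarter note)."""
        ticks = 0
        last_beat = None
        with mido.open_input(port_name) as inport:
            for msg in inport:
                if msg.type == "start":
                    ticks = 0                    # resync when the transport starts
                elif msg.type == "clock":
                    ticks += 1
                    if ticks % 24 == 0:          # one quarter note has gone by
                        now = time.time()
                        if last_beat is not None:
                            print("following at about %.1f BPM" % (60.0 / (now - last_beat)))
                        last_beat = now

    follow_clock()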

Max/MSP makes up algorithmic compositions as MIDI events. It sends some of them to a module which converts them to analogue control signals for the Synthesizer, and others to AudioMulch. It also passes the MIDI clock on to the Synthesizer.
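To give a flavour of what “making up motifs according to a set of rules” means, here is a deliberately trivial rule, again sketched in Python with mido and invented port names rather than taken from the actual Max patch: pick pitches from a scale, move mostly in small steps with the occasional leap, and send the result out as MIDI notes, one line towards the Synthesizer and one towards AudioMulch.

    import random
    import time
    import mido

    synth = mido.open_output("To MIDI-to-CV")    # invented port names, for illustration only
    mulch = mido.open_output("To AudioMulch")

    SCALE = [0, 2, 3, 5, 7, 8, 10]               # scale degrees in semitones above the root

    def motif(length, register):
        """Make up a short motif: mostly stepwise motion, an occasional leap."""
        degree = random.randrange(len(SCALE))
        notes = []
        for _ in range(length):
            step = random.choice([-1, -1, 1, 1, -2, 2, 4])
            degree = max(0, min(len(SCALE) - 1, degree + step))
            notes.append(register + SCALE[degree])
        return notes

    def play(port, notes, beat, channel):
        for note in notes:
            port.send(mido.Message("note_on", note=note, velocity=80, channel=channel))
            time.sleep(beat)
            port.send(mido.Message("note_off", note=note, channel=channel))

    # Two "players" taking turns (a real patch would run them in parallel
    # and change the delay settings as well).
    while True:
        play(synth, motif(random.choice([3, 5, 8]), register=48), beat=0.5, channel=0)
        play(mulch, motif(random.choice([3, 5, 8]), register=60), beat=0.25, channel=1)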

On the Synthesizer, I make patches to route the signals just as if they originated on the synth itself or on a sequencer. I can have up to four lines of notes and up to eight algorithmically controlled low frequency oscillators or envelopes of arbitrary wave shape and complexity coming from Max/MSP. If required, I can also convert analogue control signals to MIDI, which allows me to have voltage-controlled software effects (VST, etc.) (!).
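An “LFO of arbitrary wave shape” arriving over MIDI is simply a slow stream of controller values computed from any function you like, which the MIDI-to-CV module then turns into a control voltage. As a rough Python sketch only (mido again, made-up port name, not my Max code):

    import math
    import time
    import mido

    def cc_lfo(port, control, channel=0, rate_hz=0.25, resolution=0.02):
        """Stream a slow wave as MIDI controller values (0-127).
        The shape here is a sine plus a bit of its second harmonic;
        any function of the phase would do."""
        phase = 0.0
        while True:
            value = (0.5
                     + 0.35 * math.sin(2 * math.pi * phase)
                     + 0.15 * math.sin(4 * math.pi * phase))
            port.send(mido.Message("control_change", channel=channel,
                                   control=control, value=int(value * 127)))
            phase = (phase + rate_hz * resolution) % 1.0
            time.sleep(resolution)

    # Invented port name: in the real set-up this stream would feed the
    # MIDI-to-CV converter and come out of the module as a control voltage.
    cc_lfo(mido.open_output("To MIDI-to-CV"), control=74)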

Up to six audio outputs of the synth go to AudioMulch via a FireWire audio interface connected to my computer, along with microphones and pickups.

In AudioMulch, I process the incoming audio with effects. The effect parameters can be controlled by any number of oscillators and envelopes coming in from Max as MIDI controllers … and these can originate as LFOs and envelopes on my synth, resulting in voltage-controlled software effects.
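Numerically, “controlling a parameter with a MIDI controller” comes down to mapping the 7-bit controller value onto the parameter’s own range once the assignment has been made. In rough Python terms (the numbers below are only an example, not settings from this piece):

    def cc_to_param(value, lo, hi):
        """Map a 7-bit controller value (0-127) onto a parameter range."""
        return lo + (hi - lo) * (value / 127.0)

    # e.g. a delay time swept between 80 ms and 1200 ms by an incoming LFO:
    cc_to_param(0, 80.0, 1200.0)     # -> 80.0 ms
    cc_to_param(64, 80.0, 1200.0)    # -> about 644 ms
    cc_to_param(127, 80.0, 1200.0)   # -> 1200.0 ms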

I currently also play my Cork City Gamelan samples in AudioMulch, using MIDI sent from Max to control the file players. I am working on more complex sample players and live processors, which I shall program in MSP and play directly in Max, sending the audio via Soundflower to AudioMulch for the final live mix and recording.
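Controlling the file players amounts to sending them ordinary note messages. Purely as an illustration (invented note assignments and port name, not the assignments used in my actual set-up), triggering a sample could look like this:

    import time
    import mido

    # Invented mapping: one MIDI note per gamelan sample loaded into a file player.
    GAMELAN = {"low gong": 36, "high gong": 38, "bell 1": 40, "bell 2": 43}

    mulch = mido.open_output("To AudioMulch")    # invented port name

    def trigger(sample, velocity=100, channel=2):
        note = GAMELAN[sample]
        mulch.send(mido.Message("note_on", note=note, velocity=velocity, channel=channel))
        time.sleep(0.05)
        mulch.send(mido.Message("note_off", note=note, channel=channel))

    trigger("low gong")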

… and this is the end of the technology section … but not the end of the article yet.

The system is rock solid and exceptionally simple and easy to use. It is literally “plug in and make a sound” after I launch two programs. I would like to take it out to a public place, set up an algorithm (AlgoRhythm?) to play forever-changing mindless trivia all day, and invite anybody to come and play along for a while, trying to insert some sense into the conversation. I am very serious about this, and I am looking for suggestions as to how it could be organized, where, when, etc …

 


One Response to Experimental Chamber Music for Waterphone, Synthesizer And Two Delay Lines

  1. Denis O Sullivan says:

    Excellent track, the delays are adding a new dimension to the overall sound. It’s great to see & hear how it is all evolving. I’ve enjoyed the jams immensely, my consciousness has truly been expanded.. 😉
