crush depth

Unbolted Frontiers

In the past couple of years, I've gotten back into composing music after a fifteen-year break. I had originally stopped making music without any expectation that I'd ever do it again. Making music had become extremely stressful due to two factors: economics, and proprietary software. Briefly: I'd assumed that I needed to make money from music because I believed I had no other skills. This led to a very fear-driven and unpleasant composition process more suited to producing periodic mild mental breakdowns than listenable music. Additionally, all of the software I was using to make music was the standard vendor lock-in proprietary fare: That's a nice studio you have here; it'd be a shame if you stopped paying thousands of monetary units per year to keep it working exactly the same way it always has and something happened to it... What? You want to use some different software and migrate your existing data to it? Ha ha ha! You're not going anywhere!

So I gave up and got into software instead. Fast forward fifteen years and now, as a (presumably) successful self-employed software engineer and IT consultant, economics are no longer a factor. Additionally, after fifteen years of using, contributing to, developing, and publishing open-source software, coupled with huge advances in the quality of all of the available open-source music software, I no longer have any desire or use for proprietary audio software. There is nothing that any proprietary audio software vendor can offer me that I could conceivably want.

As I mentioned, my music composition process in the distant past had been very fear-driven, and I'd accumulated a ton of rather destructive mental habits. However, fifteen years of meditation and other somewhat occult practices have left me with the tools needed to recognize, identify, and extinguish those mental habits.

One habit that I identified very early turned out to be common amongst other musicians I know. Specifically: too much detail, too soon. In essence, it's very easy in modern electronic music composition environments to write four bars of music (looping it continuously as you write), progressively adding more and more detail to the music and doing more and more tuning and adjustment of the sounds. Before long, eight hours have passed and you have four very impressive bars of music, and no idea whatsoever as to what the next four bars are going to be. Worse, you've essentially induced a mild hypnotic trance by listening to the same section of music over and over, and have unconsciously cemented the idea in your mind that these four bars of music are going to be immediately followed by the same four bars. Even worse: you've almost certainly introduced a set of very expensive (in terms of computing resources here, but perhaps in terms of monetary cost too, if you're that way inclined) electronic instruments and effects in an attempt to get the sound just the way you like it, and you may now find that it's practically impossible to write any more music because the amount of electronic rigging has become unwieldy. If you're especially unlucky, you may have run out of computing resources entirely. Better hope you can write the rest of the music using only the instrument and plugin instances you already have, because you won't be creating any more!

Once I'd identified this habit, I looked at ways to eliminate it before it could manifest. I wanted to ensure that I'd written the entirety of the music of the piece before I did any work whatsoever on sounds or production values. The first attempt at this involved painstakingly writing out a physical score in MuseScore using only the built-in general MIDI instruments. The plan was to then take the score and recontextualize it using electronic instruments, effects, et cetera. In other words, the composition would be explicitly broken up into stages: Write the music, use the completed music as raw MIDI input to a production environment, and then manipulate the audio, add effects, and so on to produce a piece that someone might actually want to listen to. As an added bonus, I'd presumably come away with a strong understanding of the music-theoretic underpinnings of the resulting piece, and I'd also be forced into making the actual music itself interesting because I'd not be able to rely on using exciting sounds to maintain my own interest during the composition process.

The result was successful, but I'd not try it again; having to write out a score using a user interface that continually enforces well-formed notation is far too time-consuming, and makes it too difficult to experiment with instrument lines that have unusual timing. MuseScore is an excellent program for transcribing music, but it's an extremely painful and slow environment for composition. In MuseScore's defense: I don't think it's really intended for composition.

Subsequent attempts to compose involved writing music on a piano roll in a plain MIDI sequencer, but using only a limited number of FM synth instances (Dexed, specifically). These attempts were mostly successful, but I still felt a tendency to play with and adjust the sounds of the synthesizers as I wrote. The problem was that the sounds Dexed produced were actually good enough to be used in the final result, and I was specifically trying to avoid doing anything that could be used as a final result at this stage of the composition.

So, back to the drawing board again. I reasoned that the problem came down to using sounds that could conceivably appear in the final recording and, quite frankly, to being allowed any control whatsoever over the sounds. An obvious solution presented itself: Write the music using the worst General MIDI sound module you can find. Naming no names, I found a freely available SoundFont® that sounded exactly like a low-budget early 90s sound module. Naturally, I could never publish music that used a General MIDI sound module with a straight face. This would ensure that I wrote all the music first, without any possibility of spending time playing with sounds, and then (and only then) could I move to re-recording the finished music in an environment that had sounds I could actually tolerate.

Attempts to compose music in this manner were again successful, but it became apparent that General MIDI was a problem. The issue is that General MIDI sound modules attempt to (for the most part) emulate actual physical acoustic instruments. This means that if you're presented with the sound of a flute, you'll almost certainly end up writing monophonic musical lines that could conceivably be played on a flute. The choice of instrument sound inevitably influences the kind of musical lines that you'll write.

What I wanted were sounds that:

  • Sounded bad. This ensures that the sounds chosen during the writing of the music won't be the sounds that are used in the final released version of the piece.

  • Sounded different to each other. A musician whose opinion I respect suggested that I use pure sine waves. The problem with this approach is that once you have four or five sine wave voices working in counterpoint, it becomes extremely hard to work out which voice is which. I need sounds that are functionally interchangeable and yet recognizably distinct.

  • Sounded abstract. I don't want to have any idea what instrument the sound is supposed to represent. It's better if it doesn't represent any particular instrument; I don't want to be unconsciously guided to writing xylophone-like or brass-like musical lines just because the sound I randomly pick sounds like a xylophone or trumpet.

  • Required little or no computing resources. I want to be able to have lots of instruments without being in any way concerned about computing resource use. Additionally, I don't want to deal with any synthesizer implementation bugs during composition.

Reviewing all the options, I decided that the best way to achieve the above would be to produce a custom SoundFont®. There are multiple open-source implementations available that can play SoundFont® files, and all of them are light on resource requirements and appear to lack bugs. I wanted a font that contained a large range of essentially unrecognizable sounds, all of which could be sustained indefinitely whilst a note is held. There were no existing tools for the automated production of SoundFont® files, so I set to work:

  • jsamplebuffer provides a simple Java API to make it easier to work with buffers of audio samples.

  • jnoisetype provides a Java API and command-line tools to read and write SoundFont® files.
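To give a sense of what such a tool has to produce, here's a tiny sketch of the kind of description that eventually gets serialized into a font. To be clear, these types are hypothetical and are not the actual jnoisetype API; they only illustrate the shape of the data: a font is a collection of instruments, each referring to a sampled waveform with loop points for sustain.

```java
import java.util.List;

// A hypothetical, simplified model of a SoundFont description.
// These types are NOT the real jnoisetype API; they only sketch the
// shape of the data that such a tool has to validate and serialize.
public final class FontSketch {
  // One sampled waveform, with a loop region used for sustain.
  public record Sample(String name, float[] data, int sampleRate, int loopStart, int loopEnd) {}
  // One playable instrument referring to a sample.
  public record Instrument(String name, Sample sample) {}
  // The font itself: a named collection of instruments.
  public record Font(String name, List<Instrument> instruments) {}

  // A writer has to validate descriptions like this before serializing:
  // the loop region must fall entirely within the sample data.
  public static boolean isValid(Sample s) {
    return s.loopStart() >= 0
        && s.loopStart() < s.loopEnd()
        && s.loopEnd() <= s.data().length;
  }

  public static void main(String[] args) {
    var data = new float[22050]; // one second of silence at 22050 Hz
    var sample = new Sample("fm_pad_00", data, 22050, 8000, 22050);
    var font = new Font("Unbolted Frontiers",
        List.of(new Instrument("Instrument 000", sample)));
    System.out.println(font.name() + ": valid sample = " + isValid(sample));
    // prints "Unbolted Frontiers: valid sample = true"
  }
}
```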

I then went through hundreds of Dexed presets gathered from freely available DX7 patch banks online, and picked out 96 sounds that were appropriately different from each other. I put together some Ardour project files used to export the sounds from Dexed instances into FLAC files that could be sliced up programmatically and inserted into a SoundFont® file for use in implementations such as FluidSynth.
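The slicing itself is just buffer arithmetic. A minimal sketch, assuming (and the layout here is my assumption for illustration, not a description of the actual Ardour sessions) that each exported recording contains one note per fixed-length segment:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: cut a long recording into equal, fixed-length note segments.
// The fixed-length layout is an assumption; the real export layout
// depends on how the Ardour sessions were arranged.
public final class SliceNotes {
  // Extract consecutive segments of 'segmentFrames' frames from 'recording'.
  public static List<float[]> slice(float[] recording, int segmentFrames) {
    var segments = new ArrayList<float[]>();
    for (int start = 0; start + segmentFrames <= recording.length; start += segmentFrames) {
      var seg = new float[segmentFrames];
      System.arraycopy(recording, start, seg, 0, segmentFrames);
      segments.add(seg);
    }
    return segments;
  }

  public static void main(String[] args) {
    // Three seconds of audio at 22050 Hz, sliced into one-second notes.
    var recording = new float[3 * 22050];
    var notes = slice(recording, 22050);
    System.out.println(notes.size() + " segments"); // prints "3 segments"
  }
}
```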

The result: Unbolted Frontiers.

A SoundFont® that no sensible person could ever want to use. 96 instruments using substandard 22kHz audio samples, each possessing the sole positive quality that they don't sound much like any of the others. Every note is overlaid with a pure sine wave to compensate for the fact that FM synthesis tends to produce sounds whose sideband frequencies are louder than their fundamental frequencies. All instruments achieve sustain with a simple crossfade loop. Hopefully, none of the instruments sound much like any known acoustic instrument. The only percussion sounds available sound like someone playing a TR-808 loudly and badly.
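For anyone unfamiliar with the term: a crossfade loop blends the tail of the loop region with the audio just before the loop start, so playback can jump from the end of the loop back to the start without an audible click. A minimal sketch of both that and the sine overlay (the linear fade and all parameter values are my assumptions for illustration, not necessarily what the font actually uses):

```java
// Sketch of a crossfade loop and a sine overlay. The linear fade and
// the parameters below are illustrative assumptions.
public final class CrossfadeLoop {
  // Blend the last 'fade' frames of the loop region [loopStart, loopEnd)
  // with the 'fade' frames immediately before loopStart. Afterwards the
  // final loop frame equals the frame just before loopStart, so jumping
  // back to loopStart continues the waveform smoothly.
  public static void crossfade(float[] data, int loopStart, int loopEnd, int fade) {
    for (int i = 0; i < fade; i++) {
      double t = (double) (i + 1) / fade;  // ramps from 1/fade up to 1.0
      int tail = loopEnd - fade + i;       // index in the loop tail
      int pre  = loopStart - fade + i;     // index just before the loop
      data[tail] = (float) ((1.0 - t) * data[tail] + t * data[pre]);
    }
  }

  // Mix a pure sine wave at the note's fundamental into the sample, to
  // reinforce fundamentals that FM sounds leave quieter than their sidebands.
  public static void overlaySine(float[] data, int sampleRate, double freqHz, double amp) {
    for (int n = 0; n < data.length; n++) {
      data[n] += (float) (amp * Math.sin(2.0 * Math.PI * freqHz * n / sampleRate));
    }
  }

  public static void main(String[] args) {
    float[] d = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    crossfade(d, 4, 10, 2);
    // d[9] is now exactly d[3]: the loop end meets the pre-loop frame.
    System.out.println(d[9]); // prints 3.0
    overlaySine(d, 22050, 440.0, 0.25);
  }
}
```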

If I hear these sounds used in recorded music, I'm going to be very disappointed.