WB: When I was an undergraduate at the State University of New York at Albany, between 1967 and 1971, I encountered some records by Harry Partch and Ben Johnston. But my most “in person” encounters with microtonality were visits to Albany by composers. John Eaton visited sometime in 1970–71, and he played the Synket (an early analog synth that could play microtonally – I think in quarter-tones). Then in 1971, Sal Martirano visited with his Sal-Mar Construction, a large self-built assemblage of digital logic controlling a collection of analog synthesizer boards. This was capable of playing in a number of equal temperaments. And only a couple of weeks after Sal Martirano’s visit, John Cage and Lejaren Hiller came by and presented (with Joel Chadabe’s assistance) what I think was the second performance of HPSCHD. They gave a number of lectures where they talked about how the piece was made and put together. There were a number of tape parts in the piece – all played non-synchronously – and each tape was in a different equal temperament, from (if I remember correctly) 5 to 53 tones per octave. HPSCHD requires quite a large tech crew, and Joel Chadabe dragooned all of his students into an ad hoc tech crew. The intense interaction with Cage, Hiller, and Chadabe in putting on the piece was quite a sudden “total immersion” into the world of microtonality, computer generation, indeterminate performance, etc. In 1971, I went to the University of California, San Diego, where they had a large Buchla 100 system. I began working with algorithmic complexes of sequencers where each oscillator was tuned to a different system. But the real breakthrough for me came when I started working with the Serge synthesizer (about 1973), in which the oscillators could be synced to subharmonic series. Working with the subharmonic series, and the presence of Harry Partch in San Diego, resulted in an epiphany – now I could work in a systematic way with alternate tunings.
From there, things took off.
KG: So did the Serge subdivide a high-frequency pitch, similar to the later Scalatron?
WB: Not quite. The Scalatron digitally divided down frequencies so that 10-bit digital values resulted. With the Serge, you arbitrarily chose a high frequency and then the analog circuitry divided that high frequency down. If I remember correctly, you couldn’t divide the high sync frequency by more than 10 or 15 before it became unstable, but you could then use the resulting frequency as a sync input in its own right, so you would be, for example, dividing divide-by-10s or -15s by further divide-by-10s or -15s, etc. I could be remembering wrong here. I remember that to get real precision and stability, you would use divide-by-10s. In the late 70s, Julian Driscoll built a divide-by-32 module, which was an extension of the Serge divide-by-10 logic. In any case, you can see that we were working at the lower end of the subharmonic series.
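The arithmetic behind this kind of division is easy to sketch. The Python fragment below is my own illustration, not a model of the Serge circuitry: it shows how cascaded integer dividers reach deep into the subharmonic series, and why adjacent subharmonics form the same just intervals as the harmonic series, descending instead of ascending.

```python
import math

def subharmonics(f, n_max):
    """First n_max members of the subharmonic series of f: f/1, f/2, ..., f/n_max."""
    return [f / n for n in range(1, n_max + 1)]

def cents(ratio):
    """Interval size in cents for a frequency ratio."""
    return 1200 * math.log2(ratio)

# Cascading two dividers, as described above: a divide-by-10 feeding a divide-by-15.
master = 30000.0                     # arbitrary high sync frequency, in Hz
print(master / 10 / 15)              # 200.0 Hz, the 150th subharmonic

# Intervals between adjacent subharmonics f/n and f/(n+1) are (n+1)/n --
# octave, fifth, fourth, major third, minor third...
series = subharmonics(master, 6)
steps = [cents(series[i] / series[i + 1]) for i in range(5)]
print([round(s) for s in steps])     # [1200, 702, 498, 386, 316]
```

The second printout makes the point of the last sentence above: the low subharmonics are where the familiar just intervals live, which is why stable division down there was musically useful.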
KG: Getting back to yourself, would it be correct to identify you as an algorithmic composer? I seem to remember you treating that type of composition as an extension of Cage’s work?
WB: One of the things I did, before I began to do microtonality of all sorts, was to base much of my music on processes. These could be musical processes, or visual things, or writing words with processes. I also wrote things that had nothing whatever to do with processes. For example, in a 1973 piece like “for Charlemagne Palestine,” I had a drone made of several oscillators, slightly detuned from each other so that you got beating. (Charlemagne showed me this trick.) Those oscillators were tuned by ear. In 2021, I wrote “for Hal Budd,” where I tuned oscillators to the exact tunings of pentatonics from Lou Harrison’s “Music Primer,” and the beating between the oscillators resulted from the Lou Harrison tunings. Same idea, beating pairs of sines, but “Charlemagne” was all intuitive, and “Hal” was done according to different systematic pentatonic scales.
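The beating described here is simply the difference between the two frequencies: two sine tones at f1 and f2 produce an amplitude pulsation at |f1 − f2| Hz. A minimal sketch (the frequencies are my own examples, not the tunings of either piece):

```python
def beat_rate_hz(f1, f2):
    """Two closely tuned sine tones beat at the difference of their frequencies."""
    return abs(f1 - f2)

# A 220 Hz drone oscillator paired with a copy detuned slightly sharp
# beats a little under once per second:
print(beat_rate_hz(220.0, 220.8))

# Detuned further, the pair beats faster -- which is how an ear-tuned
# detuning controls the "speed" of a drone's shimmer:
print(beat_rate_hz(220.0, 223.0))
```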
There’s a difference between making pieces out of abundant material – like the Cage/Hiller piece, or Martirano’s machine, which I believe played in 12, 16, 20, 24, etc. EDO – and making pieces out of the inherent relationships between tones, like a just intonation scale. My earliest microtonal pieces consisted of making pieces out of abundant material. It was a while before I taught myself how to make pieces out of the inherent relationships between a set of tones. In any case, most of my microtonal things proceeded from the abstract (the multi-scalar approach) to the concrete. But even in the concrete ones, I still had the background of abstract or arbitrary sets of pitches as the basis for things.
KG: I am curious whether this means you use tuning systems with fewer notes than you once did? Are there tunings you prefer, or aspects you prefer in a tuning?
WB: In 2007, as part of my PhD, I evolved a series of over 150 different just scales based on additive-sequence-related series of 12 pitches. Twelve was chosen because it could map onto the black and white keys, and because I could modulate between different scales and yet keep the position of the hands on the keyboard in the same place as in the other scales. But there was no particular acoustical reason for choosing 12; the additive series used could, and occasionally did, go way beyond 12. I still use smaller scales, and I use bigger scales. I like using Erv Wilson’s Euler Genus, which has 64 notes that divide into eight eight-note sets (or four sixteen-note sets, etc.). That’s been a lot of fun to play with. I notice that I tend to use a different scale, or family of scales, for each piece. I’m fairly arbitrary in my choice of which scale to use for which piece. One thing I sometimes like is irregular scales – the additive-sequence scales are quite good at generating irregular scales. And changing one factor in the generation of an additive-sequence scale can generate related but different scales. Sometimes, though, I take a lot of time to generate a specific scale for a specific purpose, and stick with that scale for that purpose. The 19-note just tuning-fork scale, which was based on inverted and combined ancient Greek tetrachords, was one such scale, and I sometimes keep returning to it.
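The dissertation’s exact construction isn’t given here, but the general idea of an additive-sequence scale can be sketched: take terms of a Fibonacci-like sequence, read each term as a frequency ratio against the first, and fold the ratios into one octave. The seed values below are my own illustration, and, as noted above, changing one seed yields a related but different scale.

```python
import math

def additive_sequence(a, b, n):
    """First n terms of an additive sequence: each term is the sum of the previous two."""
    seq = [a, b]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

def reduce_to_octave(ratio):
    """Fold a frequency ratio into the octave [1, 2)."""
    while ratio >= 2:
        ratio /= 2
    while ratio < 1:
        ratio *= 2
    return ratio

def scale_from_sequence(seq):
    """Distinct octave-reduced ratios of each term to the first term, sorted."""
    return sorted({reduce_to_octave(t / seq[0]) for t in seq})

scale = scale_from_sequence(additive_sequence(3, 5, 14))
print([round(1200 * math.log2(r), 1) for r in scale])  # the step sizes are irregular
```

Printing the cent values shows the irregular step pattern such sequences tend to produce, which matches the appeal of these scales described above.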
KG: I know you have been involved in the creation of both hardware and software machines. Could you tell us a bit about these? And I would be curious what instruments or plug-ins you would still like to see.
WB: When I was a grad student, I built Aardvarks IV. This was a random number generator (I was shown how to do this by Ed Kobrin, who was a Sal Martirano student) which allowed some pretty complex control. The shift-register feedback that powered it was taught to me by Joel Chadabe and John Roy. This was between 1973 and 1975. In 2018, I asked Antonio Tuzzi if he could emulate Aardvarks IV in software. He could, and did. So my initial dream of unlimited random generators (Aardvarks IV only had 16) was now realisable in software. Any time I get a new module, I try to study it to intuit what that new module can do. If the module is in hardware, I try to study it to see what its structure is, so that we can then have it in software.
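The Aardvarks IV circuit isn’t detailed here, but “shift-register feedback” in this era usually means a linear feedback shift register (LFSR): XOR a few tapped bits and feed the result back in as the register shifts, producing a long pseudo-random cycle from very simple hardware. A hedged sketch of that general technique (the tap positions and seed are my own choices, not the Aardvarks design):

```python
def lfsr_states(seed, taps, nbits, count):
    """Linear feedback shift register: XOR the tapped bit positions,
    shift left, and feed the XOR result into bit 0.
    Returns `count` successive register states as integers."""
    mask = (1 << nbits) - 1
    state = seed
    states = []
    for _ in range(count):
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
        states.append(state)
    return states

# A 16-bit register; different tap sets give different cycle lengths.
out = lfsr_states(seed=0xACE1, taps=(15, 13, 12, 10), nbits=16, count=4)
print([hex(s) for s in out])
```

Reading a few bits of each state gives the stepped random voltages such a circuit can supply; running many registers in parallel is the “unlimited random generators” idea mentioned above.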
KG: A common problem with electronic music seems to be that equipment breaks down and operating systems disappear that prevent present day live presentations. One often has the recordings but do you put much effort into trying to preserve previous work, or even updating the works based on new possibilities offered by new equipment?
WB: I gave up trying to keep machines alive for N amount of time quite a while ago. Next year, with MESS (Melbourne Electronic Sound Studio) I’m going to do a project where we try to revive some earlier machines – maybe Aardvarks IV or Aardvarks VII, or the Grainger Electric Eye Tone Tool. That will be an interesting project. Mostly, though, I’m hoping that good recordings will be how the music survives.
KG: You have also done extensive composition with acoustic instruments. I assume that working in different intonational systems with acoustic instruments is more limited than with electronics? Having done both, do you feel that electronically generated music is better suited to exploring and developing microtonality?
WB: The joy of electronics is that they’re so flexible. The joy of acoustic instruments is their stability. Both are a joy to me. When I wrote my quarter-tone baritone ukulele pieces, there was one additional thing: having to learn the pieces physically. With electronics, I usually don’t have to learn to physically play the instruments like that. A lot of home-made instruments don’t involve virtuoso-style performance; things like the retuned guitars and ukuleles do involve the need to practice.
KG: Teaching is an activity you are still engaged in. Do you find microtonality more acceptable to students than it once was? Are you finding more students taking it up?
WB: Yes, microtonality is one thing that students are taking to more and more easily. In my improv classes, I noticed this year that the students were adapting to extended-instrument techniques more rapidly than I’ve seen any group do before. Microtonality is a similar field, though not being taken up quite as rapidly. But still, more students are taking it up. Because microtonality involves a bit more work than “simply” doing extended techniques, the uptake is a bit slower, but not a lot slower. At Box Hill, the second assignment in third year is a microtonal piece. Most of the students choose very simple scales to start out with, but it’s a start. The task for me is to keep finding simple enough free software for them to use and start off with. Once we can cross that hurdle, things seem to get easier.
KG: Would you mind introducing some of your work for us?
WB: “For Charlemagne Palestine” was my first drone piece that exploited the acoustic phenomenon of beating produced by very closely tuned electronic tones. It was made for my friend Charlemagne Palestine, whose four-day-long electronic music/dance/live-in performance at the Theatre Vanguard in Los Angeles was a complete inspiration for me. Near the end of the piece, a small surrealist poem I found in a dream is read, in response to the mood created by the electronic tones. The piece was made on the CEMS Moog system at the State University of New York at Albany in the summer of 1973. This is an excerpt of the first five minutes of the piece.
I was looking at an old piece of mine, “Saturday in the Triakontahedron with Leonhard,” which uses a 64-note scale proposed by Ervin Wilson, the Euler Genus 3 5 7 9 11 13, made of harmonics produced by multiplying all sets of 3 or fewer factors together, as well as 1 and the product of all 6 factors. I had not used this structure since composing the piece back in 2004–06. To my delight, all the software I had used to make the piece back then still worked, and I could recover the elements of the programs and the tunings kind of easily. (“Kind of” meaning I had to search across 3 computers and about 5 hard drives to find all the materials, but they were there.) So using those materials, I made this piece. Along the way, of course, I made a totally different piece than I had back in 2004–06.
HARMONY: A Euler Fokker Genus is a scale made up of a number of factors which are multiplied against each other. So if you had 3 factors, each appearing no more than once, your scale would have these factors: 1, each of the three factors on its own, the products of each pair of factors, and the product of all three – eight factors in all.
And it could also be expressed as a cube, with the three factors as the three axes and the eight products at the corners. And the resulting scale would be those eight products, octave-reduced into a single octave.
Erv’s original scale (which I used in the earlier piece) had 6 factors – 3, 5, 7, 9, 11 and 13. If you take 6 factors, 3 at a time, you find there are 20 ways of combining those elements.
Although I didn’t use it in this piece, you could make a “cube of cubes,” where a scale made of the first 3 elements, for example, could be transposed on 3 axes by the second three elements. This would make a 64-note scale, in which the original 8-note scale appears at 8 different transposition levels. So the 64-note scale can be divided into eight 8-note scales, and there are 20 ways that this can happen. Here’s Erv’s original diagram of one of the 20 “cube of cubes” divisions of the Genus:
In this piece, I’m only using the lowermost left cube, but I’m using all 20 possible cubes made with three factors. As an example, the piece begins with the cube given above (0:00 – 3:00 in the piece), and between 54:00 and 57:00 we’re using an 8-note scale made from the factors 7, 11 and 13:
Which, listed out, looks like this:
Listing of Euler Genus 2V (7 * 11 * 13) – Factors, Ratios and Cent Values

Factors        Ratio      Cents
1              1/1        0.0
11 * 13        143/128    191.8
7 * 11         77/64      320.1
11             11/8       551.3
7 * 13         91/64      609.4
13             13/8       840.5
7              7/4        968.8
7 * 11 * 13    1001/512   1160.7
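As a cross-check, the listing can be generated programmatically. This is a short Python sketch of my own (not the composer’s MusicWonk patch): every subset of the factor list is multiplied out, octave-reduced, and sorted, and the same machinery counts the 20 three-factor sub-scales of the six-factor Genus.

```python
import itertools
import math
from fractions import Fraction

def euler_genus(factors):
    """All subset products of `factors` (the empty product is 1),
    octave-reduced into [1, 2) as exact ratios, sorted ascending."""
    ratios = set()
    for k in range(len(factors) + 1):
        for combo in itertools.combinations(factors, k):
            r = Fraction(math.prod(combo))
            while r >= 2:
                r /= 2
            ratios.add(r)
    return sorted(ratios)

# The 8-note "2V" scale from the factors 7, 11 and 13:
for r in euler_genus([7, 11, 13]):
    print(r, round(1200 * math.log2(r), 1))

# Six factors taken three at a time: the 20 sub-cubes mentioned above.
print(len(list(itertools.combinations([3, 5, 7, 9, 11, 13], 3))))  # 20
```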
Each scale appears as an ascending and descending sampled acoustic guitar melody, roughly in the middle of its section. So Scale 1 – the 8-note scale made of factors 3, 5 and 7 – appears at roughly 1:30 into the piece, while the scale above, which I’m calling 2V, appears at about 55:30. In each of the 20 sections of the piece, each lasting 3 minutes, one of the 20 possible 8-note scales, made from 3 different factors, is used. The scales are selected with a program I wrote in the late John Dunn’s MusicWonk. I have a set of 20 buttons to select the scales. I also have a set of 8 buttons, not used in this piece, with which I can select each of the 8 possible transpositions of each scale – each transposition being one of the other corner cubes in the “cube of cubes” diagram. (Naturally, this means that if I were to use these, I could make another 7 one-hour pieces, given 3 minutes per scale – piece 2 would use the 20 scales in transposition 2, piece 3 would use the 20 scales in transposition 3, etc. The thought of the work involved in making those additional pieces is easily enough for me to consign those other versions to the realm of “conceptual art,” leaving just this one piece in the realm of “music.”)
TIMBRE: I really, really, really hate sampled choir sounds. They strike me as ultimately cheesy and in bad taste. Looking through the sample list in the Kontakt Factory Samples, which I’d acquired several years ago and never gotten around to using, I saw that they had a large number of sampled choir sounds, some of which would morph between different vowel sounds. Just for a laugh, I listened to them and, to my surprise, actually liked what I heard. Still cheesy and in bad taste, but it sounded like something I could use. My tastes had evolved from “no, I can’t use that” to “I think I can live with the cheesiness and bad taste of that.” The Kontakt Factory Samples also had a nice-sounding piano sample, a good nylon guitar sample and a usable marimba sound. Plus, all of them could be retuned into my 64-note scale using the standard Kontakt tuning script. Other sounds soon presented themselves – SoniCouture was giving away a sample set of tube drums for Kontakt. They sounded very good, were flexible, and could take the microtuning. They also reminded me of the tube drums used by Robert Erickson in his “Cradle II,” a piece which had greatly impressed me when I was working with him in the early 70s. Time for a homage – why not?
Similarly, Decent Samples were giving away a sampled zither, called the Mandolin Guitarophon, which had a wonderful preset of a granulated texture that was similarly flexible and microtunable. Spitfire Audio had a solo viola from its Solo Strings sample set that sounded very nice and was microtunable, and the massed woodwinds in Spitfire’s Masse set could also be microtuned. So as I was working on the piece, this orchestra gradually assembled itself. I made a structure where no more than four instruments were playing at a time (with the constant addition of the guitar playing the scales about half-way through each section), and assigned a differently composed algorithmic melody, each playing at its own tempo, to each instrument. The green knob in the lower left of the control panel shown above selects different subsets from the 8-note scale of the moment, using a probability distribution to select from the chosen pitches. There are 23 different probability distributions/chords used. This means that I can improvise “chord progressions” in my scale of the moment, should I so choose.
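One way to picture these probability-distribution “chords” is as weight tables over the current 8-note scale; drawing notes through different weightings produces the effect of a harmonic progression. The sketch below illustrates only the idea: the chord names and weights are hypothetical, not any of the 23 distributions actually used.

```python
import random

# Cent values of the 7 * 11 * 13 scale ("2V") described above.
scale_cents = [0.0, 191.8, 320.1, 551.3, 609.4, 840.5, 968.8, 1160.7]

# Hypothetical "chords": probability weights over the eight scale degrees.
# A weight of zero removes that degree from the chord entirely.
chords = {
    "open":  [4, 0, 1, 0, 3, 0, 2, 0],
    "dense": [1, 1, 1, 1, 1, 1, 1, 1],
}

def draw_notes(chord_name, n, rng):
    """Draw n pitches (in cents) from the weighted distribution of a chord."""
    return rng.choices(scale_cents, weights=chords[chord_name], k=n)

rng = random.Random(2020)
print(draw_notes("open", 4, rng))
print(draw_notes("dense", 4, rng))
```

Switching from one weight table to another mid-performance is the “chord progression” move described above, while the underlying scale stays fixed for the section.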
Within the structured elements of the piece (change scale every 3 minutes, use only the timbres chosen for that section), I also improvise. I choose when to play the choir samples, when to begin or end the drum melodies, when to bring in or take out the other instruments, choose what polytempo the marimba will be playing, and what tempo the guitar-sound scales will be playing at. So with the score for the overall structure in front of me, and using the performance interface shown above, I can improvise happily for an hour, making yet another version of the piece. The version that I’m sending around was made on the 9th of June 2020. It was made with the software Plogue Bidule, Kontakt Sampler (using the sample sets described above), and MusicWonk. Hardware was a Lenovo laptop computer and a Korg nanoKey Studio keyboard. (For a suitably large commissioning fee, other individual versions could easily be made. Mercenary, eh?)
THE TITLE: We live in Daylesford, Victoria, an old gold-mining town. We’d been here nearly 10 years when I noticed on a map a marker for the ruins of the “Mistletoe Mine,” about 2 kilometers from our house. So recently we’ve started looking for it. We’ve had some lovely bushwalks but haven’t, as of June 11, found it yet. The location given on the map, GPS coordinates and all, has no mine ruins at all. So we’re still looking. Meanwhile, this piece was written and was looking for a title – the union of art and life revealed itself again.
So this is my gift to you – a real-time, improvised, structured, orchestral and choral, sampled, microtonal, harmonically progressing piece. I enjoyed wandering around this forest of sonic resources as I made it, and I hope you will enjoy doing so too.
I first met Hal Budd in the early 70s. I was a graduate student at the University of California, San Diego; he was a faculty member at the California Institute of the Arts in Los Angeles. There was a lot of traffic between the two schools, and I met Hal on a number of occasions. He introduced me to the music of Colin McPhee, which was a very important discovery for me, and I remember really liking his early electronic pieces, “The Oak of the Golden Dream” and “Coeur d’Orr.” In 1975, I moved to Australia, and although I rarely saw him again, I followed his music closely. His late-70s Obscure record with “Madrigals of the Rose Angel” was a favorite, and his collaboration with Brian Eno, “The Pearl,” was one of my all-time favorites.
I was saddened to hear of his death. I was working on a review of the UVI Synth Anthology 3 sample library at the time, using tunings derived from Lou Harrison’s Garland of Pentatonic Scales as part of test-driving the library, and it occurred to me to use a number of those scales to make a piece that just sat there in its beauty, as a memorial to Hal. It seemed logical to use the scales collected by Lou, who had insisted on the importance of a composer using materials they found simply beautiful – a lesson that he passed on to both Hal and myself, among others. The exact timbre I used to make the piece was derived from an old Ensoniq ESQ-M. My father had an ESQ-1, the keyboard version of that synth, and I had enjoyed playing with that synthesizer and its timbres in the 80s. Revisiting them now was a pleasure.
The block-like structure of the piece – one pentatonic scale at a time, gradually building an almost five-octave structure, only to fade away revealing the basic voicing of the next pentatonic – was made possible by the structure of the Korg nanoKEY keyboard. The phasing and chorusing of the basic waveforms produce further animation in the sustaining sounds. As I listen back to the piece, it strikes me that I’ve rarely heard a piece of mine in which the sound is heard with such physicality. The sound seems simultaneously solid and unchanging, while also being liquid and fluid in its evolving nature. In any case, it seems to me to be an example of something Hal valued very much in his own work – a sensuousness and beauty of timbre, which often became the primary focus of the work’s perception.
The score for the piece is appended here. Looking at the score shows how the work is assembled in real time, note by note, tuning by tuning. This may seem at odds with the seemingly unchanging (or continually changing) nature of the sound of the piece. While working on the piece, though, it seemed that the best way to produce it was with a score which would remind me of all the button presses, key plays, and tuning changes needed to make the piece. It was recorded in one take, on Christmas evening in 2020, as a memorial to a very sweet and gentle man, a colleague whose work I valued highly.