The DMX-1000 was one of the earliest digital synthesisers. Essentially a dedicated 16-bit audio processing computer, it was designed as an OEM product to be integrated into an existing computer setup – usually a DEC PDP-11 minicomputer – where the user would write their own interface and score programmes to run the DMX-1000 from the master computer. The instrument sold for $XX in 1979, putting it beyond the reach of most musicians; however, the DMX was not intended as a mass-market product but was aimed at electronic and computer music studios (one of the first models was purchased by the University of Milan's cybernetics institute). The instrument was designed and built by Dean Wallraff, previously a programmer at the M.I.T. Experimental Music Studio:
“…I worked there (at M.I.T.) as a Technical Instructor, mostly doing programming on one of the first visual score editors for music. I composed music using their system, always in non-standard tuning systems. It was slow work, since it took the computer half an hour of calculation to generate a minute’s worth of sound, which was then played back from disk. Some of my music was released on records.
After a year and a half, I decided it was time to leave. The work was getting repetitious, and the pay was low. The big problem was that I would miss the studio’s system, which was the only way I could make music in my non-standard tuning systems. I decided to build my own digital synthesizer, which would let me compose at home, and would generate sound in real time. We moved to New York at this time, into an apartment in an Italian section of Brooklyn…I worked my day job, developing funds-transfer systems for Chase and Citibank, and my night job, designing and building my synthesizer”
The DMX-1000 was capable of running a varied combination of oscillators, filters and noise generators which could be polyphonically combined and patched (a maximum of 20 simple oscillators with amplitude and frequency control, reduced to 14 oscillators with envelope control; or alternatively 6 voices of frequency modulation, 15 first-order filter sections, 8 second-order filter sections, or 30 white noise generators). This made the machine as powerful as the most complex analogue synthesiser on the market at the time, but with the additional benefit of being entirely programmable and run from a user-generated score in real time.
To spare the user the complexity of integrating the instrument into an existing computer system and writing their own software, Wallraff’s company, Digital Music Systems, later designed a complete system, the DMX-1010, which consisted of an LSI-11 based computer running score and synthesis software, with a floppy disk drive, a CRT terminal and a 61-note keyboard.
DMX-1000 and Pod-X
Pod-X was a collection of composition tools designed specifically for the DMX-1000 by the Canadian composer Barry Truax in 1982, based on his ongoing POD (POisson Distribution) probability composition model.
“PODX started in 1982 with the acquisition of the DMX-1000 (still working, amazingly enough) – which allowed the flip remark of the “X-rated POD system” to be occasionally uttered. Maybe I could just apply to the Guinness Book of Records for the longest continuously running (and used) computer music system, though it has seen several metamorphoses over that period. And possibly is one of the most productive…”
Despite the DMX-1000’s flexibility, it was rapidly killed off by the advent of powerful and much more affordable digital synthesisers such as the Yamaha DX range of FM instruments.
“We sold dozens of the machines during the next few years, to university computer music studios and research organizations. It was the most flexible real-time synthesizer you could buy at the time, and it allowed composers to do things they couldn’t do with any other affordable system. But Yamaha introduced the DX-7 in the mid-80’s, which provided more raw synthesis power (though less flexibility in programming) in a unit that cost a tenth the price of ours. I spent a year or so trying unsuccessfully to raise money to develop a new generation of synthesizers, and then got out of the business.”
Developed by the computer engineer Patrick Saint-Jean under the direction of the composer Iannis Xenakis at CEMAMu (Centre d’Etudes de Mathématique et Automatique Musicales) in Issy-les-Moulineaux, Paris, France, UPIC was one of a family of early computer-based graphic controllers for digital music (others including Max Mathews’ Graphic 1), which were themselves based on earlier analogue graphical sound synthesis and composition instruments such as Yevgeny Murzin’s ANS Synthesiser, Daphne Oram’s ‘Oramics‘, John Hanert’s ‘Hanert Electric Orchestra’ and much earlier Russian optical synthesis techniques.
Xenakis had been working with computer systems as far back as 1961 using an IBM system to generate mathematical algorithmic scores for ‘Metastaseis’; “It was a program using probabilities, and I did some music with it. I was interested in automating what I had done before, mass events like Metastaseis. So I saw the computer as a tool, a machine that could make easier the things I was working with. And I thought perhaps I could discover new things”. In the late 1960s when computers became powerful enough to handle both graphical input and sound synthesis, Xenakis began developing his ideas for what was to become the UPIC system; an intuitive graphical instrument where the user could draw sound-waves and organise them into a musical score. Xenakis’s dream was to create a device that could generate all aspects of an electroacoustic composition graphically and free the composer from the complexities of software as well as the restrictions of conventional music notation.
UPIC consisted of an input device – a large, high-resolution digitising tablet whose actions were displayed on a CRT screen – and a computer for the analysis of the input data and the generation and output of the digital sound. Early versions of the UPIC system were not able to respond in real time to user input, so the composer had to wait until the data was processed and output as audible sound. The UPIC system was subsequently developed to deliver real-time synthesis and composition, and expanded to allow digitally sampled waveforms as source material rather than purely synthesised tones.
To create sounds, the user drew waveforms or timbres on the input tablet, which could then be transposed, reversed, inverted or distorted through various algorithmic processes. These sounds could then be stored and arranged as a graphical score. The overall speed of the composition could be stretched or compressed, creating compositions lasting anywhere from a few seconds to an hour. Essentially, UPIC was a digital version of Yevgeny Murzin’s ANS Synthesiser, allowing the composer to draw on an X/Y axis to generate and organise sounds.
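The transformations described above are simple operations on a drawn curve. A minimal latter-day sketch, treating the drawing as an array of samples (the table size and the resampling method are illustrative assumptions, not UPIC’s actual implementation):

```python
import numpy as np

drawn = np.sin(np.linspace(0, 2 * np.pi, 256))   # stand-in for a hand-drawn wave

reversed_ = drawn[::-1]    # play the drawing backwards
inverted = -drawn          # flip it upside down

def transpose(wave, semitones):
    """Resample the drawn table; reading it faster raises its pitch."""
    ratio = 2 ** (semitones / 12)
    idx = (np.arange(int(len(wave) / ratio)) * ratio).astype(int)
    return wave[idx]

up_a_fifth = transpose(drawn, 7)   # 7 semitones up
```

The same handful of array operations, applied to pitch contours rather than single wavetables, is what lets a drawn page function as a score.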
Since its first development UPIC has been used by a number of composers including Iannis Xenakis (Mycenae Alpha being the first work completely composed on the system), Jean-Claude Risset (Saxatile, 1992), Takehito Shimazu (Illusions in Desolate Fields, 1994), Julio Estrada (‘eua’on’), Brigitte Robindoré, Nicola Cisternino and Gerard Pape (CCMIX’s director).
More recent developments of the UPIC project include the French Ministry of Culture sponsored ‘IanniX’, an open-source graphic sequencer, and HighC, a software graphic synthesiser and sequencer based directly on the UPIC interface.
Images of the UPIC System
Iannis Xenakis and the UPIC system
Iannis Xenakis working with schoolchildren at the UPIC system
Iannis Xenakis: Who is He? Joel Chadabe January 2010
‘Graphic 1’ was a hybrid hardware-software graphic input system for digital synthesis that allowed note values to be written on a CRT computer monitor. Although very basic by current standards, ‘Graphic 1’ was the precursor to most computer-based graphic composition environments such as Cubase, Logic Pro, Ableton Live and others.
Graphic 1 was developed by William Ninke (with Carl Christensen and Henry S. McDonald) at Bell Labs for use by Max Mathews as a graphical front-end for the MUSIC IV synthesis software, to circumvent the lengthy and tedious process of adding numeric note values to the MUSIC program. 1 Curtis Roads and Max Mathews, “Interview with Max Mathews”, Computer Music Journal, The MIT Press, Vol. 4, No. 4 (Winter, 1980), 15-22.
” The Graphic 1 allows a person to insert pictures and graphs directly into a computer memory by the very act of drawing these objects…Moreover the power of the computer is available to modify, erase, duplicate and remember these drawings” 2 Thom Holmes, (2020), Electronic and Experimental Music Technology, Music, and Culture, Routledge, 275.
Graphic 2/ GRIN 2 was later developed in 1976 as a commercial design package based on a faster PDP2 computer and was sold by Bell and DEC as a computer-aided design system for creating circuit designs and logic schematic drawings.
EMS was the London electronic music studio founded and run by Peter Zinovieff in 1965 to research and produce experimental electronic music. The studio was based around two DEC PDP8 minicomputers, purportedly the first privately owned computers in the world.
Digital signal processing was far beyond the capabilities of the 600,000 instructions-per-second, 12k RAM DEC PDP8s; instead, Peter Grogono was tasked with developing a new musical composition and ‘sequencing’ language called MUSYS. MUSYS was designed to be an easy-to-use, ‘composer friendly’ and efficient programming language for making electronic music (i.e. it could run within the limitations of the PDP8 and save all its data files to disk rather than paper tape). MUSYS, written in assembly language, allowed the PDP8s to control a bank of 64 filters which could be used either as resonant oscillators to output sine waves or, in reverse, to read and store frequency data from a sound source. This meant that MUSYS was a type of low-resolution frequency sampler: it could ‘sample’ audio frequency data at 20 samples per second and then reproduce that sampled data in ‘oscillator mode’. MUSYS was therefore a hybrid digital-analogue performance controller similar to Max Mathews’ GROOVE system (1970) and Gabura & Ciamaga’s PIPER system (1965), and a precursor to more modern MIDI software applications.
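None of the EMS code survives here, but the ‘sample then resynthesise’ idea can be sketched in modern terms: measure the energy in each filter band 20 times a second, then drive one sine oscillator per band with the stored envelopes. The sample rate, logarithmic band spacing and FFT-based band measurement below are all assumptions for illustration, not the EMS hardware’s method:

```python
import numpy as np

SR = 8000            # audio sample rate for the sketch (assumption)
FRAME_RATE = 20      # MUSYS sampled frequency data 20 times per second
N_BANDS = 64         # size of the EMS filter bank

# Band centre frequencies; logarithmic spacing is an assumption.
centres = 55.0 * 2 ** (np.arange(N_BANDS) / 8.0)
centres = centres[centres < SR / 2]          # keep bands below Nyquist

def analyse(signal):
    """Measure per-band energy once per frame (crude bandpass via FFT bins)."""
    hop = SR // FRAME_RATE
    frames = []
    for start in range(0, len(signal) - hop, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + hop]))
        freqs = np.fft.rfftfreq(hop, 1 / SR)
        # nearest-bin magnitude stands in for each filter's output level
        frames.append([spectrum[np.argmin(np.abs(freqs - c))] for c in centres])
    return np.array(frames)

def resynthesise(frames):
    """Drive one sine 'oscillator' per band with the stored envelopes."""
    hop = SR // FRAME_RATE
    t = np.arange(len(frames) * hop) / SR
    out = np.zeros_like(t)
    for band, c in enumerate(centres):
        env = np.repeat(frames[:, band], hop)   # hold each 20 Hz sample
        out += env * np.sin(2 * np.pi * c * t)
    return out / (np.max(np.abs(out)) + 1e-12)  # normalise

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)  # one second of A440
y = resynthesise(analyse(tone))
```

At 20 frames per second the analysis is far too coarse for faithful reproduction, which is exactly the ‘low resolution’ character the text describes.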
“It all started in 1969, when I was working at Electronic Music Studios (EMS) in Putney, S.W. London, UK. I was asked to design a programming language with two constraints. The first constraint was that the language should be intelligible to the musicians who would use it for composing electronic music. The second constraint was that it had to run on a DEC PDP8/L with 4K 12-bit words of memory.”
The two PDP8s were named after Zinovieff’s children: Sofka (an older PDP8/S) and Leo (a newer, faster PDP8/L). Sofka was used as a sequencer that passed time-events to the audio hardware (the 64 filter-oscillators, six amplifiers, three digital-to-analogue converters, three “integrators” (devices that generated voltages varying linearly with time), twelve audio switches, six DC switches, and a 4-track Ampex tape deck). Leo was used to compute the ‘score’ and pass on the data when requested by Sofka, every millisecond or so;
“These devices could be controlled by a low-bandwidth data stream. For example, a single note could be specified by: pitch, waveform, amplitude, filtering, attack rate, sustain rate, and decay time. Some of these parameters, such as filtering, would often be constant during a musical phrase, and would be transmitted only once. Some notes might require more parameters, to specify a more complicated envelope, for instance. But, for most purposes, a hundred or so events per second, with a time precision of about 1 msec, is usually sufficient. (These requirements are somewhat similar to the MIDI interface which, of course, did not exist in 1970.)”
Prior to the development of MUSYS, the EMS PDP8s were used for the first ever unaccompanied performance of live computer music, ‘Partita for Unattended Computer’, at the Queen Elizabeth Hall, London, 1967. Notable compositions based on the MUSYS system include: ‘Medusa’ Harrison Birtwistle 1970, ‘Poems of Wallace Stevens’ Justin Connolly 1970, ‘Tesserae 4’ Justin Connolly 1971, ‘Chronometer’ Harrison Birtwistle 1972, ‘Dreamtime’ David Rowland 1972, ‘Violin Concerto’ Hans Werner Henze 1972.
Demonstrating the digital manipulation of a voice with the frequency sampler:
‘In the Beginning‘ Peter Grogono with Stan Van Der Beek 1972. “In 1972, Stan Van Der Beek visited EMS. Peter Zinovieff was away and, after listening to some of the things we could do, Stan left with brief instructions for a 15 minute piece that would “suggest the sounds of creation and end with the words ‘in the beginning was the word'”. All of the sounds in this piece are derived from these six words, heard at the end, manipulated by the EMS computer-controlled filter bank.”
A composition consisting of a single note might look like this:
#NOTE 56, 12, 15;
The note has pitch 56 (from an eight-octave chromatic scale with notes numbered from 0 to 63), loudness 12 (on a logarithmic scale from 0 to 15), and duration 15/100 = 0.15 seconds. The loudness value also determines the envelope of the note.
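On those figures, a #NOTE statement maps its three parameters to frequency, amplitude and time roughly as follows. The 32.7 Hz base pitch and the exact loudness curve are assumptions for illustration; the source only states that the pitch scale is chromatic and the loudness scale logarithmic:

```python
BASE_HZ = 32.7  # C1; taking this as the bottom of the eight-octave range is an assumption

def note(pitch, loudness, duration):
    """Map a #NOTE's three parameters to physical values (sketch)."""
    freq = BASE_HZ * 2 ** (pitch / 12)   # chromatic: 12 notes per octave
    amp = 2 ** (loudness - 15)           # logarithmic loudness, 15 -> 1.0
    seconds = duration / 100             # duration counted in hundredths
    return freq, amp, seconds

f, a, s = note(56, 12, 15)   # the example note above: roughly 0.15 s long
```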
An example of a MUSYS program that would play fifty random tone rows:
50 (N = 0 X = 0
1 M=12^ K=1 M-1 [ M (K = K*2) ]
X & K[G1]
X = X+K N = N+1 #NOTE M, 15^, 10^>3;
12 - N[G1]
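The listing appears to work like this: M = 12^ draws a random pitch class, the bracketed loop builds K = 2**M, and X accumulates the bits of notes already used, so each row ends up containing all twelve notes exactly once. A Python sketch of that logic (my reading of the MUSYS operators, not a verified translation):

```python
import random

def tone_row():
    used, row = 0, []        # 'used' plays the role of X: a bitmask of pitches
    while len(row) < 12:
        m = random.randrange(12)   # M = 12^ : a random pitch class
        k = 1 << m                 # K = 2**M, built by the M-1 [ ... ] loop
        if used & k:               # X & K [G1] : note already taken, retry
            continue
        used |= k                  # X = X + K : mark the note as used
        row.append(m)              # #NOTE M, 15^, 10^>3; : play it
    return row

rows = [tone_row() for _ in range(50)]   # the outer 50 ( ... ) repeat
```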
MUSYS evolved in 1978 into the MOUSE programming language: a small, efficient, stack-based interpreter.
EMS (Electronic Music Studios) was founded in 1965 by Peter Zinovieff, the son of an aristocratic Russian émigré, with a passion for electronic music, who set up the studio in the back garden of his home in Putney, London. The EMS studio was the hub of activity for electronic music in the UK during the late sixties and seventies, with composers such as Harrison Birtwistle, Tristram Cary, Karlheinz Stockhausen and Hans Werner Henze as well as the commercial electronic production group ‘Unit Delta Plus’ (Zinovieff, Delia Derbyshire and Brian Hodgson).
Zinovieff, with David Cockerell and Peter Grogono, developed a software program called MUSYS (which evolved into the MOUSE audio synthesis programming language) to run on two DEC PDP8 minicomputers, allowing the voltage control of multiple analogue synthesis parameters via digital punch-paper control. In the mid-1960s, access outside the academic or military establishment to not one but two 12-bit computers with 1K memory and a video monitor – for purely musical use – was completely unheard of:
”I was lucky in those days to have a rich wife and so we sold her tiara and we swapped it for a computer. And this was the first computer in the world in a private house.” – Peter Zinovieff
The specific focus of EMS was to work with digital audio analysis and manipulation or, as Zinovieff puts it, “To be able to analyse a sound; put it into sensible musical form on a computer; to be able to manipulate that form and re-create it in a musical way” (Zinovieff 2007). Digital signal processing was far beyond the capabilities of the DEC PDP8s; instead they were used to control a bank of 64 oscillators (actually resonant filters that could be used as sine wave generators) modified for digital control. MUSYS was therefore a hybrid digital-analogue performance controller similar to Max Mathews’ GROOVE system (1970) and Gabura & Ciamaga’s PIPER system (1965).
Even for the wealthy Peter Zinovieff, running EMS privately was phenomenally expensive, and he soon found himself in financial difficulties. The VCS range of synthesisers was launched in 1969 after Zinovieff received little interest when he offered, in a letter to ‘The Times’ newspaper, to donate the studio to the nation. It was decided that the only way EMS could be saved was to create a commercial, miniaturised version of the studio as a modular, affordable synthesiser for the education market. The first version of the synthesiser, designed by David Cockerell, was an early prototype called the Voltage Controlled Studio 1: a two-oscillator instrument built into a wooden rack unit, made for the Australian composer Don Banks for £50 after a lengthy pub conversation:
“We made one little box for the Australian composer Don Banks, which we called the VCS1…and we made two of those…it was a thing the size of a shoebox with lots of knobs, oscillators, filter, not voltage controlled. Maybe a ring modulator, and envelope modulator” David Cockerell 2002
The VCS1 was soon followed by a more commercially viable design: the Voltage Controlled Studio 3 (VCS3), with circuitry by David Cockerell, case design by Tristram Cary and input from Zinovieff. This device was designed as a small, modular, portable but powerful and versatile electronic music studio – rather than an electronic instrument – and as such initially came without a standard keyboard attached. The price of the instrument was kept as low as possible – about £330 (1971) – by using cheap army-surplus electronic components:
“A lot of the design was dictated by really silly things like what surplus stuff I could buy in Lisle Street [army-surplus junk shops in Lisle Street, Soho, London]…For instance, those slow motion dials for the oscillator, that was bought on Lisle Street, in fact nearly all the components were bought on Lisle Street…being an impoverished amateur, I was always conscious of making things cheap. I saw the way Moog did it [referring to Moog’s ladder filter] but I adapted that and changed that…he had a ladder based on ground-base transistors and I changed it to using simple diodes…to make it cheaper. Transistors were twenty pence and diodes were tuppence!” David Cockerell from ‘Analog Days’
Despite this low-budget approach, the success of the VCS3 was due to its portability and flexibility. It was the first affordable modular synthesiser that could easily be carried around and used live as a performance instrument. As well as being an electronic instrument in its own right, the VCS3 could be used as an effects generator and signal processor, allowing musicians to manipulate external sounds such as guitars and voice.
The VCS3 was equipped with two audio oscillators of varying frequency, producing sine, sawtooth and square waveforms, which could be coloured and shaped by filters, a ring modulator, a low-frequency oscillator, a noise generator, a spring reverb and envelope generators. The device could be controlled by two unique components whose design was dictated by what could be found in Lisle Street junk shops: a large two-dimensional joystick (from a remote-control aircraft kit) and a 16-by-16 pin board allowing the user to patch all the modules without the clutter of patch cables.
The original design, intended as a music box for electronic music composition in the same vein as Buchla’s Electronic Music Box, was quickly modified with the addition of a standard keyboard that allowed tempered pitch control over the monophonic VCS3. This brought the VCS3 to the attention of rock and pop musicians who either couldn’t afford the huge modular Moog systems (the VCS3 appeared a year before the Minimoog was launched in the USA) or couldn’t find Moog, ARP or Buchla instruments on the British market. Despite its reputation as being hopeless as a melodic instrument, due to its oscillators’ inherent instability, the VCS3 was enthusiastically championed by many British rock acts of the era: Pink Floyd, Brian Eno (who made the external audio processing ability of the instrument part of his signature sound in the early 70s), Robert Fripp, Hawkwind (the eponymous ‘Silver Machine‘), The Who, Gong and Jean-Michel Jarre among many others. The VCS3 was used as the basis for a number of other EMS instrument designs, including the ultra-portable Synthi A/AK/AKS (1972), a VCS3 housed in a plastic carrying case with a built-in analogue sequencer; the Synthi HiFli guitar synthesiser (1973); the EMS Spectron video synthesiser; the Synthi E (a cut-down VCS3 for educational purposes) and the Polysynthi, as well as several sequencer and vocoder units and the large modular EMS Synthi 100 (1971).
Despite initial success – at one point Robert Moog offered a struggling Moog Music to EMS for $100,000 – the EMS company succumbed to competition from large, established international instrument manufacturers who brought out cheaper, more commercial, stable and simpler electronic instruments; the trend in synthesisers had moved away from modular, user-patched instruments towards simpler preset performance keyboards. EMS finally closed in 1979 after a long period of decline. The EMS name was sold to Datanomics in Dorset, UK; more recently a former employee, Robin Wood, acquired the rights to the EMS name in 1997 and restarted small-scale production of the EMS range to the original specifications.
Peter Zinovieff. Currently working as a librettist and composer of electronic music in Scotland.
David Cockerell, chief designer of the VCS and Synthi range of instruments, left EMS in 1972 to join Electro-Harmonix, where he designed most of their effects pedals. He went to IRCAM, Paris in 1976 for six months, and then returned to Electro-Harmonix. Cockerell designed the entire Akai sampler range to date, some in collaboration with Chris Huggett (the Wasp and OSCar designer) and Tim Orr.
Tristram Cary, Director of EMS until 1973. Left to become Professor of Electronic Music at the Royal College of Music and later Professor of Music at the University of Adelaide. Now retired.
Peter Grogono, main software designer of MUSYS. Left EMS in 1973 but continued working on the MUSYS programming language, further developing it into the MOUSE language. Currently Professor at the Department of Computer Science, Concordia University, Canada.
The EMS Synthi 100
The EMS Synthi 100 was a large and very expensive (£6,500 in 1971) modular system; fewer than forty units were built and sold. The Synthi 100 was essentially three VCS3s combined, delivering a total of 12 oscillators, two duophonic keyboards giving four-note ‘polyphony’, plus a 3-track, 256-step digital sequencer. The instrument also came with optional modules including a Vocoder 500 and an interface connecting it, via a PDP8 computer, to a visual display – a configuration known as the ‘Computer Synthi’.
The Allen Computer Organ was one of the first commercial digital instruments, developed by Rockwell International (a US military technology company) and built by the Allen Organ Co. in 1971. The organ used an early form of digital sampling, allowing the user to choose pre-set voices or edit and store sounds using an IBM-style punch-card system.
The sound itself was generated from MOS (Metal Oxide Silicon) boards. Each MOS board contained 22 LSI (Large Scale Integration) circuits (miniaturised, photo-etched silicon devices containing thousands of transistors, based on technology developed by Rockwell International for the NASA space missions of the early 70s), giving a total of 48,000 transistors: unheard-of power for the 1970s.
“GROOVE is a hybrid system that interposes a digital computer between a human composer-performer and an electronic sound synthesizer. All of the manual actions of the human being are monitored by the computer and stored in its disk memory ”
Max Mathews and Richard Moore 1 Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997, p. 158.
In 1967 the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital-analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface: playing notes, turning knobs and so on. 2 Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997, p. 158. The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than generating the sound itself:
“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines–they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.” 3 Max Mathews, “Horizons in Computer Music,” March 8-9, 1997, Indiana University.
The system, written in assembler, ran only on the Honeywell DDP-224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant it was also possible to create libraries of programming routines, so that users could create their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartók was performed on the GROOVE at its first demonstration, at a conference on Music and Technology in Stockholm organised by UNESCO in 1970. Among the participants were several leading figures in electronic music, such as Pierre Schaeffer and Jean-Claude Risset.
“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.” 4 Max Mathews, “Horizons in Computer Music,” March 8-9, 1997, Indiana University.
The GROOVE system consisted of:
14 DAC control lines scanned every 1/100th of a second (twelve 8-bit and two 12-bit);
An ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three generated by the 3-dimensional movement of a joystick controller;
Two speakers for audio sound output;
A special keyboard to interface with the knobs to generate On/Off signals
A teletype keyboard for data input
A CDC-9432 disk storage;
A tape recorder for data backup
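The architecture listed above can be caricatured in a few lines: the computer’s job is not synthesis but scanning fourteen control lines a hundred times a second and filing the result to disk for later replay or editing. A sketch under those assumptions, with the hardware scan simulated by a placeholder function:

```python
import random

SCAN_HZ = 100   # control lines scanned every 1/100th of a second
N_LINES = 14    # fourteen DAC control lines

def read_control_lines():
    """Stand-in for the ADC/multiplexer scan of knobs and joystick."""
    return [random.random() for _ in range(N_LINES)]

def record(seconds):
    """One stored frame per 10 ms scan, as GROOVE's disk files held."""
    return [read_control_lines() for _ in range(seconds * SCAN_HZ)]

def replay(frames, send):
    """Push each stored frame back out to the DAC lines."""
    for frame in frames:
        send(frame)

take = record(2)                     # two seconds of performance gestures
replay(take, lambda frame: None)     # replay into a dummy output
```

Because the stored data are control functions rather than audio, a two-second ‘take’ here is only 2,800 numbers – which is exactly why a late-60s machine could handle it in real time.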
Antecedents of GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980 due both to the high cost of the system – some $20,000 – and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly. 5 F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.
The Ferranti Mk1 was the world’s first commercially available general-purpose computer, a commercial development of the Manchester Mk1 at Manchester University in 1951. Included in the Ferranti Mark 1’s instruction set was a ‘hoot’ command, which enabled the machine to give auditory feedback to its operators. Looping and timing the ‘hoot’ commands allowed the user to output pitched musical notes, a feature that enabled the Mk1 to produce the oldest surviving recording of computer music (the earliest reported but unrecorded computer music piece was created earlier in the same year by the CSIR Mk1 in Sydney, Australia). The recording was made by the BBC towards the end of 1951, programmed by Christopher Strachey, a maths teacher at Harrow and a friend of Alan Turing.
CSIRAC was an early digital computer designed by the British engineer Trevor Pearcey as part of a research project at the CSIR (the Sydney-based Radiophysics Laboratory of the Council for Scientific and Industrial Research) in the early 1950s. CSIRAC was intended as a prototype for a much larger machine and therefore included a number of innovative ‘experimental’ features, such as video and audio feedback designed to allow the operator to test and monitor the machine while it was running. As well as several optical screens, the CSIR Mk1 had a built-in Rola 5C speaker mounted on the console frame. The speaker was an output device used to alert the programmer that a particular event had been reached in the program: commonly used for warnings, often to signify the end of the program, and sometimes as a debugging aid. The output to the speaker was raw data from the computer’s bus and consisted of an audible click. To create a more musical tone, multiple clicks were combined using a short loop of instructions; the timing of the loop gave a change in frequency and therefore an audible change in pitch.
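The click-loop technique means pitch is set entirely by instruction timing: the speaker clicks once per pass through the loop, so the loop’s repetition rate is the perceived fundamental. The arithmetic is trivial but worth making explicit (the instruction time below is purely illustrative, not CSIRAC’s actual cycle time):

```python
def loop_frequency(instructions_per_loop, seconds_per_instruction):
    """One click per loop pass, so loop rate = perceived pitch in Hz."""
    return 1.0 / (instructions_per_loop * seconds_per_instruction)

# e.g. a 5-instruction loop at an assumed 1 ms per instruction
# repeats 200 times a second: a 200 Hz tone
f = loop_frequency(5, 0.001)
```

Padding the loop with extra instructions lowers the pitch; this is why early machine-music 'scores' were really tables of loop lengths.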
The first piece of digital computer music was created by Geoff Hill and Trevor Pearcey on the CSIR Mk1 in 1951, as a way of testing the machine rather than as a musical exercise. The music consisted of excerpts from popular songs of the day: ‘Colonel Bogey’, ‘Bonnie Banks’, ‘Girl with the Flaxen Hair’ and so on. The work was perceived as a fairly insignificant technical test and wasn’t recorded or widely reported:
CSIRAC – the University’s giant electronic brain – has LEARNED TO SING!
…it hums, in bathroom style, the lively ditty, Lucy Long. CSIRAC’s song is the result of several days’ mathematical and musical gymnastics by Professor T. M. Cherry. In his spare time Professor Cherry conceived a complicated punched-paper programme for the computer, enabling it to hum sweet melodies through its speaker… A bigger computer, Professor Cherry says, could be programmed in sound-pulse patterns to speak with a human voice… The Melbourne Age, Wednesday 27th July 1960
…When CSIRAC began sporting its musical gifts, we jumped on his first intellectual flaw. When he played “Gaudeamus Igitur,” the university anthem, it sounded like a refrigerator defrosting in tune. But then, as Professor Cherry said yesterday, “This machine plays better music than a Wurlitzer can calculate a mathematical problem”… Melbourne Herald, Friday 15th June 1956
The CSIR Mk1 was dismantled in 1955 and moved to the University of Melbourne, where it was renamed CSIRAC. The Professor of Mathematics, Thomas Cherry, had a great interest in programming and music, and created music with CSIRAC. During its time in Melbourne the practice of music programming on the CSIRAC was refined, allowing the input of music notation. The program tapes for a couple of test scales still exist, along with the popular melodies ‘So Early in the Morning’ and ‘In Cellar Cool’.