‘UPIC system’ (Unité Polyagogique Informatique du CEMAMu) Patrick Saint-Jean & Iannis Xenakis, France, 1977.

Iannis Xenakis and the UPIC system

Developed by the computer engineer Patrick Saint-Jean under the direction of the composer Iannis Xenakis at CEMAMu (Centre d’Etudes de Mathématique et Automatique Musicales) in Issy-les-Moulineaux, Paris, France, UPIC was one of a family of early computer-based graphic controllers for digital music (others included Max Mathews’ Graphic 1), themselves descended from earlier analogue graphical sound synthesis and composition instruments such as Yevgeny Murzin’s ANS Synthesiser, Daphne Oram’s ‘Oramics’, John Hanert’s ‘Hanert Electric Orchestra’ and much earlier Russian optical synthesis techniques.

UPIC Schematic

Xenakis had been working with computer systems as far back as 1961, using an IBM system to algorithmically generate scores and automate the ‘mass event’ techniques he had developed in works such as ‘Metastaseis’: “It was a program using probabilities, and I did some music with it. I was interested in automating what I had done before, mass events like Metastaseis. So I saw the computer as a tool, a machine that could make easier the things I was working with. And I thought perhaps I could discover new things”. In the late 1960s, when computers became powerful enough to handle both graphical input and sound synthesis, Xenakis began developing his ideas for what was to become the UPIC system: an intuitive graphical instrument on which the user could draw sound-waves and organise them into a musical score. Xenakis’s dream was to create a device that could generate all aspects of an electroacoustic composition graphically, freeing the composer from the complexities of software as well as the restrictions of conventional music notation.

UPIC Diagram from a film by Patrick Saint-Jean, 1976

UPIC consisted of an input device – a large, high-resolution digitising tablet whose actions were displayed on a CRT screen – and a computer for the analysis of the input data and the generation and output of the digital sound. Early versions of the UPIC system were not able to respond in real time to user input, so the composer had to wait until the data had been processed before hearing the result as audible sound. The UPIC system was subsequently developed to deliver real-time synthesis and composition, and expanded to allow digitally sampled waveforms as source material rather than purely synthesised tones.

The UPIC System hardware

To create sounds, the user drew waveforms or timbres on the input tablet, which could then be transposed, reversed, inverted or distorted through various algorithmic processes. These sounds could then be stored and arranged as a graphical score. The overall duration of the composition could be stretched or compressed, so the same page could render as a piece lasting anywhere from a few seconds to an hour. Essentially, UPIC was a digital version of Yevgeny Murzin’s ANS Synthesiser, allowing the composer to draw on an X/Y axis to generate and organise sounds.
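
The rendering idea is simple enough to sketch in code. The following Python fragment is a minimal illustration of the principle – not UPIC’s actual algorithm; the sample rate, the linear interpolation of the contour and the wavetable playback are all assumptions – rendering one drawn ‘arc’ on the time/pitch plane through a hand-drawn single-cycle waveform, with the same drawing then stretched to a different duration:

      import numpy as np

      SR = 44100  # assumed sample rate

      def render_arc(times, pitches, duration_s, waveform):
          # Follow the drawn pitch contour (times are normalised 0..1),
          # integrate frequency into phase, and look up the drawn cycle.
          n = int(SR * duration_s)
          t = np.linspace(0, 1, n)
          freq = np.interp(t, times, pitches)        # Hz along the arc
          phase = np.cumsum(freq / SR)               # phase in cycles
          idx = (phase * len(waveform)) % len(waveform)
          return waveform[idx.astype(int)]

      # A stand-in for a hand-drawn cycle, and a rise-and-fall arc:
      wave = np.sin(np.linspace(0, 2 * np.pi, 512)) ** 3
      arc = render_arc([0, 0.5, 1], [220, 880, 440], 2.0, wave)
      # 'Stretching the page': the same drawing, ten times longer.
      arc_slow = render_arc([0, 0.5, 1], [220, 880, 440], 20.0, wave)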

Since its first development, UPIC has been used by a number of composers, including Iannis Xenakis (‘Mycenae Alpha’ being the first work completely composed on the system), Jean-Claude Risset (‘Saxatile’, 1992), Takehito Shimazu (‘Illusions in Desolate Fields’, 1994), Julio Estrada (‘eua’on’), Brigitte Robindoré, Nicola Cisternino and Gerard Pape (CCMIX’s director).

More recent developments of the UPIC project include ‘IanniX’, an open-source graphic sequencer sponsored by the French Ministry of Culture, and HighC, a software graphic synthesiser and sequencer based directly on the UPIC interface.





Sources:

‘Iannis Xenakis: Who is He?’, Joel Chadabe, January 2010

http://www.umatic.nl/

http://patrick.saintjean.free.fr/SILOCOMUVI_UPICPSJ2012/CMMM2009-UPIC-CNET-SILOCoMuVi1975-77.html

‘Images of Sound in Xenakis’s Mycenae-Alpha’, Ronald Squibbs, Yale University

IanniX project homepage

‘Graphic 1’, William H. Ninke, Carl Christensen, Henry S. McDonald and Max Mathews, USA, 1965


‘Graphic 1’ was a hybrid hardware-software graphic input system for digital synthesis that allowed note values to be written on a CRT computer monitor. Although very basic by current standards, ‘Graphic 1’ was the precursor of most computer-based graphic composition environments, such as Cubase, Logic Pro, Ableton Live and so on.

The IBM 704b at Bell Labs used with the Graphic 1 system

‘Graphic 1’ was developed by William Ninke (with Carl Christensen and Henry S. McDonald) at Bell Labs for use by Max Mathews as a graphical front-end for the MUSIC IV synthesis software, to circumvent the lengthy and tedious process of entering numeric note values into the MUSIC program.

“The Graphic 1 allows a person to insert pictures and graphs directly into a computer memory by the very act of drawing these objects… Moreover the power of the computer is available to modify, erase, duplicate and remember these drawings.”
Max Mathews, quoted in ‘Electronic and Experimental Music: Technology, Music, and Culture’ by Thom Holmes

Lawrence Rosler of Bell Labs with Max Mathews in front of the Graphic 1 system, c. 1967

Graphic 2/GRIN 2 was later developed in 1976 as a commercial design package based on a faster DEC minicomputer, and was sold by Bell and DEC as a computer-aided design system for creating circuit designs and logic schematic drawings.

Audio recordings of the Graphic I/MUSIC IV system

Graphic I Audio file 1

Graphic I Audio file 2

Graphic I Audio file 3

Graphic I Audio file 4


Sources:

‘Interview with Max Mathews’, C. Roads and Max Mathews, Computer Music Journal, Vol. 4, No. 4 (Winter 1980), pp. 15-22, The MIT Press

Electronic and Experimental Music: Technology, Music, and Culture. Thom Holmes

http://www.musicainformatica.it/

http://cm.bell-labs.com/cm/cs/cstr/99.html

‘The Oramics Machine: From vision to reality’, Peter Manning, Department of Music, Durham University

M. V. Mathews and L. Rosler, Perspectives of New Music, Vol. 6, No. 2 (Spring–Summer 1968), pp. 92-118

W. H. Ninke, “GRAPHIC I: A Remote Graphical Display Console System,” Proceedings of the Fall Joint Computer Conference of the American Federation of Information Processing Societies 27 (1965), Part I, pp. 839-846.

‘Encyclopedia of Computer Science and Technology: Volume 3 – Ballistics …’ Jack Belzer, Albert G. Holzman, Allen Kent

MUSYS. Peter Grogono, United Kingdom, 1969

EMS was the London electronic music studio founded and run by Peter Zinovieff in 1965 to research and produce experimental electronic music. The studio was based around two DEC PDP8 minicomputers, purportedly the first privately owned computers in the world.

One of the DEC PDP8 mini-computers at EMS

Digital signal processing was far beyond the capabilities of the 600,000-instructions-per-second DEC PDP8s, with their few thousand 12-bit words of memory; instead, Peter Grogono was tasked with developing a new musical composition and ‘sequencing’ language called MUSYS. MUSYS was designed to be an easy-to-use, ‘composer-friendly’ and efficient programming language for making electronic music – one that could run within the limitations of the PDP8 and save all its data files to disk rather than paper tape. MUSYS, written in assembly language, allowed the PDP8s to control a bank of 64 filters which could be used either as resonant oscillators to output sine waves or, in reverse, to read and store frequency data from a sound source. This meant that MUSYS was a kind of low-resolution frequency sampler: it could ‘sample’ audio frequency data at 20 samples per second and then reproduce that data in ‘oscillator mode’. MUSYS was therefore a hybrid digital-analogue performance controller, similar to Max Mathews’ GROOVE system (1970) and Gabura & Ciamaga’s PIPER system (1965), and a precursor of more modern MIDI software applications.
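
As a rough sketch of that analysis/resynthesis idea – this is an illustration, not the EMS hardware: the sample rate, the 64 band centre frequencies and the correlation-based analysis are all assumptions – the ‘sample at 20 frames per second, replay through oscillators’ cycle might look like this in Python:

      import numpy as np

      SR = 20000                 # assumed working sample rate
      FRAME = SR // 20           # 20 analysis frames per second, as above
      # 64 assumed band centre frequencies, log-spaced 60 Hz .. 8 kHz:
      BANDS = 2 ** np.linspace(np.log2(60), np.log2(8000), 64)

      def analyse(signal):
          # 'Filter bank in reverse': per-frame energy of each band,
          # via correlation with a complex sinusoid at each centre frequency.
          t = np.arange(FRAME) / SR
          probes = np.exp(-2j * np.pi * np.outer(BANDS, t))
          frames = [np.abs(probes @ signal[s:s + FRAME]) / FRAME
                    for s in range(0, len(signal) - FRAME + 1, FRAME)]
          return np.array(frames)             # shape: (n_frames, 64)

      def resynthesise(frames):
          # Filter bank as oscillators: one sine per band, its amplitude
          # stepped once per 1/20 s frame.
          out = np.zeros(len(frames) * FRAME)
          t = np.arange(len(out)) / SR
          for b, f in enumerate(BANDS):
              out += np.repeat(frames[:, b], FRAME) * np.sin(2 * np.pi * f * t)
          return out / len(BANDS)

      voice = np.random.randn(SR)             # stand-in for one second of voice
      reconstruction = resynthesise(analyse(voice))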

“It all started in 1969, when I was working at Electronic Music Studios (EMS) in Putney, S.W. London, UK. I was asked to design a programming language with two constraints. The first constraint was that the language should be intelligible to the musicians who would use it for composing electronic music. The second constraint was that it had to run on a DEC PDP8/L with 4K 12-bit words of memory.”
Peter Grogono

The two PDP8s were named after Zinovieff’s children: Sofka (an older PDP8/S) and Leo (a newer, faster PDP8/L). Sofka was used as a sequencer, passing the timed events to the audio hardware: the 64 filter-oscillators, six amplifiers, three digital-to-analogue converters, three “integrators” (devices that generated voltages varying linearly with time), twelve audio switches, six DC switches and a 4-track Ampex tape deck. Leo was used to compute the ‘score’ and pass on the data when requested by Sofka, every millisecond or so:

“These devices could be controlled by a low-bandwidth data stream. For example, a single note could be specified by: pitch, waveform, amplitude, filtering, attack rate, sustain rate, and decay time. Some of these parameters, such as filtering, would often be constant during a musical phrase, and would be transmitted only once. Some notes might require more parameters, to specify a more complicated envelope, for instance. But, for most purposes, a hundred or so events per second, with a time precision of about 1 msec, is usually sufficient. (These requirements are somewhat similar to the MIDI interface which, of course, did not exist in 1970.)”
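
A toy model of such a stream, in Python – the field names, device numbering and values below are invented for illustration, but they follow the economy Grogono describes: timestamp events to the millisecond, and send a parameter only when it changes:

      from dataclasses import dataclass

      @dataclass
      class Event:
          time_ms: int        # 1 ms resolution, as in the quote
          device: int         # which oscillator/amplifier/switch
          param: str          # 'pitch', 'amplitude', 'filter', ...
          value: int

      phrase = [
          Event(0,   3, 'filter',    42),   # constant for the phrase: sent once
          Event(0,   3, 'amplitude', 12),
          Event(0,   3, 'pitch',     56),
          Event(150, 3, 'pitch',     58),   # next note: only the pitch changes
          Event(300, 3, 'pitch',     61),
      ]

      def play(stream):
          # What 'Sofka' might do: keep the latest value per (device, param)
          # and forward each change to the hardware at its due time.
          state = {}
          for ev in sorted(stream, key=lambda e: e.time_ms):
              state[(ev.device, ev.param)] = ev.value
              print(f"t={ev.time_ms:4d} ms  dev{ev.device}  {ev.param} -> {ev.value}")

      play(phrase)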


Prior to the development of MUSYS, the EMS PDP8s were used for the first ever unaccompanied performance of live computer music, ‘Partita for Unattended Computer’, at the Queen Elizabeth Hall, London, in 1967. Notable compositions based on the MUSYS system include: ‘Medusa’, Harrison Birtwistle, 1970; ‘Poems of Wallace Stevens’, Justin Connolly, 1970; ‘Tesserae 4’, Justin Connolly, 1971; ‘Chronometer’, Harrison Birtwistle, 1972; ‘Dreamtime’, David Rowland, 1972; ‘Violin Concerto’, Hans Werner Henze, 1972.

Audio Examples

Demonstrating the digital manipulation of a voice with the frequency sampler:

‘In the Beginning’, Peter Grogono with Stan Van Der Beek, 1972. “In 1972, Stan Van Der Beek visited EMS. Peter Zinovieff was away and, after listening to some of the things we could do, Stan left with brief instructions for a 15-minute piece that would “suggest the sounds of creation and end with the words ‘in the beginning was the word’”. All of the sounds in this piece are derived from these six words, heard at the end, manipulated by the EMS computer-controlled filter bank.”

‘Datafield’, Peter Grogono, 1970

‘Chimebars’, Peter Grogono, 1968

MUSYS code examples

A composition consisting of a single note might look like this:

      #NOTE 56, 12, 15;
      $

The note has pitch 56 (from an eight-octave chromatic scale with notes numbered from 0 to 63), loudness 12 (on a logarithmic scale from 0 to 15) and duration 15/100 = 0.15 seconds. The loudness value also determines the envelope of the note.
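
Decoding such a triple into physical quantities is a one-liner per parameter. The mapping below is a hypothetical Python reading of the description above – the base frequency of note 0 and the decibels per loudness step are not stated in the source, so both are assumptions:

      def decode_note(pitch, loudness, duration):
          # Interpret a MUSYS '#NOTE pitch, loudness, duration;' triple.
          freq_hz = 32.7 * 2 ** (pitch / 12)             # assume semitone steps from ~C1
          amplitude = 10 ** (-(15 - loudness) * 3 / 20)  # assume 3 dB per loudness step
          duration_s = duration / 100                    # hundredths of a second
          return freq_hz, amplitude, duration_s

      print(decode_note(56, 12, 15))   # roughly (831 Hz, 0.35, 0.15 s)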

An example of a MUSYS program that plays fifty random tone rows:

      50 (N = 0 X = 0
      1  M=12^  K=1  M-1 [ M (K = K*2) ]
         X & K[G1]
         X = X+K  N = N+1  #NOTE M, 15^, 10^>3;
         12 - N[G1]
      $
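
Read with Grogono’s published notes on the language in mind (‘n^’ yields a random number from 0 to n−1, ‘n (...)’ repeats a block n times, ‘expr [...]’ executes a block when the expression is positive, and ‘G1’ jumps to label 1), the listing builds each twelve-tone row by rejection sampling, with X acting as a bitmask of the pitch classes already used. A rough Python equivalent follows – this is one reading of the listing, and the duration expression ‘10^>3’ is interpreted, speculatively, as a random value floored at 3:

      import random

      def random_tone_row():
          # One twelve-tone row by rejection sampling, mirroring the listing:
          # x is a bitmask of pitch classes already played; k = 2**m.
          x, row = 0, []
          while len(row) < 12:                 # '12 - N [G1]': loop until N = 12
              m = random.randrange(12)         # 'M = 12^'
              k = 1 << m                       # 'M (K = K*2)': K = 2**M
              if x & k:                        # 'X & K [G1]': already used, retry
                  continue
              x |= k
              row.append(m)
          return row

      for _ in range(50):                      # '50 ( ... )': fifty rows
          for m in random_tone_row():
              loudness = random.randrange(15)           # '15^'
              duration = max(random.randrange(10), 3)   # '10^>3', reading '>' as max
              print(f"#NOTE {m}, {loudness}, {duration};")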

MUSYS evolved in 1978 into the MOUSE programming language: a small, efficient, stack-based interpreter.


Sources:

http://users.encs.concordia.ca/~grogono/Bio/ems.html

Peter Grogono, ‘MUSYS: Software for an Electronic Music Studio’, Software – Practice and Experience, vol. 3, pp. 369-383, 1973

http://www.retroprogramming.com/2012/08/mouse-language-for-microcomputers-by.html

The ‘PIPER’ System, James Gabura & Gustav Ciamaga, Canada, 1965

Charles Hamm, Lejaren Hiller, Salvatore Martirano, Herbert Brün and Kenneth Gaburo at the EMS, Toronto, 1965

PIPER was one of the earliest hybrid performance systems, allowing composers and musicians to write and edit music in real time using computers and analogue synthesisers. The system was developed by James Gabura and Gustav Ciamaga (who also collaborated with Hugh Le Caine on the ‘Sonde’) at the University of Toronto (UTEMS) in 1965. With computing technology in 1965 too weak to synthesise and control sounds in real time, the work-around was to leave the scoring and parameter control to the computer and the audio generation to an external analogue synthesiser. The PIPER system consisted of two Moog oscillators and a custom-built amplitude regulator to generate the sound, and an IBM 6120 to store parameter input and score the music. The computer would read and store the musician’s input – keyboard notes, filter changes, note durations and so on – and allow the user to play this back and edit it in real time.

By the 1980s, large hybrid analogue-digital performance systems like PIPER and Max Mathews’ GROOVE were obsolete, due to the advent of affordable microcomputers and analogue/digital sequencer technology.



Sources

http://www.thecanadianencyclopedia.ca/en/article/gustav-ciamaga-emc/

http://ems.music.illinois.edu/ems/articles/battisti.html

‘GROOVE System’, Max Mathews & Richard Moore, USA, 1970

Max Mathews with the GROOVE system

Max Mathews with the GROOVE system

In 1967, the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs, exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital-analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface – playing notes, turning knobs and so on. The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than on generating the sound itself:

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University

Richard Moore with the Groove System

The system, written in assembler, ran only on the Honeywell DDP-224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant that it was also possible to create libraries of programming routines, so that users could create their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartók was performed on GROOVE at its first demonstration, at a conference on Music and Technology in Stockholm organised by UNESCO in 1970; the participants included several leading figures in electronic music, such as Pierre Schaeffer and Jean-Claude Risset.

“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

The GROOVE System at the Bell Laboratories circa 1970

The GROOVE system consisted of:

  • 14 DAC control lines scanned 100 times per second (twelve 8-bit and two 12-bit)
  • An ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three by the 3-dimensional movement of a joystick controller
  • Two loudspeakers for audio output
  • A special keyboard interfaced with the knobs to generate on/off signals
  • A teletype keyboard for data input
  • A CDC-9432 disk store
  • A tape recorder for data backup
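
At its heart this is a sample-store-replay cycle over those control lines. A minimal Python sketch of that cycle follows – read_lines and write_lines are placeholders for the ADC and DAC hardware, and the real system also merged stored data with live input, ran user-defined logic, and kept time far more accurately than a sleep call:

      import time

      SCAN_HZ = 100      # the control lines were scanned 100 times per second
      N_LINES = 14

      def capture(read_lines, seconds):
          # Record a performance: one frame of all 14 line values per tick.
          frames = []
          for _ in range(int(seconds * SCAN_HZ)):
              frames.append(read_lines())      # tuple of N_LINES current values
              time.sleep(1 / SCAN_HZ)
          return frames

      def replay(frames, write_lines):
          # Re-run a stored performance, pushing frames back out to the DACs
          # that drive the analogue synthesiser.
          for frame in frames:
              write_lines(frame)
              time.sleep(1 / SCAN_HZ)

      take = capture(lambda: (0,) * N_LINES, seconds=0.1)   # dummy hardware
      replay(take, lambda frame: None)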



Antecedents of GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980, due both to the high cost of the system – some $20,000 – and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly.


Sources

Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.

F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.

http://www.vintchip.com/mainframe/DDP-24/DDP24.html

‘MUSIC N’, Max Vernon Mathews, USA, 1957

Max Mathews was a pioneering, central figure in computer music. After studying engineering at the California Institute of Technology and the Massachusetts Institute of Technology, where he completed his doctorate in 1954, Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘MUSIC’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘MUSIC N’ series and became a key figure in digital audio, synthesis, interaction and performance. ‘MUSIC N’ was the first time a computer had been used to investigate audio synthesis as such (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specific musical objectives), and it set the blueprint for computer audio synthesis that remains in use to this day in programmes like Csound, Max/MSP and SuperCollider, and in graphical modular programmes like Reaktor.

IBM 704 System

IBM 704 System

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”

Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University

MUSIC I 1957

Music I was written in assembler/machine code to work within the technical limitations of the IBM 704. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; the only parameters that could be set were the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories were, in those years, the only ones in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). As Mathews put it:

“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.”

In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I, titled ‘The Silver Scale’ (often credited as the first proper piece of computer-generated music), and, later in the same year, a one-minute piece called ‘Pitch Variations’, both of which were released on an anthology called ‘Music From Mathematics’, edited by Bell Labs in 1962.

Mathews and the IBM 7094

MUSIC II 1958

Music II was an updated, more versatile and functional version of Music I. It still used assembler, but targeted the transistor-based (rather than valve-based), much faster IBM 7094 series. Music II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.
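
The wavetable idea – store one cycle of each available shape and step through it at a rate proportional to the desired pitch – is still the standard trick. A minimal Python sketch follows; the table length, sample rate and these four example shapes are assumptions, since Music II’s actual sixteen shapes and table format are not documented here:

      import numpy as np

      SR = 44100
      N = 512                                  # assumed table length
      ramp = np.linspace(0, 2 * np.pi, N, endpoint=False)
      tables = {                               # four example shapes of a possible sixteen
          'sine':     np.sin(ramp),
          'square':   np.sign(np.sin(ramp)),
          'saw':      np.linspace(-1, 1, N, endpoint=False),
          'triangle': 2 * np.abs(np.linspace(-1, 1, N, endpoint=False)) - 1,
      }

      def wavetable_osc(shape, freq, dur_s):
          # Step through the stored cycle; the step size sets the pitch.
          n = int(SR * dur_s)
          phase = (np.arange(n) * freq * N / SR) % N
          return tables[shape][phase.astype(int)]

      # Four-voice polyphony, as in Music II: just sum four oscillators.
      chord = sum(wavetable_osc(s, f, 1.0) for s, f in
                  [('sine', 220), ('triangle', 277), ('saw', 330), ('square', 440)]) / 4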

MUSIC III 1960

“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.”
Max Mathews, 2011 interview with Geeta Dayal, Frieze.

The introduction of Unit Generators (UGs) in MUSIC III was an evolutionary leap in music computing, proved by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program – oscillators, filters, envelope shapers and so on – which the composer can flexibly connect to other UGs to generate a specific sound. A separate ‘score’ stage was added, in which sounds could be arranged chronologically. Each event was assigned to an instrument and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch card – still complex and archaic by today’s standards, but the first time a computer program used a paradigm familiar to composers.
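
The pattern is easiest to see in code. In this Python sketch the block names, the patching style and the score format are invented, not MUSIC III’s, but the division of labour is the same: unit generators as reusable signal functions, an ‘instrument’ as a patch of them, and a ‘score’ as a list of events played through that instrument:

      import numpy as np

      SR = 44100

      # Toy unit generators:
      def osc(freq, n):                       # a generalised oscillator
          return np.sin(2 * np.pi * freq * np.arange(n) / SR)

      def env(attack_s, decay_s, n):          # a linear envelope shaper
          a, d = int(attack_s * SR), int(decay_s * SR)
          return np.concatenate([np.linspace(0, 1, a),
                                 np.ones(n - a - d),
                                 np.linspace(1, 0, d)])

      # An 'instrument' is just a patch of unit generators...
      def pluck(freq, dur_s):
          n = int(dur_s * SR)
          return osc(freq, n) * env(0.01, 0.2, n)

      # ...and the 'score' is a list of events feeding that instrument.
      score = [(440, 0.5), (554, 0.5), (659, 1.0)]     # (frequency Hz, duration s)
      audio = np.concatenate([pluck(f, d) for f, d in score])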

“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.”
Max Mathews, 2011 interview with Geeta Dayal, Frieze.

Max Mathews and Joan Miller at Bell Labs

MUSIC IV

MUSIC IV, completed in 1963, was the result of a collaboration between Max Mathews and Joan Miller: a more complete version of the MUSIC III system, written in a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.

“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews, 1980

MUSIC IVB, IVBF and IVF

Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team: MUSIC IVB and MUSIC IVBF at Princeton, and MUSIC IVF at Argonne Laboratories. These versions were built using FORTRAN rather than assembler.

MUSIC V

MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and built MUSIC V in FORTRAN, specifically for the IBM 360 series; as a result the programme was faster, more stable, and could run on any IBM 360 machine outside Bell Laboratories. The data entry procedure was simplified in both the Orchestra and the Score sections. One of the most interesting new features was a set of modules that allowed analogue sounds to be imported into Music V. Mathews persuaded Bell Labs not to copyright the software, making MUSIC V arguably one of the first open-source programmes – a decision that ensured its adoption and longevity and led directly to today’s Csound.

“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.”
Max Mathews, 2011 interview with Geeta Dayal, Frieze.

MUSIC V marked the end of Mathews’ direct involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore, 1970) and the ‘Radio Baton’ (with Tom Oberheim, 1985).

YEAR VERSION PLACE AUTHOR
1957 Music I Bell Labs (New York) Max Mathews
1958 Music II Bell Labs (New York) Max Mathews
1960 Music III Bell Labs (New York) Max Mathews
1963 Music IV Bell Labs (New York) Max Mathews, Joan Miller
1963 Music IVB Princeton University Hubert Howe, Godfrey Winham
1965 Music IVF Argonne Laboratories (Chicago) Arthur Roberts
1966 Music IVBF Princeton University Hubert Howe, Godfrey Winham
1966 Music 6 Stanford University Dave Poole
1968 Music V Bell Labs (New York) Max Mathews
1969 Music 360 Princeton University Barry Vercoe
1969 Music 10  Stanford University John Chowning, James Moorer
1970 Music 7 Queen’s College (New York) Hubert Howe, Godfrey Winham
1973 Music 11 M.I.T. Barry Vercoe
1977 Mus10 Stanford University Leland Smith, John Tovar
1980 Cmusic University of California Richard Moore
1984 Cmix Princeton University Paul Lansky
1985 Music 4C University of Illinois James Beauchamp, Scott Aurenz
1986 Csound M.I.T. Barry Vercoe


Sources

http://www.computer-history.info/Page4.dir/pages/IBM.704.dir/

http://www.musicainformatica.org

Curtis Roads, Interview with Max Mathews, Computer Music Journal, Vol. 4, 1980.

‘Frieze’ interview with Max Mathews, by Geeta Dayal

‘An Interview with Max Mathews’, Tae Hong Park, Music Department, Tulane University

CSIR Mk1 & CSIRAC, Trevor Pearcey & Geoff Hill, Australia, 1951

Trevor Pearcey at the CSIR Mk1

CSIRAC was an early digital computer designed by the British engineer Trevor Pearcey as part of a research project at the Sydney-based Radiophysics Laboratory of the Council for Scientific and Industrial Research (CSIR) in the early 1950s. CSIRAC was intended as a prototype for a much larger machine, and therefore included a number of innovative ‘experimental’ features, such as video and audio feedback designed to allow the operator to test and monitor the machine while it was running. As well as several optical screens, the CSIR Mk1 had a built-in Rola 5C speaker mounted on the console frame. The speaker was an output device used to alert the programmer that a particular event had been reached in the program – commonly used for warnings, often to signify the end of the program, and sometimes as a debugging aid. The output to the speaker was raw data from the computer’s bus, each pulse producing an audible click. To create a more musical tone, multiple clicks were combined using a short loop of instructions; the timing of the loop set the interval between clicks, and therefore the audible pitch.
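
The trick translates directly into a few lines of code: emit a bare pulse each time the loop comes round, with the loop length setting the pitch. A Python approximation follows – the sample rate and pulse shape are assumptions, and the real machine’s ‘clicks’ were raw bus pulses rather than clean samples:

      import numpy as np

      SR = 44100

      def click_train(freq_hz, dur_s):
          # One raw 'click' per loop iteration; the loop period sets the pitch.
          out = np.zeros(int(SR * dur_s))
          period = int(SR / freq_hz)       # samples between clicks
          out[::period] = 1.0
          return out

      # A crude scale: shorter loops give higher pitches.
      melody = np.concatenate([click_train(f, 0.3) for f in (262, 294, 330, 349, 392)])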

The CSIRAC console switch panel with multiple rows of 20 switches used to set bits in various registers.

The first piece of digital computer music was created by Geoff Hill and Trevor Pearcey on the CSIR Mk1 in 1951, as a way of testing the machine rather than as a musical exercise. The music consisted of excerpts from popular songs of the day – ‘Colonel Bogey’, ‘Bonnie Banks’, ‘Girl with the Flaxen Hair’ and so on. The work was perceived as a fairly insignificant technical test and was neither recorded nor widely reported:

An audio reconstruction of CSIRAC playing ‘Colonel Bogey’ (c. 1951)
CSIRAC plays ‘In Cellar Cool’, with a simulation of CSIRAC’s room noises.

CSIRAC – the University’s giant electronic brain – has LEARNED TO SING!

…it hums, in bathroom style, the lively ditty, Lucy Long. CSIRAC’s song is the result of several days’ mathematical and musical gymnastics by Professor T. M. Cherry. In his spare time Professor Cherry conceived a complicated punched-paper programme for the computer, enabling it to hum sweet melodies through its speaker… A bigger computer, Professor Cherry says, could be programmed in sound-pulse patterns to speak with a human voice…
The Melbourne Age, Wednesday 27th July 1960

Later version of the CSIRAC at The University of Melbourne

…When CSIRAC began sporting its musical gifts, we jumped on his first intellectual flaw. When he played “Gaudeamus Igitur,” the university anthem, it sounded like a refrigerator defrosting in tune. But then, as Professor Cherry said yesterday, “This machine plays better music than a Wurlitzer can calculate a mathematical problem”…
Melbourne Herald, Friday 15th June 1956

Portable computer: CSIRAC on the move to Melbourne, June 1955

The CSIR Mk1 was dismantled in 1955 and moved to the University of Melbourne, where it was renamed CSIRAC. The Professor of Mathematics, Thomas Cherry, had a great interest in programming and music, and created music with CSIRAC. During its time in Melbourne the practice of music programming on the CSIRAC was refined, allowing the input of music notation. The program tapes for a couple of test scales still exist, along with the popular melodies ‘So Early in the Morning’ and ‘In Cellar Cool’.

Music instructions for the CSIRAC by Thomas Cherry




Later version of the CSIRAC at The University of Melbourne


Sources

http://www.audionautas.com/2011/09/music-of-csirac.html

‘Australia’s First Computer Music’, Paul Doornbusch, Common Ground Publishing

http://ww2.csse.unimelb.edu.au/dept/about/csirac/music/index.html

The ‘RCA Synthesiser I & II’, Harry Olson & Herbert Belar, USA, 1952

The RCA Mark II Synthesizer at the Columbia-Princeton Electronic Music Center at Columbia’s Prentis Hall on West 125th Street in 1958. Pictured: Milton Babbitt, Peter Mauzey, Vladimir Ussachevsky.

In the 1950s RCA was one of the largest entertainment conglomerates in the United States; its business interests included manufacturing record players, radios and electronic equipment (military and domestic, including the US version of the Theremin), as well as recording music and manufacturing records. In the early 1950s RCA initiated an unusual research project whose aim was to auto-generate pop ‘hits’ by analysing thousands of music recordings – the plan being that if they could work out what made a hit a hit, they could re-use the formula and generate their own hit pop music. As a side benefit, the project also explored the possibility of cutting the costs of recording sessions by automating arrangements and using electronically generated sounds rather than expensive (and unionised) orchestras: essentially, creating music straight from score to disc without error or re-takes.

RCA MkII

The RCA electrical engineers Harry Olson and Herbert Belar were appointed to develop an instrument capable of delivering this complex task, and in doing so inadvertently (as is so often the case in the history of electronic music) created one of the first programmable synthesisers – the precursors being the Givelet-Coupleux Organ of 1930 and the Hanert Electric Orchestra of 1945.

Paper punch roll showing parameter allocation

The resulting RCA Mark I machine was a monstrous collection of modular components that took up a whole room at Columbia University’s Computer Music Center (then known as the Columbia-Princeton Electronic Music Center). The ‘instrument’ was basically an analogue computer; the only input to the machine was a typewriter-style keyboard on which the musician wrote a score in a type of binary code.


Paper-punch input of the RCA Synthesiser

Punch paper terminals of the RCA MkII

The keyboard punched holes in a pianola-type paper roll to determine pitch, timbre, volume and envelope for each note. Despite the apparent crudeness of this input device, the paper-roll technique allowed for complex compositions: the roll had four columns of holes for each parameter, giving a range of sixteen values for each aspect of the sound. The paper roll moved at 10 cm/sec, giving a maximum tempo of 240 bpm. Longer notes were composed of individual holes, with a mechanism that sustained the note through to the last hole. Attack times were variable from 1 ms to 2 sec, and decay times from 4 ms to 19 sec. On the Mark II, high- and low-pass filtering was added, along with noise, glissando, vibrato and resonance, giving a cumulative total of millions of possible settings.
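
Reading a row of such a roll amounts to treating each group of four hole positions as a 4-bit binary number, 0–15. The Python sketch below illustrates the arithmetic only – the actual column ordering and bit significance on the RCA roll are not documented here, so this layout is invented:

      def decode_param(holes):
          # Four hole positions, most significant first -> a value 0..15.
          value = 0
          for h in holes:
              value = (value << 1) | int(h)
          return value

      # One hypothetical row: pitch, timbre, volume, envelope, four holes each.
      row = [1, 0, 1, 1,   0, 1, 0, 0,   1, 1, 1, 1,   0, 0, 1, 0]
      pitch, timbre, volume, envelope = (decode_param(row[i:i + 4])
                                         for i in range(0, 16, 4))
      print(pitch, timbre, volume, envelope)   # -> 11 4 15 2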

Structure of the RCA MkII

RCA Synthesiser structure

The sound itself was generated by a series of vacuum-tube oscillators (12 in the MkI and 24 in the MkII) giving four-voice polyphony, the outputs of which could be divided down into different octaves. The sound was manually routed to the various components – a technique later adopted in the modular synthesisers of the 1960s and 70s. The eventual output of the machine was monitored on speakers and recorded to a lacquer disc; by re-using and bouncing the disc recordings, a total of 216 sound tracks could be obtained. In 1959 a more practical tape recorder was substituted.

Babbitt, Luening, Ussachevsky and others at the RCA MkII

It seems that by the time the MkII synthesiser was built, RCA had given up on its initial analysis project. Mainstream musicians had baulked at the un-musical interface and complexity of the machine, but these were the very qualities that appealed to the new breed of serialist composers, who took the RCA Synthesiser to their hearts:

The number of functions associated with each component of the musical event…has been multiplied. In the simplest possible terms, each such ‘atomic’ event is located in a five-dimensional musical space determined by pitch-class, register, dynamic, duration, and timbre. These five components not only together define the single event, but, in the course of a work, the successive values of each component create an individually coherent structure, frequently in parallel with the corresponding structures created by each of the other components. Inability to perceive and remember precisely the values of any of these components results in a dislocation of the event in the work’s musical space, an alteration of its relation to all other events in the work, and–thus–a falsification of the composition’s total structure…

Why should the layman be other than bored and puzzled by what he is unable to understand, music or anything else?…Why refuse to recognize the possibility that contemporary music has reached a stage long since attained by other forms of activity? The time has passed when the normally well-educated [person] without special preparation could understand the most advanced work in, for example, mathematics, philosophy, and physics. Advanced music, to the extent that it reflects the knowledge and originality of the informed composer, scarcely can be expected to appear more intelligible than these arts and sciences to the person whose musical education usually has been even less extensive than his background in other fields.

I dare suggest that the composer would do himself and his music an immediate and eventual service by total, resolute, and voluntary withdrawal from this public world to one of private performance and electronic media, with its very real possibility of complete elimination of the public and social aspects of musical composition. By so doing, the separation between the domains would be defined beyond any possibility of confusion of categories, and the composer would be free to pursue a private life of professional achievement, as opposed to a public life of unprofessional compromise and exhibitionism.

But how, it may be asked, will this serve to secure the means of survival for the composer and his music? One answer is that, after all, such a private life is what the university provides the scholar and the scientist. It is only proper that the university which–significantly–has provided so many contemporary composers with their professional training and general education, should provide a home for the ‘complex,’ ‘difficult,’ and ‘problematical’ in music.

Milton Babbitt

Among the composers who used the machine frequently were the Princeton composer Milton Babbitt and Charles Wuorinen, the latter of whom composed the Pulitzer Prize-winning ‘Time’s Encomium’ on it in 1968.

RCA Synthesiser

The pioneering RCA Synthesiser became obsolete and fell out of use in the early 1960s with the arrival of cheaper, more reliable solid-state transistor technology and the less complex programming interfaces of instruments such as the Buchla and Moog ranges of synthesisers. Neither machine survives in working condition today. The MkI was dismantled during the 1960s, parts from it cannibalised to repair the MkII. The MkII is still at Columbia University’s Computer Music Center, but has not been maintained and is reportedly in poor condition; it was vandalised sometime in the early 1970s and little used after that.




RCA issued a box set of four 45 rpm extended-play discs with a descriptive brochure. The set featured a narration and demonstration of the basic features of the synthesizer, and concluded with renditions of several well-known popular and classical pieces “played” on the synthesizer:

Side 1: The Synthesis of Music-The Physical Characteristics of Musical Sounds (7:13, 3.3 mb)
Side 2: The Synthesis of Music-Synthesis by Parts (Part 1) (5:55, 2.7 mb)
Side 3: The Synthesis of Music-Synthesis by Parts (Part 2) (4:37, 2.1 mb)
Side 4: Excerpts from Musical Selections (Part 1) (6:05, 2.8 mb)
Side 5: Excerpts from Musical Selections (Part 2) (3:28, 1.6 mb)
Side 6: Complete Selections-Bach Fugue No. 2, Brahms Hungarian Dance No. 1 (4:47, 2.2 mb)
Side 7: Complete Selections-Oh Holy Night (Adam), Home Sweet Home (Bishop) (6:42, 3.1 mb)
Side 8: Complete Selections-Stephen Foster Medley, Nola (Arndt), Blue Skies (Berlin) (7:49, 3.6 mb)



Sources

http://www.jamesfei.com/pictures/pictures-rca/pictures-rca.html