‘UPIC system’ (Unité Polyagogique Informatique du CEMAMu) Patrick Saint-Jean & Iannis Xenakis, France, 1977.

Iannis Xenakis and the UPIC system

Developed by the computer engineer Patrick Saint-Jean under the direction of the composer Iannis Xenakis at CEMAMu (Centre d’Etudes de Mathématique et Automatique Musicales) in Issy-les-Moulineaux, Paris, France, UPIC was one of a family of early computer-based graphic controllers for digital music (others include Max Mathews’ ‘Graphic 1’). These in turn drew on earlier analogue graphical sound synthesis and composition instruments such as Yevgeny Murzin’s ANS Synthesiser, Daphne Oram’s ‘Oramics’, John Hanert’s ‘Hanert Electric Orchestra’ and much earlier Russian optical synthesis techniques.

UPIC Schematic

Xenakis had been working with computer systems as far back as 1961, using an IBM system to generate mathematical algorithmic scores for ‘Metastaseis’; “It was a program using probabilities, and I did some music with it. I was interested in automating what I had done before, mass events like Metastaseis. So I saw the computer as a tool, a machine that could make easier the things I was working with. And I thought perhaps I could discover new things”. In the late 1960s, when computers became powerful enough to handle both graphical input and sound synthesis, Xenakis began developing his ideas for what was to become the UPIC system: an intuitive graphical instrument where the user could draw sound-waves and organise them into a musical score. Xenakis’s dream was to create a device that could generate all aspects of an electroacoustic composition graphically and free the composer from the complexities of software as well as the restrictions of conventional music notation.

UPIC Diagram from a film by Patrick Saint Jean in 1976

UPIC consisted of an input device (a large, high-resolution digitising tablet whose actions were displayed on a CRT screen) and a computer for analysing the input data and generating and outputting the digital sound. Early versions of the UPIC system were not able to respond in real time to user input, so the composer had to wait until the data had been processed before hearing audible sound. The UPIC system was subsequently developed to deliver real-time synthesis and composition, and expanded to allow digitally sampled waveforms as source material rather than purely synthesised tones.

The UPIC System hardware

To create sounds, the user drew waveforms or timbres on the input tablet, which could then be transposed, reversed, inverted or distorted through various algorithmic processes. These sounds could then be stored and arranged as a graphical score. The overall speed of the composition could be stretched, creating compositions lasting anywhere from a few seconds to an hour. Essentially, UPIC was a digital version of Yevgeny Murzin’s ANS Synthesiser, which allowed the composer to draw on an X/Y axis to generate and organise sounds.
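
The underlying idea is simple enough to sketch. The fragment below is a loose illustration, not CEMAMu code: the point list, table size and sample rate are all invented for the example. It resamples a hand-drawn curve into a wavetable and then reads the table back at a chosen pitch, which is roughly what UPIC did with a drawn waveform and a drawn pitch arc:

```python
def drawn_wave_to_table(points, size=256):
    """Resample a hand-drawn curve (x, y pairs, x rising from 0 to 1)
    into a fixed-size wavetable by linear interpolation."""
    table = []
    for i in range(size):
        x = i / size
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                table.append(y0 + t * (y1 - y0))
                break
    return table

def render(table, freq, duration, sr=8000):
    """Read the wavetable at a given frequency, as a drawn pitch arc would."""
    out, phase = [], 0.0
    for _ in range(int(sr * duration)):
        out.append(table[int(phase) % len(table)])
        phase += freq * len(table) / sr
    return out

# a crudely 'drawn' triangle-ish waveform, as four tablet points
points = [(0.0, 0.0), (0.25, 1.0), (0.75, -1.0), (1.0, 0.0)]
table = drawn_wave_to_table(points)
samples = render(table, freq=220.0, duration=0.1)
```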

Since its first development UPIC has been used by a number of composers including Iannis Xenakis (Mycenae Alpha being the first work completely composed on the system), Jean-Claude Risset (Saxatile, 1992), Takehito Shimazu (Illusions in Desolate Fields, 1994), Julio Estrada (‘eua’on’), Brigitte Robindoré, Nicola Cisternino and Gerard Pape (CCMIX’s director).

More recent developments of the UPIC project include the French Ministry of Culture sponsored ‘IanniX’, an open-source graphic sequencer, and HighC, a software graphic synthesiser and sequencer based directly on the UPIC interface.



Images of the UPIC System


Sources:

Iannis Xenakis: Who is He? Joel Chadabe January 2010

http://www.umatic.nl/

http://patrick.saintjean.free.fr/SILOCOMUVI_UPICPSJ2012/CMMM2009-UPIC-CNET-SILOCoMuVi1975-77.html

‘Images of Sound in Xenakis’s Mycenae-Alpha’ Ronald Squibbs, Yale University

IanniX project homepage

‘Graphic 1’ William H. Ninke, Carl Christensen, Henry S. McDonald and Max Mathews. USA, 1965


‘Graphic 1’ was a hybrid hardware-software graphic input system for digital synthesis that allowed note values to be written on a CRT computer monitor. Although very basic by current standards, ‘Graphic 1’ was the precursor to most computer-based graphic composition environments such as Cubase, Logic Pro, Ableton Live and so on.

The IBM704b at Bell Labs used with the Graphic 1 system

‘Graphic 1’ was developed by William Ninke (with Carl Christensen and Henry S. McDonald) at Bell Labs for use by Max Mathews as a graphical front-end for the MUSIC IV synthesis software, to circumvent the lengthy and tedious process of adding numeric note values to the MUSIC program.

“The Graphic 1 allows a person to insert pictures and graphs directly into a computer memory by the very act of drawing these objects… Moreover the power of the computer is available to modify, erase, duplicate and remember these drawings”
Max Mathews, quoted from ‘Electronic and Experimental Music: Technology, Music, and Culture’ by Thom Holmes

Lawrence Rosler of Bell Labs with Max Mathews in front of the Graphic 1 system, c. 1967

Graphic 2/ GRIN 2 was later developed in 1976 as a commercial design package based on a faster PDP2 computer and was sold by Bell and DEC as a computer-aided design system for creating circuit designs and logic schematic drawings.

Audio recordings of the Graphic I/MUSIC IV system

Graphic I Audio file 1

Graphic I Audio file 2

Graphic I Audio file 3

Graphic I Audio file 4


Sources:

‘Interview with Max Mathews’ C. Roads and Max Mathews. Computer Music Journal, Vol. 4, No. 4 (Winter, 1980), pp. 15-22. The MIT Press

Electronic and Experimental Music: Technology, Music, and Culture. Thom Holmes

http://www.musicainformatica.it/

http://cm.bell-labs.com/cm/cs/cstr/99.html

‘The Oramics Machine: From vision to reality’. Peter Manning. Department of Music, Durham University, Palace Green, Durham, DH1 3RL, UK

M. V. Mathews and L. Rosler, ‘Perspectives of New Music’, Vol. 6, No. 2 (Spring–Summer 1968), pp. 92-118

W. H. Ninke, “GRAPHIC I: A Remote Graphical Display Console System,” Proceedings of the Fall Joint Computer Conference of the American Federation of Information Processing Societies 27 (1965), Part I, pp. 839-846.

‘Encyclopedia of Computer Science and Technology: Volume 3 – Ballistics …’ Jack Belzer, Albert G. Holzman, Allen Kent

MUSYS. Peter Grogono, United Kingdom, 1969

EMS was the London electronic music studio founded and run by Peter Zinovieff in 1965 to research and produce experimental electronic music. The studio was based around two DEC PDP8 minicomputers, purportedly the first privately owned computers in the world.

One of the DEC PDP8 mini-computers at EMS

Digital signal processing was far beyond the capabilities of the 600,000 instructions-per-second, 12K RAM DEC PDP8s; instead, Peter Grogono was tasked with developing a new musical composition and ‘sequencing’ language called MUSYS. MUSYS was designed to be an easy-to-use, ‘composer friendly’ and efficient programming language for making electronic music (i.e. it could run within the limitations of the PDP8 and save all its data files to disk rather than paper tape). MUSYS, written in assembly language, allowed the PDP8s to control a bank of 64 filters which could be used either as resonant oscillators to output sine waves or, in reverse, to read and store frequency data from a sound source. This meant that MUSYS was a type of low-resolution frequency sampler; it could ‘sample’ audio frequency data at 20 samples per second and then reproduce that sampled data in ‘oscillator mode’. MUSYS was therefore a hybrid digital-analogue performance controller similar to Max Mathews’ GROOVE system (1970) and Gabura & Ciamaga’s PIPER system (1965), and a precursor to more modern MIDI software applications.
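
The ‘frequency sampler’ idea can be sketched in software as an analysis/resynthesis pair: measure the energy in each of 64 fixed bands twenty times a second, store the frames, then drive 64 sine oscillators from the stored data. The band spacing, sample rate and maths below are guesses for illustration only; the real EMS hardware did the band measurement with analogue filters, not code:

```python
import math

SR = 8000            # toy sample rate, not the EMS hardware rate
N_BANDS = 64         # the EMS bank had 64 filter/oscillator channels
FRAME_RATE = 20      # MUSYS 'sampled' the bank 20 times per second

# assumed band centres: 64 log-spaced bands from 50 Hz to 3.2 kHz
BAND_FREQS = [50.0 * 2 ** (i * 6 / 63) for i in range(N_BANDS)]

def analyse(signal):
    """Estimate per-band energy for each 1/20 s frame -- a software
    stand-in for the hardware filters run 'in reverse'."""
    frame_len = SR // FRAME_RATE
    frames = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        chunk = signal[start:start + frame_len]
        energies = []
        for f in BAND_FREQS:
            # correlate the chunk against a sine/cosine pair at the band centre
            re = sum(s * math.cos(2 * math.pi * f * n / SR) for n, s in enumerate(chunk))
            im = sum(s * math.sin(2 * math.pi * f * n / SR) for n, s in enumerate(chunk))
            energies.append(math.hypot(re, im) / frame_len)
        frames.append(energies)
    return frames

def resynthesise(frames):
    """Drive one sine oscillator per band from the stored frames
    ('oscillator mode')."""
    frame_len = SR // FRAME_RATE
    out = []
    for frame in frames:
        for _ in range(frame_len):
            t = len(out) / SR
            out.append(sum(a * math.sin(2 * math.pi * f * t)
                           for a, f in zip(frame, BAND_FREQS) if a > 1e-3))
    return out

# a 440 Hz test tone should light up the band nearest 440 Hz (index 33)
tone = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]
frames = analyse(tone)
```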

“It all started in 1969, when I was working at Electronic Music Studios (EMS) in Putney, S.W. London, UK. I was asked to design a programming language with two constraints. The first constraint was that the language should be intelligible to the musicians who would use it for composing electronic music. The second constraint was that it had to run on a DEC PDP8/L with 4K 12-bit words of memory.”

The two PDP8s were named after Zinovieff’s children: Sofka (an older PDP8/S) and Leo (a newer, faster PDP8/L). Sofka was used as a sequencer that passed time-events to the audio hardware (the 64 filter-oscillators, six amplifiers, three digital-to-analogue converters, three “integrators” (devices that generated voltages varying linearly with time), twelve audio switches, six DC switches and a 4-track Ampex tape deck). Leo was used to compute the ‘score’ and pass on the data when requested by Sofka, every millisecond or so:

“These devices could be controlled by a low-bandwidth data stream. For example, a single note could be specified by: pitch, waveform, amplitude, filtering, attack rate, sustain rate, and decay time. Some of these parameters, such as filtering, would often be constant during a musical phrase, and would be transmitted only once. Some notes might require more parameters, to specify a more complicated envelope, for instance. But, for most purposes, a hundred or so events per second, with a time precision of about 1 msec, is usually sufficient. (These requirements are somewhat similar to the MIDI interface which, of course, did not exist in 1970.)”


Prior to the development of MUSYS, the EMS PDP8s were used for the first ever unaccompanied performance of live computer music, ‘Partita for Unattended Computer’, at the Queen Elizabeth Hall, London, 1967. Notable compositions made with the MUSYS system include: ‘Medusa’ (Harrison Birtwistle, 1970), ‘Poems of Wallace Stevens’ (Justin Connolly, 1970), ‘Tesserae 4’ (Justin Connolly, 1971), ‘Chronometer’ (Harrison Birtwistle, 1972), ‘Dreamtime’ (David Rowland, 1972) and ‘Violin Concerto’ (Hans Werner Henze, 1972).

Audio Examples

Demonstrating the digital manipulation of a voice with the frequency sampler:

‘In the Beginning’ Peter Grogono with Stan Van Der Beek 1972. “In 1972, Stan Van Der Beek visited EMS. Peter Zinovieff was away and, after listening to some of the things we could do, Stan left with brief instructions for a 15 minute piece that would “suggest the sounds of creation and end with the words ‘in the beginning was the word’”. All of the sounds in this piece are derived from these six words, heard at the end, manipulated by the EMS computer-controlled filter bank.”

‘Datafield’ Peter Grogono 1970

‘Chimebars’ Peter Grogono 1968

MUSYS code examples

A composition consisting of a single note might look like this:

      #NOTE 56, 12, 15;
      $

The note has pitch 56 (from an eight-octave chromatic scale with notes numbered from 0 to 63), loudness 12 (on a logarithmic scale from 0 to 15) and duration 15/100 = 0.15 seconds. The loudness value also determines the envelope of the note.
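
Those #NOTE scales can be sketched as a mapping into frequency, amplitude and seconds. The base frequency, the decibels-per-step of the loudness scale and the assumption of twelve equal-tempered notes per octave are all guesses for illustration; the article does not specify the hardware tuning:

```python
# Sketch of the #NOTE parameter scales. BASE_HZ (the pitch of note 0)
# and the 3 dB loudness step are assumptions, not documented values.
BASE_HZ = 55.0

def note_params(pitch, loudness, duration):
    """pitch: 0-63 chromatic note number; loudness: 0-15 logarithmic;
    duration: hundredths of a second."""
    freq = BASE_HZ * 2 ** (pitch / 12)        # assumed equal-tempered steps
    amp = 10 ** (-(15 - loudness) * 3 / 20)   # 0 dB at loudness 15
    secs = duration / 100
    return freq, amp, secs

freq, amp, secs = note_params(56, 12, 15)   # the single-note example above
```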

An example of a MUSYS program that would play fifty random tone rows:

      50 (N = 0 X = 0
      1  M=12^  K=1  M-1 [ M (K = K*2) ]
         X & K[G1]
         X = X+K  N = N+1  #NOTE M, 15^, 10^>3;
         12 - N[G1]
      $

In 1978 MUSYS evolved into the MOUSE programming language: a small, efficient, stack-based interpreter.


Sources:

http://users.encs.concordia.ca/~grogono/Bio/ems.html

Peter Grogono.’MUSYS: Software for an Electronic Music Studio. Software – Practice and Experience’, vol. 3, pages 369-383, 1973.

http://www.retroprogramming.com/2012/08/mouse-language-for-microcomputers-by.html

EMS Synthesisers, Peter Zinovieff, Tristram Cary, David Cockerell United Kingdom, 1969

EMS (Electronic Music Studios) was founded in 1965 by Peter Zinovieff, the son of aristocratic Russian émigrés, who, with a passion for electronic music, set up the studio in the back garden of his home in Putney, London. The EMS studio was the hub of electronic music activity in the UK during the late sixties and seventies, attracting composers such as Harrison Birtwistle, Tristram Cary, Karlheinz Stockhausen and Hans Werner Henze as well as the commercial electronic production group ‘Unit Delta Plus’ (Zinovieff, Delia Derbyshire and Brian Hodgson).

Front panel of the DEC PDP8i

Zinovieff, with David Cockerell and Peter Grogono, developed a software program called MUSYS (which later evolved into the MOUSE programming language) to run on two DEC PDP8 minicomputers, allowing the voltage control of multiple analogue synthesis parameters via digital punched-paper control. In the mid-1960s, access outside the academic or military establishment to not one but two 12-bit computers with 1K memory and a video monitor, for purely musical use, was completely unheard of:

“I was lucky in those days to have a rich wife and so we sold her tiara and we swapped it for a computer. And this was the first computer in the world in a private house.” – Peter Zinovieff

The specific focus of EMS was to work with digital audio analysis and manipulation, or as Zinovieff puts it, “To be able to analyse a sound; put it into sensible musical form on a computer; to be able to manipulate that form and re-create it in a musical way” (Zinovieff 2007). Digital signal processing was far beyond the capabilities of the DEC PDP8s; instead they were used to control a bank of 64 oscillators (actually resonant filters that could be used as sine wave generators) modified for digital control. MUSYS was therefore a hybrid digital-analogue performance controller similar to Max Mathews’ GROOVE system (1970) and Gabura & Ciamaga’s PIPER system (1965).

Peter Zinovieff at the controls of the PDP8 Computer, EMS studio London

EMS studio diagram (from Mark Vail’s ‘Vintage Synthesizers’)

Even for the wealthy Peter Zinovieff, running EMS privately was phenomenally expensive, and he soon found himself in financial difficulties. After Zinovieff received little interest when he offered to donate the studio to the nation (in a letter to ‘The Times’ newspaper), it was decided that the only way EMS could be saved was to create a commercial, miniaturised version of the studio as a modular, affordable synthesiser for the education market; the VCS range of synthesisers was launched in 1969. The first version, designed by David Cockerell, was an early prototype called the Voltage Controlled Studio 1: a two-oscillator instrument built into a wooden rack unit, made for the Australian composer Don Banks for £50 after a lengthy pub conversation:

“We made one little box for the Australian composer Don Banks, which we called the VCS1…and we made two of those…it was a thing the size of a shoebox with lots of knobs, oscillators, filter, not voltage controlled. Maybe a ring modulator, and envelope modulator” David Cockerell 2002

The VCS1 was soon followed by a more commercially viable design: the Voltage Controlled Studio 3 (VCS3), with circuitry by David Cockerell, case design by Tristram Cary and input from Zinovieff. This device was designed as a small, modular, portable but powerful and versatile electronic music studio (rather than an electronic instrument) and as such initially came without a standard keyboard attached. The price of the instrument was kept as low as possible, about £330 in 1971, by using cheap army-surplus electronic components:

“A lot of the design was dictated by really silly things like what surplus stuff I could buy in Lisle Street [Army-surplus junk shops in Lisle Street, Soho,London]…For instance, those slow motion dials for the oscillator, that was bought on Lisle street, in fact nearly all the components were bought on Lisle street…being an impoverished amateur, I was always conscious of making things cheap. I saw the way Moog did it [referring to Moog's ladder filter] but I adapted that and changed that…he had a ladder based on ground-base transistors and I changed it to using simple diodes…to make it cheaper. transistors were twenty pence and diodes were tuppence!” David Cockerell from ‘Analog Days’

Despite this low-budget approach, the success of the VCS3 was due to its portability and flexibility. This was the first affordable modular synthesiser that could easily be carried around and used live as a performance instrument. As well as being an electronic instrument in its own right, the VCS3 could also be used as an effects generator and signal processor, allowing musicians to manipulate external sounds such as guitars and voice.

VCS3 with DK1 keyboard

VCS3 with DK1 keyboard

The VCS3 was equipped with two audio oscillators of varying frequency, producing sine, sawtooth and square waveforms, which could be coloured and shaped by filters, a ring modulator, a low-frequency oscillator, a noise generator, a spring reverb and envelope generators. The device could be controlled by two unique components whose design was dictated by what could be found in the Lisle Street junk shops: a large two-dimensional joystick (from a remote-control aircraft kit) and a 16-by-16 pin board allowing the user to patch all the modules without the clutter of patch cables.
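
Functionally, the pin board is just a routing matrix: each pin connects one source row to one destination column, and the resistor inside the pin sets the gain of that connection. A toy model of the idea follows; the module names, matrix assignments and gain figures are made up for the sketch, not taken from the real VCS3 panel:

```python
# Illustrative signal routing through a pin matrix. Only the idea that
# 'green' pins attenuate while 'white'/'red' pins pass the signal is
# drawn from descriptions of the panel; all numbers here are invented.
SOURCES = ["osc1", "osc2", "osc3", "noise", "ring_mod", "filter_out"]
DESTS   = ["filter_in", "ring_mod_a", "ring_mod_b", "output_l", "output_r"]

PIN_GAIN = {"white": 1.0, "red": 1.0, "green": 0.04}  # green pins attenuate

def route(patch, signals):
    """patch: {(source, dest): pin_colour}; sum every source patched
    into each destination, scaled by the pin's gain."""
    mix = {d: 0.0 for d in DESTS}
    for (src, dst), colour in patch.items():
        mix[dst] += PIN_GAIN[colour] * signals.get(src, 0.0)
    return mix

patch = {("osc1", "filter_in"): "white",
         ("osc2", "filter_in"): "green",     # attenuated connection
         ("filter_out", "output_l"): "red"}
mix = route(patch, {"osc1": 1.0, "osc2": 1.0, "filter_out": 0.5})
```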

The iconic 16 x 16 pin-patch panel of the VCS3. The 2,700-ohm resistors soldered inside the pins vary in tolerance (5%, and later 1%), and the pins are colour-coded accordingly: ‘red’ pins have 1% tolerance and ‘white’ pins 5%, while ‘green’ pins are attenuating pins with a resistance of 68,000 ohms, giving differing results when constructing a patch.

The original design, intended as a music box for electronic music composition in the same vein as Buchla’s Electronic Music Box, was quickly modified with the addition of a standard keyboard that allowed tempered pitch control over the monophonic VCS3. This brought the VCS3 to the attention of rock and pop musicians who either couldn’t afford the huge modular Moog systems (the VCS3 appeared a year before the Minimoog was launched in the USA) or couldn’t find Moog, ARP or Buchla instruments on the British market. Despite its reputation as hopeless as a melodic instrument, due to its oscillators’ inherent instability, the VCS3 was enthusiastically championed by many British rock acts of the era: Pink Floyd, Brian Eno (who made the external audio processing ability of the instrument part of his signature sound in the early 70s), Robert Fripp, Hawkwind (the eponymous ‘Silver Machine’), The Who, Gong and Jean Michel Jarre among many others. The VCS3 was used as the basis for a number of other EMS instrument designs, including the ultra-portable Synthi A/AK/AKS (1972), a VCS3 housed in a plastic carrying case with a built-in analogue sequencer; the Synthi HiFli guitar synthesiser (1973); the EMS Spectron video synthesiser; the Synthi E (a cut-down VCS3 for educational purposes); and the EMS Polysynthi, as well as several sequencer and vocoder units and the large modular EMS Synthi 100 (1971).

Despite initial success (at one point Robert Moog offered a struggling Moog Music to EMS for $100,000), the EMS company succumbed to competition from large, established international instrument manufacturers who brought out cheaper, more commercial, stable and simpler electronic instruments; the trend in synthesisers moved away from modular user-patched instruments towards simpler, preset performance keyboards. EMS finally closed in 1979 after a long period of decline. The EMS name was sold to Datanomics in Dorset, UK; more recently a previous employee, Robin Wood, acquired the rights to the EMS name in 1997 and restarted small-scale production of the EMS range to the original specifications.

Peter Zinovieff. Currently working as a librettist and composer of electronic music in Scotland.

David Cockerell, chief designer of the VCS and Synthi range of instruments, left EMS in 1972 to join Electro-Harmonix, where he designed most of their effects pedals. He went to IRCAM, Paris in 1976 for six months, and then returned to Electro-Harmonix. Cockerell has designed the entire Akai sampler range to date, some in collaboration with Chris Huggett (the Wasp & OSCar designer) and Tim Orr.

Tristram Cary, director of EMS until 1973, left to become Professor of Electronic Music at the Royal College of Music and later Professor of Music at the University of Adelaide. Now retired.

Peter Grogono Main software designer of MUSYS. Left EMS in 1973 but continued working on the MUSYS programming language and further developed it into the Mouse language. Currently Professor at the Department of Computer Science, Concordia University, Canada.

The Synthi 100 at IPEM Studios, Ghent, Belgium.

The EMS Synthi 100

The EMS Synthi 100 was a large and very expensive (£6,500 in 1971) modular system; fewer than forty units were built and sold. The Synthi 100 was essentially three VCS3s combined, delivering a total of 12 oscillators, two duophonic keyboards giving four-note ‘polyphony’, plus a 3-track 256-step digital sequencer. The instrument also came with optional modules including a Vocoder 500 and an interface to a PDP8 computer, known as the ‘Computer Synthi’.

Images of EMS Synthesisers


Documents:

VCS3 Manual (pdf)


Sources:

http://www.till.com/articles/arp/

‘Analog Days’. T. J. Pinch and Frank Trocco. Harvard University Press, 2004

‘Vintage Synthesizers’: Pioneering Designers, Groundbreaking Instruments, Collecting Tips, Mutants of Technology. Mark Vail. March 15th 2000. Backbeat Books

http://www.redbullmusicacademy.com/lectures/dr-peter-zinovieff-the-original-tectonic-sounds?template=RBMA_Lecture%2Ftranscript

http://users.encs.concordia.ca/~grogono

http://www.emssynthesisers.co.uk/

https://jasperpye.wordpress.com/category/synths

Peter Forrest, The A-Z of Analogue Synthesisers Part One A-M, Oct 1998.

The ‘Allen Computer Organ’, Ralph Deutsch – Allen Organ Co, USA, 1971

Allen 301-3 Digital Computer organ of 1971

The Allen Computer Organ was one of the first commercial digital instruments, developed by Rockwell International (a US military technology company) and built by the Allen Organ Co in 1971. The organ used an early form of digital sampling, allowing the user to choose preset voices or edit and store sounds using an IBM-style punch-card system.

The Rockwell/Allen Computer Organ engineering team with a prototype model.

The sound itself was generated from MOS (Metal Oxide Semiconductor) boards. Each board contained 22 LSI (Large Scale Integration) circuits (miniaturised, photo-etched silicon chips containing thousands of transistors, based on technology developed by Rockwell International for the NASA space missions of the early 70s), giving a total of 48,000 transistors; unheard-of power for the 1970s.

Publicity photograph demonstrating the punch-card reader

Allen Organ voice data punch cards


Sources

http://www.allenorgan.com/

https://picasaweb.google.com/106647927905455601813/Allen301BDigitalComputerOrgan

http://www.nightbloomingjazzmen.com/Ralph_Deutsch_Digital_Organ.html

http://www.leagle.com/decision/19731480363FSupp1117_11306

The ‘PIPER’ System James Gabura & Gustav Ciamaga, Canada, 1965

Charles Hamm, Lejaren Hiller, Salvatore Martirano, Herbert Braid, Kenneth Gaburo at the EMS, Toronto, 1965

PIPER was one of the earliest hybrid performance systems, allowing composers and musicians to write and edit music in real time using computers and analogue synthesisers. The system was developed by James Gabura and Gustav Ciamaga (who also collaborated with Hugh Le Caine on the ‘Sonde’) at the University of Toronto (UTEMS) in 1965. With computing technology in 1965 being too weak to synthesise and control sounds in real time, a work-around was to leave the scoring and parameter control to the computer and the audio generation to an external analogue synthesiser. The PIPER system consisted of two Moog oscillators and a custom-built amplitude regulator to generate the sound, and an IBM 6120 to store parameter input and to score the music. The computer would read and store the musician’s input (keyboard notes, filter changes, note duration and so on) and allow the user to play this back and edit it in real time.

By the 1980s, large hybrid analogue-digital performance systems like PIPER and Max Mathews’ GROOVE were obsolete due to the advent of affordable microcomputers and analogue/digital sequencer technology.

 


Sources

http://www.thecanadianencyclopedia.ca/en/article/gustav-ciamaga-emc/

http://ems.music.illinois.edu/ems/articles/battisti.html

‘GROOVE Systems’, Max Mathews & Richard Moore, USA 1970

Max Mathews with the GROOVE system

Max Mathews with the GROOVE system

In 1967 the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs, exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital-analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface: playing notes, turning knobs and so on. The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than on generating the sound itself:

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

Richard Moore with the Groove System

Richard Moore with the Groove System

The system, written in assembler, ran only on the Honeywell DDP224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant that it was also possible to create libraries of programming routines, so that users could create their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartok was performed on the GROOVE at its first demonstration, at a conference on Music and Technology in Stockholm organised by UNESCO in 1970; the participants also included several leading figures in electronic music such as Pierre Schaeffer and Jean-Claude Risset.

“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

The GROOVE System at the Bell Laboratories circa 1970

The GROOVE System at the Bell Laboratories circa 1970

The GROOVE system consisted of:

  • 14 DAC control lines scanned every 1/100th of a second (twelve 8-bit and two 12-bit);
  • an ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three by the 3-dimensional movement of a joystick controller;
  • two loudspeakers for audio output;
  • a special keyboard, interfaced with the knobs, to generate on/off signals;
  • a teletype keyboard for data input;
  • a CDC-9432 disk store;
  • a tape recorder for data backup.
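
The heart of the design can be sketched in a few lines: the computer never makes sound, it samples the control lines on a fixed clock, stores the frames, and can replay them with live edits layered on top. The function names and the editing rule below are illustrative, not Mathews’ and Moore’s actual code:

```python
# Toy version of the GROOVE idea: the computer makes no sound; it records
# and replays *control functions* for an external analogue synthesiser.
SCAN_HZ = 100        # the 14 control lines were scanned 100 times/second
N_LINES = 14

def record(read_lines, seconds):
    """Store one frame (all 14 control-line values) per scan tick."""
    return [read_lines(t) for t in range(int(seconds * SCAN_HZ))]

def rerun(frames, edit=None):
    """Replay stored frames, optionally blending a live edit on top;
    in the real system each frame went out to the 14 DAC lines."""
    for t, frame in enumerate(frames):
        yield edit(t, frame) if edit else frame

# a 'performance': control line 0 rises linearly, the rest sit at zero
take = record(lambda t: [t] + [0] * (N_LINES - 1), seconds=0.5)

# re-run it with an edit that doubles line 0 (reworking a stored take)
edited = list(rerun(take, edit=lambda t, f: [f[0] * 2] + f[1:]))
```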



Antecedents to GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980, due both to the high cost of the system (some $20,000) and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly.


Sources

Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.

F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.

http://www.vintchip.com/mainframe/DDP-24/DDP24.html

‘MUSIC N’, Max Vernon Mathews, USA, 1957

Max Mathews was a pioneering, central figure in computer music. After studying engineering at the California Institute of Technology and the Massachusetts Institute of Technology (1954), Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘Music’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘Music N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘Music N’ was the first time a computer had been used to investigate audio synthesis (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specific musical objectives), and it set the blueprint for computer audio synthesis that remains in use to this day in programmes like Csound, MaxMSP and SuperCollider and graphical modular programmes like Reaktor.

IBM 704 System

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”

Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University.

MUSIC I 1957

Music I was written in assembler/machine code to work within the technical limitations of the IBM 704 computer. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; the only parameters that could be set were the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories was, in those years, the only institution in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews recalled:

“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.”

In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I, titled 'The Silver Scale' (often credited as the first proper piece of computer-generated music), and later the same year a one-minute piece called 'Pitch Variations', both of which were released on the anthology 'Music From Mathematics', edited by Bell Labs in 1962.
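As a rough illustration (not Mathews' actual code, and with an assumed sample rate), Music I's model can be sketched as a single monophonic voice: a fixed triangle wave with only three controllable parameters per note, amplitude, frequency and duration, quantized to 12 bits as the EPSCO converter would have required.

```python
# A minimal sketch of Music I's three-parameter note model.
# SAMPLE_RATE is an illustrative assumption, not a documented figure.
SAMPLE_RATE = 10000

def triangle_note(amplitude, frequency, duration):
    """Render one note as a list of 12-bit integer samples (-2048..2047)."""
    samples = []
    for n in range(int(duration * SAMPLE_RATE)):
        phase = (n * frequency / SAMPLE_RATE) % 1.0
        # Triangle wave: ramp from -1 up to 1, then back down
        value = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
        samples.append(int(amplitude * value * 2047))
    return samples

note = triangle_note(amplitude=0.8, frequency=440.0, duration=0.5)
```

There is no envelope stage at all: the note switches on at full level and off again, which is exactly the "no attack or decay control" limitation described above.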

Mathews and the IBM 7094


MUSIC II 1958

Music II was an updated, more versatile and functional version of Music I. It still used assembler, but was written for the transistor-based (rather than valve-based), much faster IBM 7094 series. Music II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.

MUSIC III 1960

“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

The introduction of unit generators (UGs) in MUSIC III was an evolutionary leap in music computing, as evidenced by the fact that almost all current programmes still use the UG concept in some form. A unit generator is essentially a pre-built discrete function within the program (an oscillator, filter, envelope shaper and so on), and the composer could flexibly connect multiple UGs together to generate a specific sound. A separate 'score' stage was added where sounds could be arranged chronologically into a musical structure. Each event was assigned to an instrument and consisted of a series of values for the unit generators' various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch-card, which, while still complex and archaic by today's standards, was the first time a computer program used a paradigm familiar to composers.
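The unit-generator idea can be sketched as follows (the class names and interfaces here are illustrative, not MUSIC III's actual ones): each UG is a small block with a `process()` method, and blocks are patched together to build an "instrument".

```python
# A minimal sketch of the unit-generator concept: small processing
# blocks (oscillator, envelope) chained into an instrument.
import math

SAMPLE_RATE = 8000  # illustrative assumption

class Oscillator:
    """Sine oscillator UG; MUSIC III's was a generalized (wavetable) one."""
    def __init__(self, frequency):
        self.frequency = frequency
        self.phase = 0.0

    def process(self):
        sample = math.sin(2 * math.pi * self.phase)
        self.phase += self.frequency / SAMPLE_RATE
        return sample

class Envelope:
    """Linear attack/release shaper applied to another unit generator."""
    def __init__(self, source, attack, release, duration):
        self.source, self.attack, self.release = source, attack, release
        self.duration, self.t = duration, 0.0

    def process(self):
        if self.t < self.attack:
            gain = self.t / self.attack
        elif self.t > self.duration - self.release:
            gain = max(0.0, (self.duration - self.t) / self.release)
        else:
            gain = 1.0
        self.t += 1 / SAMPLE_RATE
        return gain * self.source.process()

# Patch the UGs together, as a composer's punch-cards effectively did
instrument = Envelope(Oscillator(frequency=220.0),
                      attack=0.01, release=0.05, duration=0.25)
samples = [instrument.process() for _ in range(int(0.25 * SAMPLE_RATE))]
```

The point of the design is the connection step, not any particular block: the composer decides the timbre by deciding how the blocks are wired, which is exactly the "tool bag" Mathews describes in the quote below.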

“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

Max Mathews and Joan Miller at Bell labs


MUSIC IV

MUSIC IV, completed in 1963, was the result of a collaboration between Max Mathews and Joan Miller: a more complete version of the MUSIC III system, written in a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.

“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews 1980

MUSIC IVB, IVBF and IVF

Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team: MUSIC IVB and IVBF at Princeton, and MUSIC IVF at Argonne Laboratories. These versions were built in FORTRAN rather than assembler.

MUSIC V

MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and wrote MUSIC V in FORTRAN, specifically for the IBM 360 series computers. This meant that the programme was faster and more stable, and could run on any IBM 360 machine outside Bell Laboratories. The data-entry procedure was simplified in both the orchestra and the score sections, and one of the most interesting new features was the definition of modules that allowed analogue sounds to be imported into Music V. Mathews persuaded Bell Labs not to copyright the software, making MUSIC V probably one of the first open-source programmes, and ensuring the adoption and longevity that led directly to today's Csound.
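The orchestra/score separation can be sketched like this (the event fields below are illustrative, not Music V's actual card format): the "score" is pure data listing note events, and the "orchestra" is the code that turns each event into samples.

```python
# A sketch of the orchestra/score split: data-driven note events
# rendered by an instrument definition.
import math

SAMPLE_RATE = 8000  # illustrative assumption

# "Score": one (start time, duration, frequency, amplitude) tuple per note
score = [
    (0.00, 0.25, 262.0, 0.5),
    (0.25, 0.25, 330.0, 0.5),
    (0.50, 0.50, 392.0, 0.7),
]

def render(score):
    """The 'orchestra': a single sine instrument applied to every event."""
    total = max(start + dur for start, dur, _, _ in score)
    out = [0.0] * int(total * SAMPLE_RATE)
    for start, dur, freq, amp in score:
        first = int(start * SAMPLE_RATE)
        for n in range(int(dur * SAMPLE_RATE)):
            out[first + n] += amp * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
    return out

mix = render(score)
```

Because the score is plain data, simplifying data entry (as Music V did) means changing only the event format, never the instrument code; the same separation survives in Csound's `.orc`/`.sco` files today.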

“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

MUSIC V marked the end of Mathews' involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore in 1970) and the 'Radio Baton' (with Tom Oberheim in 1985).

YEAR VERSION PLACE AUTHOR
1957 Music I Bell Labs (New York) Max Mathews
1958 Music II Bell Labs (New York) Max Mathews
1960 Music III Bell Labs (New York) Max Mathews
1963 Music IV Bell Labs (New York) Max Mathews, Joan Miller
1963 Music IVB Princeton University Hubert Howe, Godfrey Winham
1965 Music IVF Argonne Laboratories (Chicago) Arthur Roberts
1966 Music IVBF Princeton University Hubert Howe, Godfrey Winham
1966 Music 6 Stanford University Dave Poole
1968 Music V Bell Labs (New York) Max Mathews
1969 Music 360 Princeton University Barry Vercoe
1969 Music 10 Stanford University John Chowning, James Moorer
1970 Music 7 Queen’s College (New York) Hubert Howe, Godfrey Winham
1973 Music 11 M.I.T. Barry Vercoe
1977 Mus10 Stanford University Leland Smith, John Tovar
1980 Cmusic University of California Richard Moore
1984 Cmix Princeton University Paul Lansky
1985 Music 4C University of Illinois James Beauchamp, Scott Aurenz
1986 Csound M.I.T. Barry Vercoe


Sources

http://www.computer-history.info/Page4.dir/pages/IBM.704.dir/

http://www.musicainformatica.org

Curtis Roads, Interview with Max Mathews, Computer Music Journal, Vol. 4, 1980.

‘Frieze’ interview with Max Mathews, by Geeta Dayal.

An Interview with Max Mathews. Tae Hong Park, Music Department, Tulane University.

The ‘Ferranti Mk1 ‘ Computer. Freddie Williams & Tom Kilburn, United Kingdom, 1951.

Ferranti Mk1 Computer


The oldest existing recording of a computer music programme: the Ferranti Mk1 in 1951, recorded live to acetate disc before a small audience of technicians.

The Ferranti Mk1 was the world's first commercially available general-purpose computer, a commercial development of the Manchester Mk1 built at Manchester University in 1951. Included in the Ferranti Mark 1's instruction set was a 'hoot' command, which enabled the machine to give auditory feedback to its operators. Looping and timing the 'hoot' commands allowed the user to output pitched musical notes, a feature that made the Mk1 the source of the oldest existing recording of computer music (the earliest reported, but unrecorded, computer music piece was created earlier in the same year on the CSIR Mk1 in Sydney, Australia). The recording was made by the BBC towards the end of 1951, programmed by Christopher Strachey, a maths teacher at Harrow and a friend of Alan Turing.
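The arithmetic behind the technique is simple (the timing figures below are illustrative assumptions, not the Ferranti Mk1's documented speeds): each pass through a loop containing a 'hoot' emits one pulse, so the perceived pitch is the reciprocal of the loop's execution time.

```python
# How looped 'hoot' instructions yield pitch: pulses per second = 1 / loop time.
# Instruction timing here is an assumed figure for illustration only.

def hoot_pitch(instructions_in_loop, seconds_per_instruction):
    loop_time = instructions_in_loop * seconds_per_instruction
    return 1.0 / loop_time  # pulse rate in Hz = perceived frequency

# e.g. a 4-instruction loop at an assumed 1.2 ms per instruction:
print(round(hoot_pitch(4, 0.0012), 1))  # 208.3 (Hz)
```

Halving the loop length doubles the pitch, which is why the available notes were quantised to whatever loop timings the instruction set allowed rather than to a musical scale.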

Ferranti Mk1



Sources

http://www.cs.man.ac.uk/CCS/res/res62.htm

http://www.computer50.org/mark1/FM1.html

CSIR Mk1 & CSIRAC, Trevor Pearcey & Geoff Hill, Australia, 1951

Trevor Pearcey at the CSIR Mk1


CSIRAC was an early digital computer designed by the British engineer Trevor Pearcey as part of a research project at the Sydney-based Radiophysics Laboratory of the Council for Scientific and Industrial Research (CSIR) in the early 1950s. CSIRAC was intended as a prototype for a much larger machine, and therefore included a number of innovative 'experimental' features, such as video and audio feedback, designed to allow the operator to test and monitor the machine while it was running. As well as several optical screens, the CSIR Mk1 had a built-in Rola 5C speaker mounted on the console frame. The speaker was an output device used to alert the programmer that a particular event had been reached in the program; it was commonly used for warnings, often to signify the end of the program, and sometimes as a debugging aid. The output to the speaker was raw data from the computer's bus, heard as an audible click. To create a more musical tone, multiple clicks were combined using a short loop of instructions, the timing of the loop giving a change in frequency and therefore an audible change in pitch.
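The click-loop technique described above can be sketched in sample form (all parameters here are illustrative, not CSIRAC's actual timings): a single raw bus pulse is just a click, but repeating it at a fixed interval produces a pitched tone.

```python
# A sketch of CSIRAC-style tone generation: a train of raw clicks
# repeated at a fixed interval, the interval setting the pitch.
SAMPLE_RATE = 8000  # illustrative assumption

def click_train(frequency, duration, click_width=2):
    """Render `duration` seconds of rectangular clicks at `frequency` Hz."""
    n_samples = int(duration * SAMPLE_RATE)
    out = [0.0] * n_samples
    n_clicks = int(duration * frequency)
    for k in range(n_clicks):
        start = int(k * SAMPLE_RATE / frequency)  # loop timing sets the pitch
        for w in range(click_width):              # one raw pulse on the bus
            if start + w < n_samples:
                out[start + w] = 1.0
    return out

tone = click_train(frequency=220.0, duration=0.5)
```

The spectrum of such a pulse train is rich in harmonics, which is consistent with contemporary descriptions of CSIRAC's buzzy, reedy tone rather than a pure note.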

A closeup of the CSIRAC console switch panel. Note the multiple rows of 20 switches used to set bits in various registers.


The first piece of digital computer music was created by Geoff Hill and Trevor Pearcey on the CSIR Mk1 in 1951, as a way of testing the machine rather than as a musical exercise. The music consisted of excerpts from popular songs of the day: 'Colonel Bogey', 'Bonnie Banks', 'Girl with the Flaxen Hair' and so on. The work was perceived as a fairly insignificant technical test and wasn't recorded or widely reported:

An audio reconstruction of CSIRAC playing 'Colonel Bogey' (c.1951)
CSIRAC plays 'In Cellar Cool', with a simulation of CSIRAC's room noises.

CSIRAC – the University’s giant electronic brain – has LEARNED TO SING!

…it hums, in bathroom style, the lively ditty, Lucy Long. CSIRAC’s song is the result of several days’ mathematical and musical gymnastics by Professor T. M. Cherry. In his spare time Professor Cherry conceived a complicated punched-paper programme for the computer, enabling it to hum sweet melodies through its speaker… A bigger computer, Professor Cherry says, could be programmed in sound-pulse patterns to speak with a human voice…
The Melbourne Age, Wednesday 27th July 1960

Later version of the CSIRAC at The University of Melbourne


…When CSIRAC began sporting its musical gifts, we jumped on his first intellectual flaw. When he played “Gaudeamus Igitur,” the university anthem, it sounded like a refrigerator defrosting in tune. But then, as Professor Cherry said yesterday, “This machine plays better music than a Wurlitzer can calculate a mathematical problem”…
Melbourne Herald, Friday 15th June 1956

Portable computer: CSIRAC on the move to Melbourne, June 1955


The CSIR Mk1 was dismantled in 1955 and moved to the University of Melbourne, where it was renamed CSIRAC. The Professor of Mathematics, Thomas Cherry, had a great interest in programming and music, and created music with CSIRAC. During its time in Melbourne the practice of music programming on the CSIRAC was refined, allowing the input of music notation. The program tapes for a couple of test scales still exist, along with the popular melodies 'So Early in the Morning' and 'In Cellar Cool'.

Music instructions for the CSIRAC by Thomas Cherry





Later version of the CSIRAC at The University of Melbourne



Sources

http://www.audionautas.com/2011/09/music-of-csirac.html

Australia’s First Computer Music, Common Ground Publishing, Paul Doornbusch pauld@koncon.nl

http://ww2.csse.unimelb.edu.au/dept/about/csirac/music/index.html