The Synclavier I & II. Jon Appleton, Sydney Alonso & Cameron Jones. USA, 1977

Late version of the Synclavier II 9600TS system with an Apple Macintosh running a terminal emulator

The Synclavier I was the first commercial digital FM synthesiser and music workstation, launched by the New England Digital Corporation (NED) of Norwich, Vermont, USA in 1977. The system was designed by Jon Appleton, composer and professor of Digital Electronics at Dartmouth College, together with the hardware engineer Sydney Alonso and software programmer Cameron Jones, at the time a student at Dartmouth's engineering school.

The origins of the Synclavier go back to when Cameron Jones and Sydney Alonso began developing software and hardware for electronic music for Jon Appleton's electronic music course at Dartmouth. After graduation, Jones and Alonso developed a 16-bit processor card and a new compiler to create their 'ABLE' computer, NED's first product, sold to institutions for data-collection applications. The first musical application developed by NED was the 'Dartmouth Digital Synthesiser', based around the ABLE microprocessor, which was released as the production-model Synclavier I in 1977. The new device was intended as a fully integrated, high-end music production system rather than an instrument, and sold for $200,000 to $500,000, far beyond the reach of most musicians and recording studios.

Synclavier I with the VT100 terminal

The Synclavier I was an FM-synthesis-based, keyboard-less sound module, programmable only via the DEC VT100 terminal supplied with the system. This version was quickly replaced by the integrated-keyboard Synclavier II in 1979. The model II was an FM/additive hybrid synthesiser with a 32-track digital sequencer and was the first musical device aimed at creating an integrated 'tapeless studio'. The Synclavier II was equally expensive, reflecting the fact that almost all of its components were either sourced from hardware developed for military use or custom designed and built by NED themselves. NED designed the system to be as robust as possible, built around their own ABLE computer hardware (as a testament to this durability, NASA chose the ABLE computer to run onboard systems of the Galileo space probe, which operated for fourteen years – eight years longer than the original mission plan).
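The FM technique at the heart of the Synclavier can be illustrated in a few lines of code. This is a generic two-operator FM sketch, not NED's implementation; the function name, frequencies and modulation index are all illustrative:

```python
import math

def fm_tone(carrier_hz, modulator_hz, mod_index, sample_rate=44100, duration=0.5):
    """Two-operator FM: a modulator sine wave varies the carrier's phase.
    mod_index sets how much energy is pushed into sidebands, which appear
    at carrier_hz +/- n * modulator_hz - the source of FM's bright spectra."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        modulator = math.sin(2 * math.pi * modulator_hz * t)
        samples.append(math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator))
    return samples

# A 440 Hz carrier modulated at 220 Hz (a 2:1 ratio gives harmonic sidebands).
tone = fm_tone(440.0, 220.0, mod_index=2.0)
```

Varying the modulation index over the course of a note is what gives FM its characteristic evolving timbres.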

Synclavier II ORK keyboard

The instrument was controlled by a standard 'ORK' on-off keyboard and edited via the same DEC VT100 (later a VT640) terminal, as well as via a distinctive set of red buttons (the same lamps used in B-52 bomber aircraft, chosen for durability) and a rotary dial that allowed the user to edit straight from the keyboard and get visual feedback on the state of the instrument's parameters. The keyboard was later replaced in the new PSMT model by a 'VPK' weighted, velocity-sensitive manual licensed from Sequential Circuits (the same keyboard as the Prophet T8), which dramatically improved the playability of the instrument.

Synclavier II PSMT

The Synclavier II was a 64-voice polyphonic modular digital synthesiser; the user purchased a selection of individual cards for each function, making the system easy to expand and repair. In 1982 a 16-bit digital sampling facility was added that allowed the user not only to sample but to re-synthesise samples using FM, making the Synclavier one of the earliest digital samplers (the Fairlight CMI being the first), and in 1984 direct-to-disk digital audio recording, sample-to-memory (32MB), a 200-track sequencer, a guitar interface, and MIDI and SMPTE capability were added, making the Synclavier II an extremely powerful (but very expensive) integrated audio production tool. The instrument became a fixture of high-end music and soundtrack production studios – and is still in use by many. The Synclavier is instantly recognisable on many 1980s film scores and pop hits, used by artists such as Depeche Mode, Michael Jackson, Laurie Anderson, Herbie Hancock, Sting, Genesis, David Bowie and many others. The Synclavier was particularly championed by Frank Zappa – one of the few artists who privately owned one – who used it extensively on many of his works, including Jazz From Hell and Civilization Phaze III:

“What I’ve been waiting for ever since I started writing music was a chance to hear what I wrote played back without mistakes and without a bad attitude. The Synclavier solves the problem for me. Most of the writing I’m doing now is not destined for human hands.”

Frank Zappa

Despite its popularity in recording studios, the Synclavier inevitably succumbed to competition from increasingly powerful and cheaper personal computers, MIDI synthesisers and low-cost digital samplers. New England Digital closed its doors in 1992, with many of the company's assets purchased by Fostex for use in hard-disk recording systems. In 1993, a new Synclavier Company was established by ex-NED employees as a support organisation for existing Synclavier customers.

Images of the Synclavier I & II


'Graphic 1'. William H. Ninke, Carl Christensen, Henry S. McDonald and Max Mathews. USA, 1965

'Graphic 1' was a hybrid hardware-software graphic input system for digital synthesis that allowed note values to be written on a CRT computer display – although very basic by current standards, 'Graphic 1' was the precursor to most computer-based graphic composition environments, such as Cubase, Logic Pro, Ableton Live and so on.

The IBM704b at Bell Labs used with the Graphic 1 system

'Graphic 1' was developed by William Ninke (with Carl Christensen and Henry S. McDonald) at Bell Labs for use by Max Mathews as a graphical front-end for the MUSIC IV synthesis software, to circumvent the lengthy and tedious process of entering numeric note values into the MUSIC program.

"The Graphic 1 allows a person to insert pictures and graphs directly into a computer memory by the very act of drawing these objects… Moreover the power of the computer is available to modify, erase, duplicate and remember these drawings."
Max Mathews, quoted in 'Electronic and Experimental Music: Technology, Music, and Culture' by Thom Holmes

Lawrence Rosler of Bell Labs with Max Mathews in front of the Graphic 1 system, c. 1967

Graphic 2 / GRIN 2 was later developed in 1976 as a commercial design package based on a faster PDP2 computer, and was sold by Bell and DEC as a computer-aided design system for creating circuit designs and logic schematic drawings.

Audio recordings of the Graphic I/MUSIC IV system

Graphic I Audio file 1

Graphic I Audio file 2

Graphic I Audio file 3

Graphic I Audio file 4


'Interview with Max Mathews'. C. Roads and Max Mathews. Computer Music Journal, Vol. 4, No. 4 (Winter 1980), pp. 15–22. The MIT Press

Electronic and Experimental Music: Technology, Music, and Culture. Thom Holmes

'The Oramics Machine: From vision to reality'. Peter Manning. Department of Music, Durham University, Palace Green, Durham, DH1 3RL, UK

M. V. Mathews and L. Rosler. Perspectives of New Music, Vol. 6, No. 2 (Spring–Summer 1968), pp. 92–118

W. H. Ninke, “GRAPHIC I: A Remote Graphical Display Console System,” Proceedings of the Fall Joint Computer Conference of the American Federation of Information Processing Societies 27 (1965), Part I, pp. 839-846.

‘Encyclopedia of Computer Science and Technology: Volume 3 – Ballistics …’ Jack Belzer, Albert G. Holzman, Allen Kent

‘ARP’ Synthesisers. Alan Robert Pearlman, USA, 1970

Front panel of the ARP 2500

ARP Synthesisers was started by the engineer and music enthusiast Alan Robert Pearlman – hence 'ARP' – in Lexington, Massachusetts, USA. Before ARP, Pearlman had worked as an engineer at NASA and ran his own company, Nexus Research Laboratory Inc., a manufacturer of op-amps (precision circuits used in amplifiers and test equipment), which he sold in 1967 to fund the launch of the ARP company in 1969. The inspiration for ARP came after he played both Moog and Buchla synthesisers: unimpressed by the tuning instability of the oscillators and the lack of commercial focus – especially in the keyboard-less Buchla Box – he became determined to produce a stable, friendly, commercial electronic instrument.

“If you would like to spend your time creatively, actively producing new music and sound, rather than fighting your way through a nest of cords, a maze of distracting apparatus, you’ll find the ARP uniquely efficient . . . matrix switch interconnection for patching without patch cords…P.S. The oscillators stay in tune.”
ARP Advert 1970

Slider matrix of the ARP 2500

The first product was the ARP 2500, a large monophonic modular voltage-controlled synthesiser designed along similar lines to the Moog Modular series. The 2500 had a main cabinet holding up to 12 modules and two wing extensions adding another six modules each. The interface was designed to be as clear as possible to non-synthesists, with a logically laid-out front panel, and, unlike the Buchla and Moog Modular, dispensed with patch cables in favour of a series of 10×10 slider matrices, leaving the front panel clear of cable clutter. The 2500 also came with a 10-step analogue sequencer far in advance of any other modular system of the day.

Despite being an advanced, reliable and user-friendly machine, with oscillators far more stable than the Moog's, the 2500 was not commercially successful, selling only around 100 units.

ARP 2500 Modules

Modules of the ARP 2500

1002 – Power Supply
1003 – Dual Envelope Generator: two ADSR envelope generators (labelled "Attack", "Initial Decay", "Sustain" and "Final Decay"), each switchable between single and multiple triggering. There is a manual gate button, a front-panel input for gate/trigger and a back-panel input for a sustain pedal.
1003a – Dual Envelope Generator: same as the 1003, except for re-positioned trigger switches and gate buttons.
1004 – VCO: a Voltage Controlled Oscillator with a range from 0.03Hz to 16kHz, usable as a VCO or an LFO. It features separate outputs for each of its five waveforms (sine, triangle, square, sawtooth and pulse), 6 CV (control voltage) inputs, and a CV input for pulse-width modulation.
1004p – VCO: same as the 1004, except each waveform has its own attenuation knob for mixing the waveforms together, with a separate output for the mixed waveforms.
1004r – VCO: same as the 1004, except each waveform has its own rocker switch to route any or all of the waveforms to an extra mix output.
1004t – VCO: same as the 1004r, except it uses toggle switches.
1005 – VCA and Ring Modulator: half Voltage Controlled Amplifier and half balanced (ring) modulator. It is switchable between linear and exponential voltage control, and features 11 inputs, 3 outputs and illuminated push-buttons.
1006 – VCF and VCA: a Voltage Controlled Filter (24dB/octave low-pass with resonance) and a Voltage Controlled Amplifier (switchable between linear and exponential) in one module.
1012 – Convenience Module: routes two jack inputs to any of the upper ten lines of the lower matrix (most of the patching on this instrument is done from the matrix sliders).
1016 – Dual Noise Generator: two random voltage generators outputting white or pink noise and two slow sample-and-hold circuits, four outputs in all.
1023 – Dual VCO: both oscillators feature the same waveforms as the 1004, with a switch for high and low frequency ranges. There are a total of 10 control inputs and 2 audio outputs.
1026 – Preset Voltage Module: eight manually or sequencer-driven gated control voltages, each with two knobs sending control voltages to separate outputs. It can be connected, via the rear panel, to the 1027 Sequencer or the 1050 Mix-Sequencer.
1027 – Sequencer: a 10×3 sequencer with 14 outputs (including 10 separate position/step gates), 6 inputs, buttons for step and reset, and a knob for pulse repetition/width, which controls the silence between steps.
1033 – Dual Delayed-Trigger Envelope Generator: same as the 1003 ADSR module, with two additional knobs to control gate delay.
1036 – Sample-and-Hold / Random Voltage
1045 – Voice Module: an all-in-one module containing a VCO, VCF, VCA and two ADSR envelope generators, with 16 inputs and four outputs. (Note: most examples feature a spelling mistake, "Resanance" instead of "Resonance".)
1046 – Quad Envelope Generator: essentially a 1003 and a 1033 combined in one module.
1047 – Multimode Filter / Resonator: 15 inputs, 4 outputs and an overload warning light.
1050 – Mix-Sequencer: two 4×1 mixers with illuminated on/off buttons.
3001 – Keyboard: a 5-octave, 61-note (C–C) keyboard with the bottom two octaves (C–B) reverse-coloured to show the keyboard split. The top half of the keyboard is duophonic. There are separate CV (1V/octave), gate and trigger outputs for each side of the split, as well as panels on either side of the keyboard with controls for portamento, tuning and pitch interval.
Dual-Manual Keyboard – two 3001s, one on top of the other, with the bottom octave (C–B) or two octaves of the top keyboard reverse-coloured to show the split.

from ‘The A-Z of Analogue Synthesizers’, by Peter Forrest, published by Susurreal Publishing, Devon, England, copyright 1994 Peter Forrest

ARP 2600

The ARP 2600 (1971)

Stevie Wonder endorses the ARP 2600

The 2600, like EMS's VCS3, was a portable, semi-modular analogue subtractive synthesiser with built-in modules and, again like the VCS3, was designed to target the educational market: schools, universities and so on. The built-in modules could be patched using a combination of patch cables or sliders controlling internally hard-wired connections:

"ARP 2600. The ultimate professional-quality portable synthesizer. Equally at home in the electronic music studio or on stage, the ARP 2600 provides the incredible new sounds in today's leading rock bands. The 2600 is also owned by many of the most prestigious universities and music schools in the world. Powerful, dependable, and easy to play, the 2600 can be played without patchcords or modified with patchcords. This arrangement provides maximum speed and convenience for live performance applications, as well as total programming flexibility for teaching, research, composition and recording. Any pre-wired patch connection can be overridden by simply inserting a patchcord into the appropriate jack on the front panel.

The ARP 2600 is easily expanded and can be used with the ARP 2500 series. Renowned for its electronic superiority, the oscillators and filters in the 2600 are the most stable and accurate available anywhere. Accompanied by the comprehensive, fully illustrated owner's manual, the ARP 2600 is recognized as the finest, most complete portable synthesizer made today.

FUNCTIONS: 3 Voltage Controlled Oscillators, 0.03 Hz to 20 kHz in two ranges; five waveforms: variable-width pulse, triangle, sine, square, and sawtooth. 1 Voltage Controlled Lowpass Filter, variable resonance, DC coupled; doubles as a low-distortion sine oscillator. 1 Voltage Controlled Amplifier, exponential and linear control response characteristics. 1 Ring Modulator, AC or DC coupled. 2 Envelope Generators. 1 Envelope Follower. 1 Random Noise Generator, output continuously variable from flat to -6dB/octave. 1 Electronic Switch, bidirectional. 1 Sample & Hold with internal clock. 1 general-purpose Mixer and Panpot. 1 Voltage Processor with variable lag. 2 Voltage Processors with inverters. 1 Reverberation unit, twin uncorrelated stereo outputs. 2 built-in monitoring amplifiers and speakers, with standard stereo 8-ohm headphone jack. 1 Microphone Preamp with adjustable gain. 1 four-octave keyboard with variable tuning, variable portamento, variable tone interval, and precision memory circuit. DIMENSIONS: Console 32″ x 18″ x 9″; Keyboard 35″ x 10″ x 6″. WEIGHT: 58 lbs"
ARP 2600 promotional material, 1971

ARP 2800 ‘Odyssey’ 1972

By the mid-1970s ARP had become the dominant synthesiser manufacturer, with a 40 percent share of the $25 million market. This was due to Pearlman's gift for publicity – the ARP 2500 famously starred in the film 'Close Encounters of the Third Kind' (1977), alongside product endorsements by famous rock stars such as Stevie Wonder, Pete Townshend and Herbie Hancock – and the advent of reliable, simpler, commercial instrument designs such as the ARP 2800 'Odyssey' in 1972.

ARP 2800 Odyssey

The ARP 2800 ‘Odyssey’ 1972-1981

The Odyssey was ARP's response to Moog's Minimoog: a portable, user-friendly, affordable performance synthesiser, essentially a scaled-down version of the 2600 with a built-in keyboard – a form that was to dominate the synthesiser market for the next twenty years or so.

The Odyssey was equipped with two oscillators and was one of the first synthesisers with duophonic capability. Unlike the 2600 there were no patch ports; instead all of the modules were hard-wired, routable and controllable via sliders and buttons on the front panel. The 'modules' consisted of two Voltage Controlled Oscillators (switchable between sawtooth, square and pulse waveforms), a resonant low-pass filter, a non-resonant high-pass filter, a ring modulator, a noise generator (pink/white), ADSR and AR envelopes, a triangle- and square-wave LFO, and a sample-and-hold function. The later Version III model had a variable expression keyboard allowing flattening or sharpening of the pitch and the addition of vibrato depending on key pressure and position.
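The ADSR envelope shared by the Odyssey and the 2500's envelope-generator modules can be sketched as a simple piecewise-linear function of time. This is a minimal illustration (times in seconds, level 0–1), not a model of ARP's circuitry:

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.7, release=0.2, gate_time=0.5):
    """Piecewise-linear ADSR envelope level at time t (seconds):
    ramp to 1.0 during attack, fall to the sustain level during decay,
    hold while the key is held (until gate_time), then ramp to 0 on release."""
    if t < attack:                      # Attack: 0 -> 1
        return t / attack
    if t < attack + decay:              # Decay: 1 -> sustain
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    if t < gate_time:                   # Sustain: hold while the key is down
        return sustain
    # Release: sustain -> 0 after the key is lifted
    return max(0.0, sustain * (1.0 - (t - gate_time) / release))

envelope = [adsr(i / 1000.0) for i in range(800)]  # 0.8 s sampled at 1 ms steps
```

Multiplying an oscillator's output by such an envelope is what turns a sustained tone into a shaped note; the 1033 module's gate delay simply postpones the attack stage.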

ARP 2800 Odyssey MkI

ARP Production model timeline 1969-1981:

  • 1969 – ARP 2002 (almost identical to the ARP 2500, except that the upper switch matrix had 10 buses instead of 20)
  • 1970 – ARP 2500
  • 1970 – ARP Soloist (small, portable, monophonic preset, aftertouch sensitive synthesizer)
  • 1971 – ARP 2600
  • 1972 – ARP Odyssey
  • 1972 – ARP Pro Soloist (small, portable, monophonic preset, aftertouch sensitive synthesizer – updated version of Soloist)
  • 1974 – ARP String Ensemble (polyphonic string voice keyboard manufactured by Solina)
  • 1974 – ARP Explorer (small, portable, monophonic preset, programmable sounds)
  • 1975 – ARP Little Brother (monophonic expander module)
  • 1975 – ARP Omni (polyphonic string synthesiser )
  • 1975 – ARP Axxe (pre-patched single oscillator analog synthesiser)
  • 1975 – ARP String Synthesiser (a combination of the String Ensemble and the Explorer)
  • 1977 – ARP Pro/DGX (small, portable, monophonic preset, aftertouch sensitive synthesiser – updated version of Pro Soloist)
  • 1977 – ARP Omni-2 (polyphonic string synthesiser with rudimentary polyphonic synthesiser functions – updated version of Omni)
  • 1977 – ARP Avatar (an Odyssey module fitted with a guitar pitch controller)
  • 1978 – ARP Quadra (4 microprocessor-controlled analog synthesisers in one)
  • 1979 – ARP Sequencer (analog music sequencer)
  • 1979 – ARP Quartet (polyphonic orchestral synthesiser not manufactured by ARP – bought in from Siel and rebadged)
  • 1980 – ARP Solus (pre-patched analog monophonic synthesiser)
  • 1981 – ARP Chroma (microprocessor controlled analog polyphonic synthesiser – sold to CBS/Rhodes when ARP closed)

The demise of ARP Instruments was brought about by disorganised management and the decision to invest heavily in a guitar-style synthesiser, the ARP Avatar. Although this was an innovative and groundbreaking instrument, it failed to sell, and ARP was never able to recoup the development costs. ARP filed for bankruptcy in 1981.

ARP Image Gallery


'Analog Days'. T. J. Pinch and Frank Trocco. Harvard University Press, 2004

'Vintage Synthesizers: Pioneering Designers, Groundbreaking Instruments, Collecting Tips, Mutants of Technology'. Mark Vail. Backbeat Books, 2000

'The Rise and Fall of ARP Instruments'. Craig R. Waters with Jim Aikin

The ‘Sound Processor’ or ‘Audio System Synthesiser’ Harald Bode, USA, 1959

Harald Bode demonstrating the Audio System Synthesiser

In 1954 the electronic engineer and pioneering instrument designer Harald Bode moved from his home in Bavaria, Germany to Brattleboro, Vermont, USA to lead the development team at the Estey Organ Co., developing his instrument the 'Bode Organ' as the prototype for the new Estey Organ. As a sideline, Bode set up his own home workshop in 1959 to develop his ideas for a completely new and innovative instrument, "A New Tool for the Exploration of Unknown Electronic Music Instrument Performances". Bode's objective was to produce a device that included everything needed for film and TV audio production – soundtracks, sound design and audio processing – perhaps inspired by Oskar Sala's successful (and lucrative) film work, such as on Alfred Hitchcock's 'The Birds' (1963).

Bode's idea was to create a modular device in which different components could be connected as needed; in doing so he created the first modular synthesiser – a concept taken up some time later by Robert Moog and Donald Buchla, amongst others. The resulting instrument, the 'Audio System Synthesiser', allowed the user to connect multiple devices such as ring modulators, filters and reverb generators in any order to modify or generate sounds, and the result could be recorded to tape, mixed, or processed further; "A combination of well-known devices enabled the creation of new sounds" (Bode 1961).

Circuitry of the Audio System Synthesiser

Bode described the Audio System Synthesiser in the December 1961 issue of Electronics magazine and demonstrated it at the Audio Engineering Society (AES) convention in New York in 1960. In the audience was a young Robert Moog, at the time running a business selling theremin kits. Inspired by Bode's ideas, Moog went on to design the famous series of Moog modular synthesisers. Bode would later license modules for inclusion in Moog modular systems, including a vocoder, ring modulator, filter and pitch shifter, as well as producing a number of components that were widely used in electronic music studios during the 1960s.

Front panel of the Audio System Synthesiser

Text from the December 1961 issue of Electronics magazine

New sounds and musical effects can be created either by synthesizing acoustical phenomena, by processing natural or artificial (usually electronically generated) sounds, or by applying both methods. Processing acoustical phenomena often results in substantial deviations from the original.

Production of new sounds or musical effects can be made either by intermediate or immediate processing methods. Some methods of intermediate processing may include punched tapes for control of the parameters of a sound synthesizer, and may also include such tape recording procedures as reversal, pitch-through-speed changes, editing and dubbing.

Because of the time differential between production and performance when using the intermediate process, the composer-performer cannot immediately hear or judge his performance, therefore corrections can be made only after some lapse of time. Immediate processing techniques present no such problems.

Methods of immediate processing include spectrum and envelope shaping, change of pitch, change of overtone structure including modification from harmonic to nonharmonic overtone relations, application of periodic modulation effects, reverberation, echo and other repetition phenomena.

The output of the ring-bridge modulator shown in Figure 2a yields the sum and differences of the frequencies applied to its two inputs but contains neither input frequency. This feature has been used to create new sounds and effects. Figure 2b shows a tone applied to input 1 and a group of harmonically related frequencies applied to input 2. The output spectrum is shown in Figure 2c.

Due to operation of the ring-bridge modulator, the output frequencies are no longer harmonically related to each other. If a group of properly related frequencies were applied to both inputs and a percussive-type envelope were applied to the output signal, a bell-like tone would be produced.

In a more general presentation, the curves of Figure 3 show the variety of tone spectra that may be derived with a gliding frequency between 1 cps and 10 kcps applied to one and two fixed 440 and 880 cps frequencies (in octave relationship) applied to the other input of the ring-bridge modulator. The output frequencies are identified on the graph.

Frequencies applied to the ring-bridge modulator inputs are not limited to the audio range. Application of a subsonic frequency to one input will periodically modulate a frequency applied to the other. Application of white noise to one input and a single audio frequency to the other input will yield tuned noise at the output. Application of a percussive envelope to one input simultaneously with a steady tone at the other input will result in a percussive-type output that will have the characteristics of the steady tone modulated by the percussive envelope.

The unit shown in Figure 4 provides congruent envelope shaping as well as the coincident percussive envelope shaping of the program material. One input accepts the control signal while the other input accepts the material to be subjected to envelope shaping. The processed audio appears at the output of the gating circuit.

To derive control voltages for the gating functions, the audio at the control input is amplified, rectified and applied to a low-pass filter. Thus, a relatively ripple-free variable DC bias will actuate the variable gain, push-pull amplifier gate. When switch S1 is in the gating position, the envelope of the control signal shapes that of the program material.

To prevent the delay caused by C1 and C2 on fast-changing control voltages, and to eliminate asymmetry caused by the different output impedances at the plate and cathode of V2, relatively high-value resistors R3 and R4 are inserted between phase inverter V2 and the push-pull output of the gate circuit. These resistors are of the same order of magnitude as biasing resistors R1 and R2 to secure a balance between the control DC signal and the audio portion of the program material.

The input circuits of V5 and V6 act as a high-pass filter. The cutoff frequency of these filters exceeds that of the ripple filter by such an amount that no disturbing audio frequency from the control input will feed through to the gate. This is important for clean operation of the percussive envelope circuit. The pulses that initiate the percussive envelopes are generated by Schmitt trigger V9 and V10. Positive-going output pulses charge C5 (or C5 plus C6 or C7 chosen by S2) with the discharge through R5. The time constant depends on the position of S2.

To make the trigger circuit respond to the beginning of a signal as well as to signal growth, differentiator C3 and R6 plus R7 is used at the input of V9. The response to signal growth is especially useful in causing the system to yield to a crescendo in a music passage or to instants of accentuation in the flow of speech frequencies.

The practical application of the audio-controlled percussion device within a system for the production of new musical effects is shown in Figure 5. The sound of a bongo drum triggers the percussion circuit, which in turn converts the sustained chords played by the organ into percussive tones. The output signal is applied to a tape-loop repetition unit that has four equally spaced heads, one for record and three for playback. By connecting the record head and playback head 2 in parallel, output A is produced. By connecting playback head 1 and playback head 3 in parallel, output B is produced, and a distinctive ABAB pattern may be achieved. Outputs A and B can be connected to formant filters having different resonance frequencies.

The number of repetitions may be extended if a feedback loop is inserted between playback head 2 and the record amplifier. The output voltages of the two filters and the microphone preamplifier are applied to a mixer in which the ratio of drum sound to modified percussive organ sound may be controlled.

The program material originating from the melody instrument is applied to one of the inputs of the audio-controlled gate and percussion unit. There it is gated by the audio from a percussion instrument. The percussive melody sounds at the output of the gate are applied to the tape-loop repetition system. Output signal A — the direct signal and the information from playback head 2 — is applied through amplifier A and filter 1 to the mixer. Output signal B — the signals from playback heads 1 and 3 — is applied through amplifier B to one input of the ring-bridge modulator. The other ring-bridge modulator input is connected to the output of an audio signal generator.

The mixed and frequency-converted signal at the output of the ring-bridge modulator is applied through filter 2 to the mixer. At the mixer output a percussive ABAB signal (stemming from a single melody note, triggered by a single drum signal) is obtained. In its A portion it has the original melody instrument pitch, while its B portion is the converted nonharmonic overtone structure, both affected by the different voicings of the two filters. When the direct drum signal is applied to a third mixer input, the output will sound like a voiced drum with an intricate aftersound. The repetition of the ABAB pattern may be extended by a feedback loop between playback head 2 and the record amplifier.

When applying the human singing voice to the input of the fundamental frequency selector, the extracted fundamental pitch may be distorted in the squaring circuit and applied to the frequency divider (or dividers). This will derive a melody line whose pitch will be one octave lower than that of the singer. The output of the frequency divider may then be applied through a voicing filter to the program input of the audio-controlled gate and percussion unit. The control input of this circuit may be actuated by the original singing voice, after having passed through a low-pass filter of such a cutoff frequency that only vowels —typical for syllables — would trigger the circuit. At the output of the audio-controlled gate, percussive sounds with the voicing of a string bass will be obtained mixed with the original voice of the singer. The human voice output signal will now be accompanied by a coincident string bass sound which may be further processed in the tape-loop repetition unit. The arbitrarily selected electronic modules of this synthesizer are of a limited variety and could be supplemented by other modules.

A system synthesizer may find many applications such as exploration of new types of electronic music or as a tool for composers who are searching for novel sounds and musical effects. Such a device will present a challenge to the imagination of composer-programmer. The modern approach of synthesizing intricate electronic systems from modules with a limited number of basic functions has proven successful in the computer field. This approach has now been made in the area of sound synthesis. With means for compiling any desired modular configuration, an audio system synthesizer could become a flexible and versatile tool for sound processing and would be suited to meet the ever-growing demand for exploration and production of new sounds.

Harald Bode 1961

PDF of the article here: 1961 edition of Electronics Magazine


Bode’s ‘Audio System Synthesiser’

Audio Files:

Demonstration of the Audio System Synthesiser by Harald Bode in 1962 (4:36).

“PHASE 4-2 ARPEGGIO” (4:51). Composed in 1964 while Bode was experimenting with various phasers, filters, and frequency shifters.


‘Pattern Playback’ Franklin S. Cooper. USA, 1949


Franklin Cooper with the Pattern Playback machine

The Pattern Playback was not a musical instrument as such, but an early hardware device for synthesising and analysing speech, designed and built by Dr Franklin S. Cooper and his colleagues, including John M. Borst and Caryl Haskins, at Haskins Laboratories in the late 1940s and completed in 1950.

Diagram showing the function of the Pattern Playback machine

The device converted a picture, or ‘spectrogram’, of a sound back into sound. The Pattern Playback functioned in a very similar way to the Russian ANS Synthesiser, using a photo-electrical system: a mercury arc-light was projected through a rotating glass disc printed with fifty harmonics of a fundamental frequency as a way of generating a range of tones. The light was then projected through an acetate ‘black and transparent’ spectrogram image that let through the portions of light carrying the frequencies marked on the spectrogram. The resulting ‘filtered’ light hit a photo-voltaic cell, which generated the final audible sound.
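The optical process can be sketched in software: each of the fifty printed harmonics contributes a sine partial, weighted by the transparency of the spectrogram at that harmonic. This is a minimal illustrative sketch, not Haskins' actual signal path; the function name, frequencies and mask values are assumptions.

```python
import math

def playback_column(f0, mask, t):
    """One instant of Pattern Playback output: fifty harmonics of a
    fundamental f0 (the printed tone-wheel disc), each weighted by the
    transparency (0..1) of the spectrogram at that harmonic."""
    return sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t)
               for k, a in enumerate(mask))

# A spectrogram column that passes only the 1st and 3rd harmonics:
mask = [0.0] * 50
mask[0], mask[2] = 1.0, 0.5
sample = playback_column(120.0, mask, 0.001)
```

Sweeping the mask across successive columns of the acetate image reproduces the time-varying spectrum as sound.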


The Pattern Playback machine

Several versions of the device were built at Haskins Laboratories and used up until 1976. The Pattern Playback now resides in the Museum at Haskins Laboratories in New Haven, Connecticut.


The history of speech synthesis

The ‘Baldwin Organ’ Winston E. Kock & J.F. Jordan, USA, 1946

Winston Kock’s Baldwin Organ Model Five, 1947

The Baldwin organ was an electronic organ, many models of which have been manufactured by the Baldwin Piano & Organ Co. since 1946. The original models were designed by Dr Winston E. Kock, who became the company’s director of electronic research after his return from his studies at the Heinrich-Hertz-Institute, Berlin, in 1936. The organ was a development of Kock’s Berlin research with the GrosstonOrgel, using the same neon gas-discharge tubes to create a stable, affordable polyphonic instrument. The Baldwin Organ was based on an early type of subtractive synthesis: the neon discharge tubes generated a rough sawtooth wave rich in harmonics, which was then modified by formant filters to the desired tone.
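That signal chain, a harmonic-rich sawtooth tamed by a voicing filter, can be sketched as follows. The sample rate, filter coefficient and function names are illustrative assumptions, and the one-pole low-pass is a crude stand-in for Baldwin's formant filters rather than the actual circuit.

```python
import math

def neon_sawtooth(freq, t, n_harmonics=30):
    """Rough sawtooth, as supplied by a neon discharge tube oscillator:
    every harmonic present, amplitudes falling off as 1/k."""
    return sum(math.sin(2 * math.pi * k * freq * t) / k
               for k in range(1, n_harmonics + 1))

def voicing_filter(samples, alpha=0.2):
    """Stand-in for a formant (voicing) filter: a one-pole low-pass
    that tames the upper harmonics of the raw sawtooth."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

rate = 8000  # illustrative sample rate
raw = [neon_sawtooth(220.0, n / rate) for n in range(200)]
voiced = voicing_filter(raw)
```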

Tone modifying circuits of the Baldwin organ

Another innovative aspect of the Baldwin Organ was the touch-sensitive keyboard, designed to create a realistic variable note attack similar to a pipe organ. As the key was depressed, a curved metal strip progressively shorted out a carbon resistance element to provide a gradual rather than sudden attack (and decay) to the sound. This feature was unique at that time, and it endowed the Baldwin instrument with an unusually elegant sound which captivated many musicians of the day.
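The mechanism can be modelled simply: as key travel shorts out more of the carbon element, the series resistance falls and more of the generator signal reaches the output. The resistance values and the voltage-divider model below are illustrative assumptions, not Baldwin's actual circuit.

```python
def strip_resistance(depth, r_max=100_000.0, r_min=100.0):
    """Resistance of the carbon element as the key travels from rest
    (depth = 0.0) to fully depressed (depth = 1.0): the curved strip
    progressively shorts it out."""
    return r_max - depth * (r_max - r_min)

def signal_gain(depth, r_load=1_000.0):
    """Fraction of the generator signal reaching the output, modelled
    as a simple voltage divider (an illustrative simplification)."""
    return r_load / (r_load + strip_resistance(depth))
```

Sampling `signal_gain` over the key's travel gives the gradual attack curve the text describes, rather than a hard on/off switch.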

“How did it sound? I have played Baldwin organs at a time when they were still marketed and in my opinion, for what it is worth, they were pretty good in relative terms.  That is to say, they sounded significantly better on the whole than the general run of analogue organs by other manufacturers, and they were only beaten by a few custom built instruments in which cost was not a factor.  It would not be true to say they sounded as good as a good digital organ today, but they compared favourably with the early Allen digitals in the 1970′s.  Nor, of course, did they sound indistinguishable from a pipe organ, but that is true for all pipeless organs.  To my ears they also sounded much better and more natural than the cloying tone of the more expensive Compton Electrone which, like the Hammond, also relied on attempts at additive synthesis with insufficient numbers of harmonics.”

From ‘Winston Kock and the Baldwin Organ’ by Colin Pykett

Electronic tone generator of the early model Baldwin Organ showing neon gas-discharge tube oscillators.

Kock’s 1938 Patent of the Baldwin organ

Winston Kock playing his early experimental electronic instrument, 1932

Winston E. Kock Biographical Details:

Winston Kock was born into a German-American family in 1909 in Cincinnati, Ohio. Despite being a gifted musician he decided to study electrical engineering at the University of Cincinnati, and in his twenties designed a highly innovative, fully electronic organ for his master’s degree.

The major problem of instrument design during the 1920s and 30s was the stability and cost of analogue oscillators. Most commercial organ ventures had failed for this reason; a good example being Givelet & Coupleux’s huge valve organ of 1930. It was for this reason that Laurens Hammond (and many others) settled on tone-wheel technology for his Hammond organs, despite the inferior audio fidelity.

Kock had decided early on to investigate the possibility of producing a commercially viable instrument that was able to produce the complexity of tone possible from vacuum tubes. With this in mind, Kock hit upon the idea of using much cheaper neon ‘gas discharge’ tubes as oscillators stabilised with resonant circuits. This allowed him to design an affordable, stable and versatile organ.

Kock’s sonar device during WW2

In the 1930s Kock, fluent in German, went to Berlin on an exchange fellowship to study at the Heinrich Hertz Institute, conducting research for a doctorate under Professor K. W. Wagner (curiously, the exchange was with Sigismund von Braun, Wernher von Braun’s eldest brother; Kock was to collaborate with Wernher twenty-five years later at NASA). At the time Berlin, and specifically the Heinrich Hertz Institute, was the global centre of electronic music research: fellow students and professors included Jörg Mager, Oskar Vierling, Fritz Sennheiser, Bruno Helberger, Harald Bode, Friedrich Trautwein, Oskar Sala and Wolja Saraga, amongst others. Kock’s study was based around two areas: improving the understanding of glow-discharge (neon) oscillators, and developing realistic organ tones using specially designed filter circuits.

Kock worked closely with Oskar Vierling during his PhD and co-designed the GrosstonOrgel in 1934, but, disillusioned by the appropriation of his work by the newly ascendant Nazi party, he decided to leave for India, sponsored by the Baldwin Organ Company, arriving at the Indian Institute of Music in Bangalore in 1935.

Returning from India in 1936, Dr Kock became Baldwin’s Director of Research while still in his mid-twenties, and with J. F. Jordan designed many aspects of their first electronic organ system, which was patented in 1941.


Winston E Kock (L) as the first Director of Engineering Research at NASA

When the USA entered the Second World War, Kock moved to Bell Telephone Laboratories, where he was involved in radar research, specifically microwave antennas. In the mid-1950s he took a senior position at the Bendix Corporation, which was active in underwater defence technology. He moved again to become NASA’s first Director of Engineering Research, returning to Bendix in 1966, where he remained until 1971, when he became Acting Director of the Hermann Schneider Laboratory of the University of Cincinnati. Kock died in Cincinnati in 1982.

Winston Kock was a prolific writer of scientific books, but he also wrote fiction under the pen name of Wayne Kirk.

Acoustic lenses developed by Winston Kock at Bell Labs in the 1950s


Hugh Davies. The New Grove Dictionary of Music and Musicians

The ‘Mastersonic Organ’ John Goodell & Ellsworth Swedien, USA, 1949

The Mastersonic Organ was an improved tone-wheel organ designed to produce more accurate pipe organ sounds. The designers, John Goodell and Ellsworth Swedien, discovered that by shaping the tone-wheel ‘pickups’ they could induce tones with different ‘natural’ harmonic content, rather than attempting to create a pure sine wave and artificially colour it as in the Hammond Organ. To achieve this the Mastersonic had individually shaped magnets for each tone-wheel sound: a “string” magnet, a “flute” magnet, a “diapason” magnet, and so on.

Mastersonic tone generation (Alan Conway Ashton, ‘Electronics, Music and Computers’, 1971)

“…There were twelve shafts with seven pitch wheels each which rotated near the irregularly shaped magnets wound with coils. Each of the pitch wheels contained twice as many rectangular teeth as the preceding one, so seven octaves were produced per shaft. Several differently shaped poles were dispersed radially around each wheel.”
Alan Conway Ashton, ‘Electronics, Music and Computers’

Each tone wheel was shielded against magnetic interference from the others, adding to the bulk and complexity of the instrument. The instrument was controlled by a special seven-octave keyboard designed to simulate attack envelopes. The resulting sound was indeed a much more accurate pipe organ sound, but at the expense of size: the Mastersonic was a huge, complex and expensive machine, and few were built or sold.
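The pitch layout Ashton describes (twelve shafts, seven wheels per shaft, each wheel with twice the teeth of the one before) works out to seven octaves of one pitch class per shaft. A small sketch, with an assumed starting frequency since the source gives none:

```python
def shaft_pitches(fundamental, octaves=7):
    """Seven pitch wheels on one shaft: each wheel has twice the
    teeth of the one before, so each doubling gives the next octave."""
    return [fundamental * 2 ** k for k in range(octaves)]

# Twelve shafts, one per pitch class of the chromatic scale; the
# starting frequency (C2, about 65.41 Hz) is an illustrative choice.
shafts = [65.41 * 2 ** (n / 12) for n in range(12)]
organ = {f0: shaft_pitches(f0) for f0 in shafts}
```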


‘Microsound’, Curtis Roads, MIT Press, 2001

ELECTRONICS, MUSIC AND COMPUTERS. Alan Conway Ashton. December 1971 UTEC-CSc-71-117

The ‘Allen Computer Organ’, Ralph Deutsch – Allen Organ Co, USA, 1971

Allen 301-3 Digital Computer Organ of 1971

The Allen Computer Organ was one of the first commercial digital instruments, developed by Rockwell International (a US military technology company) and built by the Allen Organ Co. in 1971. The organ used an early form of digital sampling, allowing the user to choose pre-set voices or edit and store sounds using an IBM-style punch-card system.

The Rockwell/Allen Computer Organ engineering team with a prototype model.

The sound itself was generated from MOS (metal-oxide-semiconductor) boards. Each MOS board contained 22 LSI (Large Scale Integration) circuits (miniaturised, photo-etched silicon chips containing thousands of transistors, based on technology developed by Rockwell International for the NASA space missions of the early 1970s), giving a total of 48,000 transistors: unheard-of power for the 1970s.

Publicity photograph demonstrating the punch-card reader

Allen Organ voice data punch cards


‘GROOVE Systems’, Max Mathews & Richard Moore, USA 1970

Max Mathews with the GROOVE system

In 1967 the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital-analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface: playing notes, turning knobs and so on. The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than generating the sound itself:

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

Richard Moore with the GROOVE system

The system, written in assembler, ran only on the Honeywell DDP-224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant that it was also possible to create libraries of programming routines, so that users could create their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartók was performed on the GROOVE at its first demonstration, at a conference on Music and Technology in Stockholm organised by UNESCO in 1970. Among the participants were several leading figures in electronic music, such as Pierre Schaeffer and Jean-Claude Risset.

“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

The GROOVE system at Bell Laboratories, circa 1970

The GROOVE system consisted of:

  • Fourteen DAC control lines scanned 100 times per second (twelve 8-bit and two 12-bit);
  • An ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three by the 3-dimensional movement of a joystick controller;
  • Two speakers for audio output;
  • A special keyboard interfaced with the knobs to generate on/off signals;
  • A teletype keyboard for data input;
  • A CDC-9432 disk storage unit;
  • A tape recorder for data backup.
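GROOVE's central idea, storing the performer's sampled control gestures rather than the audio, then editing and re-running them, can be sketched in a few lines. This is a conceptual sketch only; the function names and the 14-line control surface are illustrative assumptions.

```python
def record_performance(read_controls, n_frames):
    """Sample the performer's control lines once per frame (GROOVE
    scanned them 100 times a second) and store the raw values: the
    system kept the gestures, not the audio."""
    return [tuple(read_controls(i)) for i in range(n_frames)]

def replay(frames, write_controls):
    """Re-drive the voltage-controlled equipment from stored data;
    frames can be edited or mixed before re-running the piece."""
    for frame in frames:
        write_controls(frame)

# Hypothetical 14-line control surface whose knob values are a
# simple function of the frame number:
take = record_performance(lambda i: [i] * 14, 3)
played = []
replay(take, played.append)
```

Because the stored data is just parameter streams, 'editing a performance' reduces to editing lists before replaying, which is exactly what made review and re-running practical on 1970 hardware.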

Antecedents to GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980, due both to the high cost of the system (some $20,000) and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly.


Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.

F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.

‘MUSIC N’, Max Vernon Mathews, USA, 1957

Max Mathews was a pioneering, central figure in computer music. After studying engineering at the California Institute of Technology and the Massachusetts Institute of Technology until 1954, Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘MUSIC’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘MUSIC N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘MUSIC N’ was the first time a computer had been used to investigate audio synthesis (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specific musical objectives) and set the blueprint for computer audio synthesis that remains in use to this day, in programmes like Csound, Max/MSP and SuperCollider, and graphical modular programmes like Reaktor.

IBM 704 system

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”

Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University.

MUSIC I 1957

Music I was written in assembler/machine code to work within the technical limitations of the IBM 704 computer. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; it was only possible to set the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories was, in those years, the only institution in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews says:

“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.”

In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I, titled ‘The Silver Scale’ (often credited as being the first proper piece of computer-generated music), and a one-minute piece later in the same year called ‘Pitch Variations’, both of which were released on an anthology called ‘Music From Mathematics’ edited by Bell Labs in 1962.
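Music I's entire note model (one monophonic triangle voice, specified only by amplitude, frequency and duration, with no envelope) can be sketched as follows. The sample rate and function name are illustrative assumptions, not details of Mathews' program.

```python
def music_i_note(amplitude, frequency, duration, rate=10_000):
    """Music I's whole vocabulary: one monophonic triangle-wave voice
    with only amplitude, frequency and duration; no attack or decay."""
    samples = []
    for n in range(int(duration * rate)):
        phase = (frequency * n / rate) % 1.0
        samples.append(amplitude * (4 * abs(phase - 0.5) - 1))
    return samples

note = music_i_note(0.5, 440.0, 0.01)  # 100 samples at 10 kHz
```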

Mathews and the IBM 7094


MUSIC II 1958

MUSIC II was an updated, more versatile and functional version of Music I. It still used assembler, but targeted the much faster, transistor-based (rather than valve-based) IBM 7094 series. Music II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.
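A wavetable oscillator stores one precomputed cycle of a waveform and reads it back at a frequency-dependent increment, which is how a small table can supply any pitch. A minimal sketch, with assumed table size and sample rate (Music II's actual parameters are not given in the source):

```python
import math

def make_wavetable(shape_fn, size=256):
    """Precompute one cycle of a waveform, standing in for one of
    Music II's sixteen selectable wave shapes."""
    return [shape_fn(i / size) for i in range(size)]

def wavetable_osc(table, frequency, n_samples, rate=10_000):
    """Step through the table at a frequency-dependent increment;
    the phase wraps, so one stored cycle serves every pitch."""
    out, phase = [], 0.0
    step = frequency * len(table) / rate
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

sine = make_wavetable(lambda x: math.sin(2 * math.pi * x))
voice = wavetable_osc(sine, 440.0, 50)
```

Four such oscillators running in parallel and summed would give Music II's four-voice polyphony.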


MUSIC III 1960

“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

The introduction of unit generators (UGs) in MUSIC III was an evolutionary leap in music computing, as shown by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program (oscillators, filters, envelope shapers and so on), allowing the composer to flexibly connect multiple UGs together to generate a specific sound. A separate ‘score’ stage was added, where sounds could be arranged chronologically as music. Each event was assigned to an instrument and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch-card, which, while still complex and archaic by today’s standards, was the first time a computer program used a paradigm familiar to composers.
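The unit-generator idea can be illustrated in a few lines: each UG is a small function from sample index to value, and 'patching' UGs together is ordinary composition. This is a conceptual sketch of the paradigm, not MUSIC III's actual implementation; all names and rates are assumptions.

```python
import math

# Each 'unit generator' maps a sample index to a value;
# patching UGs together is ordinary function composition.
def oscillator(freq, rate=10_000):
    return lambda n: math.sin(2 * math.pi * freq * n / rate)

def decay_envelope(duration, rate=10_000):
    total = duration * rate
    return lambda n: max(0.0, 1.0 - n / total)  # linear decay to zero

def multiply(ug_a, ug_b):
    return lambda n: ug_a(n) * ug_b(n)

# An 'instrument' is a patch of unit generators; a 'note event' in
# the score supplies its parameters (pitch, duration, and so on):
note = multiply(oscillator(440.0), decay_envelope(0.5))
samples = [note(n) for n in range(100)]
```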

“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

Max Mathews and Joan Miller at Bell Labs


MUSIC IV 1963

MUSIC IV was the result of a collaboration between Max Mathews and Joan Miller, completed in 1963, and was a more complete version of the MUSIC III system using a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.

“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews 1980


Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team, namely MUSIC IVB at Princeton and MUSIC IVBF at the Argonne Labs. These versions were built using FORTRAN rather than assembler language.


MUSIC V 1968

MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and built MUSIC V in FORTRAN, specifically for the IBM 360 series computers. This meant that the programme was faster and more stable, and could run on any IBM 360 machine outside Bell Laboratories. The data entry procedure was simplified in both the Orchestra and Score sections. One of the most interesting new features was the definition of modules that allowed analogue sounds to be imported into MUSIC V. Mathews persuaded Bell Labs not to copyright the software, making MUSIC V probably one of the first open-source programmes and ensuring its adoption and longevity, leading directly to today’s Csound.

“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

MUSIC V marked the end of Mathews’ involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore, in 1970) and the ‘Radio Baton’ (with Tom Oberheim, in 1985).

1957 Music I Bell Labs (New York) Max Mathews
1958 Music II Bell Labs (New York) Max Mathews
1960 Music III Bell Labs (New York) Max Mathews
1963 Music IV Bell Labs (New York) Max Mathews, Joan Miller
1963 Music IVB Princeton University Hubert Howe, Godfrey Winham
1965 Music IVF Argonne Laboratories (Chicago) Arthur Roberts
1966 Music IVBF Princeton University Hubert Howe, Godfrey Winham
1966 Music 6 Stanford University Dave Poole
1968 Music V Bell Labs (New York) Max Mathews
1969 Music 360 Princeton University Barry Vercoe
1969 Music 10 Stanford University John Chowning, James Moorer
1970 Music 7 Queen’s College (New York) Hubert Howe, Godfrey Winham
1973 Music 11 M.I.T. Barry Vercoe
1977 Mus10 Stanford University Leland Smith, John Tovar
1980 Cmusic University of California Richard Moore
1984 Cmix Princeton University Paul Lansky
1985 Music 4C University of Illinois James Beauchamp, Scott Aurenz
1986 Csound M.I.T. Barry Vercoe


Curtis Roads, Interview with Max Mathews, Computer Music Journal, Vol. 4, 1980.

‘Frieze’ interview with Max Mathews, by Geeta Dayal

An Interview with Max Mathews. Tae Hong Park, Music Department, Tulane University