The ‘Fotosonor’ was a photo-electrical organ built in France during the 1950s, designed to replace the traditional pipe organ in liturgical music. Several models of the instrument were built:
The moveable tone units and amplifier of the Fotosonor Choir Organ
The ‘Choir Organ’ was a large, traditional wooden-panelled, two-manual church organ. This modular version had up to eleven optical tone units, each reproducing a traditional organ voice: drone, flutes, trumpets and so on. The large tone units were housed in a separate moveable cabinet so that only the ‘traditional’ keyboard part of the instrument was visible.
The two unit Fotosonor ‘Quatre Jeux’
The ‘Deux Jeux’ and ‘Quatre Jeux’ were of a more modern metal-clad design, with two and four tone units respectively. In this design the tone units were integrated into the keyboard part of the instrument alongside an amplifier and loudspeaker system. The manufacturers also suggested that a turntable “…can be easily incorporated to accompany the organ, allowing the study of liturgical works in general; particularly Gregorian chant, choral singing, hymns”.
The four unit ‘Quatre Jeux’
The pipe-organ sound of the Fotosonor was generated using a photo-electrical technique: rotating glass discs printed with looped sound-waves interrupted a light beam trained on a photo-electric cell, thereby generating a reproduction of the tone ‘recorded’ on the disc. This had the added benefit that the organist could ‘update’ the instrument with optical recordings of new sounds.
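To make the mechanism concrete, here is a minimal sketch (in Python) of how such an optical tone wheel behaves: the pitch heard is simply the number of waveform cycles printed around the disc multiplied by the rotation speed. All parameter values are illustrative, not taken from Fotosonor documentation.

```python
import numpy as np

SAMPLE_RATE = 44100

def optical_tone(disc_waveform, cycles_on_disc, rps, duration):
    """Simulate a photocell reading a looped waveform printed on a rotating disc."""
    freq = cycles_on_disc * rps  # pitch = printed cycles per revolution x revolutions per second
    n = int(SAMPLE_RATE * duration)
    phase = (np.arange(n) * freq / SAMPLE_RATE) % 1.0   # position around the printed loop
    table = np.append(disc_waveform, disc_waveform[0])  # close the loop for interpolation
    return np.interp(phase * len(disc_waveform), np.arange(len(table)), table)

# one printed cycle of a sine wave; 110 cycles around a disc turning at 4 revs/sec -> 440 Hz
one_cycle = np.sin(2 * np.pi * np.linspace(0, 1, 256, endpoint=False))
tone = optical_tone(one_cycle, cycles_on_disc=110, rps=4, duration=1.0)
```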
Sources
‘Fotosonor’ Promotional booklet. La Société Française Electro-musicale. 23 Rue Lamartine, Paris 9.
The ‘Fonosynth’ now at the Musical Instrument Museum, Munich, Germany. (photo: suonoelettronico.com)
The Fonosynth was a large analogue, valve- and transistor-based studio synthesiser designed and built by the Polish-Italian sound engineer Paul Ketoff (with musical input from the Italian composer Gino Marinuzzi jr), created specifically for the new electronic music studio at the American Academy in Rome.
This studio had been founded by the American composer Otto Luening (June 15, 1900, Milwaukee, Wisconsin, USA – September 2, 1996, New York City, NY, USA), co-founder in 1959 of the Columbia-Princeton Electronic Music Center (CPEMC) – the first studio for electronic music in the USA and home to the RCA Synthesiser – who was at that time Composer in Residence at the American Academy, on a one-year secondment from Columbia-Princeton:
“During his first residency in the Spring and Summer of 1958, Luening tapped Columbia’s Alice M. Ditson fund to purchase a library of contemporary music recordings for the composers’ use. Then, when he returned for a 6 week summer visit in 1961, he converted the composers’ “listening room” (located in the basement of the Academy) into a rudimentary “electronic music studio”. This studio eventually contained three sine wave oscillators, a spring reverberation unit, a microphone, an Ampex stereo portable tape recorder and a mixing console. Added to the listening room’s professional 350 Series Ampex mono tape recorder and a radio/record player, this equipment became a laboratory for sound research. In 1964-65 when Luening returned for a second, year long stay as Composer-in-Residence, the Ditson fund (Columbia University’s Alice M. Ditson Fund) covered the purchase of one of the first portable electronic music synthesizers in existence, the Synket [nb: the ‘Synket’ here is probably confused with the Fonosynth], invented and constructed by Paul Ketoff – a brilliant Roman audio engineer involved with Rome’s Cinecittà. It was Ketoff, in fact, who had designed the original studio room and mixing console and whose guidance and unflagging enthusiasm had been a key element in making the fledgling studio operable. With the Synket installed, the listening room became a fairly advanced electronic studio for the time and it served as such for many of the Fellows. The studio was also used by a number of visiting American (Larry Austin, Alvin Curran, etc.) and Italian (Aldo Clementi, Mauro Bortolotti) composers.”
Richard Trythall ‘A History of the Rome Prize in Music Composition’
There were numerous American avant-garde composers and musicians passing through Rome at this time (the 1950s-60s), usually on some kind of study grant. This was in part because of a postwar initiative set up by the US government to promote American culture and regenerate the cultural life of Rome:
“an international showcase idea which went along with lots of neon signs and skyscrapers – to shout down communism”
(Alvin Curran. Soundings No. 10, Soundings Press, Santa Fe, 1976).
Artists involved with the American Academy, Rome included the flutist Fritz Kraber, clarinettist Jerry Kirkbride, sopranos Joan Logue and Carol Plantamura, violist Joan Kalisch, pianists Joe Rollino and Paul Sheftel, and composers such as William O. Smith, John Eaton, Richard Trythall, John Heineman, Alvin Curran, Frederick Rzewski, Richard Teitelbaum, Allen Bryant, Jeffrey Levine, Joel Chadabe, Jerome Rosen, Larry Moss and Larry Austin.
Detail of the Fonosynth
The Fonosynth was completed in 1958 and was used from the late fifties to the mid sixties by a number of expatriate American electronic musicians and composers (including Otto Luening, William O. Smith, George Balch Wilson, Richard Trythall and Alvin Curran, amongst others), as well as for film soundtrack sound effects for the Italian film industry. The Fonosynth now resides at the Museum of Musical Instruments in Munich, Germany.
The Fonosynth’s sound was generated by twelve sine wave oscillators and six square wave oscillators (each matched with an individual band-pass filter). This audio signal could be modulated and coloured using audio filters (two octave filters, two selective resonant filters, one self-oscillating filter and one threshold filter), two LFOs, an impulse generator and white noise generator, two ring modulators and a wave-shape generator to determine the ADSR envelope of each sound. The resulting output was fed into an 18-channel mixing console and amplified to a stereo or mono audio output.
The whole instrument was controlled by an unusual keyboard made up of six rows of 24 keys, allowing for enharmonic, microtonal performance and composition.
Soon after, in 1963, Ketoff built a successor to the Fonosynth called the Syn-ket (‘Synthesiser-Ketoff’), designed as a portable performance instrument, with musical input from John Eaton and others.
Paul Ketoff (at the Syn-ket) c 1963
Biographical Notes Paul Ketoff/Paolo Ketoff.
Polish-Italian electronic and sound engineer, born 1921, died 1996. Ketoff was chief sound technician at the RCA Italiana/Cinecittà film studios in Rome from 1964, and at the Fonolux post-production company between 1957 and 1965. Ketoff designed many devices for film music production, including dynamic sound compressors, ring modulators, and reverb chambers and plates, and established a new standard of sound post-production.
Film credits for sound production and effects from this period include ‘Pane, amore e fantasia’ (1953), ‘Hercules Unchained’ (1959), ‘L’avventura’ (1960), ‘Terrore nello spazio’ (aka ‘Planet of the Vampires’, 1965), ‘La Traviata’ (1966) and ‘Africa Addio’ (1966).
Commissioned to design and build the Electronic Music Studio at the American Academy in Rome, Ketoff finished his first synthesiser, the ‘Fonosynth’, in 1958, and then designed a much more compact voltage-controlled performance instrument called the Syn-ket in 1963, which was presented at the conference of the Audio Engineering Society (AES) in 1964.
Ketoff was a lifelong friend and collaborator of the Italian composer Gino Marinuzzi jr. Paolo Ketoff was married to Landa Ketoff, the well-known music critic for the newspaper La Repubblica.
Gino Marinuzzi Jr
Biographical Notes Gino Marinuzzi Jr.
Born 7 April 1920, New York, USA; died 1996 (aged 76) in Rome, Lazio, Italy.
Gino Marinuzzi jr. was the son of the conductor Gino Marinuzzi and was born in 1920 in New York, USA, while his father was touring the United States. He studied at the Milan Conservatory, graduating in composition, piano and conducting. Marinuzzi jr wrote his first works, a Concertino for piano and chamber orchestra and various compositions for piano, at the age of sixteen.
Marinuzzi was assistant conductor at the Teatro dell’Opera in Rome from 1946 to 1951. He made his debut as a conductor in Spain in 1947, during a tour of the Ballet of the Roman theatre, then chose to devote himself exclusively to composition. Marinuzzi wrote numerous film soundtracks during this period and was very active in the field of electronic music. In 1956 he founded the ‘Studio of Phonology for the Roman Philharmonic Academy’ and later was a founding member of the experimental study group R7 with Paolo Ketoff, Walter Flocks, Franco Evangelisti, Domenico Guaccero, Guido Guiducci and Egisto Macchi.
Marinuzzi spent two years, 1943 to 1945, in a Nazi concentration camp (prisoner 50914, Stalag XII F), an experience from which he created the ‘Lager lieder’, in which he elaborated on popular Russian, Ukrainian and Gypsy themes learned from his fellow prisoners.
In 1956 the composer opened the first laboratory of electronic music in Rome at the ‘Accademia Filarmonica Romana’ and constructed one of the first modular synthesisers for the production of electronic music, the ‘Fonosynth’. The device, made by the engineer Julian Strini and the sound engineer Paolo Ketoff in collaboration with Marinuzzi, was completed in 1958.
In the 1960s and 70s Marinuzzi devoted himself mainly to film scores, theatre, radio and television, and only resumed composing for orchestra in the 1980s. He played a pioneering role in research and musical experimentation in the field of electronic music from the 1950s onwards.
He is the father of the singer and guitarist Joan Marinuzzi.
Marinuzzi’s film works include ‘Romanzo d’amore’ (1950), Jean Renoir’s ‘Le Carrosse d’or’ (1952), Vittorio Cottafavi’s ‘Ercole alla conquista di Atlantide’ (1961), ‘I castrati’ (1964), Mario Bava’s ‘Terrore nello spazio’ (aka ‘Planet of the Vampires’, 1965), ‘Matchless’ (1967) and ‘La piovra’ (aka ‘The Octopus’, 1984).
The ‘electronic barbershop quartet’ equipped with the Wobble Organ
The Wobble Organ was a monophonic electronic instrument created by the Bell Laboratories electrical engineer Larned A. Meacham in 1951. The device was intended as an inexpensive, portable recreational instrument with which a family could get together to create an electronic “barbershop quartet”. The Wobble Organ was not intended as a commercial instrument but was designed for the popular self-build market of the time.
The instrument was designed to be playable by performers who “have little or no experience with the manipulation of conventional musical instruments”(1). Meacham’s solution was to avoid using a conventional keyboard and instead control the Wobble Organ with a pivoted joystick, or ‘wobble arm’, sliding against a curved form marked with pitch intervals (a similar control method to Jörg Mager’s Sphärophon of 1926). By raising and lowering the joystick the player could alter the pitch of a single neon/thyratron sawtooth oscillator over two and a half octaves; the note could be turned on and off with a hand-held button.
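As a rough illustration of this control scheme, the sketch below maps a normalised joystick position onto a continuous two-and-a-half-octave pitch range, with a button gating the note. The base frequency and function names are invented for the example.

```python
BASE_FREQ = 130.81   # assumed lowest note (C3): illustrative, not documented
RANGE_OCTAVES = 2.5  # the two-and-a-half-octave span described above

def joystick_to_freq(position):
    """Map a normalised joystick height (0.0 = lowest, 1.0 = highest) to a
    frequency, exponentially so equal movements give equal musical intervals."""
    return BASE_FREQ * 2 ** (position * RANGE_OCTAVES)

def wobble_organ_output(position, button_pressed):
    # the hand-held button simply gates the single sawtooth oscillator on and off
    return joystick_to_freq(position) if button_pressed else None

print(wobble_organ_output(0.0, True))   # 130.81 Hz
print(wobble_organ_output(1.0, True))   # ~739.99 Hz, two and a half octaves up
print(wobble_organ_output(0.5, False))  # None: note gated off
```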
(1) ‘Electronic Musical Entertainment Device’, US Patent 2,544,466, filed April 27, 1950, patented March 6, 1951, United States Patent Office.
Tom L. Rhea, ‘The Evolution of Electronic Musical Instruments in the United States’, George Peabody College for Teachers, 1972.
Harald Bode demonstrating the Audio System Synthesiser
In 1954 the electronic engineer and pioneering instrument designer Harald Bode moved from his home in Bavaria, Germany to Brattleboro, Vermont, USA to lead the development team at the Estey Organ Co, developing his ‘Bode Organ’ as the prototype for the new Estey Organ. As a sideline, Bode set up his own home workshop in 1959 to develop his ideas for a completely new and innovative instrument: “A New Tool for the Exploration of Unknown Electronic Music Instrument Performances”. Bode’s objective was to produce a device that included everything needed for film and TV audio production – soundtracks, sound design and audio processing – perhaps inspired by Oskar Sala’s successful (and lucrative) film work, such as on Alfred Hitchcock’s ‘The Birds’ (1963).
Bode’s new idea was to create a modular device whose different components could be connected as needed; in doing so he created the first modular synthesiser, a concept later taken up by Robert Moog and Donald Buchla amongst others. The resulting instrument, the ‘Audio System Synthesiser’, allowed the user to connect multiple devices such as ring modulators, filters and reverb generators in any order to modify or generate sounds. The sound could then be recorded to tape for mixing or further processing; “A combination of well-known devices enabled the creation of new sounds” (Bode 1961).
circuitry of the Audio System Synthesiser
Bode wrote a description of the Audio System Synthesiser in the December 1961 issue of Electronics Magazine and demonstrated it at the Audio Engineering Society (AES) convention for the electro-acoustics industry in New York in 1960. In the audience was a young Robert Moog, who was at the time running a business selling theremin kits. Inspired by Bode’s ideas, Moog designed the famous series of Moog modular synthesisers. Bode would later license modules to be included in Moog modular systems, including a vocoder, ring modulator, filter and pitch shifter, as well as producing a number of components which were widely used in electronic music studios during the 1960s.
Front panel of the Audio System Synthesiser
Text from the December 1961 issue of Electronics Magazine:
New sounds and musical effects can be created either by synthesizing acoustical phenomena, by processing natural or artificial (usually electronically generated) sounds, or by applying both methods. Processing acoustical phenomena often results in substantial deviations from the original.
Production of new sounds or musical effects can be made either by intermediate or immediate processing methods. Some methods of intermediate processing may include punched tapes for control of the parameters of a sound synthesizer, and may also include such tape recording procedures as reversal, pitch-through-speed changes, editing and dubbing.
Because of the time differential between production and performance when using the intermediate process, the composer-performer cannot immediately hear or judge his performance, therefore corrections can be made only after some lapse of time. Immediate processing techniques present no such problems.
Methods of immediate processing include spectrum and envelope shaping, change of pitch, change of overtone structure including modification from harmonic to nonharmonic overtone relations, application of periodic modulation effects, reverberation, echo and other repetition phenomena.
The output of the ring-bridge modulator shown in Figure 2a yields the sum and differences of the frequencies applied to its two inputs but contains neither input frequency. This feature has been used to create new sounds and effects. Figure 2b shows a tone applied to input 1 and a group of harmonically related frequencies applied to input 2. The output spectrum is shown in Figure 2c.
Due to operation of the ring-bridge modulator, the output frequencies are no longer harmonically related to each other. If a group of properly related frequencies were applied to both inputs and a percussive-type envelope were applied to the output signal, a bell-like tone would be produced.
In a more general presentation, the curves of Figure 3 show the variety of tone spectra that may be derived with a gliding frequency between 1 cps and 10 kcps applied to one input and two fixed 440 and 880 cps frequencies (in octave relationship) applied to the other input of the ring-bridge modulator. The output frequencies are identified on the graph.
Frequencies applied to the ring-bridge modulator inputs are not limited to the audio range. Application of a subsonic frequency to one input will periodically modulate a frequency applied to the other. Application of white noise to one input and a single audio frequency to the other input will yield tuned noise at the output. Application of a percussive envelope to one input simultaneously with a steady tone at the other input will result in a percussive-type output that will have the characteristics of the steady tone modulated by the percussive envelope.
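The behaviour Bode describes is easy to verify numerically. The following sketch (a modern illustration, not Bode’s circuit) multiplies two sine waves and shows that the output spectrum contains only their sum and difference frequencies:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR                     # one second of audio

carrier = np.sin(2 * np.pi * 440 * t)      # input 1: 440 Hz
program = np.sin(2 * np.pi * 110 * t)      # input 2: 110 Hz

ring = carrier * program                   # ideal ring modulation

# product-to-sum identity: sin(a)sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b),
# so the output holds only the difference (330 Hz) and sum (550 Hz)
spectrum = np.abs(np.fft.rfft(ring))       # 1 Hz per bin at this length
peaks = np.argsort(spectrum)[-2:]          # bins of the two strongest partials
print(sorted(peaks))                       # -> [330, 550]; neither input remains
```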
The unit shown in Figure 4 provides congruent envelope shaping as well as the coincident percussive envelope shaping of the program material. One input accepts the control signal while the other input accepts the material to be subjected to envelope shaping. The processed audio appears at the output of the gating circuit.
To derive control voltages for the gating functions, the audio at the control input is amplified, rectified and applied to a low-pass filter. Thus, a relatively ripple-free variable DC bias will actuate the variable gain, push-pull amplifier gate. When switch S1 is in the gating position, the envelope of the control signal shapes that of the program material.
To prevent the delay caused by C1 and C2 on fast-changing control voltages, and to eliminate asymmetry caused by the different output impedances at the plate and cathode of V2, relatively high-value resistors R3 and R4 are inserted between phase inverter V2 and the push-pull output of the gate circuit. These resistors are of the same order of magnitude as biasing resistors R1 and R2 to secure a balance between the control DC signal and the audio portion of the program material.
The input circuits of V5 and V6 act as a high-pass filter. The cutoff frequency of these filters exceeds that of the ripple filter by such an amount that no disturbing audio frequency from the control input will feed through to the gate. This is important for clean operation of the percussive envelope circuit. The pulses that initiate the percussive envelopes are generated by Schmitt trigger V9 and V10. Positive-going output pulses charge C5 (or C5 plus C6 or C7 chosen by S2) with the discharge through R5. The time constant depends on the position of S2.
To make the trigger circuit respond to the beginning of a signal as well as to signal growth, differentiator C3 and R6 plus R7 is used at the input of V9. The response to signal growth is especially useful in causing the system to yield to a crescendo in a music passage or to instants of accentuation in the flow of speech frequencies.
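In modern terms the circuit described above is an envelope follower driving a gain stage. Here is a simplified digital sketch of that principle, with a one-pole low-pass standing in for the RC ripple filter; the coefficient values are illustrative, not taken from Bode’s schematic:

```python
import numpy as np

def envelope_follow(control, sr, cutoff_hz=30.0):
    """Rectify the control signal and smooth it with a one-pole low-pass,
    yielding a slowly varying gain envelope."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    env = np.zeros(len(control))
    level = 0.0
    for i, x in enumerate(np.abs(control)):   # full-wave rectification
        level += alpha * (x - level)          # one-pole smoothing
        env[i] = level
    return env

def audio_gate(program, control, sr):
    """Shape the program material with the control signal's envelope."""
    return program * envelope_follow(control, sr)
```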
The practical application of the audio-controlled percussion device within a system for the production of new musical effects is shown in Figure 5. The sound of a bongo drum triggers the percussion circuit, which in turn converts the sustained chords played by the organ into percussive tones. The output signal is applied to a tape-loop repetition unit that has four equally spaced heads, one for record and three for playback. By connecting the record head and playback head 2 in parallel, output A is produced. By connecting playback head 1 and playback head 3 in parallel, output B is produced, and a distinctive ABAB pattern may be achieved. Outputs A and B can be connected to formant filters having different resonance frequencies.
The number of repetitions may be extended if a feedback loop is inserted between playback head 2 and the record amplifier. The output voltages of the two filters and the microphone preamplifier are applied to a mixer in which the ratio of drum sound to modified percussive organ sound may be controlled.
The program material originating from the melody instrument is applied to one of the inputs of the audio-controlled gate and percussion unit. There it is gated by the audio from a percussion instrument. The percussive melody sounds at the output of the gate are applied to the tape-loop repetition system. Output signal A — the direct signal and the information from playback head 2 — is applied through amplifier A and filter 1 to the mixer. Output signal B — the signals from playback heads 1 and 3 — is applied through amplifier B to one input of the ring-bridge modulator. The other ring-bridge modulator input is connected to the output of an audio signal generator.
The mixed and frequency-converted signal at the output of the ring-bridge modulator is applied through filter 2 to the mixer. At the mixer output a percussive ABAB signal (stemming from a single melody note, triggered by a single drum signal) is obtained. In its A portion it has the original melody instrument pitch while its B portion is the converted nonharmonic overtone structure, both affected by the different voicings of the two filters. When the direct drum signal is applied to a third mixer input, the output will sound like a voiced drum with an intricate aftersound. The repetition of the ABAB pattern may be extended by a feedback loop between playback head 2 and the record amplifier.
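The tape-loop repetition unit can be modelled as a multi-tap delay line, with each playback head a tap at a fixed fraction of the loop time. A simplified sketch of the A/B tap pairing described above, with an illustrative loop time:

```python
import numpy as np

def tape_loop_taps(signal, sr, loop_seconds=2.0):
    """Return the A and B outputs of the four equally spaced heads
    (record, then playback heads 1, 2 and 3) described above."""
    quarter = int(sr * loop_seconds / 4)       # spacing between adjacent heads

    def delayed(x, samples):
        return np.concatenate([np.zeros(samples), x])[: len(x)]

    # output A: record head (direct signal) in parallel with playback head 2
    out_a = signal + delayed(signal, 2 * quarter)
    # output B: playback heads 1 and 3 in parallel
    out_b = delayed(signal, quarter) + delayed(signal, 3 * quarter)
    return out_a, out_b                        # alternate for the ABAB pattern
```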
When applying the human singing voice to the input of the fundamental frequency selector, the extracted fundamental pitch may be distorted in the squaring circuit and applied to the frequency divider (or dividers). This will derive a melody line whose pitch will be one octave lower than that of the singer. The output of the frequency divider may then be applied through a voicing filter to the program input of the audio-controlled gate and percussion unit. The control input of this circuit may be actuated by the original singing voice, after having passed through a low-pass filter of such a cutoff frequency that only vowels —typical for syllables — would trigger the circuit. At the output of the audio-controlled gate, percussive sounds with the voicing of a string bass will be obtained mixed with the original voice of the singer. The human voice output signal will now be accompanied by a coincident string bass sound which may be further processed in the tape-loop repetition unit. The arbitrarily selected electronic modules of this synthesizer are of a limited variety and could be supplemented by other modules.
A system synthesizer may find many applications such as exploration of new types of electronic music or as a tool for composers who are searching for novel sounds and musical effects. Such a device will present a challenge to the imagination of composer-programmer. The modern approach of synthesizing intricate electronic systems from modules with a limited number of basic functions has proven successful in the computer field. This approach has now been made in the area of sound synthesis. With means for compiling any desired modular configuration, an audio system synthesizer could become a flexible and versatile tool for sound processing and would be suited to meet the ever-growing demand for exploration and production of new sounds.
The Lipp Pianoline was a monophonic vacuum-tube-based keyboard instrument designed as an add-on for piano players. The Pianoline was part of a family of portable piano-attachment instruments popular in the 1950s, such as the Ondioline, Clavioline and Univox, the Pianoline being distinguished by its larger-sized keys.
The instrument’s sound was generated by a number of astable multivibrator vacuum tubes and monostable multivibrator tubes for frequency division. Tone colour was added with filters, pre-amplification and vibrato. In contrast to similar keyboard add-on instruments, the tone generator and power supply were built into the keyboard unit rather than as an external module. The resulting sound was fed to an external, portable loudspeaker unit using an output cable.
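A hedged sketch of the frequency-division idea mentioned above: each divider stage halves the master oscillator’s frequency, giving the same pitch class one octave lower. The Pianoline’s actual circuit topology is not documented here, so treat this purely as an illustration of the principle:

```python
def divider_chain(master_freq_hz, stages):
    """Return the frequencies produced by successive divide-by-two stages."""
    freqs = [master_freq_hz]
    for _ in range(stages):
        freqs.append(freqs[-1] / 2.0)   # each divider stage halves the rate
    return freqs

print(divider_chain(880.0, 3))   # [880.0, 440.0, 220.0, 110.0]: octaves of A
```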
The Pianoline was designed and built by the established Stuttgart-based piano manufacturer Richard Lipp & Sohn, who were looking to diversify into the postwar market for electronic keyboards. In 1970 the company was acquired by the Jehle Piano Company, and it closed in 1972.
Max Mathews was a pioneering, central figure in computer music. After studying engineering at the California Institute of Technology and the Massachusetts Institute of Technology, graduating in 1954, Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘Music’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘Music N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘Music N’ was the first time a computer had been used to investigate audio synthesis (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specific musical objectives) and set the blueprint for computer audio synthesis that remains in use to this day in programmes like CSound, MaxMSP and SuperCollider and graphical modular programmes like Reaktor.
IBM 704 System. Image: ‘The IBM 704 and 709 Systems’ [1]
“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.” [2]
MUSIC I 1957
Music I was written in assembler/machine code to make the most of the limited capabilities of the IBM 704 computer. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; it was only possible to set the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories was, in those years, the only facility in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews says:
“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process“. [3]
In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I, titled ‘The Silver Scale’ (often credited as being the first proper piece of computer-generated music), and a one-minute piece later in the same year called ‘Pitch Variations’, both of which were released on an anthology called ‘Music From Mathematics’ edited by Bell Labs in 1962.
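A speculative sketch of what a Music I ‘note’ amounted to, given the description above: a fixed triangle waveform rendered from just three parameters (amplitude, frequency, duration) with no envelope shaping, then handed to a DAC. This is a modern reconstruction, not Mathews’ code, and the sample rate is invented:

```python
import numpy as np

SR = 10000   # illustrative sample rate; the original hardware differed

def music1_note(amplitude, freq_hz, dur_s):
    """Render a triangle tone with no attack or decay shaping."""
    t = np.arange(int(SR * dur_s)) / SR
    phase = (t * freq_hz) % 1.0
    triangle = 4.0 * np.abs(phase - 0.5) - 1.0   # raw triangle wave in [-1, 1]
    return amplitude * triangle

samples = music1_note(amplitude=0.8, freq_hz=440.0, dur_s=0.5)
```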
Max Mathews and an IBM mainframe at Bell Laboratories. (Courtesy Max Mathews.) [4]
MUSIC II 1958
MUSIC II was an updated, more versatile and functional version of MUSIC I. It still used assembler, but was written for the transistor-based (rather than valve-based), much faster IBM 7094 series. MUSIC II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.
MUSIC III 1960
“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.” [5]
The introduction of Unit Generators (UGs) in MUSIC III was an evolutionary leap in music computing, proved by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program – oscillators, filters, envelope shapers and so on – allowing the composer to flexibly connect multiple UGs together to generate a specific sound. A separate ‘score’ stage was added where sounds could be arranged chronologically. Each event was assigned to an instrument, and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc). Each unit generator and each note event was entered on a separate punch-card, which, while still complex and archaic by today’s standards, was the first time a computer program used a paradigm familiar to composers.
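The unit-generator paradigm is easiest to see in code. The sketch below mirrors the concept (an ‘instrument’ as a patch of unit generators, a ‘score’ as a list of parameter events) rather than MUSIC III’s actual syntax or implementation:

```python
import numpy as np

SR = 44100

def osc(freq, dur):                        # unit generator: sine oscillator
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def env(signal, attack, release):          # unit generator: linear envelope
    shape = np.ones(len(signal))
    a, r = int(SR * attack), int(SR * release)
    shape[:a] = np.linspace(0, 1, a)
    shape[-r:] = np.linspace(1, 0, r)
    return signal * shape

# an 'instrument' is a patch of unit generators...
def instrument(freq, dur, amp):
    return amp * env(osc(freq, dur), attack=0.01, release=0.1)

# ...and the 'score' is a list of note events: (frequency, duration, amplitude)
score = [(440.0, 0.5, 0.8), (660.0, 0.5, 0.6)]
piece = np.concatenate([instrument(*note) for note in score])
```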
“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.” [6]
Max Mathews with Joan Miller, co-author of Music V. (Courtesy Max Mathews.) [7]
MUSIC IV
MUSIC IV was the result of the collaboration between Max Mathews and Joan Miller, completed in 1963, and was a more complete version of the MUSIC III system, using a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.
“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews, 1980. [8]
MUSIC IVB, IVBF and IVF
Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team, namely MUSIC IVB at Princeton and MUSIC IVBF at the Argonne Labs. These versions were built using FORTRAN rather than assembler language.
MUSIC V
MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and built MUSIC V in the FORTRAN language, specifically for the IBM 360 series computers. This meant that the programme was faster, more stable and could run on any IBM 360 machine outside Bell Laboratories. The data entry procedure was simplified in both the Orchestra and the Score sections. One of the most interesting new features was the definition of new modules that allowed analogue sounds to be imported into MUSIC V. Mathews persuaded Bell Labs not to copyright the software, meaning that MUSIC V was probably one of the first open-source programmes, ensuring its adoption and longevity and leading directly to today’s CSound.
“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.” [9]
MUSIC V marked the end of Mathews’ involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music such as the GROOVE system (with Richard Moore, 1970) and the ‘Radio Baton’ (with Tom Oberheim, 1985).
Sources
1. ‘The IBM 704 and 709 Systems’. http://www.computer-history.info/Page4.dir/pages/IBM.704.dir
2. Max Mathews, (1997), ‘Horizons in Computer Music’, March 8–9, Indiana University.
3. ‘An Interview with Max Mathews’, Tae Hong Park, Music Department, Tulane University. https://tinyurl.com/ypfdw2xb
4. Image: ‘An Interview with Max Mathews’, Tae Hong Park, Music Department, Tulane University. https://tinyurl.com/ypfdw2xb
5. Max Mathews, ‘Max Mathews (1926–2011)’, interview with Geeta Dayal, Frieze Magazine, 9 May 2011. https://www.frieze.com/article/max-mathews-1926-E2-80-932011
6. As note 5.
7. As note 4.
8. Curtis Roads, ‘Interview with Max Mathews’, Computer Music Journal, Vol. 4, 1980.
9. As note 5.
The oldest existing recording of computer music was made on the Ferranti Mk1 in 1951, recorded live to acetate disc with a small audience of technicians. The Ferranti Mk1 was the world’s first commercially available general-purpose computer, a commercial development of the Manchester Mk1 built at Manchester University in 1951. Included in the Ferranti Mk1’s instruction set was a ‘hoot’ command, which enabled the machine to give auditory feedback to its operators. Looping and timing of the ‘hoot’ commands allowed the user to output pitched musical notes (the earliest reported, but unrecorded, computer music piece was created earlier in the same year on the CSIR Mk1 in Sydney, Australia). The recording was made by the BBC towards the end of 1951, programmed by Christopher Strachey, a maths teacher at Harrow and a friend of Alan Turing.
CSIRAC was an early digital computer designed by the British engineer Trevor Pearcey as part of a research project at the Sydney-based Radiophysics Laboratory of the Council for Scientific and Industrial Research (CSIR, later CSIRO) in the early 1950s. CSIRAC was intended as a prototype for a much larger machine and therefore included a number of innovative ‘experimental’ features, such as video and audio feedback designed to allow the operator to test and monitor the machine while it was running. As well as several optical screens, the CSIR Mk1 had a built-in Rola 5C speaker mounted on the console frame. The speaker was an output device used to alert the programmer that a particular event had been reached in the program; it was commonly used for warnings, often to signify the end of the program, and sometimes as a debugging aid. The output to the speaker was basic raw data from the computer’s bus and consisted of an audible click. To create a more musical tone, multiple clicks were combined using a short loop of instructions, the timing of the loop giving a change in frequency and therefore an audible change in pitch.
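The pitch of such a click train follows directly from the loop timing: the loop’s execution time sets the period between clicks, and its reciprocal is the perceived frequency. A small sketch of the arithmetic, with invented instruction timings (the same principle applies to the Ferranti Mk1’s ‘hoot’ loops above):

```python
def loop_pitch(instructions_in_loop, instruction_time_us):
    """Frequency of the click train produced by a timed instruction loop."""
    period_s = instructions_in_loop * instruction_time_us * 1e-6
    return 1.0 / period_s   # clicks per second = perceived pitch in Hz

# e.g. a 4-instruction loop at ~1.1 ms per instruction -> ~227 Hz
print(round(loop_pitch(4, 1100), 1))
```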
The CSIRAC console switch panel with multiple rows of 20 switches used to set bits in various registers.
The first piece of digital computer music was created by Geoff Hill and Trevor Pearcey on the CSIR Mk1 in 1951, as a way of testing the machine rather than as a musical exercise. The music consisted of excerpts from popular songs of the day: ‘Colonel Bogey’, ‘Bonnie Banks’, ‘Girl with Flaxen Hair’ and so on. The work was perceived as a fairly insignificant technical test and wasn’t recorded or widely reported:
CSIRAC – the University’s giant electronic brain – has LEARNED TO SING!
…it hums, in bathroom style, the lively ditty, Lucy Long. CSIRAC’s song is the result of several days’ mathematical and musical gymnastics by Professor T. M. Cherry. In his spare time Professor Cherry conceived a complicated punched-paper programme for the computer, enabling it to hum sweet melodies through its speaker… A bigger computer, Professor Cherry says, could be programmed in sound-pulse patterns to speak with a human voice… The Melbourne Age, Wednesday 27th July 1960
Later version of the CSIRAC at The University of Melbourne
…When CSIRAC began sporting its musical gifts, we jumped on his first intellectual flaw. When he played “Gaudeamus Igitur,” the university anthem, it sounded like a refrigerator defrosting in tune. But then, as Professor Cherry said yesterday, “This machine plays better music than a Wurlitzer can calculate a mathematical problem”… Melbourne Herald, Friday 15th June 1956
Portable computer: CSIRAC on the move to Melbourne, June 1955
The CSIR Mk1 was dismantled in 1955 and moved to The University of Melbourne, where it was renamed CSIRAC. The Professor of Mathematics, Thomas Cherry, had a great interest in programming and music, and he created music with CSIRAC. During its time in Melbourne the practice of music programming on the CSIRAC was refined, allowing the input of music notation. The program tapes for a couple of test scales still exist, along with the popular melodies ‘So Early in the Morning’ and ‘In Cellar Cool’.
Music instructions for the CSIRAC by Thomas Cherry
Later version of the CSIRAC at The University of Melbourne
Console at GRM Paris showing the EMI mixing desk and parts of the Coupigny Synthesiser c1972
The GRM was an electro-acoustic music studio founded in 1951 by the musique concrète pioneer Pierre Schaeffer, the composer Pierre Henry and the engineer Jacques Poullin, based at the RTF (Radiodiffusion-Télévision Française) buildings in Paris. The studio itself was the culmination of over a decade’s work on musique concrète and sound objects by Schaeffer and others at the ‘Groupe de Recherches de Musique Concrète’ (GRMC) and the Studio d’Essai. The new studio was designed around Schaeffer’s sound theories, later outlined in his book ‘Traité des Objets Musicaux’ (‘Treatise on Musical Objects’):
“musique concrète was not a study of timbre, it is focused on envelopes, forms. It must be presented by means of non-traditional characteristics, you see … one might say that the origin of this music is also found in the interest in ‘plastifying’ music, of rendering it plastic like sculpture…musique concrète, in my opinion … led to a manner of composing, indeed, a new mental framework of composing” (James 1981, 79). Schaeffer had developed an aesthetic that was centred upon the use of sound as a primary compositional resource. The aesthetic also emphasised the importance of play (jeu) in the practice of sound based composition. Schaeffer’s use of the word jeu, from the verb jouer, carries the same double meaning as the English verb play: ‘to enjoy oneself by interacting with one’s surroundings’, as well as ‘to operate a musical instrument’
(Pierre Henry. Dack 2002).
Along with the WDR Studio in Germany, the GRM/GRMC was one of the earliest electro-acoustic music studios and attracted many notable avant-garde composers of the era including Olivier Messiaen, Pierre Boulez, Jean Barraqué, Karlheinz Stockhausen, Edgard Varèse, Iannis Xenakis, Michel Philippot, and Arthur Honegger. Compositional output from 1951 to 1953 comprised ‘Étude I’ (1951) and ‘Étude II’ (1951) by Boulez, ‘Timbres-durées’ (1952) by Messiaen, ‘Konkrete Etüde’ (1952) by Stockhausen, ‘Le microphone bien tempéré’ (1952) and ‘La voile d’Orphée’ (1953) by Pierre Henry, ‘Étude I’ (1953) by Philippot, ‘Étude’ (1953) by Barraqué, the mixed pieces ‘Toute la lyre’ (1951) and ‘Orphée 53’ (1953) by Schaeffer/Henry, and the film music ‘Masquerage’ (1952) by Schaeffer and ‘Astrologie’ (1953) by Pierre Henry.
The original design of the studio followed strict Schaefferian theory and was completely centred around tape manipulation, recording and editing. Several novel ‘tape instruments’ were built and integrated into the studio setup, including the Phonogène (three versions were built: the Universal, Chromatic and Sliding Phonogènes) and the Morphophone.
The phonogène
The Phonogène
The Phonogène was a one-off multi-headed tape instrument designed by Jacques Poullin. In all, three versions of the instrument were created:
The Chromatic Phonogène: a tape loop driven by multiple capstans at varied speeds allowed the production of short bursts of tape sound at varying pitches, defined by a small one-octave keyboard.
The Sliding Phonogène: created a continuous tone by varying the tape speed via a control rod.
The Phonogène Universal: allowed transposition of pitch without altering the duration of the sound, and vice-versa, obtained through a rotating magnetic head called the ‘Springer temporal regulator’ (a similar design to the rotating head of VHS video tape recorders).
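The three designs differ in how they couple pitch and time, which the sketch below contrasts: plain varispeed (as in the Chromatic and Sliding Phonogènes) shifts pitch and duration together, while the Universal’s rotating head is approximated here by a crude overlapping-segment time-stretch. Both functions are illustrative only, not models of the actual mechanisms:

```python
import numpy as np

def varispeed(signal, speed):
    """Play the tape at 'speed' x normal: pitch AND duration change together."""
    idx = np.arange(0, len(signal) - 1, speed)
    return np.interp(idx, np.arange(len(signal)), signal)

def rotating_head_stretch(signal, stretch, grain=1024):
    """Repeat or skip short tape segments to change duration but not pitch,
    roughly what the spinning Springer head achieves mechanically."""
    out = []
    pos = 0.0
    while pos < len(signal) - grain:
        out.append(signal[int(pos): int(pos) + grain])
        pos += grain / stretch   # advance slower (stretch > 1) or faster (< 1)
    return np.concatenate(out)
```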
The morphophone
The Morphophone circa 1955
The Morphophone was a type of tape loop-delay mechanism, again designed by Jacques Poullin. A tape loop was stuck to the edge of a 50cm diameter rotating disk and the sound was picked up at varying points on the tape by ten playback heads (the machine carried twelve heads in all: one recording, one erasing and ten playback). The output of each playback head was passed through its own bandpass filter and amplified.
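Functionally, then, the Morphophone is a ten-tap delay line with a band-pass filter on each tap. A compact model under assumed head positions and filter bands (all values invented for illustration):

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(signal, sr, lo, hi):
    b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    return lfilter(b, a, signal)

def morphophone(signal, sr, loop_s=1.0, heads=10):
    out = np.zeros(len(signal))
    for h in range(1, heads + 1):
        delay = int(sr * loop_s * h / heads)    # head position around the disk
        tap = np.concatenate([np.zeros(delay), signal])[: len(signal)]
        centre = 200.0 * h                      # one assumed band per head
        out += bandpass(tap, sr, centre * 0.8, centre * 1.2)
    return out / heads
```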
The Chamberlin was an early precursor of the modern digital sampler, using a mechanism that stored analogue audio recordings on strips of audio tape, one strip for each key. When a key on the keyboard was pressed the tape strip played forward, and when it was released the play head returned to the beginning of the tape. The note had a limited length: eight seconds on most models. The instrument was designed as an ‘amusing’ novelty instrument for domestic use but later found favour with rock musicians in the sixties and seventies.
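The playback logic per key is simple enough to sketch: one fixed-length recording played from the start while the key is held, rewound on release, with no looping. Class and parameter names here are illustrative:

```python
class ChamberlinKey:
    TAPE_SECONDS = 8.0   # maximum note length on most models, per the text

    def __init__(self, recording, sr):
        self.recording = recording[: int(sr * self.TAPE_SECONDS)]
        self.position = 0   # play head at the start of the tape strip

    def key_down(self, n_samples):
        """Return the next chunk of tape; silence once the strip runs out."""
        chunk = self.recording[self.position: self.position + n_samples]
        self.position += len(chunk)
        return chunk

    def key_up(self):
        self.position = 0   # spring returns the tape to its start
```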
The first Chamberlin Model 200
All the original sounds were recordings of the Lawrence Welk Orchestra made by Harry Chamberlin at his home in California. The recording technique produced a clean, unaffected sound, but with a heavy vibrato added by the musicians. The full set of sounds that came with the Chamberlin was:
Brass: Alto Sax, Tenor Sax, Trombone, Trumpet, French Horn, Do Wah Trombone, Slur Trombone and Muted Trumpet.
Wind: flute, oboe, and bass clarinet.
Voice: Male Voice (solo) and Female Voice (solo).
Strings: 3 violins, Cello and Pizzicato violins.
Plucked strings: Slur Guitar, Banjo, Steel Guitar, Harp solo, Harp Roll, Harp 7th Arpeggio (harp sounds were not available to the public), Guitar and Mandolin.
Effects: Dixieland Band Phrases and Sound Effects.
In 1962 two Chamberlins were taken to Great Britain, where they were used as the basis for the design of the Mellotron keyboard:
The Chamberlin was invented in the US in 1946 by Harry Chamberlin, who had the idea (allegedly) when setting up his portable tape recorder to record himself playing his home organ. It is rumoured that it occurred to him that if he could record the sound of a real instrument, he could make a keyboard instrument that could replay the sound of real instruments, and thus the Chamberlin was born. Chamberlin’s idea was ‘simple’: put a miniature tape playback unit underneath each key so that when a note was played, a tape of ‘real’ instruments would be played. At the time, the concept was totally unique.
In the ’50s, at least 100 Chamberlins were produced and, to promote his instrument, Harry teamed up with a guy called Bill Fransen, who was (allegedly) Harry’s window cleaner. Fransen was (allegedly) totally fascinated by this unique invention and subsequently became Chamberlin’s main (and only) salesman. However, there were terrible reliability problems with the Chamberlin and it had a very high (it is said 40%) failure rate, with the primitive tape mechanism resulting in tapes getting mangled.
Fransen felt that Chamberlin would never be able to fix these problems alone and so, unknown to Chamberlin (allegedly), Fransen brought some Chamberlins to the UK in the early ’60s to seek finance and a development partner. He showed the Chamberlin to a tape head manufacturer, Bradmatics, in the Midlands, and the Bradley brothers (Frank, Leslie and Norman, who owned Bradmatics) were (allegedly) very impressed with the invention and (allegedly) agreed to refine the design and produce them for Fransen… but under the mistaken impression that the design was actually Fransen’s (allegedly)!
A new company, Mellotronics, was set up in the UK to manufacture and market this innovative new instrument, and work got underway with the Bradley brothers (allegedly) unaware that they were basically copying and ripping off someone else’s idea! Of course, it wasn’t long before Harry Chamberlin got to hear of this and he too went to the UK to meet with the Bradley brothers. After some acrimonious discussions, the two parties settled, with Harry selling the technology to the Bradleys. Mellotronics continued to develop their ‘Mellotron’ whilst Harry returned to the US, where he continued to make his Chamberlins with his son, Richard, in a small ‘factory’ behind his garage and later, a proper factory in Ontario, a suburb of Los Angeles. In total, they made a little over 700 units right through until 1981. Harry died shortly afterwards.
But whatever happened in those early meetings almost 40 years ago is inconsequential – the fact of the matter is that the two instruments are almost indistinguishable from each other. Each key has a playback head underneath it and each time a key is pressed, a length of tape passes over it that contains a recording of a ‘real’ instrument. The tape is of a finite length lasting about eight seconds and a spring returns it to its start position when the note is finished. As you can see from the photograph above though, the Chamberlin is smaller (although some mammoth dual-manual Chamberlins were also produced!).
Many claim that the Chamberlin had a better sound – clearer and more ‘direct’ …. which is strange because the Mellotron was (allegedly) better engineered than the Chamberlin. But there is a lot of confusion between the two instruments not helped by the fact that some Chamberlin tapes were used on the Mellotron and vice versa…. so even though the two companies were in direct competition with each other, they shared their sounds….. weird!
It also seems that some users were confused and credited a ‘Mellotron’ on their records when in fact it might well have been a Chamberlin that they used (allegedly). However, given the similarities between the two, this confusion is understandable, and it’s a tribute to Mellotronics’ marketing that they got the upper hand on the original design.
To be honest, the whole story is shrouded in hearsay and music history mythology and we may never know the truth (especially now that the original people involved are sadly no longer with us) but regardless of this, the Bradley brothers were obviously more successful with their marketing of the idea than Chamberlin himself. Although it was originally aimed at the home organ market with cheesy rhythm loops and silly sound effects, the Mellotron went on to become a legend in the history of modern music technology and the mere mention of its name can invoke dewy eyed nostalgia amongst some people. On the other hand, however, few people have even heard of the Chamberlin which is sad because Harry Chamberlin’s unique invention preceded the Mellotron by some fifteen years or more and by rights, it is the Chamberlin that deserves the title of “the world’s first sampler”.
Nostalgia has a lovely Chamberlin string sound that captures the original Chamberlin character quite authentically. Unlike the original, though, the sound is looped but, like the original, it has the same keyboard range (G2-F5) and is not velocity sensitive.