The ‘Sound Processor’ or ‘Audio System Synthesiser’ Harald Bode, USA, 1959


Harald Bode demonstrating the Audio System Synthesiser

In 1954 the electronic engineer and pioneering instrument designer Harald Bode moved from his home in Bavaria, Germany to Brattleboro, Vermont, USA to lead the development team at the Estey Organ Co, working on his instrument the ‘Bode Organ’ as the prototype for the new Estey Organ. As a sideline, Bode set up his own home workshop in 1959 to develop his ideas for a completely new and innovative instrument: “A New Tool for the Exploration of Unknown Electronic Music Instrument Performances”. Bode’s objective was to produce a device that included everything needed for film and TV audio production (soundtracks, sound design and audio processing), perhaps inspired by Oskar Sala’s successful (and lucrative) film work, such as on Alfred Hitchcock’s ‘The Birds’ (1963).

Bode’s new idea was to create a modular device whose different components could be connected as needed; in doing so he created the first modular synthesiser, a concept taken up some time later by Robert Moog and Donald Buchla amongst others. The resulting instrument, the ‘Audio System Synthesiser’, allowed the user to connect multiple devices such as ring modulators, filters and reverb generators in any order to modify or generate sounds. The sound could then be recorded to tape for mixing or further processing; “A combination of well-known devices enabled the creation of new sounds” (Bode 1961).


circuitry of the Audio System Synthesiser

Bode wrote a description of the Audio System Synthesiser in the December 1961 issue of Electronics Magazine and demonstrated it at the Audio Engineering Society (AES) convention for the electro-acoustics industry in New York in 1960. In the audience was a young Robert Moog, who was at the time running a business selling theremin kits. Inspired by Bode’s ideas, Moog went on to design the famous series of Moog modular synthesisers. Bode would later license modules to be included in Moog modular systems, including a vocoder, ring modulator, filter and pitch shifter, as well as producing a number of components that were widely used in electronic music studios during the 1960s.

Front panel of the Audio System Synthesiser


Text from the 1961 edition of Electronics Magazine

New sounds and musical effects can be created either by synthesizing acoustical phenomena, by processing natural or artificial (usually electronically generated) sounds, or by applying both methods. Processing acoustical phenomena often results in substantial deviations from the original.

Production of new sounds or musical effects can be made either by intermediate or immediate processing methods. Some methods of intermediate processing may include punched tapes for control of the parameters of a sound synthesizer, and may also include such tape recording procedures as reversal, pitch-through-speed changes, editing and dubbing.

Because of the time differential between production and performance when using the intermediate process, the composer-performer cannot immediately hear or judge his performance; corrections can therefore be made only after some lapse of time. Immediate processing techniques present no such problems.

Methods of immediate processing include spectrum and envelope shaping, change of pitch, change of overtone structure including modification from harmonic to nonharmonic overtone relations, application of periodic modulation effects, reverberation, echo and other repetition phenomena.

The output of the ring-bridge modulator shown in Figure 2a yields the sum and differences of the frequencies applied to its two inputs but contains neither input frequency. This feature has been used to create new sounds and effects. Figure 2b shows a tone applied to input 1 and a group of harmonically related frequencies applied to input 2. The output spectrum is shown in Figure 2c.

Due to operation of the ring-bridge modulator, the output frequencies are no longer harmonically related to each other. If a group of properly related frequencies were applied to both inputs and a percussive-type envelope were applied to the output signal, a bell-like tone would be produced.
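In modern terms, an ideal ring-bridge modulator simply multiplies its two input signals, and the sum-and-difference behaviour Bode describes falls out of a trigonometric identity. A minimal numerical sketch (the frequencies, sample rate and lengths here are arbitrary choices for illustration, not values from the article):

```python
import math

def ring_modulate(f1, f2, sr=48000, n=480):
    """Ideal ring modulation: multiply the two inputs sample by sample."""
    return [math.sin(2 * math.pi * f1 * t / sr) * math.sin(2 * math.pi * f2 * t / sr)
            for t in range(n)]

def sum_and_difference(f1, f2, sr=48000, n=480):
    """The same signal built directly from the sum and difference frequencies,
    via sin(a)sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b): neither input frequency
    appears in the output, only f1-f2 and f1+f2."""
    return [0.5 * math.cos(2 * math.pi * (f1 - f2) * t / sr)
            - 0.5 * math.cos(2 * math.pi * (f1 + f2) * t / sr)
            for t in range(n)]
```

Comparing the two functions sample by sample shows they are numerically identical, which is exactly the "sum and difference, but neither input" property described above.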

In a more general presentation, the curves of Figure 3 show the variety of tone spectra that may be derived with a gliding frequency between 1 cps and 10 kcps applied to one input and two fixed 440- and 880-cps frequencies (in octave relationship) applied to the other input of the ring-bridge modulator. The output frequencies are identified on the graph.

Frequencies applied to the ring-bridge modulator inputs are not limited to the audio range. Application of a subsonic frequency to one input will periodically modulate a frequency applied to the other. Application of white noise to one input and a single audio frequency to the other input will yield tuned noise at the output. Application of a percussive envelope to one input simultaneously with a steady tone at the other input will result in a percussive-type output that will have the characteristics of the steady tone modulated by the percussive envelope.

The unit shown in Figure 4 provides congruent envelope shaping as well as the coincident percussive envelope shaping of the program material. One input accepts the control signal while the other input accepts the material to be subjected to envelope shaping. The processed audio appears at the output of the gating circuit.

To derive control voltages for the gating functions, the audio at the control input is amplified, rectified and applied to a low-pass filter. Thus, a relatively ripple-free variable DC bias will actuate the variable gain, push-pull amplifier gate. When switch S1 is in the gating position, the envelope of the control signal shapes that of the program material.
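The amplify-rectify-low-pass chain described above is what would now be called an envelope follower driving a gain stage. A rough software sketch of the idea (the smoothing coefficient, standing in for the ripple filter's time constant, is an arbitrary choice):

```python
import math

def envelope_follow(signal, coeff=0.005):
    """Software analogue of the control chain: full-wave rectify the control
    signal, then smooth with a one-pole low-pass to obtain a near-ripple-free
    variable bias."""
    env, out = 0.0, []
    for x in signal:
        env += coeff * (abs(x) - env)  # rectify, then smooth
        out.append(env)
    return out

def gate(program, control, coeff=0.005):
    """Shape the program material with the control signal's envelope,
    as when switch S1 is in the gating position."""
    return [p * e for p, e in zip(program, envelope_follow(control, coeff))]
```

With a steady sine as the control input, the smoothed bias settles near the mean of the rectified signal; with silence at the control input, the gate passes nothing.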

To prevent the delay caused by C1 and C2 on fast-changing control voltages, and to eliminate asymmetry caused by the different output impedances at the plate and cathode of V2, relatively high-value resistors R3 and R4 are inserted between phase inverter V2 and the push-pull output of the gate circuit. These resistors are of the same order of magnitude as biasing resistors R1 and R2 to secure a balance between the control DC signal and the audio portion of the program material.

The input circuits of V5 and V6 act as a high-pass filter. The cutoff frequency of these filters exceeds that of the ripple filter by such an amount that no disturbing audio frequency from the control input will feed through to the gate. This is important for clean operation of the percussive envelope circuit. The pulses that initiate the percussive envelopes are generated by Schmitt trigger V9 and V10. Positive-going output pulses charge C5 (or C5 plus C6 or C7 chosen by S2) with the discharge through R5. The time constant depends on the position of S2.

To make the trigger circuit respond to the beginning of a signal as well as to signal growth, differentiator C3 and R6 plus R7 is used at the input of V9. The response to signal growth is especially useful in causing the system to yield to a crescendo in a music passage or to instants of accentuation in the flow of speech frequencies.

The practical application of the audio-controlled percussion device within a system for the production of new musical effects is shown in Figure 5. The sound of a bongo drum triggers the percussion circuit, which in turn converts the sustained chords played by the organ into percussive tones. The output signal is applied to a tape-loop repetition unit that has four equally spaced heads, one for record and three for playback. By connecting the record head and playback head 2 in parallel, output A is produced. By connecting playback head 1 and playback head 3 in parallel, output B is produced, and a distinctive ABAB pattern may be achieved. Outputs A and B can be connected to formant filters having different resonance frequencies.

The number of repetitions may be extended if a feedback loop is inserted between playback head 2 and the record amplifier. The output voltages of the two filters and the microphone preamplifier are applied to a mixer in which the ratio of drum sound to modified percussive organ sound may be controlled.
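Functionally, the four-head loop wiring described above is a multi-tap delay line: the three playback heads sit at delays of one, two and three head-spacings behind the record head. A sketch of the A/B output wiring, with the head spacing measured in samples (all values arbitrary, and the optional feedback path omitted):

```python
def tape_loop_outputs(signal, spacing):
    """Four equally spaced heads: record head plus playback heads at delays
    of 1, 2 and 3 head-spacings. Output A = direct signal + playback head 2;
    output B = playback heads 1 + 3, so one event repeats as A-B-A-B."""
    n = len(signal) + 3 * spacing
    padded = list(signal) + [0.0] * (n - len(signal))
    def tap(i, delay):
        return padded[i - delay] if i >= delay else 0.0
    out_a = [padded[i] + tap(i, 2 * spacing) for i in range(n)]
    out_b = [tap(i, spacing) + tap(i, 3 * spacing) for i in range(n)]
    return out_a, out_b
```

Feeding a single impulse in shows the alternating pattern: output A fires at times 0 and 2×spacing, output B at 1× and 3×spacing.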

The program material originating from the melody instrument is applied to one of the inputs of the audio-controlled gate and percussion unit. There it is gated by the audio from a percussion instrument. The percussive melody sounds at the output of the gate are applied to the tape-loop repetition system. Output signal A — the direct signal and the information from playback head 2 — is applied through amplifier A and filter 1 to the mixer. Output signal B — the signals from playback heads 1 and 3 — is applied through amplifier B to one input of the ring-bridge modulator. The other ring-bridge modulator input is connected to the output of an audio signal generator.

The mixed and frequency-converted signal at the output of the ring-bridge modulator is applied through filter 2 to the mixer. At the mixer output a percussive ABAB signal (stemming from a single melody note, triggered by a single drum signal) is obtained. In its A portion it has the original melody instrument pitch, while its B portion is the converted nonharmonic overtone structure, both affected by the different voicings of the two filters. When the direct drum signal is applied to a third mixer input, the output will sound like a voiced drum with an intricate aftersound. The repetition of the ABAB pattern may be extended by a feedback loop between playback head 2 and the record amplifier.

When applying the human singing voice to the input of the fundamental frequency selector, the extracted fundamental pitch may be distorted in the squaring circuit and applied to the frequency divider (or dividers). This will derive a melody line whose pitch will be one octave lower than that of the singer. The output of the frequency divider may then be applied through a voicing filter to the program input of the audio-controlled gate and percussion unit. The control input of this circuit may be actuated by the original singing voice, after having passed through a low-pass filter of such a cutoff frequency that only vowels (typical for syllables) would trigger the circuit. At the output of the audio-controlled gate, percussive sounds with the voicing of a string bass will be obtained, mixed with the original voice of the singer. The human voice output signal will now be accompanied by a coincident string bass sound which may be further processed in the tape-loop repetition unit. The arbitrarily selected electronic modules of this synthesizer are of a limited variety and could be supplemented by other modules.

A system synthesizer may find many applications such as exploration of new types of electronic music or as a tool for composers who are searching for novel sounds and musical effects. Such a device will present a challenge to the imagination of the composer-programmer. The modern approach of synthesizing intricate electronic systems from modules with a limited number of basic functions has proven successful in the computer field. This approach has now been extended to the area of sound synthesis. With means for compiling any desired modular configuration, an audio system synthesizer could become a flexible and versatile tool for sound processing and would be suited to meet the ever-growing demand for exploration and production of new sounds.

Harald Bode 1961

PDF of the article here: 1961 edition of Electronics Magazine


Bode’s Audio System Synthesiser

Audio Files:

Demonstration of the Audio System Synthesiser by Harald Bode in 1962 (4:36).

‘PHASE 4-2 ARPEGGIO’ (4:51), composed in 1964 while Bode was experimenting with various phasers, filters and frequency shifters.


Sources:

http://cec.sonus.ca/econtact/13_4/palov_bode_biography.html

http://cec.sonus.ca/econtact/13_4/bode_synthesizer.html

http://esteyorganmuseum.org/

The Lipp Pianoline Lipp, Germany, 1950

The Lipp Pianoline


The Lipp Pianoline was a monophonic, vacuum-tube-based keyboard instrument designed as an add-on for piano players. The Pianoline was part of a family of portable piano-attachment instruments popular in the 1950s, such as the Ondioline, Clavioline and Univox, the Pianoline being distinguished by its larger-sized keys.


Sources:

 

‘MUSIC N’, Max Vernon Mathews, USA, 1957

Max Mathews was a pioneering, central figure in computer music. After completing his engineering studies at the California Institute of Technology and the Massachusetts Institute of Technology in 1954, Mathews went on to develop ‘Music I’ at Bell Labs, the first of the ‘Music’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘Music N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘Music N’ was the first time a computer had been used to investigate audio synthesis (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specific musical objectives) and set the blueprint for computer audio synthesis that remains in use to this day in programmes like Csound, Max/MSP and SuperCollider and graphical modular programmes like Reaktor.

IBM 704 System


“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”

Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University.

MUSIC I 1957

Music I was written in assembler/machine code to make the most of the limited capabilities of the IBM 704. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; the only parameters that could be set were the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories was, in those years, the only place in the United States with a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews says:

“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.”

In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I titled ‘The Silver Scale’ (often credited as the first proper piece of computer-generated music), followed later in the same year by a one-minute piece called ‘Pitch Variations’; both were released on the anthology ‘Music From Mathematics’ issued by Bell Labs in 1962.
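A Music I note could be specified only by frequency, amplitude and duration, with a fixed triangle waveform and no envelope shaping. A small sketch of that model (the sample rate is an arbitrary modern choice, not the 704's):

```python
def triangle_note(freq, amp, dur, sr=10000):
    """A Music I-style note: a fixed triangle waveform whose only
    controllable parameters are frequency, amplitude and duration;
    there is no attack or decay control."""
    out = []
    for t in range(int(dur * sr)):
        phase = (t * freq / sr) % 1.0
        out.append(amp * (4.0 * abs(phase - 0.5) - 1.0))  # triangle in [-amp, +amp]
    return out
```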

Mathews and the IBM 7094


MUSIC II 1958

Music II was an updated, more versatile and functional version of Music I. It still used assembler, but targeted the much faster, transistor-based (rather than valve-based) IBM 7094 series. Music II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.

MUSIC III 1960

“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

The introduction of Unit Generators (UGs) in MUSIC III was an evolutionary leap in music computing, proved by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program (an oscillator, filter, envelope shaper and so on), allowing the composer to flexibly connect multiple UGs together to generate a specific sound. A separate ‘score’ stage was added where sounds could be arranged chronologically in a musical fashion. Each event was assigned to an instrument and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc). Each unit generator and each note event was entered on a separate punch-card, which, while still complex and archaic by today’s standards, was the first time a computer program used a paradigm familiar to composers.
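The unit-generator idea (small interconnectable building blocks driven by a separate score) can be sketched in a few lines. This is only an illustration of the concept, not MUSIC III's actual syntax; the function names and the linear-decay envelope are invented for the sketch:

```python
import math

# Each unit generator is a zero-argument callable returning one sample per
# tick; UGs compose by wrapping other UGs, forming a small patch graph.
def osc(freq, sr=10000):
    t = -1
    def tick():
        nonlocal t
        t += 1
        return math.sin(2 * math.pi * freq * t / sr)
    return tick

def decay_env(dur, sr=10000):
    t, n = -1, int(dur * sr)
    def tick():
        nonlocal t
        t += 1
        return max(0.0, 1.0 - t / n)  # simple linear decay
    return tick

def mul(a, b):
    return lambda: a() * b()

def render(score, sr=10000):
    """'Score' stage: each event is (dur, freq, amp), rendered one after
    another, each with its own oscillator-plus-envelope patch."""
    out = []
    for dur, freq, amp in score:
        voice = mul(osc(freq, sr), decay_env(dur, sr))
        out.extend(amp * voice() for _ in range(int(dur * sr)))
    return out
```

The separation here between the patch (instrument) and the event list (score) mirrors the orchestra/score split that MUSIC III introduced and that Csound still uses.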

“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

Max Mathews and Joan Miller at Bell labs


MUSIC IV

MUSIC IV, the result of a collaboration between Max Mathews and Joan Miller, was completed in 1963 and was a more complete version of the MUSIC III system using a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.

“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews 1980

MUSIC IVB, IVBF and IVF

Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team, namely MUSIC IVB at Princeton and MUSIC IVBF at the Argonne Labs. These versions were built using FORTRAN rather than assembler language.

MUSIC V

MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and built MUSIC V in FORTRAN, specifically for the IBM 360 series computers. This meant that the programme was faster, more stable and could run on any IBM 360 machine outside Bell Laboratories. The data entry procedure was simplified in both the Orchestra and Score sections. One of the most interesting new features was the definition of new modules that allowed the user to import analogue sounds into MUSIC V. Mathews persuaded Bell Labs not to copyright the software, meaning that MUSIC V was probably one of the first open-source programmes, ensuring its adoption and longevity and leading directly to today’s Csound.

“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

MUSIC V marked the end of Mathews’s involvement in the MUSIC N series but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore in 1970) and the ‘Radio Baton’ (with Tom Oberheim in 1985).

YEAR VERSION PLACE AUTHOR
1957 Music I Bell Labs (New York) Max Mathews
1958 Music II Bell Labs (New York) Max Mathews
1960 Music III Bell Labs (New York) Max Mathews
1963 Music IV Bell Labs (New York) Max Mathews, Joan Miller
1963 Music IVB Princeton University Hubert Howe, Godfrey Winham
1965 Music IVF Argonne Laboratories (Chicago) Arthur Roberts
1966 Music IVBF Princeton University Hubert Howe, Godfrey Winham
1966 Music 6 Stanford University Dave Poole
1968 Music V Bell Labs (New York) Max Mathews
1969 Music 360 Princeton University Barry Vercoe
1969 Music 10  Stanford University John Chowning, James Moorer
1970 Music 7 Queen’s College (New York) Hubert Howe, Godfrey Winham
1973 Music 11 M.I.T. Barry Vercoe
1977 Mus10 Stanford University Leland Smith, John Tovar
1980 Cmusic University of California Richard Moore
1984 Cmix Princeton University Paul Lansky
1985 Music 4C University of Illinois James Beauchamp, Scott Aurenz
1986 Csound M.I.T. Barry Vercoe


Sources

http://www.computer-history.info/Page4.dir/pages/IBM.704.dir/

http://www.musicainformatica.org

Curtis Roads, Interview with Max Mathews, Computer Music Journal, Vol. 4, 1980.

‘Frieze’ Interview with Max Mathews. by Geeta Dayal

An Interview with Max Mathews.  Tae Hong Park. Music Department, Tulane University

The ‘Ferranti Mk1 ‘ Computer. Freddie Williams & Tom Kilburn, United Kingdom, 1951.

Ferranti Mk1 Computer


The oldest existing recording of computer music: the Ferranti Mk1 in 1951, recorded live to acetate disc before a small audience of technicians.

The Ferranti Mk1 was the world’s first commercially available general-purpose computer, a commercial development of the Manchester Mk1 built at Manchester University in 1951. Included in the Ferranti Mark 1’s instruction set was a ‘hoot’ command, which enabled the machine to give auditory feedback to its operators. Looping and timing of the ‘hoot’ commands allowed the user to output pitched musical notes, a feature that enabled the Mk1 to produce the oldest existing recording of computer music (the earliest reported but unrecorded computer music piece was created earlier in the same year by the CSIR Mk1 in Sydney, Australia). The recording was made by the BBC towards the end of 1951, programmed by Christopher Strachey, a maths teacher at Harrow and a friend of Alan Turing.
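The ‘hoot’ technique (repeating a short pulse in a timed loop so that the loop rate becomes the perceived pitch) can be sketched as follows; the pulse width and sample rate are arbitrary modern stand-ins for the machine's actual timings:

```python
def hoot_tone(pitch_hz, dur_s, sr=44100, hoot_samples=4):
    """A timed loop of 'hoots': each hoot is a short pulse, and repeating
    it at a fixed interval produces a tone whose perceived pitch is the
    loop's repetition rate."""
    period = int(sr / pitch_hz)  # samples between successive hoots
    return [1.0 if t % period < hoot_samples else 0.0
            for t in range(int(dur_s * sr))]
```

Halving the loop period doubles the repetition rate and so raises the pitch by an octave, which is exactly how looping and timing gave the Mk1 a melodic range.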

Ferranti Mk1


 


Sources

http://www.cs.man.ac.uk/CCS/res/res62.htm

http://www.computer50.org/mark1/FM1.html

CSIR Mk1 & CSIRAC, Trevor Pearcey & Geoff Hill, Australia, 1951

Trevor Pearcey at the CSIR Mk1


CSIRAC was an early digital computer designed by the British engineer Trevor Pearcey as part of a research project at the CSIRO (the Sydney-based Radiophysics Laboratory of the Council for Scientific and Industrial Research) in the early 1950s. CSIRAC was intended as a prototype for a much larger machine and therefore included a number of innovative ‘experimental’ features, such as video and audio feedback, designed to allow the operator to test and monitor the machine while it was running. As well as several optical screens, the CSIR Mk1 had a built-in Rola 5C speaker mounted on the console frame. The speaker was an output device used to alert the programmer that a particular event had been reached in the program; it was commonly used for warnings, often to signify the end of the program, and sometimes as a debugging aid. The output to the speaker was raw data from the computer’s bus and consisted of an audible click. To create a more musical tone, multiple clicks were combined using a short loop of instructions; the timing of the loop gave a change in frequency and therefore an audible change in pitch.

A closeup of the CSIRAC console switch panel. Note the multiple rows of 20 switches used to set bits in various registers.


The first piece of digital computer music was created by Geoff Hill and Trevor Pearcey on the CSIR Mk1 in 1951, as a way of testing the machine rather than as a musical exercise. The music consisted of excerpts from popular songs of the day: ‘Colonel Bogey’, ‘Bonnie Banks’, ‘Girl with Flaxen Hair’ and so on. The work was perceived as a fairly insignificant technical test and wasn’t recorded or widely reported:

An audio reconstruction of CSIRAC playing Colonel Bogey (c.1951)
CSIRAC plays In Cellar Cool with a simulation of CSIRAC’s room noises.

CSIRAC – the University’s giant electronic brain – has LEARNED TO SING!

…it hums, in bathroom style, the lively ditty, Lucy Long. CSIRAC’s song is the result of several days’ mathematical and musical gymnastics by Professor T. M. Cherry. In his spare time Professor Cherry conceived a complicated punched-paper programme for the computer, enabling it to hum sweet melodies through its speaker… A bigger computer, Professor Cherry says, could be programmed in sound-pulse patterns to speak with a human voice…
The Melbourne Age, Wednesday 27th July 1960

Later version of the CSIRAC at The University of Melbourne


…When CSIRAC began sporting its musical gifts, we jumped on his first intellectual flaw. When he played “Gaudeamus Igitur,” the university anthem, it sounded like a refrigerator defrosting in tune. But then, as Professor Cherry said yesterday, “This machine plays better music than a Wurlitzer can calculate a mathematical problem”…
Melbourne Herald, Friday 15th June 1956

Portable computer: CSIRAC on the move to Melbourne, June 1955


The CSIR Mk1 was dismantled in 1955 and moved to The University of Melbourne, where it was renamed CSIRAC. The Professor of Mathematics, Thomas Cherry, had a great interest in programming and music, and he created music with CSIRAC. During its time in Melbourne the practice of music programming on the CSIRAC was refined, allowing the input of music notation. The program tapes for a couple of test scales still exist, along with the popular melodies ‘So Early in the Morning’ and ‘In Cellar Cool’.

Music instructions for the CSIRAC by Thomas Cherry





Later version of the CSIRAC at The University of Melbourne



Sources

http://www.audionautas.com/2011/09/music-of-csirac.html

Australia’s First Computer Music, Common Ground Publishing, Paul Doornbusch pauld@koncon.nl

http://ww2.csse.unimelb.edu.au/dept/about/csirac/music/index.html

The ‘Groupe de Recherches Musicales’ Pierre Schaeffer, Pierre Henry & Jacques Poullin, France 1951

Console at GRM Paris

Console at GRM Paris showing the EMI mixing desk and parts of the Coupigny Synthesiser c1972

The GRM was an electro-acoustic music studio founded in 1951 by the musique concrète pioneer Pierre Schaeffer, the composer Pierre Henry and the engineer Jacques Poullin, based at the RTF (Radiodiffusion-Télévision Française) buildings in Paris. The studio was the culmination of over a decade’s work on musique concrète and sound objects by Schaeffer and others at the ‘Groupe de Recherches de Musique Concrète’ (GRMC) and the Studio d’Essai. The new studio was designed around Schaeffer’s sound theories, later outlined in his book ‘Traité des objets musicaux’ (Treatise on Musical Objects):

“musique concrète was not a study of timbre, it is focused on envelopes, forms. It must be presented by means of non-traditional characteristics, you see … one might say that the origin of this music is also found in the interest in ‘plastifying’ music, of rendering it plastic like sculpture…musique concrète, in my opinion … led to a manner of composing, indeed, a new mental framework of composing” (James 1981, 79). Schaeffer had developed an aesthetic that was centred upon the use of sound as a primary compositional resource. The aesthetic also emphasised the importance of play (jeu) in the practice of sound based composition. Schaeffer’s use of the word jeu, from the verb jouer, carries the same double meaning as the English verb play: ‘to enjoy oneself by interacting with one’s surroundings’, as well as ‘to operate a musical instrument’
(Pierre Henry. Dack 2002).

Along with the WDR Studio in Germany, the GRM/GRMC was one of the earliest electro-acoustic music studios and attracted many notable avant-garde composers of the era including Olivier Messiaen, Pierre Boulez, Jean Barraqué, Karlheinz Stockhausen, Edgard Varèse, Iannis Xenakis, Michel Philippot, and Arthur Honegger. Compositional output from 1951 to 1953 comprised ‘Étude I’ (1951) and ‘Étude II’ (1951) by Boulez, ‘Timbres-durées’ (1952) by Messiaen, ‘Konkrete Etüde’ (1952) by Stockhausen, ‘Le microphone bien tempéré’ (1952) and ‘La voile d’Orphée’ (1953) by Pierre Henry, ‘Étude I’ (1953) by Philippot, ‘Étude’ (1953) by Barraqué, the mixed pieces ‘Toute la lyre’ (1951) and ‘Orphée 53’ (1953) by Schaeffer/Henry, and the film music ‘Masquerage’ (1952) by Schaeffer and ‘Astrologie’ (1953) by Pierre Henry.

The original design of the studio followed strict Schaefferian theory and was completely centred around tape manipulation, recording and editing. Several novel ‘tape instruments’ were built and integrated into the studio setup, including the phonogène (three versions were built: the Universal, Chromatic and Sliding phonogènes) and the Morphophone.


The Phonogène Chromatique

The phonogène

The Phonogène was a one-off multi-headed tape instrument designed by Jacques Poullin. In all, three versions of the instrument were created:

  • The Chromatic phonogène: a tape loop driven by multiple capstans at varied speeds allowed the production of short bursts of tape sounds at varying pitches, defined by a small one-octave keyboard.
  • The Sliding phonogène: created a continuous tone by varying the tape speed via a control rod.
  • The Phonogène Universal: allowed transposition of pitch without altering the duration of the sound (and vice versa), obtained through a rotating magnetic head called the ‘Springer temporal regulator’ (a design similar to that later used in VHS video recorders).
The Morphophone


The morphophone

The Morphophone was a type of tape loop-delay mechanism, again designed by Jacques Poullin. A tape loop was stuck to the edge of a 50 cm diameter rotating disk, and the sound was picked up at varying points on the tape by ten playback heads (alongside one recording head and one erasing head). The sound from each playback head was passed through its own bandpass filter and amplified.
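Functionally the Morphophone is a multi-tap loop delay: each playback head contributes a copy of the signal at a fixed delay, voiced by its own filter and amplifier. A sketch with the per-head bandpass filters simplified to plain gains (head positions, gains and loop length are all arbitrary values for illustration):

```python
def morphophone(signal, head_positions, head_gains, loop_len):
    """Multi-tap loop delay in the spirit of the Morphophone: each playback
    head sits at a fixed delay (in samples) around the loop and feeds its
    own gain stage, standing in here for the per-head bandpass filter
    and amplifier."""
    n = len(signal) + loop_len
    out = [0.0] * n
    for pos, gain in zip(head_positions, head_gains):
        for i, x in enumerate(signal):
            out[i + pos] += gain * x
    return out
```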

Images from the Groupe de Recherches Musicales Studio







Sources

GRM Archive

http://www.backspinpromo.com/recollectionGRM.html

The ‘Chamberlin’, Harry Chamberlin, USA, 1951

Chamberlin M1001

The Chamberlin was an early precursor of the modern digital sampler, using a complex mechanism that stored analogue audio recordings on strips of audio tape – one tape for each key. When a key on the keyboard was pressed, the tape strip played forward; when it was released, the playhead returned to the beginning of the tape. The note had a limited length – eight seconds on most models. The instrument was designed as an ‘amusing’ novelty instrument for domestic use but later found favour with rock musicians in the sixties and seventies.

The first Chamberlin Model 200

All the original sounds were recordings of the Lawrence Welk Orchestra made by Harry Chamberlin at his home in California. The recording technique produced a clean, unaffected sound, but with a heavy vibrato added by the musicians. The full set of sounds that came with the Chamberlin was:

  • Keyboards: Marimba, Piano, Vibes (with vibrato), Bells (glockenspiel), Organ, Tibia Organ, Kinura Organ, Harpsichord, Accordion, Electric Harpsichord and Flute/String Organ.
  • Brass: Alto Sax, Tenor Sax, Trombone, Trumpet, French Horn, Do Wah Trombone, Slur Trombone and Muted Trumpet.
  • Wind: Flute, Oboe and Bass Clarinet.
  • Voice: Male Voice (solo) and Female Voice (solo).
  • Strings: 3 violins, Cello and Pizzicato violins.
  • Plucked strings: Slur Guitar, Banjo, Steel Guitar, Harp solo, Harp Roll, Harp 7th Arpeggio (harp sounds were not available to the public), Guitar and Mandolin.
  • Effects: Dixieland Band Phrases and Sound Effects.

In 1962 two Chamberlins were taken to Great Britain, where they were used as the basis of the design for the Mellotron keyboard:

The Chamberlin was invented in the US in 1946 by Harry Chamberlin who had the idea (allegedly) when setting up his portable tape recorder to record himself playing his home organ. It is rumoured that it occurred to him that if he could record the sound of a real instrument, he could make a keyboard instrument that could replay the sound of real instruments and thus the Chamberlin was born. Chamberlin’s idea was ‘simple’ – put a miniature tape playback unit underneath each key so that when a note was played, a tape of ‘real’ instruments would be played. At the time, the concept was totally unique.

In the ’50s, at least 100 Chamberlins were produced and to promote his instrument, Harry teamed up with a guy called Bill Fransen who was (allegedly) Harry’s window cleaner. Fransen was (allegedly) totally fascinated by this unique invention and subsequently became Chamberlin’s main (and only) salesman. However, there were terrible reliability problems with the Chamberlin and it had a very high (it is said 40%) failure rate with the primitive tape mechanism which resulted in tapes getting mangled.

Fransen felt that Chamberlin would never be able to fix these problems alone and so, unknown to Chamberlin (allegedly), Fransen brought some Chamberlins to the UK in the early ’60s to seek finance and a development partner. He showed the Chamberlin to a tape head manufacturer, Bradmatics, in the Midlands and the Bradley brothers (Frank, Leslie and Norman who owned Bradmatics) were (allegedly) very impressed with the invention and (allegedly) agreed to refine the design and produce them for Fransen but…Under the mistaken impression that the design was actually Fransen’s (allegedly)!

A new company, Mellotronics, was set up in the UK to manufacture and market this innovative new instrument and work got underway with the Bradley brothers (allegedly) unaware that they were basically copying and ripping off someone else’s idea! Of course, it wasn’t long before Harry Chamberlin got to hear of this and he too went to the UK to meet with the Bradley brothers. After some acrimonious discussions, the two parties settled with Harry selling the technology to the Bradleys. Mellotronics continued to develop their ‘Mellotron’ whilst Harry returned to the US where he continued to make his Chamberlins with his son, Richard, in a small ‘factory’ behind his garage and later, a proper factory in Ontario, a small suburb in Los Angeles. In total, they made a little over 700 units right through until 1981. Harry died shortly afterwards.

But whatever happened in those early meetings almost 40 years ago is inconsequential – the fact of the matter is that the two instruments are almost indistinguishable from each other. Each key has a playback head underneath it and each time a key is pressed, a length of tape passes over it that contains a recording of a ‘real’ instrument. The tape is of a finite length lasting about eight seconds and a spring returns it to its start position when the note is finished. As you can see from the photograph above though, the Chamberlin is smaller (although some mammoth dual-manual Chamberlins were also produced!).

Many claim that the Chamberlin had a better sound – clearer and more ‘direct’ …. which is strange because the Mellotron was (allegedly) better engineered than the Chamberlin. But there is a lot of confusion between the two instruments not helped by the fact that some Chamberlin tapes were used on the Mellotron and vice versa…. so even though the two companies were in direct competition with each other, they shared their sounds….. weird!

It also seems that some users were also confused and credited a ‘Mellotron’ on their records when in fact it might well have been a Chamberlin that they used (allegedly). However, given the similarities between the two, this confusion is understandable and it’s a tribute to Mellotronics’ marketing that they got the upper hand on the original design.

To be honest, the whole story is shrouded in hearsay and music history mythology and we may never know the truth (especially now that the original people involved are sadly no longer with us) but regardless of this, the Bradley brothers were obviously more successful with their marketing of the idea than Chamberlin himself. Although it was originally aimed at the home organ market with cheesy rhythm loops and silly sound effects, the Mellotron went on to become a legend in the history of modern music technology and the mere mention of its name can invoke dewy eyed nostalgia amongst some people. On the other hand, however, few people have even heard of the Chamberlin which is sad because Harry Chamberlin’s unique invention preceded the Mellotron by some fifteen years or more and by rights, it is the Chamberlin that deserves the title of “the world’s first sampler”.

Nostalgia has a lovely Chamberlin string sound that captures the original Chamberlin character quite authentically. Unlike the original, though, the sound is looped but, like the original, it has the same keyboard range (G2-F5) and is not velocity sensitive.

Quoted from: http://www.hollowsun.com/vintage/chamberlin/

Sources

http://www.hollowsun.com/vintage/chamberlin/

The ‘Maestrovox’, Victor Harold Ward, United Kingdom, 1952

Maestrovox

Maestrovox Consort Model

The Maestrovox was a monophonic portable vacuum-tube organ built by Maestrovox Electronic Organs in Middlesex, UK. The instrument was one of many designs similar to the Clavioline, Tuttivox and Univox, intended as a piano-attachment instrument for the dance bands and light orchestras of the day. The Maestrovox was produced from 1952 onwards and came in a number of models: the Consort, Consort De-Luxe, Coronation, and a later version, the Orchestrain, that mechanically triggered notes from a piano keyboard.

 

Maestrovox Consort

Maestrovox Consort

 

Maestrovox – By Charles Hayward of ‘This Heat’

I used a Maestrovox keyboard with This Heat, set up just to the left of my drum kit (alongside a Bontempi electronic organ with about 3 sounds). It can be heard throughout This Heat’s recordings and was used onstage for most of the group’s gigs.

The Maestrovox was a fascinating instrument, it was advertised second-hand in the Evening News small-ads, maybe 1966 or 68, I didn’t really know what it was that I was going to see, just that I wanted to use an electronic keyboard in conjunction with domestic tape machines and this was going fairly cheap, £15 or so. I persuaded my brother to go half although in truth he never really used it. When we got it back home the unusual qualities of the instrument slowly became clear.

Firstly it was monophonic, with priority given to the highest note played; this was heavenly, you could ‘yodel’ between notes, sometimes using the lower note as a drone, sometimes playing contrary lines in 2 hands with only 1 note being heard at any time, sort of ‘strobing’ between 2 places. The keys were highly sprung, so that on the black notes, if played very quickly, the springs would activate even faster and the rate of change between the higher played note and a sustained lower sound would be very distinctive. This sound was used at the beginning and end of the 1st This Heat album and also played very quietly for about 20 minutes immediately before a gig, a bit like a distant alarm.

Tuning was an unsolvable problem that became a fantastic strength and the predominant reason for using the keyboard with the group. There were a couple of little tuning knobs on the console of filters that were changed with a screwdriver. No matter how I tried I could not find the place where the keyboard was in tune with itself, the nearest I could get was the low D to have its octave on the E 9 notes higher, in other words a 14 note octave (instead of the usual 12). Consequently every note was slightly flat or sharp. This meant that melodies had to be re-learnt when using the Maestrovox so that the tuning would bend in and out with other ‘orthodox’ tuned instruments. When played at the ‘back’ of the group’s sound the result would be to inexplicably ‘widen’ the sound.

The 4-step vibrato didn’t seem to work properly and had the effect of flattening the tuning by very small amounts, a little more than a quarter tone at the fullest extent. A series of filters changed the sound, 5 or 6 little buttons that could be engaged in different permutations. A 2-page pamphlet had a list of filter combinations that imitated ‘real’ instruments (always a doomed idea). I seem to remember that 13 was bassoon in the lower register (a particular favourite) and oboe in the higher register. These sound filters also affected the tuning. Another row of 3 buttons changed the attack parameters, without a little ‘slope’ it was kind of ‘clicky’, like the sound was being switched on.

The keyboard was about the size of a PSS Yamaha (which is sometimes confusingly described as a ’midi’ keyboard), and had a range of perhaps 3 octaves. The Maestrovox was designed to sit under a piano keyboard as a sort of addition to the acoustic instrument, although the tuning must have made any orthodox use hilarious. There was a sort of tripod that was supposed to hold it up against the underneath of the piano keyboard, this looked very shaky and unreliable, so my dad knocked up a stand, something like a shrunken Hammond. Valves glowed inside the keyboard which was connected via a multi-pin plug and lead to an amplifier that also served as a box for transportation. Both mains electricity and sound signal were conveyed by this lead. To boost the signal I connected a pair of crocodile clips to the speaker and this was then plugged in to a larger amplifier. I’m not sure if a connection socket was fixed for ease and reliability when This Heat started touring more regularly. The volume was controlled by a knee-operated lever (I remember harmoniums used this method too), I found a way of holding this in place and used a foot swell pedal instead.

It blew up sometime before This Heat began and it was quite a problem getting replacement valves. During the recording of ‘Cenotaph’ on the Deceit album it blew up again, in fact the track starts out with 2 tracks of Maestrovox and by the end there’s only 1 because it stopped working during the overdub. Getting replacement parts was time consuming, perhaps impossible, and then other things meant that a lot of equipment held in our rehearsal studio Cold Storage got lost, including the Maestrovox. By this time This Heat had split and its sound was so much part of that group that I was both sad and pleased to see it go.

Charles Hayward


Sources

http://www.debbiecurtis.co.uk/id99.html

The ‘Clavivox’ Raymond Scott, USA, 1952


Raymond Scott’s Clavivox

The Clavivox was invented by the composer and engineer Raymond Scott circa 1950. Scott was the leader of the Raymond Scott Quintet, working originally for the CBS radio house band and later composing eccentric but brilliant scores for Warner Bros cartoons such as ‘Looney Tunes’ and ‘Merrie Melodies’. Scott incorporated elements of jazz, swing, pop music and avant-garde modern music into his compositions using a highly personal and unusual form of notation and editing. To the exasperation of his musicians, Scott would record all the band sessions on lacquer discs and later, using a cut-and-paste technique, edit blocks of music together into complex and almost unplayable compositions. In 1946 Scott founded Manhattan Research, a commercial electronic music studio he designed and built himself, featuring his own electronic devices alongside other electronic instruments of the period. The studio had many unique sound processors and generators, including ‘infinitely variable envelope shapers’, ‘infinitely variable ring modulators’, ‘chromatic electronic drum generators’ and ‘variable wave shape generators’. Scott built his first electronic musical instrument in 1948, dubbed ‘The Karloff’; this machine was designed to create sound effects for advertisements and films and was said to be able to imitate sounds such as voices, the sizzle of frying steak and jungle drums.


Raymond Scott in his studio with the Clavivox

In the 1950s Scott started to develop a commercial keyboard instrument, the Clavivox or keyboard Theremin (completed circa 1956). The Clavivox was a vacuum-tube oscillator instrument controlled by a three-octave keyboard (with a sub-assembly circuit designed by a young Bob Moog). The instrument was designed to simulate the continuous gliding tone of the Theremin while remaining playable from a keyboard. The machine was fitted with three ‘key’ controls to the left of the keyboard that controlled the attack of the note or cut the note off completely; these keys could be played with the left hand to shape the envelope characteristics of the note. Other controls on the Clavivox’s front panel were for fine and coarse tuning and for vibrato speed and depth. Scott used the Clavivox in his cartoon scores for sound effects (similar to the ‘eerie whine’ of the Theremin) and for string and vocal sounds. The Clavivox was intended for mass production, but the complexity and fragility of the instrument made this venture impractical.

During the 1960s Scott built a number of one-off electronic instruments and began experimenting with analogue pitch-sequencing devices. One of the prototype instruments built during the sixties was a huge machine standing six feet high and covering 30 feet of Scott’s studio wall. The pitch sequencer was built using hundreds of telephone-exchange switch relays, and the sounds were generated by a bank of 16 oscillators, a modified Hammond organ, an Ondes Martenot and two Clavivoxes. The noise produced by the clicking switches had to be dampened by a thick layer of audio insulation. Scott used the machine to compose several early electronic music pieces in the 1960s, including three volumes of synthesised lullabies, ‘Soothing Sounds for Baby’ (1963), predating minimalist music’s (Philip Glass, Steve Reich) use of repetition and sequences by 20 years.

Trailer of ‘Deconstructing Dad’, a documentary on Raymond Scott.

Scott’s final and most ambitious machine, christened the ‘Electronium’ (not to be confused with the Hohner Electronium), was the culmination of his work using pitch and rhythm sequencers (the design used a number of components by Robert Moog, who had also contributed to the Clavivox). Scott described the machine as an:

“instantaneous composition-performance machine, The Electronium is not a synthesizer — there is no keyboard [it was manipulated with knobs and switches] — and it cannot be used for the performance of existing music. The instrument is designed solely for the simultaneous and instantaneous composition-performance of musical works”

Raymond Scott

In 1972, Scott became the head of electronic music research and development for Motown Records. After his retirement, Scott used MIDI technology to continue composing until 1987, when he suffered the first of several debilitating strokes. Raymond Scott died in 1994.

Raymond Scott: born Harry Warnow, September 10, 1908, Brooklyn, NY; died February 8, 1994, North Hills, Los Angeles, California

 


Sources:

The Raymond Scott Archive, PO Box 6258, Hoboken, New Jersey 07030, USA.

The ‘ANS Synthesiser’ Yevgeny Murzin. Russia, 1958

The ANS Synthesiser

The ANS Synthesiser at the Glinka Museum, Moscow.

The ANS Synthesiser takes its name and inspiration from the Russian composer Alexander Nikolayevich Scriabin (A.N.S.), whose mystical theories of a unified art of sound and light had a huge effect on avant-garde composers and theoreticians in Russia during the early Soviet period. Murzin’s objective was to build an instrument that combined graphics, light and music, giving the composer an unlimited palette of sound and freeing them from the restrictions of instrumentation and musicians: a direct composition-to-music tool.

The ANS was the culmination of several decades of exploration in sound and light by composers and artists such as Andrei Aramaazov, Boris Yankovsky, Evgeney Sholpo and others. To generate sound it uses the established photo-optic sound recording technique used in cinematography; this technique makes it possible to obtain a visible image of a sound wave, as well as to realise the opposite goal – synthesising a sound from an artificially drawn sound wave.


One of the photo-optical glass disks of the ANS

One of the main features of Murzin’s design is its photo-optic generator, consisting of rotating glass disks, each carrying 144 optic phonograms (tiny graphic representations of sound waves which, astonishingly, were hand-drawn on each disk) of pure tones, or sound tracks. A bright light beam is projected through the spinning disks onto a photocell, producing a voltage tone at the frequency drawn on the disk; the track nearest the centre of the disc therefore has the lowest frequency, and the track nearest the edge the highest. With a unit of five such disks rotating at different speeds, the ANS is able to produce 720 pure tones, covering the whole range of audible tones.

The ink covered coding field of the ANS
The programming field of the ANS

The composer selects the tones by using a coding field (the “score”), which is essentially a glass plate covered with an opaque, non-drying black mastic. The vertical axis of the coding field represents pitch and the horizontal axis time, in a way that is very similar to standard music notation. The score moves past a reading device which allows a narrow aperture of light to pass through the scraped-off part of the plate onto a bank of twenty photocells that send a signal to twenty amplifiers and bandpass filters. The narrow aperture reads the length of the scraped-off part of the mastic during its run and transforms it into a sound duration. The minimum interval between each of the tones is 1/72 of an octave, or 1/6 of a semitone, which is only just perceptible to the ear. This allows for natural glissando effects and microtonal and non-western scale compositions to be scored. The ANS is fully polyphonic and will generate all 720 pitches simultaneously if required – a vertical scratch would accomplish this, generating white noise.
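The arithmetic behind that grid is compact: 720 tracks spaced 1/72 of an octave apart span exactly ten octaves, and six tracks add up to one equal-tempered semitone. A sketch of the resulting frequency grid; the 20 Hz reference anchoring the lowest track is an assumption for illustration, not a documented ANS calibration:

```python
# 1/72-octave spacing between adjacent tracks of the ANS coding field.
STEP = 2 ** (1 / 72)                      # frequency ratio, track to track
freqs = [20.0 * STEP ** k for k in range(720)]  # assumed 20 Hz lowest track

semitone = STEP ** 6                      # six tracks = one semitone, 2**(1/12)
octaves = 720 / 72                        # the full grid covers 10 octaves
```

Ten octaves above 20 Hz reaches roughly 20 kHz, which matches the claim that the grid covers the whole audible range.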

Stanislav Kreichi at the ANS

Stanislav Kreichi explaining the coding field of the ANS

The non-drying mastic allows for immediate correction of the resulting sounds: portions of the plate that generate superfluous sounds can be smeared over, and missing sounds can be added. The speed of the score – the tempo of the piece – can also be smoothly regulated, all the way to a full stop via a handle at the front of the machine.

Murzin built only one version of the ANS; a working example currently resides at the Glinka State Central Museum of Musical Culture in Moscow. Composers who worked with the instrument include Martinov, Edison Denisov, Sofia Gubaidulina, Alfred Schnittke and Alexander Nemtin.

“I began experimenting with the ANS synthesizer when I joined Murzin’s laboratory in 1961. The most attractive method of composing for me was the freehand drawing of graphic structures on the score, including random and regulated elements, which are also transformed into sounds, noises and complex phonations. This offers new possibilities for composing, especially using variable tempo and volume. [...]

All this makes it possible for the composer to work directly and materially with the production of sound.

An example of an ANS score, picturing graphic structures that were drawn freehand on the mastic-covered plate.

In 1961 I composed the music for the film Into Space. Artist Andrew Sokolov’s cosmic paintings appeared as moving images in the film, smoothly changing into each other and dissolving into fragments by means of cinematic devices. The light and color of Sokolov’s cosmic landscapes generated complex phonations and sound transitions in my mind. The movement of the cosmic objects on the screen initiated the rhythms of my music. I tried to express all this by tracing it on the ANS’s score, making corrections after listening to the resultant sounds in order to gradually obtain the suitable phonation. I finally felt that the sounds produced by the ANS synthesizer on the basis of my freehand graphic structures correlated perfectly with the pictures on the screen. From 1967 to 1968 I experimented with moving timbres on the ANS and studied different modes of animating electronic sounds. During this period, I composed the following pieces for performance on the ANS: “Echo of the Orient”, “Intermezzo”, “North Song”, “Voices and Movement” and “Scherzo”. All of these were composed traditionally for orchestra previous to my work with the ANS. When I coded these orchestral scores on the ANS, I wanted to solve the problem of animating electronic sounds, so that the phonation of the ANS could approach that of the orchestra. These pieces appeared on a recording entitled ANS, which was produced in 1970 by the MELODIA record label.

Later I used the ANS to help me compose the music for a puppet show that incorporated the use of light called ‘Fire of Hope’, which was based on Pablo Picasso’s works. The play was performed in 1985 at a festival in Moscow and in 1987 at a festival in Kazan by the Moscow group Puppet Pantomime, under the artistic direction of Marta Tsifrinovich. My composition Variations, written for the ANS, was also performed during the 1987 Kazan festival.

In 1991, I began working on the music for the slide composition ‘Rorschach Rhapsody’ by P.K. Hoenich, who is known for his light pictures created with sunrays. The composition consisted of 40 sun projections with abstract and half-abstract forms. ‘Rorschach Rhapsody’ was performed at the symposium of the International Society for Polyaesthetic Education in September 1992 in Mittersill, Austria. In 1993, I collaborated with Valentina Vassilieva to compose a suite of 12 pieces entitled The Signs of the Zodiac. These compositions used the ANS along with the sounds of voices, natural noises and musical instrumentation. I am currently working on a fantastic piece named “An Unexpected Visit,” for ANS synthesizer with transformed natural noises and percussion instruments”

Stanislav Kreichi 2001

Yevgeny Alexandrovich Murzin. Russia 1914 – 1970

Biographical Information:

Murzin began his academic life studying municipal building at the Moscow Institute of Engineers. When Nazi Germany invaded the USSR in 1941 he joined the Soviet Artillery Academy as a senior technical lieutenant. During his military service Murzin was responsible for developing an electro-mechanical anti-aircraft detector which was later adopted by the Soviet army. After the war Murzin joined the Moscow Higher Technical School, where he completed a thesis on Thematics and was involved in the development of military equipment including an artillery sound-ranging device, instruments for the guidance of fighters to enemy bombers, and air-raid defence systems.

Murzin had a reputation as an admirer of jazz, but when a colleague introduced him to the works of Scriabin, Murzin became obsessed with the composer’s work and synaesthetic concepts. It was these ideas that inspired Murzin to begin his ‘Universal Synthesiser’ project around 1948, which was to lead to the ANS synthesiser a decade later. Murzin presented his proposal to Boris Yankovsky and N.A. Garbuzov at the Moscow Conservatory where, despite initial reluctance, he was given space to develop the instrument. Despite almost universal disinterest in his project, Murzin continued over the next decade to develop the ANS prototype, funding it from his own finances and working in his spare time with the help of several friends (including the composers E.N. Artem’eva, Stanislav Kreichi, Nikolai Nikolskiy and Peter Meshchaninov).

The first compositions using the ANS were completed in 1958 and exhibited in London and Paris. The ANS was moved to the Scriabin Museum (ul. Vakhtangov 11, Moscow) in 1960 and formed the basis of the USSR’s first electronic music studio, which was used throughout the sixties by many world-famous composers including Schnittke, Gubaidulina, Artem’ev, Kreichi, Nemtin and Meshchaninov.

Murzin and the ANS


Sources

http://snowman-john.livejournal.com/33729.html

Andrei Smirnov: Sound in Z – Experiments in Sound and Electronic Music The Theremin Institute, Moscow

Boris Yankovsky “The Theory and Practice of Graphic Sound”. Leningrad, 1939-1940

“Composer As Painter” excerpt from “Physics and Music”, Detgiz, 1963

Bulat M. Galeyev, “Musical-Kinetic Art in the USSR”, Leonardo, No. 1, 41–47 (1991)