‘GROOVE System’, Max Mathews & Richard Moore, USA, 1970

Max Mathews with the GROOVE system


In 1967 the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs, exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital-analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface: playing notes, turning knobs and so on. The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than on generating the sound itself:

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

Richard Moore with the Groove System


The system, written in assembler, ran only on the Honeywell DDP-224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant that it was also possible to create libraries of programming routines, so that users could create their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartók was performed with GROOVE at its first demonstration, at a conference on Music and Technology in Stockholm organised by UNESCO in 1970. Among the participants were several leading figures in electronic music, such as Pierre Schaeffer and Jean-Claude Risset.

“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University

The GROOVE System at the Bell Laboratories circa 1970


The GROOVE system consisted of:

  • 14 DAC control lines, scanned 100 times per second (twelve 8-bit and two 12-bit);
  • An ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three by the 3-dimensional movement of a joystick controller;
  • Two speakers for audio output;
  • A special keyboard interfaced with the knobs to generate on/off signals;
  • A teletype keyboard for data input;
  • A CDC-9432 disk storage unit;
  • A tape recorder for data backup.
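GROOVE’s central trick, sampling the performer’s controls on a fixed clock, storing the resulting ‘functions of time’, and replaying them through the DACs, can be sketched as follows. This is an illustrative simplification, not Bell Labs code; the control names and the swept-knob performer are hypothetical:

```python
# Minimal sketch of GROOVE's record/replay idea (assumed, simplified).
TICKS_PER_SEC = 100  # GROOVE scanned its control lines 100 times per second

def record(read_controls, seconds):
    """Sample the control interface once per tick and store every frame."""
    return [read_controls(t / TICKS_PER_SEC)
            for t in range(int(seconds * TICKS_PER_SEC))]

def replay(stored, write_dacs):
    """Re-run a stored performance by sending each frame back out."""
    for frame in stored:
        write_dacs(frame)  # in GROOVE these values drove the 14 DAC lines

# Hypothetical performer: one knob swept from 0 to 1 over two seconds.
performance = record(lambda t: {"knob1": min(t / 2.0, 1.0)}, seconds=2.0)
```

Because the performance exists as stored data rather than recorded audio, it can be reviewed, edited and re-run, which is exactly what distinguished GROOVE from a tape recorder.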



Antecedents to GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980, due both to the high cost of the system – some $20,000 – and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly.


Sources

Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.

F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.

http://www.vintchip.com/mainframe/DDP-24/DDP24.html

‘MUSIC N’, Max Vernon Mathews, USA, 1957

Max Mathews was a pioneering, central figure in computer music. After completing his studies in engineering at the California Institute of Technology and the Massachusetts Institute of Technology in 1954, Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘Music’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘Music N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘Music N’ was the first time a computer had been used to investigate audio synthesis (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specific musical objectives) and set the blueprint for computer audio synthesis that remains in use to this day in programmes like Csound, MaxMSP and SuperCollider, and in graphical modular programmes like Reaktor.

IBM 704 System


“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”

Max Mathews. “Horizons in Computer Music,” March 8–9, 1997, Indiana University

MUSIC I 1957

Music I was written in assembler/machine code to make the most of the limited capabilities of the IBM 704 computer. The audio output was a simple monophonic triangle wave tone with no attack or decay control; it was only possible to set the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories was, in those years, the only institution in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews says:

“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.”

In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I, titled ‘The Silver Scale’ (often credited as the first proper piece of computer-generated music), and later in the same year a one-minute piece called ‘Pitch Variations’, both of which were released on an anthology called ‘Music From Mathematics’, edited by Bell Labs in 1962.

Mathews and the IBM 7094


MUSIC II 1958

MUSIC II was an updated, more versatile and functional version of Music I. It still used assembler, but targeted the transistor-based (rather than valve-based) and much faster IBM 7094 series. MUSIC II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.
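The wavetable technique MUSIC II introduced remains the standard way to generate periodic tones cheaply: one cycle of a waveform is computed once into a table, and pitch is changed by stepping through that table at different rates. A minimal sketch of the idea (table size, sample rate and the sine shape are assumptions for illustration, not MUSIC II’s actual parameters):

```python
import math

TABLE_SIZE = 512     # samples in one stored cycle (assumed)
SR = 8000            # output sample rate in Hz (assumed)

# Precompute one cycle of a waveform; MUSIC II's sixteen wave shapes
# would each be a different table like this one.
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(freq, n_samples, sr=SR):
    """Generate a tone by reading the table at a pitch-dependent rate."""
    out, phase = [], 0.0
    step = TABLE_SIZE * freq / sr  # table positions to advance per sample
    for _ in range(n_samples):
        out.append(table[int(phase) % TABLE_SIZE])
        phase += step
    return out

tone = wavetable_osc(440.0, 800)  # 0.1 s of a 440 Hz tone
```

The appeal in 1958 is the same as today: the expensive waveform computation happens once, and playback is just table lookups.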

MUSIC III 1960

“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

The introduction of Unit Generators (UGs) in MUSIC III was an evolutionary leap in music computing, as evidenced by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program (an oscillator, filter, envelope shaper and so on), allowing the composer to flexibly connect multiple UGs together to generate a specific sound. A separate ‘score’ stage was added where sounds could be arranged chronologically in a musical fashion. Each event was assigned to an instrument, and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch-card which, while still complex and archaic by today’s standards, was the first time a computer program used a paradigm familiar to composers.
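The unit-generator idea can be illustrated with a short sketch, a modern analogy in Python rather than MUSIC III code: each UG is a small object that produces one sample per call, and an ‘instrument’ is built by plugging UGs into one another. All names and values here are assumptions for illustration:

```python
import math

SR = 8000  # sample rate, samples per second (assumed for this sketch)

class Oscillator:
    """Unit generator: a sine oscillator with fixed frequency and amplitude."""
    def __init__(self, freq, amp=1.0):
        self.freq, self.amp, self.phase = freq, amp, 0.0
    def next(self):
        out = self.amp * math.sin(self.phase)
        self.phase += 2 * math.pi * self.freq / SR
        return out

class Envelope:
    """Unit generator: shapes another UG's output with a linear
    attack and decay (10% of the note each)."""
    def __init__(self, source, dur):
        self.source, self.dur, self.n = source, dur, 0
    def next(self):
        t = self.n / (self.dur * SR)   # position within the note, 0..1
        self.n += 1
        gain = min(t / 0.1, (1.0 - t) / 0.1, 1.0) if t < 1.0 else 0.0
        return gain * self.source.next()

# A note event: patch an oscillator into an envelope, then pull samples.
note = Envelope(Oscillator(freq=440.0, amp=0.5), dur=0.5)
samples = [note.next() for _ in range(int(0.5 * SR))]
```

The score’s job in MUSIC III was essentially to supply the parameter values (here 440 Hz, amplitude 0.5, half a second) for each such patch at each point in time.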

“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

Max Mathews and Joan Miller at Bell labs


MUSIC IV

MUSIC IV, completed in 1963, was the result of a collaboration between Max Mathews and Joan Miller, and was a more complete version of the MUSIC III system using a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.

“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews 1980

MUSIC IVB, IVBF and IVF

Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team, namely MUSIC IVB at Princeton and MUSIC IVBF at the Argonne Labs. These versions were built using FORTRAN rather than assembler.

MUSIC V

MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and built MUSIC V in FORTRAN, specifically for the IBM 360 series computers. This meant that the programme was faster and more stable, and could run on any IBM 360 machine outside Bell Laboratories. The data entry procedure was simplified, in both the Orchestra and the Score sections. One of the most interesting new features was the definition of new modules that allowed users to import analogue sounds into MUSIC V. Mathews persuaded Bell Labs not to copyright the software, meaning that MUSIC V was probably one of the first open-source programmes, ensuring its adoption and longevity and leading directly to today’s Csound.

“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.”
Max Mathews 2011 interview with Geeta Dayal, Frieze.

MUSIC V marked the end of Mathews’s involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore in 1970) and the ‘Radio Baton’ (with Tom Oberheim in 1985).

YEAR VERSION PLACE AUTHOR
1957 Music I Bell Labs (New York) Max Mathews
1958 Music II Bell Labs (New York) Max Mathews
1960 Music III Bell Labs (New York) Max Mathews
1963 Music IV Bell Labs (New York) Max Mathews, Joan Miller
1963 Music IVB Princeton University Hubert Howe, Godfrey Winham
1965 Music IVF Argonne Laboratories (Chicago) Arthur Roberts
1966 Music IVBF Princeton University Hubert Howe, Godfrey Winham
1966 Music 6 Stanford University Dave Poole
1968 Music V Bell Labs (New York) Max Mathews
1969 Music 360 Princeton University Barry Vercoe
1969 Music 10 Stanford University John Chowning, James Moorer
1970 Music 7 Queen’s College (New York) Hubert Howe, Godfrey Winham
1973 Music 11 M.I.T. Barry Vercoe
1977 Mus10 Stanford University Leland Smith, John Tovar
1980 Cmusic University of California Richard Moore
1984 Cmix Princeton University Paul Lansky
1985 Music 4C University of Illinois James Beauchamp, Scott Aurenz
1986 Csound M.I.T. Barry Vercoe


Sources

http://www.computer-history.info/Page4.dir/pages/IBM.704.dir/

http://www.musicainformatica.org

Curtis Roads, Interview with Max Mathews, Computer Music Journal, Vol. 4, 1980.

‘Frieze’ Interview with Max Mathews. by Geeta Dayal

An Interview with Max Mathews.  Tae Hong Park. Music Department, Tulane University

The ‘Synthetic Tone’ Sewall Cabot, USA, 1918

Patent documents of Cabot's Synthetic Tone Instrument


The ‘Synthetic Tone’ was an electro-mechanical instrument similar to, but much smaller than, the Choralcelo, designed by the Brookline, Massachusetts electrical engineer Sewall Cabot (Quincy Sewall Cabot, b. 4 September 1901, New York; d. March 1957, New York). The instrument created complex tones by resonating metal bars with a tone-wheel-generated electromagnetic charge.

“One object of my present invention is to provide an improved musical instrument of relatively small cost and small dimensions in comparison to those of a pipe-organ, but capable of attaining all the musically useful results of which a pipe-organ is capable. Another object is to provide an instrument that will produce desirable tonal effects not heretofore obtainable from a pipe-organ.”

Sewal Cabot Patent documents


Sources

Curtis Roads, ‘Early Electronic Music Instruments: Time Line 1899–1950’, Computer Music Journal, Vol. 20, No. 3 (Autumn 1996), pp. 20–23. The MIT Press.

The ‘Oscillon’ William Danforth & William Swann, USA, 1937


Mrs Danforth plays the ‘Oscillon’ 1937

The Oscillon was a one-off vacuum tube instrument created by Dr. W. E. Danforth to play the wind instrument parts for his local amateur Swarthmore Symphony Orchestra. The instrument was played by sliding the finger over the metal box to produce French horn or bass clarinet tones from the loudspeaker:

“When he is not experimenting on cosmic rays, high-haired Director William Francis Gray Swann of Franklin Institute’s Bartol Research Foundation, plays a cello. Young William Edgar Danforth, his assistant, plays a cello too. Both are mainstays of the Swarthmore (Pa.) Symphony Orchestra, a volunteer organization of about 40 men and women who play good music free. Because nobody in the orchestra can handle a French horn or a bass clarinet, Drs. Swann and Danforth built an electrical “oscillion” so ingenious that it can be made to sound like either, so simple that a child can master it. Last week at a Swarthmore concert the oscillion made its world debut, playing the long clarinet passages in Cesar Franck’s D Minor Symphony without a mishap. Listeners thought the oscillion lacked color, was a little twangier in tone, otherwise indistinguishable from the woodwind it replaced.

The Danforth & Swann oscillion is a simple-looking oblong wooden box with an electrical circuit inside. Current flows through a resistance, is stored up in a condenser, spills into a neon tube, becomes a series of electrical “pulses.” A loud speaker translates the pulses into sound.

To play music the oscillionist presses down on a keyboard and changes the resistance. This alters the frequency, thereby the pitch. As now constructed the oscillion has a range of five octaves which can easily be increased to eight. Inventors Danforth & Swann deplore the oscillion’s higher ranges, expect it will be most useful pinch-hitting for bass clarinet, bassoon, tuba and string bass.”
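The circuit TIME describes is what is now called a neon-tube relaxation oscillator: the capacitor charges through the resistance toward the supply voltage, the neon tube fires at its striking voltage and dumps the charge, and the cycle repeats, so changing the resistance changes the pitch. A rough sketch under the standard RC-charging formula (all component values here are assumed for illustration, not measurements of the actual instrument):

```python
import math

# Assumed, illustrative values -- not the real Oscillon's components.
V_SUPPLY = 200.0      # supply voltage, volts
V_STRIKE = 130.0      # neon tube fires (starts conducting) here
V_EXTINGUISH = 90.0   # tube stops conducting; charging restarts from here
C = 0.1e-6            # capacitance in farads

def frequency(r_ohms):
    """Pulse rate for a given charging resistance, from the RC charge curve."""
    # Time for the capacitor to charge from V_EXTINGUISH up to V_STRIKE.
    period = r_ohms * C * math.log((V_SUPPLY - V_EXTINGUISH) /
                                   (V_SUPPLY - V_STRIKE))
    return 1.0 / period

# Pressing a key changes R: lower resistance charges faster -> higher pitch.
low_note, high_note = frequency(100_000), frequency(50_000)
```

In this idealised model frequency is exactly inversely proportional to resistance, which matches the article’s description of the keyboard altering pitch by altering resistance.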

Courtesy: TIME http://www.time.com 2/4/2008


Sources

Time Magazine http://www.time.com 2/4/2008

Dr. W. E. Danforth, Bartol Research Foundation

Science Service at the Smithsonian Institute

http://www.amphilsoc.org/mole/view?docId=ead/Mss.B.Sw1-ead.xml

http://en.wikipedia.org/wiki/William_Francis_Gray_Swann

The ‘Musical Telegraph’ Elisha Gray. USA, 1876

Elisha Gray using a violin as a resonating amplifier for his Musical Telegraph

Elisha Gray using a violin as a resonating amplifier for his Musical Telegraph

Elisha Gray (born in Barnesville, Ohio, on Aug. 2, 1835; died Newtonville, Mass., on Jan. 21, 1901) would have been known to us as the inventor of the telephone if Alexander Graham Bell hadn’t got to the patent office one hour before him. Instead, he goes down in history as the accidental creator of one of the first electronic musical instruments – a chance by-product of his telephone technology.

Elisha Gray’s patent for the Singing Arc



Elisha Gray’s Patent for the ‘Musical Telegraph’ 1876

Gray accidentally discovered that he could control sound from a self-vibrating electromagnetic circuit, and in doing so invented a basic single-note oscillator. Using this principle he designed a musical instrument: the ‘Musical Telegraph’.

Elisha Gray's Musical Telegraph keyboard transmitter.


Gray’s invention used steel reeds whose oscillations were created and transmitted, over a telephone line, by electromagnets. In later models Gray also built a simple loudspeaker device, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible. After many years of litigation, A. G. Bell was legally named the inventor of the telephone. In 1872, Gray founded the Western Electric Manufacturing Company, parent firm of the present Western Electric Company; two years later he retired to continue independent research and invention and to teach at Oberlin College.

Performance of the Musical Telegraph

Elisha Gray gave the first public demonstration of his invention for transmitting musical tones at the Presbyterian Church in Highland Park, Illinois on December 29, 1874, and transmitted “familiar melodies through telegraph wire” according to a newspaper announcement – possibly using a piano as a resonating amplifier.

Elisha Gray’s first “musical telegraph” or “harmonic telegraph” contained enough single-tone oscillators to play two octaves, and later models were equipped with a simple tone wheel control. Gray took the instrument on tour with him in 1874. Alexander Graham Bell also designed an experimental ‘Electric Harp’ for speech transmission over a telephone line using similar technology to Gray’s.

Gray's patent for the Musical Telegraph



Biographical Information:

“Elisha Gray, the American inventor, who contested the invention of the telephone with Alexander Graham Bell. He was born in Barnesville, Ohio, on Aug. 2, 1835, and was brought up on a farm. He had to leave school early because of the death of his father, but later completed preparatory school and two years at Oberlin College while supporting himself as a carpenter. At college he became fascinated by electricity, and in 1867 he received a patent for an improved telegraph relay. During the rest of his life he was granted patents on about 70 other inventions, including the Telautograph (1888), an electrical device for reproducing writing at a distance.

On Feb. 14, 1876, Gray filed with the U.S. Patent Office a caveat (an announcement of an invention he expected soon to patent) describing apparatus ‘for transmitting vocal sounds telegraphically.’ Unknown to Gray, Bell had only two hours earlier applied for an actual patent on an apparatus to accomplish the same end. It was later discovered, however, that the apparatus described in Gray’s caveat would have worked, while that in Bell’s patent would not have. After years of litigation, Bell was legally named the inventor of the telephone, although to many the question of who should be credited with the invention remained debatable.

In 1872, Gray founded the Western Electric Manufacturing Company, parent firm of the present Western Electric Company. Two years later he retired to continue independent research and invention and to teach at Oberlin College. He died in Newtonville, Mass., on Jan. 21, 1901.”


Sources:

Kenneth M. Swezey [author of “Science Shows You How”], The Encyclopedia Americana – International Edition, Vol. 13. Danbury, Connecticut: Grolier Incorporated, 1995. p. 211.

The ‘Audion Piano’ and Audio Oscillator. Lee De Forest. USA, 1915

 “Audion Bulbs as Producers of Pure Musical Tones”  from 'The Electrical Experimenter' December 1915


Lee De Forest, the self-styled “Father of Radio” (the title of his 1950 autobiography), inventor and holder of over 300 patents, invented the triode electronic valve, or ‘Audion valve’, in 1906 – a much more sensitive development of John A. Fleming’s diode valve. The immediate application of De Forest’s triode valve was in the emerging radio technology, of which De Forest was a tenacious promoter. De Forest also discovered that the valve was capable of creating audible sounds using the “heterodyning” or beat-frequency technique: a way of creating sounds by combining two high-frequency signals to create a composite lower frequency within audible range. In so doing he inadvertently invented the first true audio oscillator, paving the way for future electronic instruments and music.
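The beat-frequency principle can be checked numerically. In this sketch (all frequencies and the sample rate are assumed for illustration), two inaudibly high oscillators are mixed by multiplication; the product-to-sum identity cos(a)·cos(b) = ½cos(a−b) + ½cos(a+b) puts energy at the difference frequency, here 440 Hz, which is the tone the listener hears:

```python
import math

SR = 1_000_000                    # simulation sample rate, 1 MHz (assumed)
N = SR // 10                      # 0.1 s of signal
f1, f2 = 100_000.0, 100_440.0     # two supersonic oscillators (assumed)

# Mixing (multiplying) the two oscillators produces components at
# f2 - f1 = 440 Hz (audible) and f2 + f1 = 200,440 Hz (far above hearing).
mixed = [math.cos(2 * math.pi * f1 * n / SR) *
         math.cos(2 * math.pi * f2 * n / SR) for n in range(N)]

def power_at(signal, freq, sr=SR):
    """Normalised power at one frequency (a single-bin DFT)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / sr)
             for n, s in enumerate(signal))
    return (re * re + im * im) / len(signal) ** 2

difference_tone = power_at(mixed, 440.0)   # strong: the audible beat
other_tone = power_at(mixed, 1000.0)       # negligible: nothing there
```

Detuning one oscillator shifts the difference frequency, which is why instruments built on this principle (the Theremin most famously) can sweep pitch continuously.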

Lee De Forest's Triode Valve of 1906


De Forest created the ‘Audion Piano’, the first vacuum tube instrument, in 1915, based on earlier audio experiments in 1907. By using his invention of the triode tube as an audio oscillator, he laid the blueprint for most future electronic instruments until the emergence of transistor technology some fifty years later. The Audion Piano was the first instrument to use a beat-frequency or “heterodyning” oscillator system, and also the first to use body capacitance to control pitch and timbre (the heterodyning effect was later much exploited by Leon Termen with his Theremin series of instruments and by Maurice Martenot’s Ondes Martenot, amongst many others). The Audion Piano, controlled by a single keyboard manual, used a single triode valve per octave, controlled by a set of keys allowing one monophonic note to be played per octave. This audio signal could be processed by a series of capacitors and resistors to produce variable and complex timbres, and the output of the instrument could be sent to a set of speakers placed around a room, giving the sound a novel spatial effect. De Forest planned a later version of the instrument that would have had separate valves per key, allowing full polyphony; it is not known whether this instrument was ever constructed.
De Forest described the Audion Piano as capable of producing:

“Sounds resembling a violin, Cello, Woodwind, muted brass and other sounds resembling nothing ever heard from an orchestra or by the human ear up to that time – of the sort now often heard in nerve racking maniacal cacophonies of a lunatic swing band. Such tones led me to dub my new instrument the ‘Squawk-a-phone’….The Pitch of the notes is very easily regulated by changing the capacity or the inductance in the circuits, which can be easily effected by a sliding contact or simply by turning the knob of a condenser. In fact, the pitch of the notes can be changed by merely putting the finger on certain parts of the circuit. In this way very weird and beautiful effects can easily be obtained.”
(Lee De Forest’s Autobiography “The Father Of Radio”)

And From a 1915 news story on a concert held for the National Electric Light Association

“Not only does de Forest detect with the Audion musical sounds silently sent by wireless from great distances, but he creates the music of a flute, a violin or the singing of a bird by pressing a button. The tone quality and the intensity are regulated by the resistors and by induction coils… You have doubtless heard the peculiar, plaintive notes of the Hawaiian ukulele, produced by the players sliding their fingers along the strings after they have been put in vibration. Now, this same effect, which can be weirdly pleasing when skilfully made, can be obtained with the musical Audion.”

Advert for De Forest wireless equipment


De Forest, the tireless promoter, demonstrated his electronic instrument around the New York area at public events, alongside fund-raising spectacles of his radio technology. These events were often criticised and ridiculed by his peers, and led to a famous trial in which De Forest was accused of misleading the public for his own ends:
“De Forest has said in many newspapers and over his signature that it would be possible to transmit human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public … has been persuaded to purchase stock in his company.”
Lee De Forest, August 26, 1873, Council Bluffs, Iowa. Died June 30, 1961


De Forest collaborated with a sceptical Thaddeus Cahill in broadcasting early concerts of the Telharmonium using his radio transmitters (1907). Cahill’s insistence on using the telephone wire network to broadcast his electronic music was a major factor in the demise of the Telharmonium. Vacuum tube technology was to dominate electronic instrument design until the rise of transistor technology in the 1960s. The triode amplifier also freed electronic instruments from having to use the telephone system as a means of amplifying the signal.


Sources:

Lee De Forest “Father Of Radio” (Autobiography).
Sungook Hong, Wireless: From Marconi’s Black-Box to the Audion (Transformations: Studies in the History of Science and Technology), 2001.
Mike Adams, Lee de Forest: King of Radio, Television, and Film, 2012.
Albert Glinsky, Theremin: Ether Music and Espionage.
Nicholas Collins, Margaret Schedel, Scott Wilson, Electronic Music.
Arndt Niebisch, Media Parasites in the Early Avant-Garde: On the Abuse of Technology and Communication, 2012.
Vladimir Gurevich, Electric Relays: Principles and Applications.

The ‘Staccatone’. Hugo Gernsback & C.J.Fitch. USA, 1923

Hugo Gernsback's 'Staccatone'

Hugo Gernsback’s ‘Staccatone’

Hugo Gernsback, perhaps better known as the ‘Father of Science Fiction’ (and currently eponymously celebrated in the ‘Hugo’ Science Fiction Awards), also invented and built an early electronic instrument called the Staccatone in 1923 (with Clyde J. Fitch), which was later developed into one of the first polyphonic instruments, the Pianorad, in 1926. Gernsback was a major figure in the development and popularisation of television, radio and amateur electronics; his multiple and sometimes shady businesses included early science fiction publishing, pulp fiction, self-help manuals and DIY electronics magazines, as well as his own science fiction writing.
The Staccatone was conceived as a self-build project for amateur electronics enthusiasts via Gernsback’s ‘Practical Electrics’ magazine. The instrument consisted of a single vacuum tube oscillator controlled by a crude switch-based 16-note ‘keyboard’. The switch-based control gave the notes a staccato attack and decay – hence the ‘Staccatone’. Gernsback promoted the instrument through his many publications and on his own radio station WJZ New York:
“The musical notes produced by the vacuum tubes in this manner have practically no overtones. For this reason the music produced on the Pianorad is of an exquisite pureness of tone not realised in any other musical instrument. The quality is better than that of a flute and much purer. The sound, however, does not resemble that of any known musical instrument. The notes are quite sharp and distinct, and the Pianorad can be readily distinguished by its music from any other musical instrument in existence.”
Hugo Gernsback

Hugo Gernsback, born Hugo Gernsbacher on August 16, 1884, of Jewish Luxembourgish descent, moved to New York in 1904 and died on August 19, 1967.

Self-build instructions for the Staccatone were published in ‘Practical Electrics’ magazine, 1924.

Sources:

Hugo Gernsback: “The ‘Pianorad’ a New Musical Instrument which combines Piano and Radio Principles” Radio News viii (1926)

Electronic and Experimental Music: Technology, Music, and Culture. Thom Holmes

The ‘Pianorad’, Hugo Gernsback, USA, 1926


The Pianorad at WRNY

The Pianorad was a development of the Staccatone, designed by Hugo Gernsback and built by Clyde J. Fitch at the Radio News Laboratories in New York. The Pianorad had 25 single LC oscillators, one for every key of its two-octave keyboard, giving the instrument full polyphony; the oscillators produced virtually pure sine tones:

Hugo Gernsback’s Pianorad


“The musical notes produced by the vacuum tubes in this manner have practically no overtones. For this reason the music produced on the Pianorad is of an exquisite pureness of tone not realised in any other musical instrument. The quality is better than that of a flute and much purer. The sound, however, does not resemble that of any known musical instrument. The notes are quite sharp and distinct, and the Pianorad can be readily distinguished by its music from any other musical instrument in existence.”

Each one of the twenty-five oscillators had its own independent speaker, mounted in a large loudspeaker horn on top of the keyboard, and the whole ensemble was housed in a cabinet resembling a harmonium. A larger 88-note keyboard version was planned but not put into production. The Pianorad was first demonstrated on June 12, 1926, at Gernsback’s own radio station WRNY in New York City, performed by Ralph Christman. The Pianorad continued to be used at the radio station for some time, accompanying piano and violin concerts.

Hugo Gernsback


Sources:

Hugo Gernsback: “The ‘Pianorad’ a New Musical Instrument which combines Piano and Radio Principles” Radio News viii (1926)

The ‘Rhythmicon’, Henry Cowell & Leon Termen, USA, 1930

Henry Cowell and the Rhythmicon

In 1916 the American avant-garde composer Henry Cowell was working with ideas of controlling cross-rhythms and tonal sequences from a keyboard. He wrote several quartet-type pieces that used combinations of rhythms and overtones that could not be played without some kind of mechanical control – “un-performable by any known human agency and I thought of them as purely fanciful” (Henry Cowell). In 1930 Cowell introduced his idea to Leon Termen, the inventor of the Theremin, and commissioned him to build a machine capable of transforming harmonic data into rhythmic data and vice versa.

“My part in its invention was to invent the idea that such a rhythmic instrument was a necessity to further rhythmic development, which has reached a limit more or less, in performance by hand, and needed the application of mechanical aid. That which the instrument was to accomplish, what rhythms it should do, the pitch it should have and the relation between the pitch and rhythms are my ideas. I also conceived that the principle of broken-up light playing on a photo-electric cell would be the best means of making it practical. With this idea I went to Theremin who did the rest – he invented the method by which the light would be cut, did the electrical calculations and built the instrument.”

Henry Cowell

“The rhythmic control possible in playing and imparting exactitudes in cross rhythms are bewildering to contemplate and the potentialities of the instrument should be multifarious… Mr. Cowell used his Rhythmicon to accompany a set of violin movements which he had written for the occasion…. The accompaniment was a strange complexity of rhythmical interweavings and cross currents of a cunning and precision as never before fell on the ears of man and the sound pattern was as uncanny as the motion… The writer believes that the pure genius of Henry Cowell has put forward a principle which will strongly influence the face of all future music.”
Homer Henly, May 20, 1932


The eventual machine was christened the “Rhythmicon” or “Polyrhythmophone” and was the first electronic rhythm machine. The Rhythmicon was a keyboard instrument based on the Theremin, using the same type of sound generation – heterodyning vacuum-tube oscillators. Each key of the 17-key polyphonic keyboard produced a single note repeated in a periodic rhythm for as long as it was held down, the rhythmic content being generated by rotating disks that interrupted light beams falling on photo-electric cells. The 17th key added an extra beat in the middle of each bar. The transposable keyboard linked pitch and rhythm through the harmonic series – a key sounding a higher harmonic repeated its note proportionally faster – and the basic pitch and tempo could be adjusted by means of levers. Cowell wrote two works for the Rhythmicon, “Rhythmicana” and “Music for Violin and Rhythmicon” (a computer simulation of this work was reproduced in 1972). Cowell later lost interest in the machine, transferring his attention to ethnic music, and the machine was mothballed.
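
The pitch–rhythm coupling described above follows the harmonic series: the nth key sounds the nth harmonic of a fundamental and repeats it n times in a base period. A minimal sketch of that relationship, with an arbitrarily chosen fundamental and period for illustration (the Rhythmicon's levers let the player adjust both):

```python
# Illustrative model of the Rhythmicon's harmonic pitch/rhythm coupling.
# Assumed values for illustration: fundamental of 55 Hz, base period of 4 seconds.
FUNDAMENTAL_HZ = 55.0
BASE_PERIOD_S = 4.0

def key_behaviour(n: int) -> tuple[float, float]:
    """Key n sounds the nth harmonic and repeats it n times per base period."""
    pitch_hz = n * FUNDAMENTAL_HZ
    beats_per_second = n / BASE_PERIOD_S
    return pitch_hz, beats_per_second

for n in (1, 2, 3, 5):
    pitch, rate = key_behaviour(n)
    print(f"key {n}: {pitch:.0f} Hz, {rate:.2f} beats/s")
```

Holding keys 2 and 3 together thus yields a 3-against-2 cross-rhythm whose pitches form a perfect fifth – exactly the kind of rhythm/overtone correspondence Cowell wanted to make playable.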

Rhythmicon Discs

After Cowell, the machines were used for psychological research, and one (non-working) example survives at the Smithsonian Institution. The Rhythmicon was re-discovered twenty-five years after its creation by the producer Joe Meek (creator of the innovative hit single ‘Telstar’, 1961), who apparently found it abandoned in a New York pawnbroker’s. Meek brought it back to his home studio in London, where it was used on several recordings.

This Rhythmicon was used to provide music and sound effects for various movies in the fifties and sixties, including ‘The Rains of Ranchipur’, ‘Battle Beneath the Earth’, Powell and Pressburger’s ‘They’re a Weird Mob’, ‘Dr Strangelove’ and the sixties animated TV series ‘Torchy, The Battery Boy’. The Rhythmicon was also rumoured to have been used on several sixties and seventies records, including ‘Atom Heart Mother’ by Pink Floyd, ‘The Crazy World of Arthur Brown’ by Arthur Brown and ‘Robot’ by the Tornadoes. Tangerine Dream also used some sequences from the Rhythmicon on their album ‘Rubycon’.


Sources:

“Henry Cowell: A record of his activities” Compiled June 1934 by Olive Thompson Cowell.

‘Moog Synthesisers’, Robert Moog, USA, 1964

Robert Moog started working with electronic instruments at the age of nineteen when, with his father, he created his first company, R.A. Moog Co., to manufacture and sell Theremin kits (the ‘Melodia Theremin’, the same design as Leon Termen’s theremin but with an optional keyboard attachment) and guitar amplifiers from the basement of his family home in Queens, New York. Moog went on to study physics at Queens College, New York in 1957 and electrical engineering at Columbia University, and took a Ph.D. in engineering physics from Cornell University (1965). In 1961 Moog started to produce the first transistorised version of the Theremin, which until then had been based on vacuum-tube technology.

In 1963, with a $200 research grant from Columbia University, Moog collaborated with the experimental musician Herbert Deutsch on the design of what was to become the first modular Moog Synthesiser.


Herb Deutsch discusses his role in the origin of the Moog Synthesiser.

Herbert A. Deutsch working on the development of the Moog Synthesiser, c. 1963

Moog and Deutsch had already been absorbing and experimenting with ideas about transistorised modular synthesisers from the German designer Harald Bode (as well as collaborating with Raymond Scott on instrument design at Manhattan Research Inc). In September 1964 Moog was invited to exhibit his circuits at the Audio Engineering Society Convention, and shortly afterwards he began to manufacture electronic music synthesisers.

“…At the time I was actually still thinking primarily as a composer and at first we were probably more interested in the potential expansion of the musical aural universe than we were of its effect upon the broader musical community. In fact when Bob questioned me on whether the instrument should have a regular keyboard (Vladimir Ussachevsky had suggested to him that it should not) I told Bob “I think a keyboard is a good idea, after all, having a piano did not stop Schoenberg from developing twelve-tone music and putting a keyboard on the synthesizer would certainly make it a more sale-able product!!”
Herbert Deutsch 2004

Early version of the Moog Modular, 1964

The first instrument, the Moog Modular Synthesiser of 1964, became the first widely used electronic music synthesiser and the first to make the crossover from the avant-garde to popular music. The release in 1968 of Wendy Carlos’s album “Switched-On Bach”, recorded entirely with Moog synthesisers (and one of the highest-selling classical music recordings of its era), brought the Moog to public attention and changed conceptions of electronic music and synthesisers in general. The Beatles bought one, as did Mick Jagger, who bought a hugely expensive modular Moog in 1967 (it was used only once, as a prop in Nicolas Roeg’s film ‘Performance’, and was later sold to the German experimental rock group Tangerine Dream). Over the next decade Moog created numerous keyboard synthesisers, modular components (many licensed from designs by Harald Bode), a vocoder (another Bode design), bass pedals, guitar synthesisers and so on.

Early Moog Modular from 1964 at the interactive Music Museum, Ghent, Belgium.

Moog’s designs set a standard for future commercial electronic musical instruments with innovations such as the 1 volt per octave CV control that became an industry standard and pulse triggering signals for connecting and synchronising multiple components and modules.
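
The 1 volt per octave standard is an exponential pitch law: each added volt doubles the oscillator frequency. A minimal numeric sketch of the convention follows; mapping 0 V to A440 is an illustrative assumption, since in practice the reference point is set by tuning the oscillator.

```python
import math

# Sketch of the 1-volt-per-octave control-voltage convention.
# Assumption for illustration: 0 V corresponds to A440.
REF_FREQ_HZ = 440.0

def cv_to_freq(volts: float) -> float:
    """Each added volt doubles the frequency (one octave up)."""
    return REF_FREQ_HZ * 2.0 ** volts

def freq_to_cv(freq_hz: float) -> float:
    """Inverse mapping: control voltage as octaves above/below the reference."""
    return math.log2(freq_hz / REF_FREQ_HZ)

print(cv_to_freq(1.0))             # one volt up: one octave up (880 Hz)
print(round(cv_to_freq(1 / 12), 2))  # 1/12 volt: one equal-tempered semitone up
print(freq_to_cv(220.0))           # one octave down: -1 volt
```

Because the law is exponential, equal voltage steps give equal musical intervals, which is what lets a linear keyboard voltage drive an oscillator in tune across its range.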

Despite these innovations, the Moog synthesiser company did not survive the decade: larger companies such as ARP and Roland developed Moog’s prototypes into more sophisticated and cost-effective instruments. Moog sold the company to Norlin in the 1970s, whose mismanagement led to Moog’s resignation, and Moog Music finally closed down in 1993. Robert Moog re-acquired the rights to the Moog company name in 2002 and once again began to produce updated versions of the Moog Synthesiser range. Robert Moog died in 2005.

Moog Production Instruments 1963-2013
Date Model
1963–1980 Moog modular synthesiser
1970–1981 Minimoog
1974–1979 Moog Satellite
1974–1979 Moog Sonic Six
1975–1976 Minitmoog
1975–1979 Micromoog
1975–1980 Polymoog
1976–1983 Moog Taurus bass pedal
1978–1981 Multimoog
1979–1984 Moog Prodigy
1980 Moog Liberation
1980 Moog Opus-3
1981 Moog Concertmate MG-1
1981 Moog Rogue
1981 Moog Source
1982–1985 Memorymoog
Moog Company relaunch
1998–present Moogerfooger
2002–present Minimoog Voyager
2006–present Moog Little Phatty
2010 Slim Phatty
2011 Taurus 3 bass pedal
2012 Minitaur
2013 Sub Phatty


The Mini Moog Synthesiser with Herb Deutsch

Images of Moog Music Synthesisers


Sources

http://www.moogmusic.com/

http://moogarchives.com/

Bob Moog Foundation

Interview with Herbert A. Deutsch, October 2003 and February 2004

Analog Days: The Invention and Impact of the Moog Synthesizer.  Trevor Pinch, Frank Trocco. Harvard University Press, 2004