‘GROOVE Systems’, Max Mathews & Richard Moore, USA 1970

Max Mathews with the GROOVE system

“GROOVE is a hybrid system that interposes a digital computer between a human composer-performer and an electronic sound synthesizer. All of the manual actions of the human being are monitored by the computer and stored in its disk memory.”

Max Mathews and Richard Moore 1

In 1967 the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital-analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface: playing notes, turning knobs and so on. 2 The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than on generating the sound itself:

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.” 3

Richard Moore with the Groove System

The system, written in assembler, ran only on the Honeywell DDP-224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant that it was also possible to create libraries of programming routines, so that users could build their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartók was performed on GROOVE at its first demonstration, at a conference on Music and Technology in Stockholm organized by UNESCO in 1970. The participants included several leading figures in electronic music, such as Pierre Schaeffer and Jean-Claude Risset.

“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.” 4

The GROOVE System at the Bell Laboratories circa 1970

The GROOVE system consisted of:

  • Fourteen DAC control lines, scanned 100 times per second (twelve 8-bit and two 12-bit);
  • An ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three by the three-dimensional movement of a joystick controller;
  • Two loudspeakers for audio output;
  • A special keyboard interfaced with the knobs to generate on/off signals;
  • A teletype keyboard for data input;
  • A CDC-9432 disk storage unit;
  • A tape recorder for data backup.
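GROOVE’s central idea, sampling the performer’s control voltages 100 times per second and storing them as functions of time that can later be replayed or edited, can be sketched as follows. This is a minimal illustration only: the names and structure are hypothetical, since the original system was written in DDP-224 assembler.

```python
# Hypothetical sketch of GROOVE's storage loop: control-voltage inputs
# are sampled at 100 Hz and kept as time functions for later replay or
# editing. Illustrative only, not Bell Labs code.

SAMPLE_RATE_HZ = 100  # each control line scanned every 1/100 s

def record(read_lines, duration_s):
    """Sample all control lines for duration_s seconds.
    read_lines() returns the current value of every line."""
    frames = []
    for _ in range(int(duration_s * SAMPLE_RATE_HZ)):
        frames.append(list(read_lines()))
    return frames

def replay(frames, write_lines):
    """Drive the synthesiser's control inputs from stored data."""
    for frame in frames:
        write_lines(frame)

# A stored performance is just a list of frames; editing a section
# means splicing or rescaling part of that list.
demo = record(lambda: [0.5] * 14, duration_s=0.1)  # 14 control lines
assert len(demo) == 10 and len(demo[0]) == 14
```

Because the computer stores only low-rate control data rather than audio, the scheme fits within the very limited processing power described above.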

Antecedents to GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980, due both to the high cost of the system – some $20,000 – and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly. 5


  • 1
    Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.p158
  • 2
    Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.p158
  • 3
    Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University.
  • 4
    Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University.
  • 5
    F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.

‘MUSIC N’, Max Vernon Mathews, USA, 1957

Max Mathews was a pioneering, central figure in computer music. After studying engineering at the California Institute of Technology and the Massachusetts Institute of Technology, where he completed his doctorate in 1954, Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘Music’ family of computer audio programmes and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘Music N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘Music N’ marked the first time a computer had been used to investigate audio synthesis in its own right (computers had been used to generate sound and music on the CSIR Mk1 and Ferranti Mk1 as early as 1951, but more as a by-product of machine testing than for specifically musical objectives), and it set the blueprint for computer audio synthesis that remains in use to this day in programmes like Csound, MaxMSP and SuperCollider and graphical modular programmes like Reaktor.

IBM 704 System
IBM 704 System. Image: The IBM 704 and 709 Systems

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.” 2Max Mathews, (1997), Horizons in Computer Music, March 8–9, Indiana University.

MUSIC I 1957

Music I was written in assembler/machine code to make the most of the limited capabilities of the IBM 704 computer. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; it was only possible to set the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories was, in those years, the only institution in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews recalls:

“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.” 3
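Music I’s note model, a fixed triangle waveform shaped only by amplitude, frequency and duration, can be sketched in a few lines. This is an illustration of the model rather than Mathews’s code; the function name and sample rate are assumptions.

```python
# Illustrative sketch (not Mathews's code): Music I reduced a note to
# three parameters - amplitude, frequency and duration - and the only
# available waveform was a triangle wave, with no envelope control.

def triangle_note(amplitude, freq_hz, duration_s, sample_rate=10000):
    """Return triangle-wave samples for one monophonic note."""
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        phase = (i * freq_hz / sample_rate) % 1.0  # 0..1 cycle position
        tri = 4.0 * abs(phase - 0.5) - 1.0         # -1..1 triangle shape
        samples.append(amplitude * tri)            # fixed amplitude:
    return samples                                 # no attack or decay

note = triangle_note(amplitude=0.8, freq_hz=440.0, duration_s=0.01)
assert max(note) <= 0.8 and min(note) >= -0.8
```

The flat, enveloped-free amplitude is why the early results were, in Mathews’s words, “not inspiring”: every note starts and stops abruptly at full level.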

In 1957 Mathews and his colleague Newman Guttman created a synthesised 17-second piece using Music I, titled ‘The Silver Scale’ (often credited as the first proper piece of computer-generated music), and later the same year a one-minute piece called ‘Pitch Variations’, both of which were released on an anthology called ‘Music From Mathematics’ issued by Bell Labs in 1962.

Max Mathews and an IBM mainframe at Bell Laboratories. (Courtesy Max Mathews.) 4


MUSIC II 1958

Music II was an updated, more versatile and functional version of Music I. It still used assembler, but ran on the much faster transistor-based (rather than valve-based) IBM 7094 series. Music II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.


MUSIC III 1960

“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.”
5

The introduction of unit generators (UGs) in MUSIC III was an evolutionary leap in music computing, as evidenced by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program – oscillators, filters, envelope shapers and so on – allowing the composer to flexibly connect multiple UGs together to generate a specific sound. A separate ‘score’ stage was added, where sounds could be arranged chronologically in a musical fashion. Each event was assigned to an instrument and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch card which, while still complex and archaic by today’s standards, was the first time a computer program used a paradigm familiar to composers.
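The orchestra/score split described above can be sketched as follows: small interchangeable signal blocks are patched together into an instrument, and the score supplies parameter values per note event. The class and parameter names here are illustrative, not MUSIC III’s actual syntax.

```python
# A minimal sketch of the unit-generator idea MUSIC III introduced:
# small signal blocks the composer patches together, driven by a
# 'score' of note events. Names are illustrative, not MUSIC III code.
import math

SR = 8000  # sample rate for the sketch

def oscillator(freq_hz, amp):
    """Sine unit generator: returns a sample for index i via closure."""
    def ug(i):
        return amp * math.sin(2 * math.pi * freq_hz * i / SR)
    return ug

def envelope(inner, dur_s):
    """Envelope UG wrapping another UG: linear decay over dur_s."""
    n = max(1, int(dur_s * SR))
    def ug(i):
        return inner(i) * max(0.0, 1.0 - i / n)
    return ug

# 'Score' stage: each note event supplies parameter values for the
# patched unit generators (pitch, amplitude, duration).
score = [(440.0, 1.0, 0.01), (660.0, 0.5, 0.01)]
output = []
for freq, amp, dur in score:
    note = envelope(oscillator(freq, amp), dur)  # patch UGs together
    output.extend(note(i) for i in range(int(dur * SR)))
assert len(output) == 160  # two 0.01 s notes at 8 kHz
```

The key design idea is that the program defines no fixed timbre: the composer builds instruments by wiring generic blocks, exactly as Mathews describes below.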

“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.”
6

Max Mathews with Joan Miller, co-author of Music V. (Courtesy Max Mathews.) 7


MUSIC IV 1963

MUSIC IV, the result of a collaboration between Max Mathews and Joan Miller, was completed in 1963 and was a more complete version of the MUSIC III system, written in a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.

“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews, 1980. 8


Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team, namely MUSIC IVB at Princeton and MUSIC IVF at the Argonne Laboratories. The later variants (MUSIC IVF and the Princeton MUSIC IVBF) were built using FORTRAN rather than assembler.


MUSIC V 1968

MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and built MUSIC V in FORTRAN, specifically for the IBM 360 series computers. This meant that the programme was faster and more stable, and could run on any IBM 360 machine outside Bell Laboratories. The data-entry procedure was simplified in both the Orchestra and the Score sections. One of the most interesting new features was the definition of new modules that allowed analogue sounds to be imported into MUSIC V. Mathews persuaded Bell Labs not to copyright the software, meaning that MUSIC V was probably one of the first open-source programmes, ensuring its adoption and longevity and leading directly to today’s Csound.

“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.”
9

MUSIC V marked the end of Mathews’s involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore, in 1970) and the ‘Radio Baton’ (with Tom Oberheim, in 1985).

1957 Music I Bell Labs (New York) Max Mathews
1958 Music II Bell Labs (New York) Max Mathews
1960 Music III Bell Labs (New York) Max Mathews
1963 Music IV Bell Labs (New York) Max Mathews, Joan Miller
1963 Music IVB Princeton University Hubert Howe, Godfrey Winham
1965 Music IVF Argonne Laboratories (Chicago) Arthur Roberts
1966 Music IVBF Princeton University Hubert Howe, Godfrey Winham
1966 Music 6 Stanford University Dave Poole
1968 Music V Bell Labs (New York) Max Mathews
1969 Music 360 Princeton University Barry Vercoe
1969 Music 10 Stanford University John Chowning, James Moorer
1970 Music 7 Queen’s College (New York) Hubert Howe, Godfrey Winham
1973 Music 11 M.I.T. Barry Vercoe
1977 Mus10 Stanford University Leland Smith, John Tovar
1980 Cmusic University of California Richard Moore
1984 Cmix Princeton University Paul Lansky
1985 Music 4C University of Illinois James Beauchamp, Scott Aurenz
1986 Csound M.I.T. Barry Vercoe


  • 2
    Max Mathews, (1997), Horizons in Computer Music, March 8–9, Indiana University.
  • 3
    An Interview with Max Mathews. Tae Hong Park. Music Department, Tulane University. https://tinyurl.com/ypfdw2xb
  • 4
    image: ‘An Interview with Max Mathews’. Tae Hong Park. Music Department, Tulane University. https://tinyurl.com/ypfdw2xb
  • 5
    Max Mathews, (2011), ‘Max Mathews (1926–2011)’, Interview with Geeta Dayal, Frieze Magazine.09 MAY 2011. https://www.frieze.com/article/max-mathews-1926-E2-80-932011
  • 6
    Max Mathews, (2011), ‘Max Mathews (1926–2011)’, Interview with Geeta Dayal, Frieze Magazine.09 MAY 2011. https://www.frieze.com/article/max-mathews-1926-E2-80-932011
  • 7
    image: ‘An Interview with Max Mathews’. Tae Hong Park. Music Department, Tulane University. https://tinyurl.com/ypfdw2xb
  • 8
    Curtis Roads, ‘Interview with Max Mathews’, Computer Music Journal, Vol. 4, 1980.
  • 9
    Max Mathews, (2011), ‘Max Mathews (1926–2011)’, Interview with Geeta Dayal, Frieze Magazine.09 MAY 2011. https://www.frieze.com/article/max-mathews-1926-E2-80-932011

Further Reading:


The ‘Synthetic Tone’ Sewall Cabot, USA, 1918

Patent documents of Cabot's Synthetic Tone Instrument

The ‘Synthetic Tone’ was an electro-mechanical instrument, similar to but much smaller than the Choralcelo, designed by the Brookline, Massachusetts electrical engineer Sewall Cabot (Quincy Sewall Cabot, b. 4 September 1901, New York; d. March 1957, New York). The instrument created complex tones by resonating metal bars with a tone-wheel-generated electromagnetic charge.

“One object of my present invention is to provide an improved musical instrument of relatively small cost and small dimensions in comparison to those of a pipe-organ, but capable of attaining all the musically useful results of which a pipe-organ is capable. Another object is to provide an instrument that will produce desirable tonal effects not heretofore obtainable from a pipe-organ.”

Sewall Cabot, patent documents


Curtis Roads, ‘Early Electronic Music Instruments: Time Line 1899–1950’, Computer Music Journal, Vol. 20, No. 3 (Autumn 1996), pp. 20–23. The MIT Press.

The ‘Oscillon’ William Danforth & William Swann, USA, 1937

Mrs Danforth plays the ‘Oscillon’ 1937

The Oscillon was a one-off vacuum-tube instrument created by Dr. W. E. Danforth to play the wind-instrument parts for his local amateur Swarthmore Symphony Orchestra. The instrument was played by sliding a finger over the metal box to produce French horn or bass clarinet tones from the loudspeaker:

When he is not experimenting on cosmic rays, high-haired Director William Francis Gray Swann of Franklin Institute’s Bartol Research Foundation, plays a cello. Young William Edgar Danforth, his assistant, plays a cello too. Both are mainstays of the Swarthmore (Pa.) Symphony Orchestra, a volunteer organization of about 40 men and women who play good music free. Because nobody in the orchestra can handle a French horn or a bass clarinet, Drs. Swann and Danforth built an electrical “oscillion” so ingenious that it can be made to sound like either, so simple that a child can master it. Last week at a Swarthmore concert the oscillion made its world debut, playing the long clarinet passages in Cesar Franck’s D Minor Symphony without a mishap. Listeners thought the oscillion lacked color, was a little twangier in tone, otherwise indistinguishable from the woodwind it replaced.

The Danforth & Swann oscillion is a simple-looking oblong wooden box with an electrical circuit inside. Current flows through a resistance, is stored up in a condenser, spills into a neon tube, becomes a series of electrical “pulses.” A loud speaker translates the pulses into sound.

To play music the oscillionist presses down on a keyboard and changes the resistance. This alters the frequency, thereby the pitch. As now constructed the oscillion has a range of five octaves which can easily be increased to eight. Inventors Danforth & Swann deplore the oscillion’s higher ranges, expect it will be most useful pinch-hitting for bass clarinet, bassoon, tuba and string bass.”
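The circuit TIME describes is a classic neon-lamp relaxation oscillator: a resistor charges a condenser toward the supply voltage until the tube strikes and dumps the charge, and the cycle repeats. A small sketch shows how changing the resistance (the keyboard’s job, per the article) changes the pitch; the component values and voltages below are illustrative assumptions, not Danforth and Swann’s.

```python
# Sketch of the neon relaxation oscillator TIME describes: R charges C
# toward the supply voltage until the neon tube strikes (fires), the
# condenser discharges to the extinction voltage, and the cycle
# repeats. Values are illustrative, not Danforth & Swann's circuit.
import math

def relaxation_freq_hz(r_ohms, c_farads, v_supply, v_strike, v_extinguish):
    """Oscillation frequency from the RC charging equation:
    T = R * C * ln((Vs - Ve) / (Vs - Vf))."""
    period = r_ohms * c_farads * math.log(
        (v_supply - v_extinguish) / (v_supply - v_strike))
    return 1.0 / period

# Pressing a key changes the resistance, hence the frequency and pitch.
f1 = relaxation_freq_hz(1e6, 10e-9, 120.0, 90.0, 60.0)
f2 = relaxation_freq_hz(5e5, 10e-9, 120.0, 90.0, 60.0)
assert f2 > f1  # halving R roughly doubles the pitch
```

The sawtooth-like discharge waveform this produces is rich in harmonics, consistent with the “twangier” tone the listeners reported.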

Courtesy: TIME http://www.time.com 2/4/2008



Dr. W. E. Danforth, Bartol Research Foundation

Science Service at the Smithsonian Institute



The ‘Musical Telegraph’ or ‘Electro-Harmonic Telegraph’, Elisha Gray. USA, 1874

Elisha Gray using a violin as a resonating amplifier for his Musical Telegraph
Elisha Gray demonstrating the results of his ‘bathtub experiments’ using a variable electric current to vibrate a silver plate fixed to the instrument’s body.

Elisha Gray would have been known to us as the inventor of the telephone if Alexander Graham Bell hadn’t got to the patent office one hour before him. Instead, he goes down in history as the accidental creator of one of the first electronic musical instruments. As legend has it, Gray was inspired to investigate electro-acoustic effects after witnessing his young nephew playing with his equipment: the child had connected one end of a battery to himself and the other to a bathtub, and by rubbing his hand on the bathtub’s surface he created an audible humming tone proportional to the electric current.

Elisha Gray’s patent of the Musical or Harmonic Telegraph of 1876

Gray discovered that he could control sound from a self-vibrating electromagnetic circuit, and in doing so invented a basic single-note oscillator. His original intention was to use this principle to develop an early version of multiplex telegraphic transmission: sending multiple telegraphic messages, encoded as different pitches, simultaneously over the same line, to be decoded at the receiving end. Using this principle he designed a musical instrument, the ‘Musical Telegraph’ or ‘Electro-Harmonic Telegraph’, initially to demonstrate and promote his ideas.

Elisha Gray’s Musical Telegraph keyboard transmitter.

“My invention primarily consists in a novel art of producing musical impressions or sounds by means of a series of properly-tuned vibrating reeds or bars thrown into action by means of a series of keys opening or closing electric circuits. It also consists in a novel art of transmitting tunes so produced through an electric circuit and reproducing them at the receiving end of the line.”

1 Elisha Gray, Patent No. 173,618, Feb. 15, 1876.

Gray’s invention used an electro-acoustic principle whereby a set of tuned steel reeds was vibrated by an electromagnetic current; the resulting self-oscillating current could then be transmitted over a telephone line as a buzzing musical tone. Gray built a simple receiver and loudspeaker device called the ‘washbasin receiver’ – essentially a large telephone-like speaker built from an old washbasin mounted close to the poles of an electromagnet. By vibrating the metal washbasin, the receiver recreated and amplified the sound of the instrument (which, in this pre-amplifier era, was the only way to make the instrument audible).

Washbasin receiver of 1874. This device was designed to receive and amplify the signal remotely transmitted from the Musical Telegraph.

With each key having an associated ‘oscillator’, the Musical Telegraph was truly polyphonic. To prevent sympathetic vibrations from non-active keys, Gray used a series of mechanical stops, allowing the production of a clean individual tone per key.

Elisha Gray’s patent of the Musical or Harmonic Telegraph of 1876, showing the electromechanically vibrating tines and the stops that prevent sympathetic vibration in other keys
Elisha Gray’s patent of the Musical or Harmonic Telegraph of 1876
Performance of the Musical Telegraph
Elisha Gray gave the first public demonstration of his invention for transmitting musical tones at the Presbyterian Church in Highland Park, Illinois on December 29, 1874, transmitting “familiar melodies through telegraph wire” according to a newspaper announcement – possibly using a piano as a resonating amplifier.
The ‘Two-Tone’ transmitter of 1874

Elisha Gray’s first ‘Musical Telegraph’ or ‘Harmonic Telegraph’ used a simple two-‘oscillator’ keyboard design, but later versions contained enough single-tone oscillators to play two octaves – Gray suggested that ‘obviously the number of keys may be increased’ – and later models were equipped with a simple tone-wheel control. Gray took the instrument on tour to the UK in 1874, transmitting musical tones over distances of 200 miles or more.

Gray also promoted his discoveries in the USA. On April 2, 1877, Elisha Gray staged a ‘Telephone concert’ at Steinway Hall on East 14th Street in New York – despite the fact that no telephone was actually used. Playing remotely from the Western Union office in Philadelphia, the famous pianist Frederick Boscovitz performed on the 16-note version of the Musical Telegraph for the astonished New York audience. The receiver at Steinway Hall consisted of 16 resonant hollow wooden tubes, ranging from six inches to two feet in length, joined by a wooden bar with a receiver electromagnet attached. The whole receiver was mounted on a grand piano to further resonate and colour the buzzing tone of the electronic instrument. The tones were reported to be distinct, though the higher notes were considered ‘rather feeble’, with the timbre somewhat resembling an organ:

“as a novelty, was highly entertaining, though unless an almost incredible improvement be effected, it is difficult to see how the transmission of music over the new instrument can be of permanent practical value.”

National Republican (Washington, D.C.), April 10, 1877, page 1.

This initial concert was followed by five more performances in the same week, three in Steinway Hall, one at the Brooklyn Academy of Music, and one at Lincoln Hall in Washington:



Airs Played In Philadelphia Distinctly Audible In Washington–Description of the Apparatus–Its Sound and What It Resembles–The Performance a Great Success.

The atmospheric conditions last evening were far from favorable to the reception of music by telegraph, and it was not surprising, therefore, that the majority of those who went to Lincoln hall last evening to witness the latest triumph of American science–the telephone–were more or less doubtful of the success of the experiment they were about to witness. The interest manifested by our citizens in this grand and important invention could not have been attested in a more substantial manner, for the hall was filled to almost its amplest capacity by as intelligent and discriminating an audience as has gathered in that resort this season.

The preparation for the exhibition of the telephone were quite simple and were easily observable. Several wires depended from the aperture over the chandelier in the centre of the room, and communicated some with a regular telegraphic instrument on the stage to the left of the audience, others with the receiving apparatus of the telephone. The latter was placed on the floor of the stage, to the right of the audience. It is a small apparatus, about six feet long and less than two feet high, and consists of sixteen square boxes, resembling in appearance and arrangement the tubes of a large organ.

The entertainment began with the concert which Mr. Maurice Strakosch had provided, evidently to offset any disappointment that the audience might experience in the event of the inability of the telephone to surmount the obstacles of the inclement weather. The following was the programme:

Miss Fannie Kellogg is a young lady of prepossessing appearance, but evidently still a novice in the concert-room. Her rendition of the Polonaise from “Mignon,” which is an extremely difficult passage, requiring the greatest flexibility and control of voice, was not even a mediocre performance, although she took the liberty of omitting the trills and substituting a few notes of her own for those of the composer, and to cap the climax the finale of the air was sang entirely out of key as well as out of time. Indeed, it was as complete a faux pas as we have ever witnessed at a first-class concert. Miss Kellogg, nevertheless, found many admirers, for she was loudly encored, and in response to repeated calls essayed that sweet and plaintive air of Apt’s–Embarrassment–which she sang but indifferently well. To Signor Tagliapietra we cannot award too much praise. He was in exquisite voice, and his singing was perfection itself. Mr. S. Liebling’s performance on the piano was artistic and finished.

At the conclusion of the first part of the concert the piano was closed, and two young men raised the “receiving” apparatus of the telephone and placed it on the piano, after which a wire was adjusted to it, thus establishing direct communication with the “sending” instrument, in the office of the Western Union Telegraph Company in Philadelphia, presided over by Mr. F. Boscovitz.  A telegraph operator next appeared and took up his position at the little table above referred to. Immediately afterwards a tall, spare gentleman with a beard came forward. This was Professor Gray, the inventor of the telephone. The Professor declared that he did not desire to exhibit the telephone as a great musical instrument, and if anybody expected to listen to grand music, he would inform them in advance that they would be disappointed. The Professor, although doubtless a genius in some respects, cannot be said to number oratory among his gifts. In a rambling, disconnected and ungrammatical speech, out of which it was impossible for the life of us to make head or tail, the Professor endeavored to explain in a scientific manner many things connected with the telephone. He was not permitted to continue the infliction very long, for the audience grew impatient, and manifested their feelings in a quiet way. The Professor was not slow to take the hint, and concluded his introductory remarks by requesting the greatest silence. He then directed the telegraph operator to inform Mr. Boscovitz at Philadelphia that everything was in readiness and he might begin. Within three or four seconds the first notes of “Home, Sweet Home” were distinctly audible in every part of the spacious hall, the melody being recognized perfectly.

We can best describe the music of the telephone as heard last night by comparing it to the sound that would be produced slowly on an organ with one finger. The higher notes were rather feeble. The utmost stillness prevailed, and at the finish the applause was long and enthusiastic. The remaining selections on the programme were played in the order given, all with the same success, as follows:

1. “Home, Sweet Home.”
2. “Come Genil.”–Don Pasquale.
3. “Then You’ll Remember Me”–(Bohemian Girl.)
4. “The Last Rose of Summer.”
5. “M’Appari,” Romance–(Martha.)
6. “The Carnival of Venice.”

At the conclusion of the exhibition the judgment of all present was highly flattering to what may yet be numbered among the greatest inventions of modern times.

New York Times, July 10, 1874

National Republican (Washington, D.C.), April 10, 1877, page 1

After many years of litigation, Alexander Graham Bell was legally named the inventor of the telephone, despite Gray’s allegations that Bell had plagiarised his ideas, and Gray seems to have lost interest in his musical explorations soon after the legal battles with Bell.

The One Octave transmitter built in the summer of 1874. The seventh (left) electromagnet is missing.

Elisha Gray’s two octave keyboard transmitter now held at the Smithsonian Institution.

Despite this, Gray’s ideas had a profound influence on other inventors. Thaddeus Cahill was influenced by the Harmonic Telegraph when designing his Telharmonium of 1897; Cahill, rather unfairly, criticised the numerous shortcomings of Gray’s instrument in a letter supporting his patent application, highlighting the superiority and uniqueness of his own invention. These faults, according to Cahill, included low power – affecting transmission range and volume – and the lack of tone-shaping ability or expression control, resulting in an unpleasant overall sound. Cahill declared Gray’s instrument to be:

“practically useless. No person of taste or culture could be supposed to derive any enjoyment from music rendered in poor, harsh tones with uneven power and absolutely without expression or variation”.

Thaddeus Cahill, application for letters patent to the Commissioner of Patents, April 1915; quoted in Reynold Weidenaar, ‘Magic Music from the Telharmonium’.

Gray’s ideas were further developed in 1885 by the German physicist Ernst Lorenz, who added an experimental envelope control to Gray’s design. Alexander Graham Bell also designed an experimental ‘Electric Harp’ for speech transmission over a telephone line using similar technology to Gray’s.

Gray founded the Western Electric Manufacturing Company in 1872 – parent firm of the present Western Electric Company – and two years later he retired to pursue independent research and to teach at Oberlin College (Oberlin, Ohio, USA).

Gray’s patent for the Musical Telegraph


Biographical Information:

(born Barnesville, Ohio, Aug. 2, 1835; died Newtonville, Mass., Jan. 21, 1901)

Elisha Gray was the American inventor who contested the invention of the telephone with Alexander Graham Bell. He was born in Barnesville, Ohio, on Aug. 2, 1835, and was brought up on a farm. He had to leave school early because of the death of his father, but later completed preparatory school and two years at Oberlin College while supporting himself as a carpenter.

At college he became fascinated by electricity, and in 1867 he received a patent for an improved telegraph relay. During the rest of his life he was granted patents on about 70 other inventions, including the Telautograph (1888), an electrical device for reproducing writing at a distance.

On Feb. 14, 1876, Gray filed with the U.S. Patent Office a caveat (an announcement of an invention he expected soon to patent) describing apparatus ‘for transmitting vocal sounds telegraphically.’ Unknown to Gray, Bell had only two hours earlier applied for an actual patent on an apparatus to accomplish the same end. It was later discovered, however, that the apparatus described in Gray’s caveat would have worked, while that in Bell’s patent would not have. After years of litigation, Bell was legally named the inventor of the telephone, although to many the question of who should be credited with the invention remained debatable.

In 1872, Gray founded the Western Electric Manufacturing Company, parent firm of the present Western Electric Company. Two years later he retired to continue independent research and invention and to teach at Oberlin College. Gray died in Newtonville, Mass., on Jan. 21, 1901.


Kenneth M. Swezey, The Encyclopedia Americana – International Edition, Vol. 13. Danbury, Connecticut: Grolier Incorporated, 1995, 211.

‘Whose Phone Is It, Anyway: Did Bell Steal The Invention?’ by Steve Mirsky. Scientific American, January 9, 2008.

‘Electronic and Experimental Music: Technology, Music, and Culture’ By Thom Holmes. 1985, 2002 Thom Holmes; 2008 Taylor & Francis. P6.

Holmes, Thomas B, ‘Electronic and Experimental Music: Pioneers in Technology and Composition’, Routledge 2002, 42.

Weidenaar, Reynold, ‘Magic Music from the Telharmonium’, The Scarecrow Press, Inc., Metuchen, N.J., & London, 1995, 19.

Hounshell, David, ‘Elisha Gray and the Telephone: On the Disadvantages of Being an Expert’, Technology and Culture Vol. 16, No. 2 (Apr., 1975), 133-161.

“Music by Telegraph,” New York Times, April 3, 1877

“Telephone Concerts,” Steinway Hall Programme, April 2, 1877

“When Music Was Broadcast by Telephone,” New York Times, May 11, 1975, D17.

National Republican (Washington, D.C.), April 10, 1877, 1.


  • 1
    Elisha Gray; Patent notes No. 173,618,  Feb. 15, 1876.

The ‘Audion Piano’ and Audio Oscillator. Lee De Forest. USA, 1915

“Audion Bulbs as Producers of Pure Musical Tones” from ‘The Electrical Experimenter’ December 1915

Lee De Forest, the self-styled “Father of Radio” (the title of his 1950 autobiography), inventor and holder of over 300 patents, invented the triode electronic valve or ‘Audion valve’ in 1906 – a much more sensitive development of John A. Fleming’s diode valve.

The immediate application of De Forest’s triode valve was in the emerging radio technology, of which De Forest was a tenacious promoter. De Forest also discovered that the valve was capable of creating audible sounds using the heterodyning or beat-frequency technique: combining two high-frequency signals to create a composite lower frequency that falls within the audible range – a technique later used by Leon Termen in his Theremin and by Maurice Martenot in the Ondes Martenot. In doing so, De Forest inadvertently invented the first true audio oscillator and paved the way for future electronic instruments and music.
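The beat-frequency principle can be sketched numerically: mixing (multiplying) two radio-frequency signals produces components at their sum and difference frequencies, and the difference can be made to fall in the audible range. The oscillator frequencies below are illustrative values, not De Forest’s actual ones.

```python
import numpy as np

fs = 1_000_000                  # sample rate in Hz, fast enough for RF-like signals
t = np.arange(0, 0.05, 1 / fs)  # 50 ms of signal

f_fixed = 100_000      # fixed oscillator (Hz) - illustrative value
f_variable = 100_440   # second oscillator, offset by the desired audible pitch

# Multiplying two cosines yields sum and difference components:
# cos(a)*cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b)
mixed = np.cos(2 * np.pi * f_fixed * t) * np.cos(2 * np.pi * f_variable * t)

# A crude low-pass filter (moving average) discards the ~200 kHz sum
# component, leaving the audible 440 Hz difference tone.
kernel = np.ones(501) / 501
audio = np.convolve(mixed, kernel, mode="same")

# The dominant frequency of the filtered signal is the beat frequency.
spectrum = np.abs(np.fft.rfft(audio))
peak_hz = np.fft.rfftfreq(len(audio), 1 / fs)[np.argmax(spectrum)]
print(round(peak_hz))  # ~440, the difference frequency
```

This difference-tone principle is the same one Termen’s Theremin later exploited: a hand near the antenna detunes one oscillator, sliding the audible beat frequency continuously.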

Lee De Forest’s Triode Valve of 1906

In 1915 De Forest used the discovery of the heterodyning effect in an experimental instrument that he christened the ‘Audion Piano’. This instrument – based on previous experiments as early as 1907 – was the first vacuum tube instrument and established the blueprint for most future electronic instruments until the emergence of transistor technology some fifty years later.

The Audion Piano, controlled by a single keyboard manual, used one triode valve per octave, controlled by a set of keys allowing one monophonic note to be played per octave. The audio signal could be processed by a series of capacitors and resistors to produce variable and complex timbres, and the output of the instrument could be sent to a set of speakers placed around a room, giving the sound a novel spatial effect. De Forest planned a later version of the instrument with a separate valve per key, allowing full polyphony; it is not known whether this instrument was ever constructed.

De Forest described the Audion Piano as capable of producing:

“Sounds resembling a violin, Cello, Woodwind, muted brass and other sounds resembling nothing ever heard from an orchestra or by the human ear up to that time – of the sort now often heard in nerve racking maniacal cacophonies of a lunatic swing band. Such tones led me to dub my new instrument the ‘Squawk-a-phone’….The Pitch of the notes is very easily regulated by changing the capacity or the inductance in the circuits, which can be easily effected by a sliding contact or simply by turning the knob of a condenser. In fact, the pitch of the notes can be changed by merely putting the finger on certain parts of the circuit. In this way very weird and beautiful effects can easily be obtained.”
(Lee De Forest’s Autobiography “The Father Of Radio”)
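The pitch control De Forest describes – “changing the capacity or the inductance in the circuits” – follows from the resonant frequency of a tuned LC circuit. A minimal sketch, using illustrative component values rather than De Forest’s own:

```python
import math

def resonant_freq(inductance_h, capacitance_f):
    """Resonant frequency of a tuned circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# 0.1 H with 1 uF resonates at roughly 503 Hz; halving the capacitance
# (e.g. by turning a condenser knob) raises the pitch by a factor of sqrt(2).
f1 = resonant_freq(0.1, 1e-6)    # ~503 Hz
f2 = resonant_freq(0.1, 0.5e-6)  # ~712 Hz
print(round(f1), round(f2))
```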

And from a 1915 news story on a concert held for the National Electric Light Association:

“Not only does de Forest detect with the Audion musical sounds silently sent by wireless from great distances, but he creates the music of a flute, a violin or the singing of a bird by pressing a button. The tone quality and the intensity are regulated by the resistors and by induction coils…You have doubtless heard the peculiar, plaintive notes of the Hawaiian ukulele, produced by the players sliding their fingers along the strings after they have been put in vibration. Now, this same effect, which can be weirdly pleasing when skilfully made, can be obtained with the musical Audion.”

Advert for De Forest wireless equipment

De Forest, the tireless promoter, demonstrated his electronic instrument around the New York area at public events alongside fund raising spectacles of his radio technology. These events were often criticised and ridiculed by his peers and led to a famous trial where De Forest was accused of misleading the public for his own ends:

“De Forest has said in many newspapers and over his signature that it would be possible to transmit human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public … has been persuaded to purchase stock in his company. “
Lee De Forest, August 26, 1873, Council Bluffs, Iowa. Died June 30, 1961

De Forest collaborated with a sceptical Thaddeus Cahill in broadcasting early concerts of the Telharmonium using his radio transmitters (1907); Cahill’s insistence on using the telephone wire network to broadcast his electronic music was a major factor in the demise of the Telharmonium. Vacuum tube technology was to dominate electronic instrument design until transistor-based designs took over in the 1960s. The triode amplifier also freed electronic instruments from having to use the telephone system as a means of amplifying the signal.


Lee De Forest, ‘Father of Radio’ (autobiography).

Sungook Hong, ‘Wireless: From Marconi’s Black-Box to the Audion’ (Transformations: Studies in the History of Science and Technology), 2001.

Mike Adams, ‘Lee de Forest: King of Radio, Television, and Film’, 2012.

Albert Glinsky, ‘Theremin: Ether Music and Espionage’.

Nicolas Collins, Margaret Schedel, Scott Wilson, ‘Electronic Music’.

Arndt Niebisch, ‘Media Parasites in the Early Avant-Garde: On the Abuse of Technology and Communication’, 2012.

Vladimir Gurevich, ‘Electric Relays: Principles and Applications’.

The ‘Staccatone’. Hugo Gernsback & C.J.Fitch. USA, 1923

Hugo Gernsback’s ‘Staccatone’ c 1923
Hugo Gernsback, perhaps better known as the ‘Father of Science Fiction’ (and eponymously celebrated in the ‘Hugo’ science fiction awards), also invented and built an early electronic instrument, the Staccatone, in 1923 with Clyde J. Fitch; it was later developed into one of the first polyphonic instruments, the Pianorad, in 1926. Gernsback was a major figure in the development and popularisation of television, radio and amateur electronics. His multiple, and sometimes shady, businesses included early science fiction publishing, pulp fiction, self-help manuals and DIY electronics magazines, as well as his own science fiction writing.
The Staccatone was conceived as a self-build project for amateur electronics enthusiasts via Gernsback’s ‘Practical Electrics’ magazine. The instrument consisted of a single vacuum tube oscillator controlled by a crude switch-based 16-note ‘keyboard’. The switch-based control gave the notes a staccato attack and decay – hence the name ‘Staccatone’. Gernsback promoted the instrument through his many publications and on his own radio station WJZ New York:
“The musical notes produced by the vacuum tubes in this manner have practically no overtones. For this reason the music produced on the Pianorad is of an exquisite pureness of tone not realised in any other musical instrument. The quality is better than that of a flute and much purer. The sound, however, does not resemble that of any known musical instrument. The notes are quite sharp and distinct, and the Pianorad can be readily distinguished by its music from any other musical instrument in existence.”
Hugo Gernsback
Self-build instructions for the Staccatone from ‘Practical Electrics’ magazine 1924:


Hugo Gernsback: “The Staccatone” Practical Electrics. March 1924. P.248

Holmes, T. (2020). Electronic and experimental music: Technology, music, and culture (Sixth ed.). New York: Routledge.

The ‘Pianorad’, Hugo Gernsback, Clyde.J.Fitch, USA, 1926

Gernsback’s ‘Pianorad’ at the WRNY radio studio, New York, USA in 1926. Image: Radio News, vol. 8, no. 5, November 1926

The Pianorad, designed by Hugo Gernsback and built by Clyde J. Fitch at the Radio News Laboratories in New York, was a development of Gernsback’s Staccatone of 1923. The Pianorad had 25 single-vacuum-tube oscillators, one for each key of its two-octave keyboard, making it the first valve-based electronic instrument to achieve full polyphony*. The sound from the tubes was passed through a rudimentary mechanical filter that removed harmonic distortion, producing virtually pure sine tones. The instrument played sound through a top-mounted speaker or could be connected directly for radio broadcast.

Hugo Gernsback’s ‘Pianorad’, showing the cabinet containing 25 vacuum tubes – one for each note. Image: Radio News, vol. 8, no. 5, November 1926

Theory of the Instrument

The Pianorad has a keyboard like an ordinary piano, and there is a radio vacuum tube for each one of the piano keys. Every time a key is depressed, there is energized a radio-oscillator circuit which gives rise to a pure, flutelike note through the loud-speaker connected to the device. It is possible to connect any number of loud-speakers to the Pianorad if it is desired to flood an auditorium with its tones. Also, by arranging suitable outlets for loud-speakers on different floors or different rooms, the sounds of the Pianorad can be heard all over any large building.

The musical notes produced by the vacuum tubes in this manner have practically no overtones. For this reason the music produced on the Pianorad is of an exquisite pureness of tone not realised in any other musical instrument. The quality is better than that of a flute and much purer. The sound, however, does not resemble that of any known musical instrument. The notes are quite sharp and distinct, and the Pianorad can be readily distinguished by its music from any other musical instrument in existence.

Electric, Not Sound Waves

The loud-speaker arrangement makes it possible for an artist to play the keyboard while the music emerges, perhaps miles away from the Pianorad. It is thus possible for the pianist to play the instrument in absolute silence while the music is produced at a distance. This requires simply that a wire line must connect the output end of the Pianorad instrument with the loud-speaker at some distance away. It is quite feasible for the Pianorad to be played in New York while the music will be heard at the Chicago end, with any number of loudspeakers connected by amplifiers to a long-distance telephone wire line.

A novel idea is the connection of the Pianorad direct to the broadcast-station transmitter. In this case, instead of using a loud-speaker in the studio, the Pianorad is connected electrically to the broadcast transmitter. The artist now plays the Pianorad in the studio in absolute silence. No sound is heard. The radio audience, however, will enjoy the music, although no one in the studio can hear it. In order that the pianist may hear what he is playing, he will wear a set of head receivers attached to an ordinary radio set. The music, therefore, is picked out from the air by the receiver and thus only the artist hears it. In the studio itself, no sound is audible for the Pianorad itself is silent.

Developments Still Continuing

The Pianorad has as yet not entered the commercial stage. The instrument illustrated in this article has 25 keys and therefore, 25 notes. A full 88-note Pianorad has as yet not been constructed, but will be built in a short time. The larger instrument could have been built at once, but it would occupy almost as much space as a piano; and as this amount of room was not then available in the studio of WRNY, for which the first Pianorad was especially constructed, the smaller instrument was built instead.

The Pianorad at WRNY is usually accompanied by piano or violin or both; very pleasing combinations are produced in this manner. At present it uses a single stage of amplification, giving volume enough, in connection with one loud-speaker, to more than fill a fair sized room. By adding several stages of audio-frequency amplification, sufficient volume can be obtained to fill a large church or auditorium.

The Pianorad was first demonstrated publicly Saturday, June 12 at 9 P.M., with a number of brilliant selections played on it by Mr. Ralph Christman; the concert being broadcast over WRNY at The Roosevelt, New York.

The principle embodied in this instrument was first demonstrated in 1915 by Dr. Lee de Forest, inventor of the Audion. At that time Dr. de Forest was able to produce musical tones by means of vacuum tubes, but the radio art at that time had not progressed sufficiently to make possible the Pianorad.

An article by Clyde J. Fitch describing the construction of the Pianorad will appear in the December issue of Radio News.

Each of the twenty-five oscillators had its own independent speaker, mounted in a large loudspeaker horn on top of the keyboard, and the whole ensemble was housed in a cabinet resembling a harmonium. A larger 88-note version was planned but never put into production. The Pianorad was first demonstrated on June 12, 1926 at Gernsback’s own radio station WRNY in New York City, performed by Ralph Christman. The Pianorad continued to be used at the radio station for some time, accompanying piano and violin concerts.

Pianorad’s 25 units designed to eliminate harmonics. Image: Radio News, vol. 8, no. 5, November 1926

*The Telharmonium of the beginning of the 20th century was an earlier polyphonic electronic instrument but, because it generated sound using tone-wheels, it can be considered an electro-acoustic instrument.


Hugo Gernsback: “The ‘Pianorad’ a New Musical Instrument which combines Piano and Radio Principles” Radio News, vol. 8, no. 5, November 1926

The ‘Rhythmicon’ Henry Cowell & Leon Termen. USA, 1930

Henry Cowell and the Rhythmicon

In 1916 the American avant-garde composer Henry Cowell was working with ideas of controlling cross-rhythms and tonal sequences from a keyboard; he wrote several quartet-type pieces that used combinations of rhythms and overtones that were impossible to play without some kind of mechanical control – “un-performable by any known human agency and I thought of them as purely fanciful” (Henry Cowell). In 1930 Cowell introduced his idea to Leon Termen, the inventor of the Theremin, and commissioned him to build a machine capable of transforming harmonic data into rhythmic data and vice versa.

“My part in its invention was to invent the idea that such a rhythmic instrument was a necessity to further rhythmic development, which has reached a limit, more or less, in performance by hand, and needed the application of mechanical aid. That which the instrument was to accomplish and what rhythms it should do and the pitch it should have and the relation between the pitch and rhythms are my ideas. I also conceived that the principle of broken up light playing on a photo-electric cell would be the best means of making it practical. With this idea I went to Theremin who did the rest – he invented the method by which the light would be cut, did the electrical calculations and built the instrument.”

Henry Cowell

“The rhythmic control possible in playing and imparting exactitudes in cross rhythms are bewildering to contemplate and the potentialities of the instrument should be multifarious… Mr. Cowell used his rythmicon to accompany a set of violin movements which he had written for the occasion…. The accompaniment was a strange complexity of rhythmical interweavings and cross currents of a cunning and precision as never before fell on the ears of man and the sound pattern was as uncanny as the motion… The writer believes that the pure genius of Henry Cowell has put forward a principle which will strongly influence the face of all future music.”
Homer Henly, May 20, 1932

The eventual machine was christened the “Rythmicon” or “Polyrhythmophone” and was the first electronic rhythm machine. The 17-key polyphonic keyboard produced a single note repeated in periodic rhythm for as long as a key was held down, the rhythmic content being generated by rotating disks interrupting light beams that triggered photo-electric cells. The 17th key of the keyboard added an extra beat in the middle of each bar. The transposable keyboard was tuned to an unusual pitch based on the rhythmic speed of the sequences, and the basic pitch and tempo could be adjusted by means of levers.

Cowell wrote two works for the Rythmicon, “Rythmicana” and “Music for Violin and Rythmicon” (a computer simulation of this work was reproduced in 1972). Cowell later lost interest in the machine, transferring his attention to ethnic music, and the machine was mothballed.
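Cowell’s pitch-rhythm correspondence – the nth harmonic of a fundamental also beats n times per measure, so pitch ratio and rhythm ratio are the same number – can be sketched as follows. This illustrates the musical idea only, not Theremin’s photo-electric mechanism, and the base frequency is an arbitrary choice.

```python
from fractions import Fraction

BASE_FREQ = 55.0  # fundamental pitch in Hz (arbitrary illustrative choice)

def rhythmicon_voice(n, measures=1):
    """Pitch and onset times (in measures) for the key of the nth harmonic."""
    pitch = BASE_FREQ * n  # nth harmonic: n times the fundamental frequency
    onsets = [Fraction(k, n) for k in range(n * measures)]  # n evenly spaced beats
    return pitch, onsets

# Held together, keys 2 and 3 sound a fifth (3:2) and beat 2-against-3.
for n in range(1, 5):
    pitch, onsets = rhythmicon_voice(n)
    print(n, pitch, [str(o) for o in onsets])
```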

Rhythmicon Discs
After Cowell, the machines were used for psychological research, and one (non-working) example survives at the Smithsonian Institution. The Rhythmicon was rediscovered twenty-five years after its creation by the producer Joe Meek (creator of the innovative hit single ‘Telstar’, 1961), who apparently found it abandoned in a New York pawnbroker’s shop. Meek brought it back to his home studio in London, where it was used on several recordings. This Rhythmicon was used to provide music and sound effects for various movies in the fifties and sixties, including ‘The Rains of Ranchipur’, ‘Battle Beneath the Earth’, Powell and Pressburger’s ‘They’re a Weird Mob’, ‘Dr Strangelove’ and the sixties animated TV series ‘Torchy, The Battery Boy’.

The Rhythmicon was also rumoured to have been used on several sixties and seventies records, including ‘Atom Heart Mother’ by Pink Floyd, ‘The Crazy World of Arthur Brown’ by Arthur Brown and ‘Robot’ by the Tornados. Tangerine Dream also used some sequences from the Rhythmicon on their album ‘Rubycon’.


“Henry Cowell: A record of his activities” Compiled June 1934 by Olive Thompson Cowell.

‘Moog Synthesisers’ Robert Moog. USA, 1964

Robert Moog started working with electronic instruments at the age of nineteen when, with his father, he created his first company, R. A. Moog Co., to manufacture and sell Theremin kits (the ‘Melodia Theremin’, the same design as Leon Termen’s Theremin but with an optional keyboard attachment) and guitar amplifiers from the basement of his family home in Queens, New York. Moog went on to study physics at Queens College, New York in 1957, then electrical engineering at Columbia University, and took a Ph.D. in engineering physics at Cornell University (1965). In 1961 Moog started to produce the first transistorised version of the Theremin, which until then had been based on vacuum tube technology.

In 1963, with a $200 research grant from Columbia University, Moog collaborated with the experimental musician Herbert Deutsch on the design of what was to become the first modular Moog synthesiser.

Herb Deutsch discusses his role in the origin of the Moog Synthesiser.

Herbert A. Deutsch working on the Development of the Moog Synthesiser c 1963

Moog and Deutsch had already been absorbing and experimenting with ideas about transistorised modular synthesisers from the German designer Harald Bode (as well as collaborating with Raymond Scott on instrument design at Manhattan Research Inc.). In September 1964 Moog was invited to exhibit his circuits at the Audio Engineering Society Convention, and shortly afterwards he began to manufacture electronic music synthesisers.

“…At the time I was actually still thinking primarily as a composer and at first we were probably more interested in the potential expansion of the musical aural universe than we were of its effect upon the broader musical community. In fact when Bob questioned me on whether the instrument should have a regular keyboard (Vladimir Ussachevsky had suggested to him that it should not) I told Bob “I think a keyboard is a good idea, after all, having a piano did not stop Schoenberg from developing twelve-tone music and putting a keyboard on the synthesizer would certainly make it a more sale-able product!!”
1Interview with H.A.Deutsch, October 2003, and February 2004: http://moogarchives.com/ivherb01.htm

Early version of the Moog Modular, 1964

The first instrument, the Moog Modular Synthesiser, produced in 1964, became the first widely used electronic music synthesiser and the first to make the crossover from the avant-garde to popular music. The release in 1968 of Wendy Carlos’s album “Switched-On Bach”, recorded entirely with Moog synthesisers (and one of the highest-selling classical recordings of its era), brought the Moog to public attention and changed conceptions of electronic music and synthesisers in general. The Beatles bought one, as did Mick Jagger, who bought a hugely expensive modular Moog in 1967 (it was used only once, as a prop in Nicolas Roeg’s film ‘Performance’, and was later sold to the German experimental rock group Tangerine Dream). Over the next decade Moog created numerous keyboard synthesisers, modular components (many licensed from designs by Harald Bode), a vocoder (another Bode design), bass pedals, guitar synthesisers and so on.

Early Moog Modular from 1964 at the Musée de la musique, Paris, France

Moog’s designs set a standard for future commercial electronic musical instruments, with innovations such as the one-volt-per-octave CV control, which became an industry standard, and pulse triggering signals for connecting and synchronising multiple components and modules.
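The one-volt-per-octave convention is exponential: each additional volt doubles the oscillator frequency, so equal voltage steps give equal musical intervals. A minimal sketch (the 0 V reference pitch is an arbitrary assumption; real hardware varies):

```python
def cv_to_freq(volts, base_freq=261.63):
    """Map a control voltage to a frequency under the 1 V/octave standard.

    base_freq is the pitch at 0 V - here middle C, an arbitrary choice.
    """
    return base_freq * 2 ** volts

print(cv_to_freq(1.0))     # one octave up: 523.26 Hz
print(cv_to_freq(1 / 12))  # one semitone up (1/12 V)
```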

Despite these innovations, the Moog company did not survive the decade: larger companies such as ARP and Roland developed Moog’s prototypes into more sophisticated and cost-effective instruments. Moog sold the company to Norlin in the 1970s, whose mismanagement led to Moog’s resignation, and Moog Music finally closed down in 1993. Robert Moog re-acquired the rights to the Moog company name in 2002 and once again began to produce updated versions of the Moog synthesiser range. He died on August 21, 2005.

Moog Production Instruments 1963-2013
Date Model
1963–1980 Moog modular synthesiser
1970–1981 Minimoog
1974–1979 Moog Satellite
1974–1979 Moog Sonic Six
1975–1976 Minitmoog
1975–1979 Micromoog
1975–1980 Polymoog
1976–1983 Moog Taurus bass pedal
1978–1981 Multimoog
1979–1984 Moog Prodigy
1980 Moog Liberation
1980 Moog Opus-3
1981 Moog Concertmate MG-1
1981 Moog Rogue
1981 Moog Source
1982–1985 Memorymoog
Moog Company relaunch
1998–present Moogerfooger
2002–present Minimoog Voyager
2006–present Moog Little Phatty
2010 Slim Phatty
2011 Taurus 3 bass pedal
2012 Minitaur
2013 Sub Phatty



  • 1
    Interview with H.A.Deutsch, October 2003, and February 2004: http://moogarchives.com/ivherb01.htm

Further reading:



Bob Moog Foundation

Trevor Pinch, Frank Trocco, Analog Days: The Invention and Impact of the Moog Synthesizer, Harvard University Press, 2004.