The Pattern Playback was not a musical instrument as such but an early hardware device for synthesising and analysing speech, designed and built by Dr. Franklin S. Cooper and his colleagues, including John M. Borst and Caryl Haskins, at Haskins Laboratories in the late 1940s and completed in 1950.
Diagram showing the function of the Pattern Playback machine
The device converted a picture or ‘spectrogram’ of a sound back into sound. The Pattern Playback functioned in a very similar way to the Russian ANS Synthesiser, using a photo-electrical system: a mercury arc-light was projected through a rotating glass disc printed with fifty harmonics of a fundamental frequency, generating a range of tones. The light was then projected through an acetate ‘black and transparent’ spectrogram image, which passed only the portions of light carrying frequencies corresponding to the spectrogram. The resulting ‘filtered’ light hit a photo-voltaic cell, which generated the final audible sound.
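The optical process amounts to additive synthesis: fifty continuously sounding harmonics, each gated by the transparency of the image at its position. A rough sketch in Python – the fundamental, sample rate and frame length here are arbitrary illustrative choices, not the machine’s actual values:

```python
import math

def pattern_playback(spectrogram, f0=120.0, sr=8000, frame_len=160):
    """Resynthesise sound from a 'spectrogram': one row per harmonic of
    f0, one column per time frame; a value of 1.0 means the acetate is
    fully transparent there, so that harmonic sounds at full level."""
    n_frames = len(spectrogram[0])
    n = n_frames * frame_len
    out = [0.0] * n
    for h, row in enumerate(spectrogram, start=1):
        for i in range(n):
            gain = row[i // frame_len]
            if gain:
                out[i] += gain * math.sin(2 * math.pi * h * f0 * i / sr)
    peak = max(abs(x) for x in out) or 1.0
    return [x / peak for x in out]          # normalise to avoid clipping

# Toy image: 50 harmonics x 10 frames, with only two harmonics 'painted'
spec = [[0.0] * 10 for _ in range(50)]
spec[1] = [1.0] * 10             # 2nd harmonic sounds throughout
spec[2] = [0.0] * 5 + [0.5] * 5  # 3rd harmonic enters halfway through
audio = pattern_playback(spec)
```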
The Pattern Playback machine
Several versions of the device were built at Haskins Laboratories and used up until 1976. The Pattern Playback now resides in the Museum at Haskins Laboratories in New Haven, Connecticut.
The Baldwin organ was an electronic organ, many models of which have been manufactured by the Baldwin Piano & Organ Co. since 1946. The original models were designed by Dr Winston E. Kock, who became the company’s director of electronic research after returning from his studies at the Heinrich-Hertz-Institute, Berlin, in 1936. The organ was a development of Kock’s Berlin research with the GrosstonOrgel, using the same neon gas-discharge tubes to create a stable, affordable polyphonic instrument. The Baldwin organ was based on an early type of subtractive synthesis: the neon discharge tubes generated a rough sawtooth wave rich in harmonics, which was then shaped by formant filters into the desired tone.
Tone modifying circuits of the Baldwin organ
Another innovative aspect of the Baldwin Organ was the touch sensitive keyboard designed to create a realistic variable note attack similar to a pipe organ. As the key was depressed, a curved metal strip progressively shorted out a carbon resistance element to provide a gradual rather than sudden attack (and decay) to the sound. This feature was unique at that time, and it endowed the Baldwin instrument with an unusually elegant sound which captivated many musicians of the day.
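The tone-production chain described above – a harmonically rich sawtooth, a fixed formant filter, and a gradual resistance-controlled attack – can be sketched in a few lines of Python. The filter shape, formant frequency and attack time below are purely illustrative stand-ins, not Baldwin’s actual circuit values:

```python
import math

def baldwin_tone(freq, sr=8000, dur=0.5, formant_hz=800.0, attack=0.06):
    """Subtractive-style tone: a sawtooth (harmonic weights 1/h) shaped
    by a single resonant 'formant' peak, with a gradual attack standing
    in for the carbon-resistance touch keyboard."""
    n = int(sr * dur)
    n_harm = int((sr / 2) // freq)          # stay below Nyquist
    out = []
    for i in range(n):
        t = i / sr
        s = 0.0
        for h in range(1, n_harm + 1):
            f = h * freq
            # Harmonics near the formant are passed; others attenuated
            resonance = 1.0 / (1.0 + ((f - formant_hz) / formant_hz) ** 2)
            s += (1.0 / h) * resonance * math.sin(2 * math.pi * f * t)
        env = min(1.0, t / attack)          # gradual, not sudden, attack
        out.append(env * s)
    return out

tone = baldwin_tone(220.0)
```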
“How did it sound? I have played Baldwin organs at a time when they were still marketed and in my opinion, for what it is worth, they were pretty good in relative terms. That is to say, they sounded significantly better on the whole than the general run of analogue organs by other manufacturers, and they were only beaten by a few custom built instruments in which cost was not a factor. It would not be true to say they sounded as good as a good digital organ today, but they compared favourably with the early Allen digitals in the 1970s. Nor, of course, did they sound indistinguishable from a pipe organ, but that is true for all pipeless organs. To my ears they also sounded much better and more natural than the cloying tone of the more expensive Compton Electrone which, like the Hammond, also relied on attempts at additive synthesis with insufficient numbers of harmonics.”
From ‘Winston Kock and the Baldwin Organ’ by Colin Pykett
Electronic Tone Generator of the early model Baldwin Organ showing neon gas-discharge tube oscillators.
Kock’s 1938 Patent of the Baldwin organ
Winston Kock playing his early experimental electronic instrument 1932
Winston E. Kock Biographical Details:
Winston Kock was born into a German-American family in 1909 in Cincinnati, Ohio. Despite being a gifted musician he decided to study electrical engineering at the University of Cincinnati and, while still in his twenties, designed a highly innovative, fully electronic organ for his master’s degree.
The major problem of instrument design during the 1920s and ’30s was the stability and cost of analogue oscillators. Most commercial organ ventures had failed for this reason; a good example being Givelet & Coupleux’s huge valve organ of 1930. It was for this reason that Laurens Hammond (and many others) settled on tone-wheel technology for his Hammond organs, despite its inferior audio fidelity.
Kock had decided early on to investigate the possibility of producing a commercially viable instrument that was able to produce the complexity of tone possible from vacuum tubes. With this in mind, Kock hit upon the idea of using much cheaper neon ‘gas discharge’ tubes as oscillators stabilised with resonant circuits. This allowed him to design an affordable, stable and versatile organ.
Kock’s Sonar device during WW2
In the 1930s Kock, fluent in German, went to Berlin on an exchange fellowship to conduct research for a doctorate under Professor K. W. Wagner at the Heinrich Hertz Institute (curiously, the exchange was with Sigismund von Braun, Wernher von Braun’s eldest brother; Kock was to collaborate with Wernher twenty-five years later at NASA). At the time Berlin, and specifically the Heinrich Hertz Institute, was the global centre of electronic music research. Fellow students and professors included Jörg Mager, Oskar Vierling, Fritz Sennheiser, Bruno Helberger, Harald Bode, Friedrich Trautwein, Oskar Sala and Wolja Saraga, amongst others. Kock’s study was based around two areas: improving the understanding of glow-discharge (neon) oscillators, and developing realistic organ tones using specially designed filter circuits.
Kock worked closely with Oskar Vierling for his PhD and co-designed the GrosstonOrgel in 1934, but, disillusioned by the appropriation of his work by the newly ascendant Nazi party, he decided to leave for India, sponsored by the Baldwin Organ Company, arriving at the Indian Institute of Music in Bangalore in 1935.
Returning from India in 1936, Dr Kock became Baldwin’s Director of Research while still in his mid-twenties, and with J F Jordan designed many aspects of their first electronic organ system which was patented in 1941.
Winston E Kock (L) as the first Director of Engineering Research at NASA
When the USA entered the Second World War Kock moved to Bell Telephone Laboratories, where he was involved in radar research, specifically microwave antennas. In the mid-1950s he took a senior position at the Bendix Corporation, which was active in underwater defence technology. He moved again to become NASA’s first Director of Engineering Research, returning to Bendix in 1966, where he remained until 1971, when he became Acting Director of the Hermann Schneider Laboratory of the University of Cincinnati. Kock died in Cincinnati in 1982.
Winston Kock was a prolific writer of scientific books, but he also wrote novels under the pen name Wayne Kirk.
Acoustic lenses developed by Winston Kock at the Bell Labs in the 1950s
Hugh Davies. The New Grove Dictionary of Music and Musicians
The Mastersonic Organ was an improved tone-wheel organ designed to produce more accurate pipe-organ sounds. The designers, John Goodell and Ellsworth Swedien, discovered that if they shaped the tone-wheel ‘pickups’ they could induce tones with different ‘natural’ harmonic content – rather than attempting to create a pure sine wave and artificially colouring it, as in the Hammond organ. To achieve this the Mastersonic had an individually shaped magnet for each tone-wheel sound: a “string” magnet, a “flute” magnet, a “diapason” magnet, and so on.
Mastersonic tone generation (Alan Conway Ashton, ‘Electronics, Music and Computers’, 1971)
“…There were twelve shafts with seven pitch wheels each which rotated near the irregularly shaped magnets wound with coils. Each of the pitch wheels contained twice as many rectangular teeth as the preceding one, so seven octaves were produced per shaft. Several differently shaped poles were dispersed radially around each wheel.”
Alan Conway Ashton, ‘Electronics, Music and Computers’
Each tone wheel was shielded against magnetic interference from the others, adding to the bulk and complexity of the instrument. The instrument was controlled by a special seven-octave keyboard designed to simulate attack envelopes. The resulting sound was indeed a much more accurate pipe-organ sound, but at the expense of size: the Mastersonic was a huge, complex and expensive machine, and few were built or sold.
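The difference from the Hammond approach can be illustrated with a toy model: each shaped magnet pole fixes a harmonic recipe at the source, so no filtering or drawbar mixing is needed afterwards. The recipes below are invented for illustration, not measurements of the actual magnets:

```python
import math

# Hypothetical harmonic recipes implied by differently shaped magnet
# poles: one recipe per stop, induced directly in the pickup coil.
RECIPES = {
    "flute":    [1.0, 0.1],                           # nearly pure
    "diapason": [1.0, 0.5, 0.25, 0.1],
    "string":   [1.0, 0.8, 0.6, 0.5, 0.4, 0.3],       # harmonically rich
}

def tone_wheel(stop, freq, sr=8000, dur=0.25):
    """One rotating wheel near a shaped magnet: the pole shape fixes
    the harmonic content of the induced voltage at source."""
    weights = RECIPES[stop]
    n = int(sr * dur)
    return [sum(w * math.sin(2 * math.pi * (h + 1) * freq * i / sr)
                for h, w in enumerate(weights))
            for i in range(n)]

string_c3 = tone_wheel("string", 130.8)
```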
Curtis Roads, ‘Microsound’, MIT Press, 2001
Alan Conway Ashton, ‘Electronics, Music and Computers’, December 1971, UTEC-CSc-71-117
The Allen Computer Organ was one of the first commercial digital instruments, developed by Rockwell International (a US military technology company) and built by the Allen Organ Co. in 1971. The organ used an early form of digital sampling, allowing the user to choose pre-set voices or edit and store sounds using an IBM-style punch-card system.
The Rockwell/Allen Computer Organ engineering team with a prototype model.
The sound itself was generated from MOS (metal-oxide-semiconductor) boards. Each MOS board contained 22 LSI (large-scale integration) circuits – miniaturised photo-etched silicon chips containing thousands of transistors, based on technology developed by Rockwell International for the NASA space missions of the early 1970s – giving a total of 48,000 transistors; unheard-of power for the 1970s.
Publicity photograph demonstrating the punch-card reader
In 1967 the composer and musician Richard Moore began a collaboration with Max Mathews at Bell Labs exploring performance and expression in computer music in a ‘musician-friendly’ environment. The result was a digital–analogue hybrid system called GROOVE (Generated Realtime Operations On Voltage-controlled Equipment), in which a musician played an external analogue synthesiser while a computer monitored and stored the performer’s manipulations of the interface: playing notes, turning knobs and so on. The objective was to build a real-time musical performance tool by concentrating the computer’s limited power on storing the musical parameters of an external device rather than on generating the sound itself:
“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.” Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University
Richard Moore with the Groove System
The system, written in assembler, ran only on the Honeywell DDP-224 computer that Bell had acquired specifically for sound research. The addition of a disk storage device meant it was also possible to create libraries of programming routines, so users could create their own customised logic patterns for automation or composition. GROOVE allowed users to continually adjust and ‘mix’ different actions in real time, review sections or an entire piece, and then re-run the composition from stored data. Music by Bach and Bartók was performed on the GROOVE at its first demonstration, at a conference on Music and Technology organised by UNESCO in Stockholm in 1970. The participants also included several leading figures in electronic music, such as Pierre Schaeffer and Jean-Claude Risset.
“Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.” Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
The GROOVE System at the Bell Laboratories circa 1970
The GROOVE system consisted of:
Fourteen DAC control lines scanned every 100th of a second (twelve 8-bit and two 12-bit);
An ADC coupled to a multiplexer for the conversion of seven voltage signals: four generated by knobs and three by the three-dimensional movement of a joystick controller;
Two speakers for audio output;
A special keyboard, interfaced with the knobs, to generate on/off signals;
A teletype keyboard for data input;
A CDC-9432 disk storage unit;
A tape recorder for data backup.
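The core idea – storing and replaying the performer’s *control* gestures at the scan rate, while an external synthesiser makes the actual sound – can be sketched as follows. All names here are illustrative; this is a model of the concept, not of the original DDP-224 code:

```python
SCAN_RATE = 100  # control lines scanned every 100th of a second

def record(gesture, duration_s):
    """Sample a performer's gesture (a function of time returning the
    14 control-line values) into a stored control track."""
    frames = []
    for tick in range(int(duration_s * SCAN_RATE)):
        t = tick / SCAN_RATE
        frames.append(gesture(t))
    return frames

def replay(frames, synth):
    """Re-run a stored performance by feeding each frame back to a
    (stand-in) synthesiser's control lines."""
    return [synth(frame) for frame in frames]

# A toy gesture: one knob swept from 0 to 1 over two seconds
track = record(lambda t: [t / 2.0] * 14, 2.0)
# A stand-in 'synthesiser' that just reports its first control voltage
output = replay(track, lambda frame: frame[0])
```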
Antecedents of GROOVE included similar projects such as PIPER, developed by James Gabura and Gustav Ciamaga at the University of Toronto, and a system proposed but never completed by Lejaren Hiller and James Beauchamp at the University of Illinois. GROOVE was, however, the first widely used computer music system that allowed composers and performers to work in real time. The GROOVE project ended in 1980, due both to the high cost of the system – some $20,000 – and to advances in affordable computing power that allowed synthesisers and performance systems to work together seamlessly.
Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, 1997.
F. Richard Moore, Elements of Computer Music, PTR Prentice Hall, 1990.
Max Mathews was a pioneering, central figure in computer music. After completing his engineering studies at the California Institute of Technology and the Massachusetts Institute of Technology in 1954, Mathews went on to develop ‘Music I’ at Bell Labs: the first of the ‘MUSIC’ family of computer audio programmes, and the first widely used program for audio synthesis and composition. Mathews spent the rest of his career developing the ‘MUSIC N’ series of programs and became a key figure in digital audio, synthesis, interaction and performance. ‘MUSIC N’ marked the first time a computer had been used to investigate audio synthesis (computers had been used to generate sound and music with the CSIR Mk1 and Ferranti Mk1 as early as 1951, but as a by-product of machine testing rather than for specifically musical objectives), and it set the blueprint for computer audio synthesis that remains in use to this day in programmes such as Csound, MaxMSP and SuperCollider, and in graphical modular programmes like Reaktor.
IBM 704 System
“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
Max Mathews, “Horizons in Computer Music”, March 8–9, 1997, Indiana University
MUSIC I 1957
Music I was written in assembler/machine code to work within the technical limitations of the IBM 704 computer. The audio output was a simple monophonic triangle-wave tone with no attack or decay control; it was only possible to set the amplitude, frequency and duration of each sound. The output was stored on magnetic tape and then converted by a DAC to make it audible (Bell Laboratories in those years was the only place in the United States to have a DAC: a 12-bit valve-technology converter developed by EPSCO). Mathews says:
“In fact, we are the only ones in the world at the time who had the right kind of a digital-to-analog converter hooked up to a digital tape transport that would play a computer tape. So we had a monopoly, if you will, on this process.”
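The entire ‘feature set’ of Music I – a single triangle-wave voice with only amplitude, frequency and duration per note – is small enough to sketch directly. An illustrative model, not Mathews’s 704 assembler:

```python
def music1_note(freq, amp, dur, sr=8000):
    """One monophonic triangle-wave note: the only controllable
    parameters are amplitude, frequency and duration -- no envelope."""
    samples = []
    for i in range(int(dur * sr)):
        phase = (freq * i / sr) % 1.0
        # Triangle: ramp up over half the cycle, down over the other half
        tri = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
        samples.append(amp * tri)
    return samples

# A two-note 'composition': notes simply follow one another
piece = music1_note(440.0, 0.8, 0.25) + music1_note(660.0, 0.8, 0.25)
```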
In 1957 Mathews and his colleague Newman Guttman used Music I to create a synthesised 17-second piece titled ‘The Silver Scale’ (often credited as the first proper piece of computer-generated music), followed later in the same year by a one-minute piece called ‘Pitch Variations’, both of which were released on an anthology called ‘Music From Mathematics’ issued by Bell Labs in 1962.
Mathews and the IBM 7094
MUSIC II 1958
MUSIC II was an updated, more versatile and functional version of Music I. It still used assembler, but targeted the much faster, transistor-based (rather than valve-based) IBM 7094 series. Music II had four-voice polyphony and was capable of generating sixteen wave shapes via the introduction of a wavetable oscillator.
MUSIC III 1960
“MUSIC 3 was my big breakthrough, because it was what was called a block diagram compiler, so that we could have little blocks of code that could do various things. One was a generalized oscillator … other blocks were filters, and mixers, and noise generators.” Max Mathews 2011 interview with Geeta Dayal, Frieze.
The introduction of unit generators (UGs) in MUSIC III was an evolutionary leap in music computing, as proved by the fact that almost all current programmes use the UG concept in some form or other. A unit generator is essentially a pre-built discrete function within the program – oscillators, filters, envelope shapers and so on – allowing the composer to flexibly connect multiple UGs together to generate a specific sound. A separate ‘score’ stage was added, where sounds could be arranged chronologically in a musical fashion. Each event was assigned to an instrument and consisted of a series of values for the unit generators’ various parameters (frequency, amplitude, duration, cutoff frequency, etc.). Each unit generator and each note event was entered on a separate punch-card which, while still complex and archaic by today’s standards, was the first time a computer program used a paradigm familiar to composers.
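The unit-generator idea can be illustrated with Python generators standing in for UG blocks: an oscillator patched into an envelope, driven by note events from a separate score. This is a conceptual sketch of the paradigm, not MUSIC III itself, and the UG names are illustrative:

```python
import math

def oscillator(freq, sr):
    """UG: an endless sine-wave generator."""
    i = 0
    while True:
        yield math.sin(2 * math.pi * freq * i / sr)
        i += 1

def envelope(source, amp, dur_samples):
    """UG: a linear decay applied to another unit generator's output."""
    for i in range(dur_samples):
        yield amp * (1 - i / dur_samples) * next(source)

def play_score(score, sr=8000):
    """Score: a list of (freq, amp, dur_seconds) note events, each
    rendered through the same oscillator->envelope patch."""
    out = []
    for freq, amp, dur in score:
        patch = envelope(oscillator(freq, sr), amp, int(dur * sr))
        out.extend(patch)
    return out

samples = play_score([(440.0, 1.0, 0.1), (330.0, 0.5, 0.2)])
```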
“The crucial thing here is that I didn’t try to define the timbre and the instrument. I just gave the musician a tool bag of what I call unit generators, and he could connect them together to make instruments, that would make beautiful music timbres. I also had a way of writing a musical score in a computer file, so that you could, say, play a note at a given pitch at a given moment of time, and make it last for two and a half seconds, and you could make another note and generate rhythm patterns. This sort of caught on, and a whole bunch of the programmes in the United States were developed from that. Princeton had a programme called Music 4B, that was developed from my MUSIC 4 programme. And (the MIT professor) Barry Vercoe came to Princeton. At that time, IBM changed computers from the old 7094 to the IBM 360 computers, so Barry rewrote the MUSIC programme for the 360, which was no small job in those days. You had to write it in machine language.” Max Mathews 2011 interview with Geeta Dayal, Frieze.
Max Mathews and Joan Miller at Bell labs
MUSIC IV was the result of a collaboration between Max Mathews and Joan Miller, completed in 1963, and was a more complete version of the MUSIC III system using a modified, macro-enabled version of the assembler language. These programming changes meant that MUSIC IV would only run on the Bell Labs IBM 7094.
“Music IV was simply a response to a change in the language and the computer. It had some technical advantages from a computer programming standpoint. It made heavy use of a macro assembly program which existed at the time.”
Max Mathews 1980
MUSIC IVB, IVBF and IVF
Due to the lack of portability of the MUSIC IV system, other versions were created independently of Mathews and the Bell Labs team: MUSIC IVB at Princeton and MUSIC IVBF at the Argonne Laboratories. These versions were written in FORTRAN rather than assembler.
MUSIC V was probably the most popular of the MUSIC N series from Bell Labs. As with the MUSIC IVB/F versions, Mathews abandoned assembler and wrote MUSIC V in FORTRAN, specifically for the IBM 360 series; this meant that the programme was faster, more stable and could run on any IBM 360 machine outside Bell Laboratories. The data-entry procedure was simplified in both the orchestra and the score sections. One of the most interesting new features was the definition of modules that allowed the user to import analogue sounds into MUSIC V. Mathews persuaded Bell Labs not to copyright the software, making MUSIC V probably one of the first open-source programmes and ensuring an adoption and longevity that leads directly to today’s Csound.
“… The last programme I wrote, MUSIC 5, came out in 1967. That was my last programme, because I wrote it in FORTRAN. FORTRAN is still alive today, it’s still in very good health, so you can recompile it for the new generation of computers. Vercoe wrote it for the 360, and then when the 360 computers died, he rewrote another programme called MUSIC 11 for the PDP-11, and when that died he got smart, and he wrote a programme in the C language called CSound. That again is a compiler language and it’s still a living language; in fact, it’s the dominant language today. So he didn’t have to write any more programmes.” Max Mathews 2011 interview with Geeta Dayal, Frieze.
MUSIC V marked the end of Mathews’s involvement in the MUSIC N series, but established it as the parent of all future music programmes. Because of his experience with the real-time limitations of computer music, Mathews became interested in developing ideas for performance-based computer music, such as the GROOVE system (with Richard Moore, in 1970) and the ‘Radio Baton’ (with Tom Oberheim, in 1985).
The Oscillon was a one-off vacuum-tube instrument created by Dr. W. E. Danforth to play the wind-instrument parts for his local amateur Swarthmore Symphony Orchestra. The instrument was played by sliding a finger over the metal box to produce French horn or bass clarinet tones from the loudspeaker:
When he is not experimenting on cosmic rays, high-haired Director William Francis Gray Swann of Franklin Institute’s Bartol Research Foundation, plays a cello. Young William Edgar Danforth, his assistant, plays a cello too. Both are mainstays of the Swarthmore (Pa.) Symphony Orchestra, a volunteer organization of about 40 men and women who play good music free. Because nobody in the orchestra can handle a French horn or a bass clarinet, Drs. Swann and Danforth built an electrical “oscillion” so ingenious that it can be made to sound like either, so simple that a child can master it. Last week at a Swarthmore concert the oscillion made its world debut, playing the long clarinet passages in Cesar Franck’s D Minor Symphony without a mishap. Listeners thought the oscillion lacked color, was a little twangier in tone, otherwise indistinguishable from the woodwind it replaced.
The Danforth & Swann oscillion is a simple-looking oblong wooden box with an electrical circuit inside. Current flows through a resistance, is stored up in a condenser, spills into a neon tube, becomes a series of electrical “pulses.” A loud speaker translates the pulses into sound.
To play music the oscillionist presses down on a keyboard and changes the resistance. This alters the frequency, thereby the pitch. As now constructed the oscillion has a range of five octaves which can easily be increased to eight. Inventors Danforth & Swann deplore the oscillion’s higher ranges, expect it will be most useful pinch-hitting for bass clarinet, bassoon, tuba and string bass.”
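The circuit described is a classic neon relaxation oscillator: the condenser charges through the resistance until the tube strikes and dumps it, so the pulse rate, and hence the pitch, follows directly from the resistance. A sketch with illustrative component values, not Danforth & Swann’s actual ones:

```python
import math

def neon_freq(R_ohms, C_farads, v_supply=180.0,
              v_strike=90.0, v_extinguish=60.0):
    """Pulse rate of an RC/neon relaxation oscillator: the condenser
    charges from the tube's extinction voltage up to its striking
    voltage, then discharges, giving one pulse per cycle."""
    period = R_ohms * C_farads * math.log(
        (v_supply - v_extinguish) / (v_supply - v_strike))
    return 1.0 / period

# Pressing a key shortens the resistance, raising the pitch:
low = neon_freq(1_000_000, 0.01e-6)   # 1 megohm
high = neon_freq(250_000, 0.01e-6)    # 250 kilohm: a quarter of the R
```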
The Chamberlin was an early precursor of the modern digital sampler, using a complex mechanism that stored analogue audio samples on strips of audio tape – one tape for each key. When a key was pressed the tape strip played forward; when it was released, the playback head returned to the beginning of the tape. The note had a limited length: eight seconds on most models. The instrument was designed as an ‘amusing’ novelty instrument for domestic use but later found favour with rock musicians in the sixties and seventies.
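That key mechanism – tape advancing only while the key is held, a hard eight-second ceiling, and a spring return on release – can be modelled in a few lines. A toy sketch with invented sample data, not a description of the actual transport:

```python
TAPE_SECONDS = 8.0  # note length limit on most models

class TapeKey:
    """One key of the instrument: one tape strip, one playback head."""

    def __init__(self, sample):
        self.sample = sample       # the strip's recording, as samples
        self.pos = 0               # playback head position

    def hold(self, seconds, sr=8000):
        """Key held down: the tape advances (silence once it runs out,
        since the original did not loop)."""
        out = []
        limit = int(TAPE_SECONDS * sr)
        for _ in range(int(seconds * sr)):
            if self.pos < min(limit, len(self.sample)):
                out.append(self.sample[self.pos])
                self.pos += 1
            else:
                out.append(0.0)
        return out

    def release(self):
        """The spring returns the head to the start of the strip."""
        self.pos = 0

key = TapeKey(sample=[0.5] * 80000)  # a 10 s recording at 8 kHz
note = key.hold(9.0)                 # held past the 8 s tape limit
key.release()
```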
The first Chamberlin Model200
All the original sounds were recordings of the Lawrence Welk Orchestra made by Harry Chamberlin at his home in California. The recording technique produced a clean, unaffected sound, but with a heavy vibrato added by the musicians. The full set of sounds that came with the Chamberlin was:
Brass: Alto Sax, Tenor Sax, Trombone, Trumpet, French Horn, Do Wah Trombone, Slur Trombone and Muted Trumpet.
Wind: Flute, Oboe and Bass Clarinet.
Voice: Male Voice (solo) and Female Voice (solo).
Strings: 3 violins, Cello and Pizzicato violins.
Plucked strings: Slur Guitar, Banjo, Steel Guitar, Harp solo, Harp Roll, Harp 7th Arpeggio (harp sounds were not available to the public), Guitar and Mandolin.
Effects: Dixieland Band Phrases and Sound Effects.
In 1962 two Chamberlins were taken to Great Britain where they were used as the basis for the design for the Mellotron keyboard:
The Chamberlin was invented in the US in 1946 by Harry Chamberlin, who had the idea (allegedly) when setting up his portable tape recorder to record himself playing his home organ. It is rumoured that it occurred to him that if he could record the sound of a real instrument, he could make a keyboard instrument that could replay the sound of real instruments – and thus the Chamberlin was born. Chamberlin’s idea was ‘simple’: put a miniature tape playback unit underneath each key so that when a note was played, a tape of ‘real’ instruments would be played. At the time, the concept was totally unique.
In the ’50s, at least 100 Chamberlins were produced and to promote his instrument, Harry teamed up with a guy called Bill Fransen who was (allegedly) Harry’s window cleaner. Fransen was (allegedly) totally fascinated by this unique invention and subsequently became Chamberlin’s main (and only) salesman. However, there were terrible reliability problems with the Chamberlin and it had a very high (it is said 40%) failure rate with the primitive tape mechanism which resulted in tapes getting mangled.
Fransen felt that Chamberlin would never be able to fix these problems alone and so, unknown to Chamberlin (allegedly), Fransen brought some Chamberlins to the UK in the early ’60s to seek finance and a development partner. He showed the Chamberlin to a tape head manufacturer, Bradmatics, in the Midlands and the Bradley brothers (Frank, Leslie and Norman who owned Bradmatics) were (allegedly) very impressed with the invention and (allegedly) agreed to refine the design and produce them for Fransen but…Under the mistaken impression that the design was actually Fransen’s (allegedly)!
A new company, Mellotronics, was set up in the UK to manufacture and market this innovative new instrument and work got underway with the Bradley brothers (allegedly) unaware that they were basically copying and ripping off someone else’s idea! Of course, it wasn’t long before Harry Chamberlin got to hear of this and he too went to the UK to meet with the Bradley brothers. After some acrimonious discussions, the two parties settled, with Harry selling the technology to the Bradleys. Mellotronics continued to develop their ‘Mellotron’ whilst Harry returned to the US, where he continued to make his Chamberlins with his son, Richard, in a small ‘factory’ behind his garage and, later, a proper factory in Ontario, a small suburb of Los Angeles. In total, they made a little over 700 units right through until 1981. Harry died shortly afterwards.
But whatever happened in those early meetings almost 40 years ago is inconsequential – the fact of the matter is that the two instruments are almost indistinguishable from each other. Each key has a playback head underneath it and each time a key is pressed, a length of tape passes over it that contains a recording of a ‘real’ instrument. The tape is of a finite length lasting about eight seconds and a spring returns it to its start position when the note is finished. As you can see from the photograph above though, the Chamberlin is smaller (although some mammoth dual-manual Chamberlins were also produced!).
Many claim that the Chamberlin had a better sound – clearer and more ‘direct’ …. which is strange because the Mellotron was (allegedly) better engineered than the Chamberlin. But there is a lot of confusion between the two instruments not helped by the fact that some Chamberlin tapes were used on the Mellotron and vice versa…. so even though the two companies were in direct competition with each other, they shared their sounds….. weird!
It also seems that some users were also confused and credited a ‘Mellotron’ on their records when in fact it might well have been a Chamberlin that they used (allegedly). However, given the similarities between the two, this confusion is understandable and it’s a tribute to Mellotronics’ marketing that they got the upper hand on the original design.
To be honest, the whole story is shrouded in hearsay and music history mythology and we may never know the truth (especially now that the original people involved are sadly no longer with us) but regardless of this, the Bradley brothers were obviously more successful with their marketing of the idea than Chamberlin himself. Although it was originally aimed at the home organ market with cheesy rhythm loops and silly sound effects, the Mellotron went on to become a legend in the history of modern music technology and the mere mention of its name can invoke dewy eyed nostalgia amongst some people. On the other hand, however, few people have even heard of the Chamberlin which is sad because Harry Chamberlin’s unique invention preceded the Mellotron by some fifteen years or more and by rights, it is the Chamberlin that deserves the title of “the world’s first sampler”.
Nostalgia has a lovely Chamberlin string sound that captures the original Chamberlin character quite authentically. Unlike the original, though, the sound is looped but, like the original, it has the same keyboard range (G2-F5) and is not velocity sensitive.
Created in 1949, the ‘Rhythmate’ was one of the first electronic drum machines ever produced. The instrument was designed and built (probably only ten machines were ever made) by Harry Chamberlin in Upland, California. With the success of the Chamberlin keyboards in the 1960s, Harry Chamberlin updated the drum machine as the Rhythmate models 25/35/45, produced from 1960 to 1969, with around 100 units sold.
Control panel of the Chamberlin Rhythmate, 1960s model
The Rhythmate was a tape-loop-based drum machine designed to accompany an organ player. The instrument had 14 tape loops with a sliding head that allowed playback of different tracks on each piece of tape, or a blend between them. It had a volume and a pitch/speed control, and also a separate amplifier with bass, treble and volume controls and an input jack for a guitar, microphone or other instrument. The tape loops were recordings of real acoustic jazz drum kits playing beats in different styles, with additions on some tracks such as bongos, claves, castanets and so on. The Rhythmate also had a built-in amplifier and 12″ speaker.
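The sliding head can be modelled as a crossfade between adjacent loop tracks: parked on one track it plays that rhythm alone, and between two tracks it blends them. The linear crossfade law and the loop data below are assumptions for illustration:

```python
def rhythmate_output(tracks, head_pos, n_samples):
    """tracks: drum loops (lists of samples, repeated endlessly).
    head_pos: fractional track index; e.g. 0.5 sits halfway between
    track 0 and track 1, blending the two rhythms."""
    lo = int(head_pos)
    hi = min(lo + 1, len(tracks) - 1)
    mix = head_pos - lo            # 0.0 = all 'lo', 1.0 = all 'hi'
    out = []
    for i in range(n_samples):
        a = tracks[lo][i % len(tracks[lo])]
        b = tracks[hi][i % len(tracks[hi])]
        out.append((1 - mix) * a + mix * b)
    return out

jazz = [1.0, 0.0, 0.0, 0.0]        # toy kick pattern
bongo = [0.0, 0.5, 0.0, 0.5]       # toy bongo overlay
blend = rhythmate_output([jazz, bongo], 0.5, 8)
```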
In 1951, Harry Chamberlin used his idea of magnetic tape playback to create the Chamberlin Model 200 keyboard. The Model 300/350, 400, 500 and 600/660 models followed.
Inside the Chamberlin Rhythmate, showing the amplifier, 10″ speaker and tape loops
Elisha Gray using a violin as a resonating amplifier for his Musical Telegraph
Elisha Gray (born in Barnesville, Ohio, on August 2, 1835; died in Newtonville, Massachusetts, on January 21, 1901) would have been known to us as the inventor of the telephone had Alexander Graham Bell not got to the patent office two hours before him. Instead, he goes down in history as the accidental creator of one of the first electronic musical instruments, a chance by-product of his telephone technology.
Elisha Gray’s Patent for the ‘Musical Telegraph’ 1876
Gray accidentally discovered that he could control sound from a self-vibrating electromagnetic circuit, and in doing so invented a basic single-note oscillator. Using this principle he designed a musical instrument: the ‘Musical Telegraph’.
Gray’s invention used steel reeds whose oscillations were created and transmitted, over a telephone line, by electromagnets. In later models Gray also built a simple loudspeaker device, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.
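A self-vibrating electromagnetic circuit of this kind behaves like a make-and-break buzzer: the reed's motion interrupts its own driving current at the reed's resonant frequency, producing a roughly square-shaped oscillation. A crude software analogy (my own illustration, not a model of Gray's actual circuit) might look like this:

```python
import math

# Illustrative sketch only: the contact is alternately closed (+1.0) and
# open (-1.0) once per half-period of the reed's resonant frequency,
# which in software amounts to the sign of a sine wave at that frequency.
def reed_oscillator(freq_hz, duration_s, sample_rate=8000):
    samples = []
    for n in range(int(duration_s * sample_rate)):
        t = n / sample_rate
        # contact state follows the sign of the reed's displacement
        samples.append(1.0 if math.sin(2 * math.pi * freq_hz * t) >= 0 else -1.0)
    return samples
```

One oscillator of this sort yields a single fixed pitch, which is why Gray's instrument needed a separate reed circuit for every note of its two-octave range.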
Elisha Gray gave the first public demonstration of his invention for transmitting musical tones at the Presbyterian Church in Highland Park, Illinois, on December 29, 1874, and transmitted “familiar melodies through telegraph wire” according to a newspaper announcement, possibly using a piano as a resonating amplifier.
Elisha Gray’s first “musical telegraph” or “harmonic telegraph” contained enough single-tone oscillators to play two octaves, and later models were equipped with a simple tone-wheel control. Gray took the instrument on tour with him in 1874. Alexander Graham Bell also designed an experimental ‘Electric Harp’ for speech transmission over a telephone line using similar technology to Gray’s.
Gray’s patent for the Musical Telegraph
“Elisha Gray, the American inventor, who contested the invention of the telephone with Alexander Graham Bell. He was born in Barnesville, Ohio, on Aug. 2, 1835, and was brought up on a farm. He had to leave school early because of the death of his father, but later completed preparatory school and two years at Oberlin College while supporting himself as a carpenter. At college he became fascinated by electricity, and in 1867 he received a patent for an improved telegraph relay. During the rest of his life he was granted patents on about 70 other inventions, including the Telautograph (1888), an electrical device for reproducing writing at a distance. On Feb. 14, 1876, Gray filed with the U.S. Patent Office a caveat (an announcement of an invention he expected soon to patent) describing apparatus ‘for transmitting vocal sounds telegraphically.’ Unknown to Gray, Bell had only two hours earlier applied for an actual patent on an apparatus to accomplish the same end. It was later discovered, however, that the apparatus described in Gray’s caveat would have worked, while that in Bell’s patent would not have. After years of litigation, Bell was legally named the inventor of the telephone, although to many the question of who should be credited with the invention remained debatable. In 1872, Gray founded the Western Electric Manufacturing Company, parent firm of the present Western Electric Company. Two years later he retired to continue independent research and invention and to teach at Oberlin College. He died in Newtonville, Mass., on Jan. 21, 1901.”
(Kenneth M. Swezey [author of "Science Shows You How"], The Encyclopedia Americana — International Edition, Vol. 13. Danbury, Connecticut: Grolier Incorporated, 1995, p. 211)