Sogitec 4X synthesiser. Giuseppe Di Giugno, France, 1981

Giuseppe Di Giugno with the 4X at IRCAM

The inspiration for what became the 4X synthesiser was Luciano Berio’s conviction that electronic sound should be composed of at least 1,000 sine waves to be musically viable and interesting. Starting in 1976, the Italian particle physicist Giuseppe Di Giugno was commissioned by Berio to meet this challenge by developing a new and powerful real-time audio computer for the new Electroacoustic Centre at IRCAM, Paris.

4X (centre) at the IRCAM Machine Room

The 4X, based originally on Di Giugno’s 4A/B processors and later the 4X digital signal processor, controlled from a PDP-11/55 computer, was essentially a custom-built modular digital audio workstation. The 4X had eight internal custom-built processor cards, each capable of being programmed separately. The processors ran at 200 MIPS, giving the equivalent of 1,000 sine waves, 500 filters or 450 second-order filters. Each processor contained a data memory, an address memory, a microprogram memory and a function memory. For calculations it used 24-bit fixed-point units consisting of a multiplier and an arithmetic and logic unit. It also had 256 internal programmable clocks and a large dual buffer for recording and playback.
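As a rough illustration of what “the equivalent of 1,000 sine waves” means in computational terms, here is a minimal additive-synthesis sketch in plain Python; the 4X evaluated oscillator banks like this in dedicated hardware, one sample at a time (the partial count and frequencies below are arbitrary illustrative choices):

```python
import math

def sine_bank(partials, sr=44100, n_samples=44100):
    """Naive additive-synthesis bank: one sine oscillator per partial,
    all summed sample by sample (the workload the 4X ran in hardware)."""
    out = [0.0] * n_samples
    for freq, amp in partials:
        phase_inc = 2.0 * math.pi * freq / sr
        phase = 0.0
        for i in range(n_samples):
            out[i] += amp * math.sin(phase)
            phase += phase_inc
    return out

# A ten-partial sawtooth-like tone at 110 Hz, amplitude 1/n per partial.
partials = [(110.0 * n, 1.0 / n) for n in range(1, 11)]
signal = sine_bank(partials, n_samples=4410)   # 0.1 s at 44.1 kHz
```

Scaling this loop to a thousand partials, every sample period, at audio rate is exactly the sustained multiply-and-add workload that the 4X’s 200 MIPS of custom processing was built to handle.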

Giuseppe Di Giugno IRCAM

The 4X was intended to be a future-proof platform for musical composition. A new version, the 5A, was planned, but the increasing demands of complexity and speed, combined with the availability of cheaper and more powerful platforms, meant that the 4X was IRCAM’s last large hardware development project before it turned towards software such as Max (later Max/MSP) around 1988.

REMEMBRANCE OF LUCIANO BERIO

It was an evening in November 1974. While I was having dinner I received a phone call: “I’m Luciano Berio, I would like to speak with Prof. Di Giugno.” That phone call changed my life dramatically. At the time I taught physics at the University of Naples, conducting research on elementary particle physics at the Institute of Nuclear Physics at Frascati and at CERN in Geneva. In my spare time I enjoyed building digital sound synthesisers controlled by a computer. The day after that phone call Luciano came to find me at the Institute of Physics and, hearing what could be done with a computer, was stunned. He gave me twelve notes and told me to play them, switching between them according to various rules. He heard the result immediately and said, “To do the same thing it took me a month.” Then he invited me to Rome, where I presented him with a draft of a sound synthesiser for IRCAM (then still in the design stage). In those days the Phonology Studio in Milan had nine sound generators; Luciano proposed the construction of a machine with 1,000. This was a visionary idea, practically impossible with the technology of the time, and I told him so. He invited me to IRCAM for six months, and in June 1975 I built a machine (the 4A) capable of producing 256 different sounds in real time. Luciano used this very machine to realise his idea of composing not by adding sounds but by starting from a large mass of sound and removing frequencies, as a sculptor works not by sticking together many pieces of marble but by removing pieces from a large block. This idea of moving large sound masses was later used by many composers.

Later I built another machine (the 4X), capable of producing 2,000 sounds simultaneously and in real time. “Real time” means that when I press a key the sound is heard immediately. Many composers preferred to compose in “deferred time”: a computer calculated the sound the composer had planned, but the result emerged, depending on its complexity, minutes or hours later. There was no gesture. Luciano never composed in this way, because he said that music comes from the heart, not the brain. When together we set up the studio at Villa Strozzi in Florence, he named it, precisely, “Tempo Reale” (“Real Time”).

Back in Italy, yet another revolutionary idea. Luciano did not like sounds that sat still, coming out of a fixed set of loudspeakers; he wanted the sounds of the various instruments to move in space according to rules dictated by the composer. So, in collaboration with IRIS in Paliano (Industry and Research Institute of the Performing Arts), I built a system called the “spatialiser” that allowed Luciano to realise his last works in various theatres around the world. Electronic music has now spread across the planet, but few know that many of its musical applications are the fruit of the “visionary” Luciano Berio (with my technological collaboration). This is a good example to cite when speaking of the union of ART and SCIENCE.

Giuseppe Di Giugno, Paliano, 22 May 2013

The 4X was used by many composers at IRCAM during the 1980s for synthesis, composition and the digital processing of real-time audio. Works include Epigenesis (1986) by Jean-Baptiste Barrière, Halls of Mirrors (1986) by Robert Rowe, Growing Elements, You Name It! (1986) by Bobby Few, Jupiter (1986/87) by Philippe Manoury, Aloni (1986/87) by Thierry Lancino, Antara (1986/87) by George Benjamin, Aér (1986/87) by François Bayle, and Répons by Pierre Boulez.


Sources:

http://www.musicainformatica.org/topics/giuseppe-di-giugno.php

http://articles.ircam.fr/textes/Boulez88c/

http://www.lucianoberio.org/

Yamaha GS1 & GS2. Yamaha Corp, Japan, 1981

Yamaha GS1 FM Synthesiser

In the 1960s the composer, musician, percussionist and mathematician John Chowning taught computer-sound synthesis and composition at Stanford University’s Department of Music and developed a version of Max Mathews’ MUSIC audio programming language, MUSIC II, for the PDP-8 computer. During this period he began experimenting with high-frequency modulation of a sine tone and discovered that by using audio-rate modulation (rather than the lower-frequency, control-rate modulation of an LFO) he could create new tones rich in harmonics. In 1973 Chowning published his research in the paper ‘The Synthesis of Complex Audio Spectra by Means of Frequency Modulation’, which eventually led to a new approach to audio synthesis known as Frequency Modulation (FM) synthesis and to the development of the world’s best-selling synthesiser, Yamaha’s DX range. (Stanford University is rumoured to have collected more than $20 million in licence fees, enabling it to rebuild the Center for Computer Research in Music and Acoustics (CCRMA).)
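Chowning’s observation can be verified numerically: modulating a carrier at audio rate spreads energy into sidebands at f_c ± n·f_m, which is where the new harmonics come from. A short NumPy sketch (the carrier, modulator and modulation index below are arbitrary illustrative choices):

```python
import numpy as np

sr = 8000                              # sample rate, Hz
t = np.arange(sr) / sr                 # exactly one second -> 1 Hz FFT bins
fc, fm, index = 1000.0, 100.0, 2.0     # carrier, modulator, modulation index

# Chowning's FM formula: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))
y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Normalised magnitude spectrum; a sine of amplitude A shows up as A/2.
spectrum = np.abs(np.fft.rfft(y)) / len(y)

# Frequencies carrying significant energy: the carrier plus its sidebands.
peaks = [f for f in range(sr // 2) if spectrum[f] > 0.01]
```

The sideband amplitudes follow Bessel functions of the modulation index, so raising the index pushes energy into ever higher sidebands; this is how FM sweeps an entire timbre with a single parameter.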

Yamaha GS1 external programmer

In 1971 Max Mathews suggested to Chowning that he create a library of recognisable sounds exploiting FM synthesis’ ability to emulate harmonically rich timbres – brass, percussion, strings and so on – and that he use Stanford University to approach companies on his behalf. After being turned down by several US companies such as Wurlitzer and Hammond, Chowning and Stanford approached, somewhat desperately, Yamaha in Japan. Yamaha were looking for a new type of electronic instrument, having failed to capitalise on the success of the CS80 and GX1 synthesisers. Yamaha’s Organ Division bought a licence for one year, enough to investigate the commercial potential of FM synthesis. The first application of Chowning’s FM algorithm came in 1975: a monophonic prototype digital synthesiser called MAD. This was soon followed by a polyphonic FM prototype, released as a production model in 1981 as the Yamaha GS1.

Advert for the GS1 in 1982

The GS1 was an expensive (around £12,000 in 1981) FM synthesiser, though not the first; that was the even more expensive New England Digital Synclavier, released in 1978. The arrival of FM synthesis was greeted with confusion and horror by electronic musicians who had only just become used to subtractive modular analogue systems. FM synthesis is a radically different approach to sound synthesis: subtractive synthesis starts with a complex waveform and removes harmonics with filters and modulation to produce the desired timbre, whereas FM synthesis has no filters, creating varying timbres through combinations of modulating oscillators, or ‘operators’.

Advert for the GS1 in 1981

The GS1 had eight operators, arranged as four modulators per voice (two on the GS2 model), a very basic implementation of FM. Despite this, the sound quality of the instrument was very impressive, and despite the perceived complexity of programming FM (alleviated by Yamaha supplying a bank of 500 preset sounds on magnetic data strips) the GS1 found favour with the large recording studios that could afford one; only around 100 units were sold.
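The ‘operators’ described above can be sketched as a chain of sine oscillators in which each one modulates the phase of the next; digital FM instruments are conventionally implemented as phase modulation of this kind. This is a hypothetical simplification for illustration, not the GS1’s actual voice architecture:

```python
import math

def operator_chain(freqs, indices, sr=44100, n_samples=44100):
    """Serial FM operator stack: each operator is a sine oscillator whose
    output phase-modulates the next; the last operator is the audible carrier."""
    out = []
    for i in range(n_samples):
        t = i / sr
        mod = 0.0
        for freq, index in zip(freqs, indices):   # top modulator -> carrier
            mod = index * math.sin(2.0 * math.pi * freq * t + mod)
        out.append(mod)
    return out

# Two-operator voice: an 880 Hz modulator driving a 440 Hz carrier.
voice = operator_chain([880.0, 440.0], [1.5, 1.0], n_samples=4410)
```

Programming an FM voice then amounts to choosing each operator’s frequency ratio and modulation index over time, which is why preset banks mattered so much to GS1 owners.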

Yamaha CE20 preset FM synthesiser

The GS1 and GS2 were superseded in 1982 by the more affordable (£850) mass-market preset CE20 and CE25 FM keyboards, and a year later, in 1983, by the legendary DX7 FM synthesiser.

John M Chowning

John M Chowning Biographical notes

Chowning was born in Salem, New Jersey in 1934. Following military service and four years at Wittenberg University, he studied composition in Paris with Nadia Boulanger. He received his doctorate in composition (DMA) from Stanford University in 1966, where he studied with Leland Smith. In 1964, with the help of Max Mathews of Bell Telephone Laboratories and David Poole of Stanford University, he set up a computer music program using the computer system of Stanford’s Artificial Intelligence Laboratory. That same year he began the research that led to the first generalised surround-sound localisation algorithm. Chowning discovered the frequency modulation (FM) synthesis algorithm in 1967. This breakthrough in the synthesis of timbres allowed a very simple yet elegant way of creating and controlling time-varying spectra. Inspired by the perceptual research of Jean-Claude Risset, he worked toward turning this discovery into a system of musical importance, using it extensively in his compositions.

In 1973 Stanford University licensed the FM synthesis patent to Yamaha in Japan, leading to the most successful synthesis engine in the history of electronic musical instruments. Chowning was elected to the American Academy of Arts and Sciences in 1988. He was awarded an Honorary Doctor of Music by Wittenberg University in 1990. The French Ministre de la Culture awarded him the Diplôme d’Officier dans l’Ordre des Arts et des Lettres in 1995, and he was awarded the Doctorat Honoris Causa in 2002 by the Université de la Méditerranée and in 2010 by Queen’s University, Belfast. He taught computer-sound synthesis and composition at Stanford University’s Department of Music. In 1974, with John Grey, James (Andy) Moorer, Loren Rush and Leland Smith, he founded the Center for Computer Research in Music and Acoustics (CCRMA), which remains one of the leading centers for computer music and related research.

________________________________________________________________

Sources

Chowning, J. ‘The Synthesis of Complex Audio Spectra by Means of Frequency Modulation’. Journal of the Audio Engineering Society 21(7), 526–534, 1973.

http://www.soundonsound.com/sos/Aug01/articles/retrofmpt1.asp

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=4018121.PN.&OS=PN/4018121&RS=PN/4018121

http://www.spoogeworld.com/music/instruments/yamaha/main.php

http://oreilly.com/digitalmedia/2006/04/12/fm-synthesis-tutorial.html