There were two big problems with early sound synthesis systems. First, they required a great
deal of space, since they consisted of a variety of microphones, signal generators, keyboards,
tape recorders, amplifiers, filters, and mixers. Second, they were difficult to communicate with. Live
performances might require instant reconnection of patch cables and a wide range of setting
changes. “Composed” pieces entailed tedious recording, re-recording, cutting, and splicing of
tape. These problems spurred the development of automated systems. The Electronic Music
Synthesizer, developed at RCA in 1955, was a step in the direction of programmed music
synthesis. Its second incarnation in 1959, the Mark II, used binary code punched into paper tape to
represent pitch and timing changes. While it was still a large and complex system, it made
advances in the way humans communicate with a synthesizer, overcoming the limitations of
what can be controlled by hand in real-time.
Technological advances in the form of transistors and voltage controllers made it possible
to reduce the size of synthesizers. Voltage controllers could be used to control the oscillation
(i.e., frequency) and amplitude of a sound wave. Transistors replaced bulky vacuum tubes as a
means of amplifying and switching electronic signals. Among the first to take advantage of the
new technology in the building of analog synthesizers were Don Buchla and Robert Moog. The
Buchla Music Box and the Moog Synthesizer, developed in the 1960s, both used voltage
controllers and transistors. One main difference was that the Moog Synthesizer allowed standard
keyboard input, while the Music Box used touch-sensitive metal pads housed in wooden boxes.
Both, however, were analog devices, and as such, they were difficult to set up and operate. The
much smaller MiniMoog, released in 1970, was more affordable and user-friendly, but the
digital revolution in synthesizers was already under way.
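The idea behind voltage control carries over directly into digital terms: one control signal sets an oscillator's frequency while another scales its amplitude. The following is a minimal Python/NumPy sketch of that idea, not taken from the text; the one-volt-per-octave mapping, the base frequency, and the function names are assumptions chosen purely for illustration.

    # Illustrative sketch (not from the text): simulating voltage control digitally.
    # One control signal sets the oscillator's frequency, another scales its amplitude.
    import numpy as np

    SAMPLE_RATE = 44100  # samples per second

    def vco(control_volts, base_freq=110.0, sample_rate=SAMPLE_RATE):
        """Voltage-controlled oscillator: assumed 1-volt-per-octave pitch mapping."""
        freq = base_freq * 2.0 ** control_volts             # exponential pitch control
        phase = 2 * np.pi * np.cumsum(freq) / sample_rate   # integrate frequency to get phase
        return np.sin(phase)

    def vca(signal, control_volts):
        """Voltage-controlled amplifier: control voltage scales amplitude (clipped to 0..1)."""
        return signal * np.clip(control_volts, 0.0, 1.0)

    # Two seconds of audio: pitch glides up one octave while the volume fades out.
    t = np.linspace(0, 2, 2 * SAMPLE_RATE, endpoint=False)
    pitch_cv = t / 2.0        # 0 V rising to 1 V (a one-octave sweep)
    amp_cv = 1.0 - t / 2.0    # linear fade from full volume to silence
    audio = vca(vco(pitch_cv), amp_cv)

In an analog synthesizer these control signals would be actual voltages routed through patch cords; here they are simply arrays of numbers, which is why the same concepts reappear so naturally in digital synthesis.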
When increasingly inexpensive microprocessors and integrated circuits became available
in the 1970s, digital synthesizers began to appear. Where analog synthesizers were programmed
by rearranging a tangle of patch cords, digital synthesizers could be adjusted with easy-to-use
knobs, buttons, and dials. Synthesizers took the form of electronic keyboards like the one shown
in Figure 6.1, with companies like Sequential Circuits, Electronics, Roland, Korg, Yamaha, and
Kawai taking the lead in their development. They were certainly easier to play and program than
their analog counterparts. A limitation to their use, however, was that the control surface was
not standardized, and it was difficult to get multiple synthesizers to work together.
Figure 6.1 Prophet-5 Synthesizer
In parallel with the development of synthesizers, researchers were creating languages to
describe the types of sounds and music they wished to synthesize. One of the earliest digital
sound synthesis systems was developed by Max V. Mathews at Bell Labs. In its first version,
created in 1957, Mathews’ MUSIC I program could synthesize sounds with just basic control
over frequency. By 1968, Mathews had developed a fairly complete sound synthesis language in
MUSIC V. Other sound and music languages that were developed around the same time or
shortly thereafter include Csound (created by Barry Vercoe at MIT in the 1980s), Structured
Audio Orchestra Language (SAOL, part of the MPEG-4 standard), Music 10 (created by John