MSM - Intro to Electronic Music Spring Semester

Spring Class 1:  Intro and Aesthetics of Mixed Music

Reading: History of Mixed Music Article

Historical Survey of mixed works:

Instrument + Tape

     - Steve Reich

Instrument + Amplification

Instrument + Processing

  • Static or analog control

  • IRCAM model (early digital)

  • Saariaho

Instrument + AI

Instrument + Commercial Software

Instrument + robotics/physical devices

Instrument + Video

Recent Mixed works

Spring Class 2: Notation of Mixed Music and Technical Riders

>>Template for Technical Rider<<

The Notation of Electronics

There is no standard way of notating electronics. However, the basic principle one should follow is to notate the electronics in a clear and concise way that provides all of the information the performer needs to synchronize and react musically. Each piece using electronics will require a slightly different approach, but the following list of items to consider may help:

• Rhythmic information

• Dynamic information

• Pitch information

• A time code if using click track or clock synchronization

• Graphical representation of the electronic sound

• Proportional notation

• Markings to differentiate sound files from real-time processes

• Loudspeaker assignment of the sound

• Textual description of the sound file or real-time process

• Cue numbers that link to the devices/software used (i.e., where sound files begin and end, or where a process is triggered)

Just like the technical rider, the notation of the electronics should present all necessary information clearly in such a way that the work could be realized without the presence of the composer.

Score examples:

Nina Young - Sun Propeller

Score

Technical Rider

Nina Young provides all the basic information in her score and notates the electronics clearly and succinctly for the performer. A separate technical rider goes into precise detail and allows the piece to be realized by a third-party technician without any additional information from the composer.

Mary Kouyoumdjian - Dandelion

score
violin and tape

Mary Kouyoumdjian provides a detailed technical rider in her score and notates the electronic component of the piece clearly. The electronic part is largely drone-like in nature and is notated with a mixture of traditional and graphic notation.

Panayiotis Kokoras — Jet

recorder and electronics

Kokoras combines stopwatch markings with a visual spectrogram in the score. This approach is most effective for pieces that do not require precise synchronization, only a general idea of the electronic sound.

Alex Temple - Willingly

flute, piano, and electronics

Temple notates the electronics in a hybrid fashion - providing notes, words, and rhythms in traditional notation where possible and embellishing this traditional notation with graphic illustrations.

Ethan Hayden - What Happened

3 voices and computer

More generalized instructions to a technician who acts as a performer of the electronics.

Dan Trueman - Nostalgic Synchronic Etude 1

Score

midi keyboard

Electronic sounds are worked into the performer's staff as smaller notes with triangle noteheads.

Kaija Saariaho - Nuits, adieux

Score

four voices

Saariaho uses two microphones with different processing and indicates in the score which microphone the singers should sing into.

Spring Class 3: Microphones + Recording into Logic


Microphone Types:

Dynamic Microphone
Dynamic microphones use relatively thick diaphragms attached to a moving coil and tend to accentuate the middle of the frequency spectrum. They are highly durable and do not require phantom power.

Condenser Microphones
Condenser microphones have thinner diaphragms than dynamic microphones and are therefore less durable, but they capture high frequencies better and generally record with higher fidelity. Condenser microphones require phantom power to charge the capsule and power the internal preamplifier.

Ribbon Microphones
Ribbon microphones use a long, thin ribbon diaphragm and generally lend a more colored or warm quality to recordings. They are usually quite fragile and can be difficult to use. Unlike condensers, passive ribbon microphones do not require phantom power, and phantom power can damage some vintage ribbons; only active ribbon designs require it.

Recording Into Logic

Additional Microphone Resources:
Polar Pattern Summary
What is Phantom Power?
Microphone Frequency Response
What is Proximity Effect?

Spring Class 4: Perception of Space and Stereo Recording

Mono versus Stereo Recording

Mono = one audio channel
Stereo = two audio channels (typically left and right)
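A mono signal can be placed anywhere in a stereo field by panning. The sketch below (an illustrative example, not from the course materials) uses an equal-power pan law, so the combined power of the two channels stays constant as the sound moves between the speakers:

```python
import math

def pan_mono_to_stereo(samples, pan):
    """Equal-power pan: pan = -1.0 (hard left) ... +1.0 (hard right).
    Total power L^2 + R^2 stays constant across pan positions."""
    theta = (pan + 1) * math.pi / 4          # map pan to 0 .. pi/2
    left_gain = math.cos(theta)
    right_gain = math.sin(theta)
    return ([s * left_gain for s in samples],
            [s * right_gain for s in samples])

mono = [0.5, -0.5, 0.25]
left, right = pan_mono_to_stereo(mono, 0.0)  # center: both gains ~0.707
```

A naive linear pan (scaling each side directly by the pan amount) dips about 3 dB at the center, which is why equal-power curves are the usual default in DAWs.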

Stereo Standard Microphone Techniques:

stereo recording patterns.jpg

Spring Class 5: Advanced Editing + MIDI

VST instruments
MIDI basics
MIDI piano roll in Logic Pro

MIDI note numbers:

Midi Note Numbers.png
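MIDI note numbers map to pitch in equal temperament, with A4 = note 69 = 440 Hz. A small sketch of the standard conversion (note that the octave-naming convention varies: here MIDI 60 = C4, middle C, though some manufacturers call it C3):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_freq(note):
    """Equal-tempered frequency: each semitone is a factor of 2^(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def midi_to_name(note):
    """Note name with octave, using the MIDI 60 = C4 convention."""
    return f"{NOTE_NAMES[note % 12]}{note // 12 - 1}"

print(midi_to_name(60), round(midi_to_freq(60), 2))  # C4 261.63
print(midi_to_name(69), midi_to_freq(69))            # A4 440.0
```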

Aftertouch

Pitch Bend

Control Changes:

  • 0 Bank Select (MSB)

  • 1 Modulation Wheel (MSB)

  • 2 Breath Controller (MSB)

  • 3 Undefined

  • 4 Foot Pedal (MSB)

  • 5 Portamento Time (MSB)

  • 6 Data Entry (MSB) - with CC101=0 & CC100=0 selected, sets pitch bend range

  • 7 Volume (MSB)

  • 8 Balance (MSB)

  • 9 Undefined

  • 10 Pan position (MSB)

  • 11 Expression (MSB)

  • 12 Effect Control 1 (MSB)

  • 13 Effect Control 2 (MSB)

  • 14 Undefined

  • 15 Undefined

  • 16 Ribbon Controller or General Purpose Slider 1

  • 17 Knob 1 or General Purpose Slider 2

  • 18 General Purpose Slider 3

  • 19 Knob 2 or General Purpose Slider 4

  • 20 Knob 3 or Undefined

  • 21 Knob 4 or Undefined

  • 22-31 undefined

  • 32 Bank Select (LSB) 

  • 33 Modulation Wheel (LSB)

  • 34 Breath controller (LSB)

  • 35 Undefined

  • 36 Foot Pedal (LSB)

  • 37 Portamento Time (LSB)

  • 38 Data Entry (LSB)

  • 39 Volume (LSB)

  • 40 Balance (LSB)

  • 41 Undefined

  • 42 Pan position (LSB)

  • 43 Expression (LSB)

  • 44 Effect Control 1 (LSB) 

  • 45 Effect Control 2 (LSB)

  • 46-63 Undefined.

  • 64 Hold/Sustain Pedal (on/off) 

  • 65 Portamento (on/off)

  • 66 Sostenuto Pedal (on/off)

  • 67 Soft Pedal (on/off)

  • 68 Legato Pedal (on/off)

  • 69 Hold 2 Pedal (on/off)

  • 70 Sound Variation

  • 71 Resonance (Timbre)

  • 72 Sound Release Time

  • 73 Sound Attack Time

  • 74 Frequency Cutoff 

  • 75 Sound Control 6

  • 76 Sound Control 7

  • 77 Sound Control 8

  • 78 Sound Control 9

  • 79 Sound Control 10

  • 80 Decay or General Purpose Button 1 (on/off) 

  • 81 Hi Pass Filter Frequency or General Purpose Button 2 (on/off) 

  • 82 General Purpose Button 3 (on/off) 

  • 83 General Purpose Button 4 (on/off)

  • 84-90 Undefined

  • 91 Reverb Level

  • 92 Tremolo Level

  • 93 Chorus Level

  • 94 Celeste Level or Detune

  • 95 Phaser Level

  • 96 Data Button increment

  • 97 Data Button decrement

  • 98 Non-registered Parameter (LSB)

  • 99 Non-registered Parameter (MSB)

  • 100 Registered Parameter (LSB)

  • 101 Registered Parameter (MSB)

  • 102-119 Undefined

  • 120 All Sound Off

  • 121 All Controllers Off

  • 122 Local Keyboard (on/off)

  • 123 All Notes Off

  • 124 Omni Mode Off

  • 125 Omni Mode On

  • 126 Mono Operation

  • 127 Poly Operation
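At the byte level, every control change in the list above is a three-byte message: a status byte (0xB0 plus the channel number) followed by the controller number and value, each in the range 0-127. A minimal sketch of building and parsing such messages (illustrative only, not tied to any particular MIDI library):

```python
def make_cc(channel, controller, value):
    """Build a 3-byte MIDI Control Change message.
    Status byte: 0xB0 | channel (channels 0-15); data bytes are 0-127."""
    assert 0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([0xB0 | channel, controller, value])

def parse_cc(msg):
    """Return (channel, controller, value) for a CC message, else None."""
    if len(msg) == 3 and msg[0] & 0xF0 == 0xB0:
        return (msg[0] & 0x0F, msg[1], msg[2])
    return None

# CC 7 (Volume MSB) at full level on MIDI channel 1 (zero-indexed 0):
vol = make_cc(0, 7, 127)
```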

In addition to standard MIDI, the software instruments included with Logic use what Apple calls "fader" values, which can be automated but aren't necessarily mapped to a MIDI control. These faders are found in the same place where you automate parameters like volume and panning on an audio track. You can also connect them to a MIDI controller using the "MIDI Learn" function.

Software instruments included with Logic Pro: https://www.apple.com/logic-pro/plugins-and-sounds/

Spring Class 6: Basic Synthesis

Analog vs Digital Synthesizers:

The difference between digital and analog synthesizers is that digital synths use digital processors while analog synthesizers use analog circuitry. The earliest analog synths typically used vacuum tubes, later replaced by transistors, to create waveform oscillators, filters, and other sound-processing modules. There is much debate over which sounds better, though this is a highly subjective question.

Analog synths, especially older ones, tend to have a particular color to their sound, often described as "warmth" or "individuality." Because an analog synth requires no digital-to-analog converter (DAC), its fidelity is never altered by a conversion from the digital realm to voltage.

Digital synths tend to be more accessible and easier to use and record, and since the 1990s they have been far more feature-rich than their analog counterparts. For example, granular and sample-based synthesis is much easier to realize with a digital synth, and the number of oscillators and filters is essentially limitless on a modern CPU.


Modular Synthesis:

Modular synthesis may involve any type of synthesis, with waveform oscillators, low-frequency oscillators (LFOs), filters, ADSR envelope generators, and so on as possible modules. Most synthesizers are modular in design.

From www.synthesizers.com




ADSR envelope:

An important part of nearly every synthesizer is the ADSR, or Attack-Decay-Sustain-Release, envelope, which provides the "shape" of the notes.

2560px-ADSR_parameter.svg.png
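As a sketch, the four segments can be computed directly. The function below is an illustrative piecewise-linear version (the parameter names are my own; real synths often use exponential rather than linear segments):

```python
def adsr(attack, decay, sustain, release, note_len, total_len, sr=1000):
    """Piecewise-linear ADSR envelope as a list of samples in [0, 1].
    attack/decay/release are times in seconds, sustain is a level (0-1),
    note_len is when the key is released."""
    env = []
    for i in range(int(total_len * sr)):
        t = i / sr
        if t < attack:
            env.append(t / attack)                       # ramp up to 1
        elif t < attack + decay:
            frac = (t - attack) / decay
            env.append(1 - frac * (1 - sustain))         # fall to sustain level
        elif t < note_len:
            env.append(sustain)                          # hold while key is down
        elif t < note_len + release:
            frac = (t - note_len) / release
            env.append(sustain * (1 - frac))             # fade out after release
        else:
            env.append(0.0)
    return env

env = adsr(attack=0.1, decay=0.1, sustain=0.6, release=0.2,
           note_len=0.5, total_len=0.8)
```

Multiplying an oscillator's output sample-by-sample with this envelope gives the note its shape.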

Additive Synthesis:

Every sound can, in theory, be described as a combination of sine tones. This concept will be explored further in the class on spectral techniques, but it is also the basis of additive synthesis.

Basic Schematic:

Additive_synthesis.svg.png
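The schematic above amounts to summing sine oscillators, one per partial. A minimal sketch in Python (illustrative, not tied to any particular synth):

```python
import math

def additive_tone(f0, partial_amps, dur=0.5, sr=8000):
    """Sum harmonically related sine tones: partial_amps[k] is the
    amplitude of the (k+1)-th harmonic of fundamental f0."""
    n = int(dur * sr)
    return [sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t / sr)
                for k, a in enumerate(partial_amps))
            for t in range(n)]

# Rough sawtooth: harmonic amplitudes fall off as 1/k
saw_like = additive_tone(220.0, [1 / k for k in range(1, 9)])
```

Changing the amplitude list changes the timbre: only odd harmonics gives a hollower, square-like tone, for example.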

Historical examples:

Earliest commercial example, Hammond Model A (1936): https://www.youtube.com/watch?v=8PpCJVBKcp0

Haskins Laboratories, circa 1950, realizing speech synthesis:

http://www.haskins.yale.edu/featured/sentences/ppsentences.html

The Harmonic Tone Generator from the University of Illinois (1974): https://web.archive.org/web/20131228061841/http://ems.music.uiuc.edu/beaucham/htg_sounds/

The Bell Labs Digital Synthesizer (1977) was among the first digital synths: http://retiary.org/ls/music/realaudio/ob_sys/05_alles_synth_improv.rm

The Synclavier II (1979) offered a way to interpolate between different timbres. This system had additive, frequency-modulation, and sample-based capabilities and was used extensively by Frank Zappa in the latter half of his career:

Demo: https://www.youtube.com/watch?v=ikHtUq48rWE

Zappa: https://www.youtube.com/watch?v=UrOK98q_ILA

Grammy-winning Jazz from Hell: https://www.youtube.com/watch?v=WtDa1Wk5YBk

In contemporary music, additive synthesis was used extensively by Tristan Murail, both in electronic parts and as a theory of orchestration, in works such as Allégories (1990). The most famous example of all may be Grisey's Partiels, which uses the principles of additive synthesis to turn the timbre of a low E on a trombone into a harmony:

https://www.youtube.com/watch?v=1v7onrjN6RE


Subtractive Synthesis:

Subtractive synthesis filters a sound source, most often a waveform or noise generator. The source to be filtered (i.e., subtracted from) should be rich in harmonics, which makes it more useful for realizing timbres by the subtractive method.

Typical sound sources in subtractive synthesis:

  • sine wave: a pure tone with no overtones; not very useful as a subtractive source because there is nothing to filter away

  • sawtooth wave: contains all harmonics; bright and brassy

  • triangle wave: only odd harmonics, falling off quickly; mellow and flute-like

  • square wave: only odd harmonics; hollow and clarinet-like

  • pulse generator: variable duty cycle, giving a variable timbre from thin and nasal to square-like

  • Wavetable: see below

  • noise generator: white, pink, or brown noise; rich in spectral content and completely inharmonic. In theory, white noise has equal energy at every frequency across the spectrum.

Waveforms.svg.png

The above sound sources are then shaped by low-pass, high-pass, comb, and other filter types, as well as an ADSR (attack, decay, sustain, release) envelope.
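As an illustrative sketch of the subtractive idea: generate a harmonically rich source, then filter it. The one-pole low-pass below is about the simplest filter possible (real subtractive synths use resonant multi-pole filters), and the 400 Hz cutoff is just an example value:

```python
import math

def sawtooth(freq, dur=0.1, sr=8000):
    """Naive (non-band-limited) sawtooth in [-1, 1)."""
    n = int(dur * sr)
    return [2.0 * ((t * freq / sr) % 1.0) - 1.0 for t in range(n)]

def one_pole_lowpass(samples, cutoff, sr=8000):
    """One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1 - math.exp(-2 * math.pi * cutoff / sr)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

bright = sawtooth(220.0)
dark = one_pole_lowpass(bright, cutoff=400.0)  # upper harmonics rolled off
```

The filtered signal keeps the fundamental but loses the buzzy edge of the raw sawtooth.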


Frequency Modulation Synthesis:

The first digital synth to achieve widespread popularity, the Yamaha DX7, used FM synthesis as devised by John Chowning. FM synthesis requires two waveform oscillators: one acts as the carrier and the other as the modulator. The frequency of the carrier is modulated by the modulator, producing the original frequency plus sidebands at the carrier frequency plus and minus integer multiples of the modulator frequency.

FM synthesis is therefore able to produce complex timbres with very basic means and the sound of FM synthesis is often described as “bell-like.”
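Chowning's basic two-operator patch is only a few lines of code. A minimal sketch (the 200:280 carrier-to-modulator ratio below is just an illustrative inharmonic choice, which is what produces the bell-like quality):

```python
import math

def fm_tone(fc, fm, index, dur=0.5, sr=8000):
    """Simple two-operator FM: carrier fc modulated by fm with the given
    modulation index. Sidebands appear at fc +/- k*fm."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

# A bell-like tone: inharmonic carrier/modulator ratio, moderate index
bell = fm_tone(fc=200.0, fm=280.0, index=5.0)
```

A larger modulation index spreads energy into more sidebands, brightening the timbre without adding oscillators.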

Van Halen’s “Jump” features the DX7, which was also a favorite among contemporary composers like György Ligeti because of its microtonal capabilities.

https://www.youtube.com/watch?v=SwYN7mTi6HM

From Logic Pro Documentation



Physical Modeling:

A type of synthesis that uses a mathematical model to imitate the physical, acoustic creation of sound. The Sculpture synth in Logic is a physical-modeling synth. A physical model typically features modeled exciters (plucked, bowed, struck, scraped, blown, etc.), a sound-source type (string, membrane, tube, etc.), and resonant-body attributes (size, shape, material, etc.).

This type of synthesis is generally less developed and more "experimental," though there are now a few commercial instruments that rely on physical modeling. The most successful among them is Pianoteq, which features physical models of various pianos, harps, harpsichords, and steel drums that sound surprisingly realistic. The advantage of this approach is that, ideally, the synthesis sounds very realistic and more varied than a sampler instrument while remaining very light on disk space and CPU usage.
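One of the simplest physical models is the Karplus-Strong plucked string; this is not how Sculpture works internally, but it illustrates the idea: an excitation (a noise burst) circulates through a delay line whose feedback filter models the string's damping:

```python
import random

random.seed(0)  # reproducible "pluck"

def karplus_strong(freq, dur=1.0, sr=8000):
    """Karplus-Strong plucked string: a noise burst fed through a delay
    line with averaging feedback (a crude string + damping model)."""
    period = int(sr / freq)                                # delay sets the pitch
    line = [random.uniform(-1, 1) for _ in range(period)]  # the excitation
    out = []
    for i in range(int(dur * sr)):
        s = line[i % period]
        line[i % period] = 0.5 * (s + line[(i + 1) % period])  # damping lowpass
        out.append(s)
    return out

pluck = karplus_strong(220.0, dur=0.5)
```

The averaging in the feedback loop removes high frequencies on each pass, so the tone decays and mellows just as a plucked string does.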

from Princeton’s “Computing Sound” by Perry Cook



Wavetable/Vector-based Synthesis:

Introduced in the mid-1980s, wavetable/vector synths read data stored in a table as a waveform. This allows for interesting interpolation effects between the timbres created by different wavetables. The Sequential Prophet VS is among the best-known synthesizers of this type:

https://www.youtube.com/watch?v=cbKNICg-REA
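The core mechanism is easy to sketch: store single-cycle waveforms as tables and interpolate between them. The morph below is a plain linear crossfade (illustrative only; hardware vector synths typically blend among four tables with a joystick):

```python
import math

# Two single-cycle wavetables (one period, 64 samples each)
N = 64
sine_table = [math.sin(2 * math.pi * i / N) for i in range(N)]
square_table = [1.0 if i < N // 2 else -1.0 for i in range(N)]

def morph(table_a, table_b, mix):
    """Linear crossfade between two wavetables: mix = 0 -> a, 1 -> b."""
    return [(1 - mix) * a + mix * b for a, b in zip(table_a, table_b)]

halfway = morph(sine_table, square_table, 0.5)  # a timbre between the two
```

Sweeping `mix` over time while looping through the resulting table is what produces the characteristic evolving timbres.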

From “audiofanzine.com”



Sample-based Synthesis:

In place of waveforms, these synthesizers generate sound from recorded samples. The Lichtorgel that Scriabin wrote for is an early precursor in the analog domain, and the famous Mellotron is likewise based on playing back samples from magnetic tape. In Logic, the EXS24 sampler and Ultrabeat both have sample-based capabilities.


Granular Synthesis

Granular synthesis is a particular type of sample-based synthesis in which an audio file is cut into very small "grains" of sound that can then be played back in a variety of ways. A grain typically lasts from a few milliseconds to a few hundred milliseconds, and its amplitude generally follows a curve or window function so that it cross-fades with neighboring grains.

Iannis Xenakis invented granular synthesis and first implemented it by cutting and splicing magnetic tape for his Analogique A-B for string orchestra and tape (1959).

Granular synthesis is particularly well suited to time-freezing effects and to creating interesting micro-variations in a sustained sound. For this reason, granular synthesis is often used in video games to create interactive wind, fire, and other environmental sound effects.
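The basic grain machinery can be sketched as windowing plus overlap-add. The example below (an illustrative toy, with arbitrary grain sizes) uses Hann windows at 50% overlap, which sum back to unity; changing the hop, order, or pitch of the grains is where the creative effects come from:

```python
import math

def granulate(samples, grain_len=64, hop=32):
    """Cut audio into Hann-windowed grains and overlap-add them back.
    With hop = grain_len // 2 the windows sum to 1, so playing the grains
    in place reconstructs the input; altering them transforms the sound."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
              for i in range(grain_len)]
    out = [0.0] * len(samples)
    for start in range(0, len(samples) - grain_len, hop):
        for i in range(grain_len):
            out[start + i] += samples[start + i] * window[i]
    return out

tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(2000)]
regrained = granulate(tone)
```

A "freeze" effect, for instance, simply keeps re-triggering grains from the same region of the file instead of advancing through it.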

from www.granularsynthesis.com



Spring Class 7: Spectral Techniques + FFT based Effects + Cleaning audio


Spectral Techniques

This class focuses on spectral techniques for processing, analyzing, and editing audio. The studio computer has two pieces of software that work in this way: SPEAR (which is freeware) and AudioSculpt (part of the IRCAM suite). These programs differ from standard audio editors because they are based on FFT analysis of sound, which allows for different ways to filter and edit the sound.


FFT (fast Fourier transform) based effects

The Fourier transform is a method for decomposing signals into their component parts that dates all the way back to 1805. For the purposes of audio, an FFT analyzes a sound and breaks it down into its sinusoidal components. In FFT analysis these sine waves are considered infinite in time, so the analysis must be applied to short windows of sound that are then recombined.

Audio file in -> FFT analysis -> Processing -> Resynthesis -> Audio file out
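The pipeline above can be sketched with a naive DFT (mathematically identical to the FFT, just slower); here the "processing" stage simply zeroes weak bins, a toy version of spectral noise removal with made-up signal and threshold values:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (what the FFT computes, slowly)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT: resynthesize the time-domain signal from the bins."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# audio in -> FFT analysis -> processing (zero weak bins) -> resynthesis
n = 64
noisy = [math.sin(2 * math.pi * 4 * t / n)               # the "signal"
         + 0.05 * math.cos(2 * math.pi * 11 * t / n)     # the "noise"
         for t in range(n)]
spectrum = dft(noisy)
cleaned = idft([X if abs(X) > 0.1 * n else 0 for X in spectrum])
```

The strong sine survives while the weak component is removed, which is the essence of FFT-based noise reduction.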


Cleaning Audio - gating versus filtering versus spectral approaches

FFT-based noise removal tends to be the highest-fidelity way to clean audio, though it is very CPU-heavy, especially when realized in real time. In class we will compare noise gating, EQ filtering, and FFT noise removal as methods for cleaning noisy audio files.
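For comparison, a noise gate works purely in the time domain. This per-sample version is deliberately crude (real gates smooth the detection with an envelope follower plus attack and release times, and the threshold here is an arbitrary example value):

```python
def noise_gate(samples, threshold=0.1):
    """Crude noise gate: pass samples at or above the threshold, zero the
    rest. Without attack/release smoothing, a gate like this clicks as it
    chatters open and closed around the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Quiet hiss is removed; louder material passes through unchanged
gated = noise_gate([0.02, -0.03, 0.5, -0.4, 0.01])
```

Unlike FFT-based removal, a gate can only silence quiet passages; it cannot take hiss out from underneath louder material.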


Spring Class 8: Live Electronics in Logic Pro

Using Logic Pro for live processing and synchronizing electronics.


Spring Class 9: Preliminary Scores and Recordings Due

Review of scores, recordings, and technical riders.


Spring Class 10: Synchronization (click) + Listening Session

How to synchronize the live performer with your electronic part
Click Track
Cues
Stopwatch
Score Following


Week 10: Listening session

Listening and critiquing your works in progress as a class


Week 11:  Mixing a multi-track concert recording

Balancing room mics and close mics
Using busses
Automation
Compression and Riding


Week 12: Class replaced by Individual Meetings