The A to Z of computer music: F
24th Jun 2013 | 15:27
Our compendium of digital production terms falls on frequency, filters, feedback and more
Get your digital music fix with a frenzy of F words. We've separated fact from fiction and put it all in the key of F.
As well as their creative application for gradually introducing or removing sounds from a mix, very short fade-ins and fade-outs are an essential part of good audio editing practice. They prevent the clicks and pops heard at the start and/or end of samples when the audio signal jumps sharply from or to the zero point rather than smoothly coming to rest on it.
Some DAWs, such as Ableton Live, actually apply tiny fades to imported and recorded audio clips by default, under the assumption that it almost always needs doing - and if it doesn't, the listener won't notice the fades anyway. Watch out, though, as this can sometimes dull the initial transient of percussive sounds like drums and plucked strings.
The top waveform ends on a non-zero value and will audibly click; below, we've fixed it with a fade-out.
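To see what such a fade actually does to the audio data, here's a minimal sketch in Python using NumPy. The 5ms linear fade and the function name are our own illustrative choices - real editors often offer curved or equal-power fade shapes, too.

```python
import numpy as np

def apply_edge_fades(samples, sample_rate, fade_ms=5.0):
    """Apply short linear fade-in/out ramps to avoid clicks at clip edges.

    A sketch only: fade_ms=5.0 is an arbitrary illustrative default.
    """
    out = samples.astype(np.float64).copy()
    n = max(1, int(sample_rate * fade_ms / 1000.0))
    n = min(n, len(out) // 2)    # don't let the two fades overlap
    ramp = np.linspace(0.0, 1.0, n)
    out[:n] *= ramp              # fade-in
    out[-n:] *= ramp[::-1]       # fade-out
    return out

# A clip that ends abruptly on a non-zero sample would click on playback;
# after the fade-out, its last sample sits exactly on the zero point.
sr = 44100
t = np.arange(sr // 10) / sr
clip = np.sin(2 * np.pi * 440 * t)     # 100ms of A440
faded = apply_edge_fades(clip, sr)
print(faded[0] == 0.0, faded[-1] == 0.0)  # -> True True
```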
A vertical (or, occasionally, horizontal) sliding controller, either virtual or physical. Faders are commonly used to set the channel volume levels in a mixer, but they can also be found in the interfaces of many plugin synths, samplers and effects.
When the output of a device is routed back into its input (either via cable or microphone in the physical world) feedback occurs, with the signal layering repeatedly on top of itself.
This can be a useful phenomenon when employed in a controlled manner (in delay plugins, for instance, to create repeated echoes that gradually get quieter), or it can be the unwelcome result of a routing mishap, in which case the feedback can build up fast and loud enough to cause damage to loudspeakers and other equipment, including your ears.
Fast Fourier Transform. In very simple terms, an algorithm that converts audio between its time-domain (waveform) representation and a frequency-domain (spectral) one, such as you'd see in a graphical editor like Adobe Audition. This can be used for analysis (think spectral analysers) or to process the converted audio in some way before turning it back into a waveform for output. Most software with 'spectral' in the title uses FFT to do its thing.
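In Python, NumPy's FFT routines make the waveform-to-spectrum conversion a one-liner. Here's a small illustrative sketch - the sample rate and test tone are arbitrary choices of ours - that picks out the dominant frequency of a sine wave, spectral-analyser style:

```python
import numpy as np

sr = 8000                                  # sample rate in Hz (arbitrary)
t = np.arange(sr) / sr                     # one second of time values
signal = np.sin(2 * np.pi * 440 * t)       # a 440Hz test tone

spectrum = np.abs(np.fft.rfft(signal))     # waveform -> frequency domain
freqs = np.fft.rfftfreq(len(signal), 1 / sr)
peak = freqs[np.argmax(spectrum)]          # bin with the most energy
print(peak)                                # -> 440.0
```

With exactly one second of audio, each FFT bin is 1Hz wide, so the tone lands squarely on bin 440; real analysers window the signal and average many short FFT frames instead.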
In the context of music production, a filter is a device for attenuating specific user-selected frequencies in an audio signal. The frequency at which this attenuation begins is called the cutoff point, and depending on the filter type, the attenuated frequencies will either be above (low-pass), below (high-pass), around (notch) or on both sides of (band-pass) the cutoff.
A resonance control is often present for applying a bit of boost to the frequencies directly around the cutoff point, and the attenuation's steepness is determined by the number of 'poles' in the filter, each one adding 6dB of attenuation for every octave the signal moves away from the cutoff frequency. So, a 4-pole low-pass filter lowers the volume of the signal by 24dB for every octave it goes above the cutoff point.
One final point: we said the attenuation begins at the cutoff point, but in reality, it's not such a precise science as the transition is gradual.
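The pole-and-slope arithmetic can be checked in a few lines of Python. This uses an idealised, resonance-free textbook low-pass response - not any particular filter design - with a 1kHz cutoff chosen purely for illustration:

```python
import math

def lowpass_gain_db(freq, cutoff, poles=1):
    """Gain in dB of an idealised, resonance-free low-pass filter.

    A textbook model, not a specific filter design: each pole
    multiplies in another 1/sqrt(1 + (f/fc)^2) term, which tends
    towards 6dB of attenuation per octave well above the cutoff.
    """
    mag = (1.0 / math.sqrt(1.0 + (freq / cutoff) ** 2)) ** poles
    return 20.0 * math.log10(mag)

cutoff = 1000.0
# Roll-off per octave, measured between 3 and 4 octaves above the cutoff
per_octave = {
    p: lowpass_gain_db(8000.0, cutoff, p) - lowpass_gain_db(16000.0, cutoff, p)
    for p in (1, 4)
}
print(round(per_octave[1]))  # -> 6 (dB per octave: 1 pole)
print(round(per_octave[4]))  # -> 24 (dB per octave: 4 poles)
```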
Developed by Apple in the early '90s, FireWire (or IEEE 1394, to give it its technical name) was, for a while, the fastest connection standard available for external hard disks and audio interfaces. FireWire 400 offers a theoretical maximum transfer speed of 400Mbit/s, while FireWire 800 doubles this to 800Mbit/s - more than enough for bi-directional 8-channel audio I/O.
USB 3, eSATA and Thunderbolt have since trounced FireWire in terms of speed, but it's still a popular connection choice.
There's hardware, there's software, and then there's firmware sitting somewhere between the two. While your audio interface or MIDI controller interfaces with a driver on your Mac or PC, it's the firmware that actually does the talking and listening - without it, your digital device is just a box of componentry.
A firmware update could change the default assignments on your MIDI controller, perhaps, or fix a compatibility issue between your audio interface and a new version of your computer's operating system. If you discover that a new firmware version is available for any given device, it's a good idea to apply it.
Originally known as Fruity Loops, Image-Line's quirky Windows DAW is one of the most popular music applications ever made.
From humble beginnings as a 4-track sample-based drum machine, it's grown into a powerful, fully-fledged production system complete with its own instruments and effects. There's even a mobile version for iOS and Android, and we gather Mac support might well be coming soon, too. Read our review of the latest version.
A classic studio effect allegedly invented by Abbey Road engineer Ken Townsend, flanging originally involved playing back two identical signals together on two tape decks, with one of them offset in time by a very small amount. The characteristic whooshing sound was created by applying changing pressure to one of the tape reels in order to vary this offset.
Later came flanger effects units, which mimicked the sound using delays and LFOs. These days, flanger plugins can be used for this same task.
In digital signal processing, signal values can either be represented as floating or fixed point numbers.
In floating point numbers, the decimal point can be moved around (ie, 2546.77, 25467.7, 254.677, etc), enabling the representation of very large and very small numbers. In fixed point systems, the decimal point is always in the same place (ie, in a system with a fixed one-place decimal point, 43129.9 can't be floated to become 4312.99).
What this means for music software is that the possible dynamic range in a 32-bit floating point system is absolutely enormous, making it extremely difficult (practically impossible during normal use) to overload or clip the audio signal path since the clipping point is far higher than 0dB. With floating point, quiet signals are represented with as much relative accuracy as louder ones, which isn't the case in a fixed point system.
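A quick NumPy sketch of the difference - the sample values here are arbitrary, with two of them deliberately 'over' full scale:

```python
import numpy as np

# Illustrative samples: the second and third peak well over full scale
signal = np.array([0.5, 1.5, -2.0])

# Fixed point (16-bit integer): scale up and clip hard at the rails
fixed = np.clip(signal * 32767, -32768, 32767).astype(np.int16)
print(fixed[1], fixed[2])        # -> 32767 -32768 (clipped: waveform distorted)

# Floating point: the overs survive intact - just turn the level down later
floating = signal.astype(np.float32)
print(floating * 0.5)            # halved cleanly, nothing lost
```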
A form of synthesis involving the modulation of one audio signal's frequency by another, FM can be implemented using analogue oscillators, but is generally thought of as the concept behind digital FM synths like Yamaha's seminal DX7 (pictured above and gloriously reborn in Native Instruments' FM8).
The operator (as FM oscillators are known) generating the main signal is called the 'carrier', while the one doing the modulating is called the 'modulator'. The number of operators available in an FM synth typically ranges from two to eight, and, as you'd expect, the more you have, the more complex the modulation setup - and therefore the sound - can potentially be.
FM synthesis is known for its punchy basses, sparkling pianos and bells, and generally bright tones. It also has a reputation for being unapproachable and intimidating, though this is largely a hangover from the bad old days of programming a Yamaha DX7 via its tiny, cryptic LCD system.
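Stripped of the DX7's cryptic menus, two-operator FM boils down to a couple of lines. Here's an illustrative NumPy sketch (strictly speaking phase modulation, which is how Yamaha's hardware implemented it; the ratio and modulation index values are arbitrary):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr       # one second of audio
carrier_hz = 440.0           # the 'carrier' operator's frequency
modulator_hz = 880.0         # a 2:1 ratio keeps the result harmonic
index = 3.0                  # modulation depth: higher = more sidebands

# The modulator's output varies the carrier's phase on every sample
modulator = np.sin(2 * np.pi * modulator_hz * t)
voice = np.sin(2 * np.pi * carrier_hz * t + index * modulator)
```

Raising `index` over time (via an envelope, say) is what gives FM its characteristically bright, evolving attacks.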
The sound of the human voice is largely defined by the frequency-based characteristics of vowels (generated via movement of the vocal tract, tongue and lips) and these 'phonetic signatures' are known as formants.
Vocal emulation software like Yamaha's Vocaloid, as well as many synth and effects plugins, enable manipulation of formants in order to make a male vocal sound female, for example, or turn a non-vocal sound into something at least vaguely human - think of 'talking' synth sounds.
Whether released through sheer magnanimity on the part of a developer looking to make their name, or as a spin-off from a full commercial release, freeware is quite simply software that's freely available.
These days, there are freeware plugins available that outclass premium offerings from only a few years ago.
While Computer Music's own CM Plugins suite is indeed free with every issue of the magazine, it's not freeware as such because it's not freely available elsewhere. It's generally dubbed 'magware'.
First introduced in Logic Pro, track freeze is now a feature of most DAWs. Invoking it simply renders the contents of the target track (usually including all loaded effects) in place, thus freeing up CPU and RAM, with the option to instantly unfreeze it at any point for further editing.
The frequency of an audio wave is the rate at which it vibrates, measured in Hertz (cycles per second) and directly related to pitch. The human ear is theoretically capable of hearing frequencies from around 20Hz to 20kHz, but this range narrows with age, such that by 40, the average person's upper limit sits at about 14kHz.
Manipulation of frequency using such tools as filters, EQ and multiband dynamics processing is one of the key concepts in music production, with each instrument type occupying its own characteristic frequency range that generally needs to be cleared to make room for it.
Inaudible frequencies also have their uses: the LFOs in synths, for instance, can produce signals that would be too low in pitch to discern, but using such a signal to control other elements of the synth results in very audible effects. A 1Hz LFO applied to the filter cutoff frequency, for example, will make the filter sweep up and down in a continuous, second-long cycle.
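That 1Hz cutoff sweep can be sketched in a few lines of Python - the control rate and the 200Hz-2kHz cutoff range here are arbitrary illustrative values:

```python
import numpy as np

control_rate = 100                               # control values per second
t = np.arange(2 * control_rate) / control_rate   # two seconds' worth
lfo = np.sin(2 * np.pi * 1.0 * t)                # 1Hz sine, range -1..1

# Map the bipolar LFO onto a cutoff range of 200Hz to 2kHz
cutoff = 1100.0 + 900.0 * lfo
print(cutoff.min(), cutoff.max())                # sweeps 200Hz to 2kHz
```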
The frequency response of a device (a pair of loudspeakers or a plugin effect, for example) is a measure of its spectral output in response to a given input.
It can be measured in various ways, but the ultimate goal in most pro audio gear is for the response to be as close to linear as is possible - ie, the input spectrum curve matches the output spectrum curve - at least when the device's settings are in a neutral position.
A plugin's frequency response can be measured by feeding it white or pink noise then analysing the output with a spectral analyser.
Pitchshifting a signal retains the harmonic relationships between its various components - ie, shifting a 440Hz wave up by an octave means doubling it to 880Hz, which also shifts its second harmonic at 880Hz up to 1760Hz.
Frequency shifting, on the other hand, raises or lowers all affected frequencies by the same amount, rather than by the same ratio, so shifting a 440Hz signal up an octave to 880Hz will push its second harmonic up to 1320Hz (880 + 440).
The resulting sound is very different to the more 'realistic' pitch shifting, and both have their uses. Frequency shifting, for instance, is useful for tuning percussion, since it can be used to adjust the fundamental frequency without affecting the upper frequencies so much.
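The arithmetic difference is easy to demonstrate with the 440Hz example from above:

```python
# Pitch shifting multiplies every frequency by the same ratio, keeping the
# harmonic series intact; frequency shifting adds a fixed offset instead.
harmonics = [440.0, 880.0, 1320.0]             # fundamental + two overtones

pitch_shifted = [f * 2 for f in harmonics]     # up one octave (ratio of 2)
freq_shifted = [f + 440.0 for f in harmonics]  # up by a fixed 440Hz

print(pitch_shifted)  # -> [880.0, 1760.0, 2640.0]: whole multiples of 880Hz
print(freq_shifted)   # -> [880.0, 1320.0, 1760.0]: 1.5x and 2x the new
                      #    880Hz fundamental - the series is now inharmonic
```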
The fundamental frequency (or just 'fundamental') is the frequency in a waveform that determines its discernible pitch, and it's often (but not always) the lowest and/or loudest frequency present. The fundamental frequency of the note A4 is 440Hz, while middle C cycles at 261.63Hz. The fundamental is accompanied by a series of harmonics, which are multiples of it.
Abbreviation of 'effects', but used primarily to describe sound effects rather than effects processors. Sample libraries, for example, often come with a folder full of 'FX', comprising risers, booms, whooshes and the like.