this space intentionally left blank

December 31, 2008

Filed under: music»recording»production

Don't Fear the ReaFIR

I don't really know how to say this, but: I forgot to get you anything for the holidays. I feel terrible, honestly. And after you got me such a lovely sweater.

(Belle! Take the sweater off the cat! She has some dignity to preserve!)

I'm sorry. Let me make it up to you. Here, have a peppermint stick and a quick tutorial on cheap noise reduction.

There are two cardinal sins of audio that I've committed, and which I've noticed in work by others, since it became easy to produce digital audio and video--by cardinal sins, I mean errors that make it instantly evident that this is not a professional production. The first is bad mike technique--having the microphone too far back, or too close, or using the wrong kind of microphone for the task at hand. The second is noise--noise from preamps, noise from wind and AC systems, or just the hum of a bad ambient environment.

The thing is, mike technique is hard. And you don't always have the option of great equipment, or the time to perfectly position it. You can't fix mike technique for free. And noise is also hard--I have noisy recordings all the time, because I use relatively dirty preamps with very quiet microphones, and I record in locations that aren't soundproofed (it is also likely that I'm simply not as good at this as I think I am). But constant and regular noise (such as that caused by a cheap preamp or a climate-control system) can be cleaned up (or at least, minimized), for free, after recording. And it gives us a chance to learn about DSP! Who doesn't love that?

Before going into the details of our signal processing, though, a disclaimer: sometimes simpler ways of dealing with noise are better. For example, rather than worry about filtering, you could always just mask the noise with background music. Or you could use a noise gate, which dips the volume when a person isn't talking. But I find that without music or something else to fill the spectrum, a gate can actually make noise more noticeable, since the voice "pops" in from the silence. Besides, there are plenty of times when background music just doesn't match the desired mood, or when it's distracting. In those cases, a slight amount of filtering combined with a gentle gate has produced very good results for me.
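Incidentally, a gate is simple enough to sketch in a few lines of Python. This is a toy illustration, not any particular plugin's code--the threshold, frame size, and floor values here are made up for the example:

```python
import math
import random

def noise_gate(signal, threshold=0.02, frame_size=256, floor=0.1):
    """Crude noise gate: turn down frames whose RMS level is below a threshold.

    signal is mono audio as floats in [-1, 1]. floor is the gain applied
    to gated frames--0.0 would be a hard gate, but leaving a little
    leakage sounds less abrupt when the voice "pops" back in.
    """
    out = []
    for start in range(0, len(signal), frame_size):
        frame = signal[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        gain = 1.0 if rms >= threshold else floor
        out.extend(s * gain for s in frame)
    return out

# A quiet hiss sits below the threshold and gets ducked;
# a loud tone passes through untouched.
random.seed(0)
hiss = [random.uniform(-0.01, 0.01) for _ in range(4096)]
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(4096)]
gated_hiss = noise_gate(hiss)
gated_tone = noise_gate(tone)
```

Real gates also smooth the gain changes over time (attack and release) so the transitions don't click.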

So let's say that we've got an interview recorded in a room with the AC fans running in the background, and on playback it doesn't sound great. What we're going to use to strip the white noise out of this audio clip is a Finite Impulse Response (FIR) filter. As the name might suggest, this kind of filter stands in contrast to the Infinite Impulse Response (IIR) filter. Both work on the same basic principles; FIR just limits its scope a bit. Although the math for these filters quickly becomes complex, at heart they rely on a very simple principle of weighted averages.

Remember that digital audio is represented as a series of numbers, each of which represents the value of a sample at a specific point of time. From sample to sample, sounds with high frequency content will show more change than those with little high frequency content, simply because the innate property of a high-frequency wave is its rapid change over time. So to filter out high frequencies, the easy approach is simply to generate a new wave, where each sample is the average of itself and the samples around it. That "smoothes out" the high frequency sounds, but leaves the low frequencies--which, after all, change much less from sample to sample--basically unaltered. Other kinds of EQ filters can be generated by altering the weights for each sample in the average.
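That averaging idea fits in a few lines of code. Here's a hypothetical Python sketch--real filter design chooses the weights much more carefully, but the mechanics are just this weighted sum over neighboring samples:

```python
def fir_filter(samples, weights):
    """Apply an FIR filter: each output sample is a weighted average of the
    input sample and its neighbors. Equal weights give a plain moving
    average, which acts as a crude lowpass filter."""
    half = len(weights) // 2
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j, w in enumerate(weights):
            k = i + j - half  # neighbor index, centered on sample i
            if 0 <= k < len(samples):  # treat samples past the edges as zero
                acc += w * samples[k]
        out.append(acc)
    return out

# A signal that alternates every sample (the highest frequency we can
# represent) gets flattened, while a slow ramp passes almost unchanged.
high = [1.0, -1.0] * 8
low = [0.1 * n for n in range(16)]
smoothed_high = fir_filter(high, [1/3, 1/3, 1/3])
smoothed_low = fir_filter(low, [1/3, 1/3, 1/3])
```

Changing the weights changes the frequency response: three equal taps make a gentle lowpass, while other weight patterns produce highpass, bandpass, or arbitrary EQ curves.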

What's really interesting about FIR is that you can combine it with a Fast Fourier Transform (also known as an FFT, which is a fascinating process for doing spectral analysis using math I don't completely understand) to determine the weighting for a desired filter curve. This is what the plugin we'll be using, ReaFIR, does to perform its noise reduction. Using the FFT analysis window, it takes a fingerprint of the noise we want to remove, and then sets up a filter to subtract that from the audio stream.
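To make that concrete, here's a toy spectral-subtraction sketch in Python. It uses a naive DFT rather than a real FFT, and it's certainly not ReaFIR's actual code, but the "take a fingerprint, then subtract it from every frame" step looks roughly like this:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (a real FFT does this much faster)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse transform back to (real) samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def spectral_subtract(frame, noise_profile):
    """Shrink each frequency bin by the noise fingerprint's magnitude.

    noise_profile holds the magnitude of the noise in each bin (think of
    ReaFIR's red line). We subtract it from each bin's magnitude, floor
    at zero, and keep the original phase.
    """
    cleaned = []
    for bin_val, noise_mag in zip(dft(frame), noise_profile):
        mag = max(abs(bin_val) - noise_mag, 0.0)
        cleaned.append(cmath.rect(mag, cmath.phase(bin_val)))
    return idft(cleaned)

# Toy example: a "voice" tone plus a steady noise tone. Fingerprint the
# noise alone, then subtract it from the mix.
N = 16
voice = [math.cos(2 * math.pi * 1 * n / N) for n in range(N)]
noise = [0.1 * math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
mixed = [v + s for v, s in zip(voice, noise)]
profile = [abs(v) for v in dft(noise)]
cleaned = spectral_subtract(mixed, profile)
```

A real noise reducer runs this frame by frame with overlapping windows, which is also where the whistling artifacts mentioned below come from: bins that hover near the subtraction floor flicker on and off.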

Let's see it in action, step by step:



  1. Add a ReaFIR instance to the track on which you want to perform noise reduction. Set the Mode pulldown to Subtract.
  2. Find a nice, long stretch (1-4 seconds) of relative silence. We're going to use this to build the reduction fingerprint, so you want as pure a sample of the noise by itself as possible. If there are any other sounds, they'll be incorporated into the fingerprint, and you may find yourself filtering out parts of the sound that you didn't want. This sounds really weird, and not usually in a good way.
  3. Check the "Automatically build noise profile" checkbox, and then, using the DAW transport, play the clip you've picked for training. You should see yellow lines representing the frequency domain of the noise jumping across the display, with the red line (which represents the filter) fitting itself to the average of the yellow. Be sure to stop playback before you hit any voice or non-noise content. I often cut the noise out and move it to an isolated section at the end of the track, just in case I let it run too long by mistake.
  4. Now uncheck the "build noise profile" checkbox, and your filter is all set! If you play the track now, the noise should be magically gone, even during other sounds. You'll also probably hear a few artifacts, the most common of which is a slight whistling in the high frequencies due to resonance in the filter bands. I usually find that you can apply a gentle lowpass filter and tame this until it's unnoticeable.
This is really just the simplest trick that you can pull with ReaFIR, although it's the function I use most often. Another neat trick is to apply it as a mastering EQ (making sure to switch the mode from "subtract" to "EQ") after using the FFT to grab a fingerprint from a CD or a piece of music--it'll "clone" the sound of that track for your own, which works well if they're in the same style. An analysis EQ like this is a very useful tool to have around.
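At its core, that cloning trick is just a per-bin gain curve computed from two fingerprints. Here's a hypothetical sketch--the profile numbers and the max_boost cap are invented for illustration:

```python
def match_eq_gains(own_profile, ref_profile, max_boost=4.0):
    """Per-bin gains that push our average spectrum toward a reference's.

    Both inputs are average magnitudes per frequency bin (from an FFT
    analysis of each track). Applying gains[i] to bin i of our track
    nudges its long-term spectral balance toward the reference.
    """
    gains = []
    for own, ref in zip(own_profile, ref_profile):
        if own <= 1e-12:
            gains.append(1.0)  # empty bin: nothing to reshape
        else:
            gains.append(min(ref / own, max_boost))  # cap runaway boosts
    return gains

# Our mix is bass-heavy and dull next to the reference, so the curve
# cuts the low bins and boosts the high ones.
ours = [4.0, 2.0, 1.0, 0.25]
reference = [2.0, 2.0, 1.0, 1.0]
gains = match_eq_gains(ours, reference)
print(gains)  # [0.5, 1.0, 1.0, 4.0]
```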

Well, I'm glad we got this sorted out. I'm sure you'll agree it's much better than a fruitcake, which was my backup gift. And just think: now that you've got this under control, we can celebrate the next holiday with an in-depth discussion of convolution reverb, which is based on many of the same principles. Why, maybe we could even start now...

Oh, you have to go? So soon? Ah, that's a shame, but if you must...? Then you must. I understand. Have a safe trip, then. And happy new year!

September 23, 2008

Filed under: music»recording»production

Unnatural Selections

Buying an MP3 player for the first time has made me think a bit more about the weirdness of contemporary popular music.

I used to rail about MP3, but writing the Audiofile articles for Ars opened my eyes a lot to the realities of the technology. I've also mellowed out on sound quality when it became obvious that MP3 was a disruptive technology for individual musicians, and as I thought more about the ecological impact of CDs. I'm still not very keen on buying MP3s directly, so I'm trying out the Zune music subscription, and so far I like it quite a bit. I find it's helpful to think about it as a paid replacement for Pandora, one with lots of extra features and offline capability, instead of as a "rental" system.

But as I go through the honeymoon period with the hardware, I'm listening to a lot more music. I'm listening to it a lot more closely, trying to keep my "producer's ear" in practice (as much as it ever was). And when you do that, the surreality of modern recordings is really fascinating.

For example, I was talking to a friend a while back about recording tricks, and I mentioned the standard technique of using a sidechained compressor on drum tracks to make the snare "pop" more or tame boominess. Most people are aware of compression in general terms, as part of the mastering step--the prevalence of Loudness Wars articles makes sure of that. But I don't think most listeners are aware that individual tracks are also compressed, and that the compression can be triggered by other, separate tracks--or that this is, in fact, a special effect that's part of the modern rock sound.

To the average person, this kind of production is transparent, because it sounds "natural" to us now. We think of that as the way music would sound--under great conditions, granted, but still plausible. But when you start to break apart the processing that's done on even stripped-down productions, and you consider how that compares to, say, a person standing in a room with a band, it starts to form a bizarre picture. Take the following list:

  • The guitars and half the drums may be tied together in one "room" or acoustic space by a reverb.
  • Bass and kick-drum usually don't get reverb because it muddies the mix, so they're in another "room," one that's acoustically dead.
  • Vocals get yet another reverb setting, usually, depending on the effect the engineer's looking for.
  • Drum levels are compressed, often separately, in a way that sometimes--but not always--mimics the response of the human ear to loud sounds. Other tracks, however, are not compressed with the same psychoacoustic triggers. It's like some things are "loud" without actually being higher in the mix.
  • Even simple guitar parts are often double- or triple-tracked, and they're recorded with mikes right up next to the cabinet, as if the listener had their ear right in front of the speakers.
  • Simultaneously, the listener is also directly "in front" of the vocalist, who is also standing (in the stereo field) probably in front of the drums.
  • None of these elements cast any kind of acoustic shadow, or block any of the others from being heard.
It's a profoundly unreal set of manipulations, perversely designed to make music that sounds more real to the listener. It's so good, in fact, that it sounds more real than the real thing. Audio pundits often complain about the glossy perfection of music production, but there's another way to think about it, and that is that all of this production is intended to flatter the listener with the powers of omniscience. The reason producers work so hard to eradicate mistakes is that the audience will be able to hear everything in a way that no physical person ever could.

March 2, 2008

Filed under: music»recording»production

Why Records DO Sound All the Same

There's a little-watched video on Maroon Five's YouTube channel which documents the torturous, tedious process of crafting an instantly forgettable mainstream radio hit. It's fourteen minutes of elegantly dishevelled chaps sitting in leather sofas, playing $15,000 vintage guitars next to $200,000 studio consoles, staring at notepads and endlessly discussing how little they like the track (called Makes Me Wonder), and how it doesn't have a chorus. Even edited down, the tedium is mind-boggling as they play the same lame riff over and over and over again. At one point, singer Adam Levine says: "I'm sick of trying to engineer songs to be hits." But that's exactly what he proceeds to do.

...from Tom "Music Thing" Bramwell's article in Word Magazine.

Every year someone writes an article along these lines--between digital technology, aggressive mastering, and the monolithic industry control of radio, they say, music's all shot to hell and we're all going to die. I mean no disrespect to Tom, who (as always in these articles) raises a lot of points I happen to agree with. But you're preaching to the choir, friends.

A lot of this is just disguised fervor for the good old days of analog, when making music was hard and expensive. That can be safely discounted. For the rest, which basically laments that "commercial" sound, what's there to say? I personally doubt that cheap earbuds are going to end the trend, and frankly high-def sound formats show no sign of taking off. Compression and pop mastering are here to stay.

But look at it this way: The Shins made Chutes Too Narrow in 2003, and no-one would call that a "polished"-sounding record. After Garden State, everyone may well be sick of the album, but the point remains that people are still making music without a stereotypical studio sound. I can name three or four without even trying hard. They're not on the radio, though, and they're not going to be.

In the meantime, berating the music that is on the radio when it's commercial-sounding is a lot like burning yourself on the stove and then getting angry at it for being hot. What did you expect? That's what it's for. If you don't like it, quit sticking your hands in the flames.

January 10, 2008

Filed under: music»recording»production

Rubber Factory

A little while back, David Byrne had a piece in Wired about the new digital landscape for musicians. He's now published some corrections based on feedback from musicians who say that you can't possibly make a record for nothing, as he claimed.

Well, it's true that he exaggerated, but I'm not sure that his correspondents aren't doing the same.

"While it's true that the laptop recording setup made self-produced recordings worlds easier than before, the simple truth is that laptops alone don't make records. First off, there is the peripheral equipment needed...microphones, stands, cables, pre-amps, sound cards, headphones, speakers, hard-drives, instruments, etc. And while the cost of the aforementioned has cascaded in the past decade, a complete and flexible home studio setup still comes at a price. Then, of course, there is the issue of know-how--recording skills and technique--two incredibly important factors in making a decent sounding recording, and two things that don't come "with the laptop". Lastly, there is mastering, currently hovering (at the low end scale) at around $750-$1,000. Even these moderate costs can make recording out of reach for many bands.

"All tolled, in addition to the laptop, a band is looking at between $5,000 - $10,000 in extra costs just to have the ability to record themselves (I am talking about having enough equipment to record a four-piece band live with enough channels to mic a drum-kit). Yes, there are alternatives, rental being one of them. But, that still doesn't account for the skills and technique part of the equation. The only analogy that comes to me is, you can buy a cheap pair of scissors at every corner store, but that doesn't mean everyone (wants to or) should be out there cutting their own hair."

There are a couple of respectful objections I think should be raised: First, rock bands are not the end-all and be-all of home recording. Not everyone needs to simultaneously record a full drum kit with the rest of a four-piece. Not everyone even has a drummer. Many genres of music--techno, industrial, dance, hip-hop, and some of the weirder indie stuff--can easily be done using minimal hardware, recording one track at a time. Even rock and blues can be done on a shoestring: the Black Keys' Thickfreakness was recorded on a Tascam 8-track from the 80s in the drummer's basement, and Rubber Factory--which I told someone the other day is my pick for the top album of the decade--was done in an abandoned building. It's only the obsession with perfect clarity and the "processed" sound that says that you need to do things with lots of tracks and expensive equipment.

Second, the question of mastering seems to me like it's less urgent in these days of shuffled MP3s, and given the emphasis on digital distribution in Byrne's article. How much mastering do you need to put something online? I'm not the most experienced engineer, but I think you can do pretty well with an analyzer, a decent EQ plugin, and a limiter (Kjaerhaus gives away their old mastering limiter for free, and I've had good results from it). Most people just aren't listening to music closely enough for it to matter whether you had it professionally mastered.

But there are good points to be made about the cost of equipment. I'm lucky enough to scratch my purchasing itch regularly, but most people--particularly many people who want to be "professional musicians"--can't do that. So it occurs to me that although the last thing the world needs is a new social network, there should be a place for musicians to get together and pool their resources for playing and recording. If I own a laptop, and you own an interface, and she owns some drum mikes, and that guy over there owns a decent preamp, it only makes sense for everyone to help each other out. Add some reputation systems to the mix, and see what self-organizes.

June 25, 2007

Filed under: music»recording»production

Arpeggios and Pedagogy

The next couple of days are all about the WBI Learning Week event--and more specifically, for my part, they're all about the expert interview podcasts. I wrote this theme on Friday for the podcasts, and then spent about an hour over the weekend beefing it up and remixing it. The defining feature is a three-octave arpeggio of a chord that I can't entirely place (I think it's a C major with a flatted 7th--the components are C, E, G, and Bb). I also hooked a drum map into the arpeggiator, latched the whole bunch, and then split the keyboard so that I could play the string accompaniment on the rest of the keys. It was a surprisingly effective way to put together a tune, and something I would have probably never done on my little two-octave keyboard.

Frankly, I'm writing more music nowadays on synth at work than I am on bass at home, and I still feel like my theory knowledge could be stronger (see limited chord knowledge above). So I have decided that I need to relearn how to play keys, and I might as well brush up on my reading while I'm at it. One of the video editors has a Yamaha DX27 that he brought in and abandoned, and I may see if he'll let me borrow it until I can save my pennies for a synth of my own. Next stop, Ben Folds transcription book.

In case you are wondering (you probably aren't), the arpeggios, string pad, high-hat, and kick in the theme are the X-Pand! synth--in fact, they're all on the same track, thanks to that split keyboard trick and X-Pand's multi-voice patches. The Indian flute, tambourine, cabasa, and choir are all Sampletank SE. I have to work around the bargain-basement palette of these two plugins most of the time--for example, I wanted the swells at the end of each phrase to be muted trumpets, but neither synth does them well, so instead they're a combination of choir and cello section. I imagine it would be interesting next year if my successor starts looking at soft synths come purchasing time, but it wasn't a priority this time around.

April 30, 2007

Filed under: music»recording»production»post

Filmsound

For my own future reference, but others may find it interesting: FilmSound is a site devoted to sound design and scoring for films, including foley and post-production. There's a large section on Star Wars that's fairly interesting, including the method of creating the lightsaber (a mic was placed inside a long tube, then that was waved between speakers playing the lightsaber "drone" to create its doppler-like movement).

April 9, 2007

Filed under: music»recording»production

We Have a Theme Song

I love theme songs. If the Bank promised me I could write one a week, I'd never leave. I'd also never have time to write here, as you can tell.

Gender Statistics Theme

I like this one. It's for a narrated slideshow on gender statistics, aimed at policymakers. I spent an hour or two writing it, and although I really wanted formless female vocals to fill it out, there aren't a lot of singers in my department, and I think the Cello in Sampletank is lovely.

So I call the task manager in to listen, being very proud of myself. He gets through ten seconds, and then says: "No, no, no. This sounds like a theme for gender statistics. I don't want that."

You didn't?

"No. Gender statistics is exciting! I wanted something with a beat!" You didn't say that. I can only write for what I'm told. "What did we use for those instructional videos?" So I play him the crappy stock music, and "Perfect. That's what we'll use."

Anyone want a cello theme? Slightly used? But seriously, I'm thinking about fleshing it out a bit this weekend, and it could be a great backing track. Not a total loss.

Slum Upgrading Theme

This snippet is for a self-running slideshow on urban slum upgrading. In the next thirty years, urban areas will grow tremendously in the developing world, and most of them will be slums unless we do something about it. The slideshow is meant to give people a few options, and keep them from losing the political will to take action, because the solutions aren't honestly that difficult.

It's also being presented in Nairobi in a week or two, and the task manager wanted drums. I hate drums as a shorthand for Africa, and I always feel hypocritical doing it. So I worked with one of the team leaders from the project, and proposed doing something more like this instead. It uses a rhythmic synth drone to give the piece movement and an urban feel, a little West African-wannabe guitar, and kalimba to provide the "tick-tock" sounds. There's African instrumentation, in other words, but it doesn't scream "Tarzan." It also doesn't sound too depressed, too excited, or too martial, all of which would be inappropriate for slums.

February 19, 2007

Filed under: music»recording»production

Studio C

Post-weekend organizing:

That's a pretty good little home studio, if I say so myself. I got my virtual effects rig up and running again this weekend. It's all battery-powered now, so theoretically I could play live with it, but I think it's better suited for recording only. The laptop's battery life still isn't spectacular, and the audio interface sounds a little flat through the bass amp.

February 14, 2007

Filed under: music»recording»production

M-Powerment

At some point, I'm going to have to pick up a copy of Pro Tools for my home studio, for two reasons. First, I'll probably be consulting for WBI after my contract runs out, and I want to be able to open project files without any import/export issues. Second, I've honestly grown to like Pro Tools. Cubase is good software, but in my (admittedly limited) experience with it, it's the kind of software that believes every function needs a new window. Soft synths? New window! Mixer? New window! MIDI editing? New window! Channel strip? GUESS WHAT WE'RE DOING!

I kid because I love, of course. Cubase also has a lot of great features that Pro Tools doesn't have, like support for VST plugins and a wider range of hardware. And even though it opens up all those windows, it does get credit for making it easy to move between them--from almost anywhere in the application, you can get to a track's plugins, sends, and channel strip information. But for fast and flexible audio production, Pro Tools is still a monster.

But it's never been budget software, frankly. Some may disagree with the comparison, but Digidesign (the people behind Pro Tools) have always reminded me a bit of Apple: they like to use their own hardware (the merits of which are strongly debated), they're seen as more expensive than the competition, and they're pretty much standard issue at professional studios. Of course, in professional studios, a Pro Tools rig doesn't mean the same thing that we've got here at the Bank. A top-of-the-line Pro Tools HD setup offloads the effects and signal processing to outboard DSP chips contained in big, expensive rackmount units. Home users unwilling to spend more than $10,000 on a recording setup have two Pro Tools choices that are file-compatible but use host-based processing instead: Pro Tools LE and M-Powered.

My dilemma comes from picking between those two home versions. At the Bank, we're using an LE system (the very nice Digi 002), which requires Digidesign hardware (in this case, a big mixer-looking chunk of machinery that acts as both an interface and a control surface). That used to be the only budget choice. But then Avid (the parent company of Digidesign) bought M-Audio, makers of a ton of audio interfaces, and together they put out Pro Tools M-Powered, which is basically identical to LE but only runs using M-Audio hardware.

Now, I already own an M-Audio interface, the Firewire Solo. I like it. It seems solid, it's very low-latency, and other M-Audio hardware is relatively cheap so it would be easy to upgrade to a bigger system. Going to Pro Tools just means buying the software, which runs about $250. The downsides are that it's dongle-protected (so I'd have to carry around a little USB key in addition to everything else) and it doesn't come with as many plugins as the LE packs do.

Normally, I'd just have to bite the bullet on those deficiencies, because Digidesign's most affordable LE system was the Mbox 2, starting at $450. But they've just started shipping the Mbox 2 Mini, a small USB-based LE system. It doesn't offer very much in the way of input, but it's only $300, or $50 more than the M-Powered system. The Mbox also acts as a hardware dongle, meaning that I wouldn't need to carry the iLok dongle in order to use the software. On the other hand, it's expensive to upgrade an LE system (the cost of software is built into the price for new hardware) and I might still need a USB key for authorization if I bought any plugins.

So although I'm tempted, in the end I have to believe that for my small studio M-Powered will be the most logical choice. I like having a bigger choice of hardware, even if it does all have to come from M-Audio, and the overall costs are probably much lower (on par with or cheaper than the competition, actually). If I had a couple thousand dollars, I'd probably want to shell out for the Digi 002 system, because I've learned to appreciate having well-built physical faders for mixing, and then I'd add an Mbox 2 Mini for portable work. But I don't have that kind of money. I'm thinking I'll pick up the M-Powered, and then use the extra $50 toward either a Jamlab USB interface (very portable) or fxpansion's VST-to-Pro Tools plugin wrapper (thus negating the only feature I'll really miss from Cubase and Ableton Live).

January 8, 2007

Filed under: music»recording»production

Quick Fixes for Better Sound

Craig Anderton has an article in EQ Magazine this month with lots of cheap and easy recording fixes. I'm interested in making the change to 88.2kHz, 24-bit audio myself, after a Sound On Sound editorial discussed why bass sounds better when there's more resolution available to describe low-frequency waveforms.

Not to come across like a broken record, but what you can't do is increase audio quality after discarding large chunks of digital information--i.e., after compression to MP3 or AAC. Wired reviews a few devices that claim to restore audio quality to portable music. My favorite snake oil is the third review, the Creative X-Fi, which garners a good score even though Wired can't say for sure what it did, or even if it did much at all. Judging by the demo on Creative's site, it sounds like a smile EQ setting, and possibly a little drive for warmth. You'd be better off buying a bigger hard drive and just ripping your CDs lossless. It takes a bit more work, but there would be a real, valuable audio difference.
