this space intentionally left blank

January 15, 2015

Filed under: music»recording

Ponographic

This week, Neil Young finally made the dreams of heavy-walleted audiophiles a reality by releasing the PonoPlayer, a digital audio player that's specifically made for lossless files recorded at 192kHz. Basically, it plays master recordings, as opposed to the downsampled audio that ends up on CDs (or, god forbid, those horrible MP3s that all the kids are listening to these days). It's been a while since I've written about audio or science, so let's talk about why the whole idea behind Pono — or indeed, most audiophile nattering about sample rate — is hogwash.

To understand sample rates, we need to back up and talk about one of the fundamental theorems of digital audio: the Nyquist limit, which says that in order to accurately record and reproduce a signal, you need to sample at at least twice the highest frequency in that signal. Above the limit, the sampler doesn't record often enough to preserve the variation of the wave, and the input "wraps around" the limit. The technical term for this is "aliasing," because the sampled wave becomes indistinguishable from a lower-frequency waveform. Obviously, this doesn't sound great: at a 10kHz sample rate, a 9kHz audio signal would wrap around and play back in the recording as 1kHz — a transition in scale roughly the same as going from one end of the piano to the other.
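You can actually watch this happen with a few lines of numpy — just a sketch of the math, nothing Pono-specific. A 9kHz cosine sampled at 10kHz produces exactly the same sample values as a 1kHz cosine, which is the wraparound in action:

    import numpy as np

    fs = 10_000                    # sample rate: 10kHz
    n = np.arange(50)              # 50 samples, about 5ms of audio
    tone_9k = np.cos(2 * np.pi * 9_000 * n / fs)  # above the 5kHz Nyquist limit
    tone_1k = np.cos(2 * np.pi * 1_000 * n / fs)  # its alias
    print(np.allclose(tone_9k, tone_1k))          # True: indistinguishable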

To solve this problem, when digital audio came of age with CDs, engineers did two things. First, they put a filter in front of the sample input that removes anything above the Nyquist limit, which keeps extremely high-frequency sounds from showing up in the recording as low-frequency noise. Second, they selected a sample rate for playback that would be twice the frequency range of normal human hearing, ensuring that the resulting audio would accurately represent anything people could actually hear. That's why CDs use 44.1kHz sampling: it gives you signal accuracy up to 22.05kHz, which is frankly generous (most human hearing actually drops off sharply at around 14kHz). There's not much point in playback above 44.1kHz, because you couldn't hear it anyway.

There's a lot of misunderstanding of how this works among people who consider themselves to be audiophiles (or musicians). They look at something like the Nyquist limit and what they see is information that's lost: filtered out before sampling, then filtered again when it gets downsampled from the high-resolution Pro Tools session (which may need the extra sample data for filtering and time-stretching). But truthfully, this is a glass-half-full situation. Sure, the Nyquist limit says we can't accurately record above 1/2 the sample rate — but on the other hand, below that limit accuracy is guaranteed. Everything that people can actually hear is reproduced in CD-quality audio.

This isn't to say that the $400 you'll pay for a PonoPlayer is a total scam. Although the digital-to-analog converter (DAC) inside it probably isn't that much better than the typical phone headphone jack, there are lots of places where you can improve digital audio playback with that kind of budget. You can add a cleaner amplifier, for example, so that there's less noise in the signal. But for most people, will it actually sound better? Not particularly. I think it's telling that one of their testimonials compares it to a high-end turntable — vinyl records having a notoriously high noise floor and crappy dynamic range, which is the polar opposite of what Pono's trying to do. You'd probably be better off spending the money on a really nice set of headphones, which will make a real difference in audio quality for most people.

I think the really interesting question raised by Pono is not the technical gibberish on their specifications page (audiophile homeopathy at its best), but the motivation: why is this the solution? Neil Young is a rich, influential figure, and he's decided that the industry problem he wants to solve is MP3 bitrates and CD sampling. Why that, of all things?

I find Young's quest for clarity and precision fascinating, in part, because the rock tradition he's known for has always been heavily mediated and filtered, albeit in a way that we could generously call "engineered" (and cynically call "dishonest"). A rock recording is literally unnatural. Microphones are chosen very specifically for the flavor that they bring to a given instrument. Fake reverb is added to particular parts of the track and not to others, in a way that's not at all like live music. Don't even get me started on distortion, or the tonal characteristics of recording on magnetic tape.

The resulting characteristics that we think of as a "rock sound" are profoundly artificial. So I think it's interesting — not wrong, necessarily, but interesting — that someone would spend so much time on recreating the "original form" (their words) of music that doesn't sound anything like its live performance. And I do question whether it matters musically: one of my favorite albums of all time, the Black Keys' Rubber Factory, is a cheaply produced and badly mastered recording of performances in an abandoned building. Arguably Rubber Factory might sound better as an MP3 than it does as the master, but the power it has musically has nothing to do with its sample rate.

(I'd still rather listen to it than Neil Young, too, but that's a separate issue.)

At the same time, I'm not surprised that a rock musician pitched and sold Pono, because it seems very much of that genre — trying to get closer to an analog sound that came from an age of tape. These days, I wonder what the equivalent "quality" measurement would be for music that is deeply rooted in digital (and lo-fi digital, at that). What would be the point of Squarepusher at 192kHz? How could you remaster the Bomb Squad, when so much of their sound is in the sampled source material? And who would care, frankly, about high-fidelity chiptunes?

It's kind of fun to speculate whether we'll see something like Pono in 20 years aimed at a generation that grew up on digital compression: maybe a hypertext-style "hyperaudio" player that can connect songs via the original tunes they sample, and annotate lyrics for you a la Rap Genius? 3D audio that shifts based on position? Time-stretching and resampling to match your surroundings? I don't know, personally. And I probably won't buy it then, either. But I like to think that those solutions will be at least more interesting than just increasing some numbers and calling it a revolution.

July 31, 2012

Filed under: music»recording

History, Sampled

In a move guaranteed to bring every troll of a certain age crashing into the comments section, an NPR All Songs Considered intern wrote this month about listening to Public Enemy's It Takes A Nation of Millions to Hold Us Back for the first time, comparing it unfavorably to (of all people) Drake. I can sympathize, because I too am still new to a lot of classic hip-hop, and I too do not always think it lives up to its reputation. On the other hand, even I don't go around kicking the whole Internet in the shins these days.

Surprisingly, in the kind of serendipity that sometimes rescues online slapfights like this, ?uestlove from the Roots dropped into the comments alongside the vitriol and lent some balanced advice on putting those recordings into context. He also wrote, in a follow-up on Twitter:

Man, That NPR/PE piece isn't bothering me as much as the position of both sides: youngins (I don't listen to music older than me) oldies (I cry for this generation) so you got one side that is dismissive to learning, we got another side dismissive on how to teach. Which leads to that "hip hop on trial" clip in which I spoke about the absence of sampling in hip hop is killing interest in music in general. At its worst sampling is a gateway drug to music you forgot about (listen to "talking all that jazz" by stet).

As a rock musician, I didn't get sampling for a long time, because I didn't really understand the relationship between listeners and producers that samples create. It didn't become clear to me until I started hanging out with the dancers in Urban Artistry, many of whom are also DJs or ferociously dedicated music fans, and realized that their knowledge of music was incredibly deep in part because they were listening to sampled music. What seemed like a lazy way to construct songs disguises an incredibly active listening experience.

That's why I love ?uestlove's commentary, because now that I listen to a lot more hip-hop I catch myself doing exactly what he describes: listening with one ear tuned to the present, and one to the past. Looking up the origins for a beat is a great way to discover classic tunes that I missed, or that I was too young to hear when they were first released. Recognizing a sample sometimes reveals a sly in-joke for a song, or a link somewhere else by virtue of shared DNA. It's not that other genres of music don't do the same thing--jazz musicians do this with riffs, and when I started learning bass, there was a whole canon I was expected to learn, from Pastorius to Prestia--but I guess I like the irony of it: here's the drummer for the greatest hip-hop "live band" justifiably lamenting the lack of sampling because it removes context and discoverability from the music.

A common lament among historians like Jeff Chang or Joseph Schloss is that hip-hop culture is distinctly apocryphal. It's an oral tradition: even in the dance community, moves like the CC or the Skeeter Rabbit are named after their creators as a way of maintaining continuity. Far from disrespecting the original artists, hip-hop music uses samples to put them in a privileged position. Knowing where the sample originates--the song, the record, the artist--marks a fan as someone who's doing their homework, in the tradition of DJs "digging in the crates" for new records to play. The future challenge for both historians and participants in hip-hop is to walk a fine line: preserving the culture without disrupting either its innovative spirit or its built-in mechanisms of respect.

April 8, 2010

Filed under: music»recording»mp3

Bitrot

Digital files don't wear out, right? This is one of the big advantages of the medium, particularly in studio situations: people love the warmth of tape, but it's fragile and it loses a tiny bit of fidelity every time you play it, to say nothing of when you make a copy. If you read a lot of studio how-to articles (a guilty pleasure of mine), a common theme is the engineer who records on tape for the sound, then immediately dumps it into Pro Tools for actual editing and mixing. And of course you can make a perfect copy of a digital file, whereas there's no such thing in analog.

With one exception: back when DRM'd music sales were the norm, the typical way to remove that DRM was to burn the file to a CD and re-rip it to MP3 format. This was seen as kind of a kludge, because the process involves conversion to a lossless .WAV format and then back into lossy psychoacoustic compression. In theory, every time this happens, the latter step means a loss of information, and thus fidelity.

But how much of a loss? I started wondering this when I went to make a CD for a fellow dance student from some MP3 files I'd gotten from More Than A Stance. I didn't know how he planned to play them or how tech-savvy he was, so audio CDs seemed like a better choice than audio files on a data CD. But if he decided to rip the CDs back, how bad would the quality hit be? I decided to find out.

Using some shell scripting (first PowerShell, then old-fashioned batch files--never use a computer without at least one scripting option, kids), I sent a couple of MP3 files through a conversion roundtrip a few hundred times. My choices were "Beam Katana Chronicles" from the No More Heroes soundtrack and a remix of the Jackson Five's "Life of the Party" from DJ D.L.'s Soul Movement II, picking these particular tracks for a few reasons:

  • Both tracks are relatively close to the real-world case I was trying to figure out, with the latter being an actual dance track.
  • Both were layered compositions, with plenty of detail to lose during conversion.
  • Both included strong percussion tracks with plenty of hi-hat and snare--the kinds of high-frequency transient noises that easily smear and blur under psychoacoustic compression.
I used LAME to do the decoding and encoding at a 256kbps bitrate. On the first test, I actually ran the file out to a separate .wav and back. The second time, I figured out how to pipe the stdout from one LAME instance to the stdin of a second, and just bounced it between two MP3 files, which was much faster.
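In case it's useful, the piped version looked something like this, here rewritten as a Python sketch (my originals were PowerShell and batch files, and the filenames below are placeholders, but --decode and the "-" stdin/stdout arguments are real LAME flags):

    import subprocess

    ITERATIONS = 500
    current = "original.mp3"              # placeholder filename

    for i in range(ITERATIONS):
        target = f"bounce_{i % 2}.mp3"    # ping-pong between two temp files
        # One lame instance decodes to a WAV stream on stdout...
        decoder = subprocess.Popen(["lame", "--decode", current, "-"],
                                   stdout=subprocess.PIPE)
        # ...and a second re-encodes it from stdin at 256kbps.
        subprocess.run(["lame", "-b", "256", "-", target],
                       stdin=decoder.stdout, check=True)
        decoder.stdout.close()
        decoder.wait()
        current = target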

The results were surprising. Here's a table with some samples (caution: may be loud), which I'll summarize below.

[Table of audio samples: each track ("No More Heroes" and the DJ D.L. remix) at the original encoding and after 50, 100, and 500 iterations.]
At under 10 iterations, I can't tell a difference between the two files. At 30-50, it's subtle--there's a little bit of swirliness around the high end, and the transients are a little blurry, but nothing more than you'd expect from, say, a turntable. It's not until you hit 100 iterations--that's 100 times going from an MP3 file to a WAV and back--that it starts to become noticeable. At that point, there's some definite artifacting, and you can start to hear a little bit of pumping in the volume after each peak. Even still, it's not much beyond the extremes of dynamic compression that have emerged from the loudness wars, and if you snuck it into my playlist I wouldn't guarantee that I'd pick it out. Once you get beyond 100, it becomes more obvious that something's broken. By 500, there's some real glitchiness going on when the track hits full volume--surprisingly, much more in the NMH track than the J5, although the latter also has its "underwater washing machine" moments.

There are a few holes in my experiment that would be interesting to test:

  • I used a symmetrical encoding and decoding process, with the same codec feeding into itself. It would be interesting to see how a mix of two or more encoders would change these results. It's likely that this would accelerate the decay rate, but would it be enough to overcome the sizeable margin in this test?
  • Likewise, this was a test of high-bitrate encoding--simply because that's the scenario most people would realistically encounter. I'm guessing the minimum bitrate for most people is 192kbps, and anything you buy these days is usually higher. But yes, at lower bitrates I'm guessing the decay would be dramatically worse.
  • Finally, this is a test of MP3. I like MP3, and I think the folks behind LAME have done about as good a job with it as they could, but it is a last-generation compression format. It'd be interesting to see how OGG, AAC, or WMA would stack up against it.

Still, I have to admit this is far better performance than I expected going in, and I was cheering for LAME to begin with. I think we can safely conclude that for limited, real-world cases of digital dubbing, there's no serious loss of sound quality beyond what was already gone after the first MP3 encoding. Burn and rip away!

December 31, 2008

Filed under: music»recording»production

Don't Fear the ReaFIR

I don't really know how to say this, but: I forgot to get you anything for the holidays. I feel terrible, honestly. And after you got me such a lovely sweater.

(Belle! Take the sweater off the cat! She has some dignity to preserve!)

I'm sorry. Let me make it up to you. Here, have a peppermint stick and a quick tutorial on cheap noise reduction.

There are two cardinal sins of audio that I've committed, and which I've noticed in work by others, since it became easy to produce digital audio and video--by cardinal sins, I mean errors that make it instantly evident that this is not a professional production. The first is bad mike technique--having the microphone too far back, or too close, or using the wrong kind of microphone for the task at hand. The second is noise--noise from preamps, noise from wind and AC systems, or just the hum of a bad ambient environment.

The thing is, mike technique is hard. And you don't always have the option of great equipment, or the time to perfectly position it. You can't fix mike technique for free. And noise is also hard--I have noisy recordings all the time, because I use relatively dirty preamps with very quiet microphones, and I record in locations that aren't soundproofed (it is also likely that I'm simply not as good at this as I think I am). But constant and regular noise (such as that caused by a cheap preamp or a climate-control system) can be cleaned up (or at least, minimized), for free, after recording. And it gives us a chance to learn about DSP! Who doesn't love that?

Before going into the details of our signal processing, though, a disclaimer: sometimes simpler ways of dealing with noise are better. For example, rather than worry about filtering, you could always just mask the noise with background music. Or you could use a noise gate, which would dip the volume when a person isn't talking. But I find that without music or something else to fill the spectrum, a gate can actually make noise more noticeable when the voice "pops" in from the silence. Besides, there are plenty of times when background music just doesn't match the desired mood, or when it's distracting. In those cases, a slight amount of filtering combined with a gentle gate has produced very good results for me.
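If you're wondering what a gate actually does under the hood, it's not much: track the signal level, and mute anything that falls below a threshold. Here's a bare-bones Python sketch--real gates add attack and hold controls and gentler fades, but this is the core idea:

    import numpy as np

    def noise_gate(samples, sr=44100, threshold=0.02, release=0.05):
        """Mute everything that falls below the threshold."""
        coef = np.exp(-1.0 / (release * sr))  # smoothing for the level tracker
        env = np.empty_like(samples)
        level = 0.0
        for i, x in enumerate(np.abs(samples)):
            level = max(x, coef * level)      # instant attack, slow release
            env[i] = level
        return np.where(env > threshold, samples, 0.0)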

So let's say that we've got an interview recorded in a room with the AC fans running in the background, and on playback it doesn't sound great. What we're going to use to strip the white noise out of this audio clip is a Finite Impulse Response (FIR) filter. The name distinguishes it from its cousin, the Infinite Impulse Response (IIR) filter. Both work on the same basic principles; FIR just limits its scope a bit. Although the math for these filters quickly becomes complex, at heart they rely on a very simple principle of weighted averages.

Remember that digital audio is represented as a series of numbers, each of which records the value of a sample at a specific point in time. From sample to sample, sounds with high frequency content will show more change than those with little high frequency content, simply because the innate property of a high-frequency wave is its rapid change over time. So to filter out high frequencies, the easy approach is simply to generate a new wave, where each sample is the average of itself and the samples around it. That "smooths out" the high frequency sounds, but leaves the low frequencies--which, after all, change much less from sample to sample--basically unaltered. Other kinds of EQ filters can be generated by altering the weights for each sample in the average.
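In code, the simplest version of that averaging filter is just a convolution with equal weights--a sketch, not production DSP:

    import numpy as np

    def moving_average_lowpass(samples, width=5):
        """Average each sample with its neighbors to smooth out the highs."""
        taps = np.ones(width) / width   # equal weights for every sample
        return np.convolve(samples, taps, mode="same")

Widening the window lowers the cutoff, and replacing those equal weights with a shaped set of taps is exactly the "altering the weights" trick mentioned above.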

What's really interesting about FIR is that you can combine it with a Fast Fourier Transform (also known as an FFT, which is a fascinating process for doing spectral analysis using math I don't completely understand) to determine the weighting for a desired filter curve. This is what the plugin we'll be using, ReaFIR, does to perform its noise reduction. Using the FFT analysis window, it takes a fingerprint of the noise we want to remove, and then sets up a filter to subtract that from the audio stream.
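To make that concrete, here's roughly the shape of the idea in Python--emphatically not ReaFIR's actual code, just a bare-bones spectral subtraction: average the noise clip's spectrum into a fingerprint, then subtract it from every frame of the real audio:

    import numpy as np

    N, HOP = 1024, 512
    WIN = np.hanning(N)

    def frames(x):
        """Windowed FFT frames of a mono signal."""
        return np.array([np.fft.rfft(WIN * x[i:i + N])
                         for i in range(0, len(x) - N, HOP)])

    def denoise(audio, noise_clip):
        fingerprint = np.abs(frames(noise_clip)).mean(axis=0)  # noise spectrum
        spec = frames(audio)
        # Subtract the fingerprint from each frame's magnitude, floored at zero
        mags = np.maximum(np.abs(spec) - fingerprint, 0.0)
        clean = mags * np.exp(1j * np.angle(spec))             # keep the phase
        # Overlap-add resynthesis back into a waveform
        out = np.zeros(HOP * (len(clean) - 1) + N)
        norm = np.zeros_like(out)
        for i, f in enumerate(clean):
            out[i * HOP:i * HOP + N] += WIN * np.fft.irfft(f, N)
            norm[i * HOP:i * HOP + N] += WIN ** 2
        return out / np.maximum(norm, 1e-8)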

Let's see it in action, step by step:



  1. Add a ReaFIR instance to the track on which you want to perform noise reduction. Set the Mode pulldown to Subtract.
  2. Find a nice, long stretch (1-4 seconds) of relative silence. We're going to use this to build the reduction fingerprint, so you want as pure a sample of noise by itself as possible. If there are any other sounds, they'll be incorporated into the fingerprint, and you may find yourself filtering out parts of the sound that you didn't want. This sounds really weird, and not usually in a good way.
  3. Check the "Automatically build noise profile" checkbox, and then, using the DAW transport, play the clip you've picked for training. You should see yellow lines representing the frequency domain of the noise jumping across the display, with the red line (which represents the filter) fitting itself to the average of the yellow. Be sure to stop playback before you hit any voice or non-noise content. I often cut the noise out and move it to an isolated section at the end of the track, just in case I let it run too long by mistake.
  4. Now uncheck the "build noise profile" checkbox, and your filter is all set! If you play the track now, the noise should be magically gone, even during other sounds. You'll also probably hear a few artifacts, the most common of which is a slight whistling in the high frequencies due to resonance in the filter bands. I usually find that you can apply a gentle lowpass filter and tame this until it's unnoticeable.
This is really just the simplest trick that you can pull with ReaFIR, although it's the function I use most often. Another neat feature is to apply it as a mastering EQ (making sure to switch the mode from "Subtract" to "EQ") after using the FFT to grab a fingerprint from a CD or a piece of music--it'll "clone" the sound of that track for your own, which works well if they're in the same style. An analysis EQ like this is a very useful tool to have around.

Well, I'm glad we got this sorted out. I'm sure you'll agree it's much better than a fruitcake, which was my backup gift. And just think: now that you've got this under control, we can celebrate the next holiday with an in-depth discussion of convolution reverb, which is based on many of the same principles. Why, maybe we could even start now...

Oh, you have to go? So soon? Ah, that's a shame, but if you must...? Then you must. I understand. Have a safe trip, then. And happy new year!

September 23, 2008

Filed under: music»recording»production

Unnatural Selections

Buying an MP3 player for the first time has made me think a bit more about the weirdness of contemporary popular music.

I used to rail about MP3, but writing the Audiofile articles for Ars opened my eyes to a lot of the realities of the technology. I've also mellowed out on sound quality when it became obvious that MP3 was a disruptive technology for individual musicians, and as I thought more about the ecological impact of CDs. I'm still not very keen on buying MP3s directly, so I'm trying out the Zune music subscription, and so far I like it quite a bit. I find it's helpful to think about it as a paid replacement for Pandora, one with lots of extra features and offline capability, instead of as a "rental" system.

But as I go through the honeymoon period with the hardware, I'm listening to a lot more music. I'm listening to it a lot more closely, trying to keep my "producer's ear" in practice (as much as it ever was). And when you do that, the surreality of modern recordings is really fascinating.

For example, I was talking to a friend a while back about recording tricks, and I mentioned the standard technique of using a sidechained compressor on drum tracks to make the snare "pop" more or tame boominess. Most people are aware of compression in general terms, as part of the mastering step--the prevalence of Loudness Wars articles makes sure of that. But I don't think most listeners are aware that individual tracks are also compressed, and that the compression can be triggered by other, separate tracks--or that this is, in fact, a special effect that's part of the modern rock sound.
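If you've never seen it spelled out, the mechanics are simple even if the tuning is an art: follow the trigger track's level, and turn the target track down whenever the trigger gets loud. A toy Python version (the parameter values here are made up for illustration):

    import numpy as np

    def sidechain_compress(track, trigger, sr=44100, threshold=0.3,
                           ratio=4.0, attack=0.005, release=0.1):
        """Duck `track` whenever `trigger` rises above the threshold."""
        a_atk = np.exp(-1.0 / (attack * sr))
        a_rel = np.exp(-1.0 / (release * sr))
        env = np.empty_like(trigger)
        level = 0.0
        # Envelope follower: smoothed level of the *trigger*, not the track
        for i, x in enumerate(np.abs(trigger)):
            coef = a_atk if x > level else a_rel
            level = coef * level + (1 - coef) * x
            env[i] = level
        # Above the threshold, cut gain so loudness grows at 1/ratio the rate
        over = np.maximum(env / threshold, 1.0)
        gain = over ** (1.0 / ratio - 1.0)
        return track * gain

Feed the snare into trigger and a boomy tom or reverb bus into track, and you get that familiar effect of the snare punching a hole in everything around it.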

To the average person, this kind of production is transparent, because it sounds "natural" to us now. We think of that as the way music would sound--under great conditions, granted, but still plausible. But when you start to break apart the processing that's done on even stripped-down productions, and you consider how that compares to, say, a person standing in a room with a band, it starts to form a bizarre picture. Take the following list:

  • The guitars and half the drums may be tied together in one "room" or acoustic space by a reverb.
  • Bass and kick-drum usually don't get reverb because it muddies the mix, so they're in another "room," one that's acoustically dead.
  • Vocals get yet another reverb setting, usually, depending on the effect the engineer's looking for.
  • Drum levels are compressed, often separately, in a way that sometimes--but not always--mimics the response of the human ear to loud sounds. Other tracks, however, are not compressed with the same psychoacoustic triggers. It's like some things are "loud" without actually being higher in the mix.
  • Even simple guitar parts are often double- or triple-tracked, and they're recorded with mikes right up next to the cabinet, as if the listener had their ear right in front of the speakers.
  • Simultaneously, the listener is also directly "in front" of the vocalist, who is also standing (in the stereo field) probably in front of the drums.
  • None of these elements cast any kind of acoustic shadow, or block any of the others from being heard.
It's a profoundly unreal set of manipulations, perversely designed to make music that sounds more real to the listener. It's so good, in fact, that it sounds more real than the real thing. Audio pundits often complain about the glossy perfection of music production, but there's another way to think about it, and that is that all of this production is intended to flatter the listener with the powers of omniscience. The reason producers work so hard to eradicate mistakes is that the audience will be able to hear everything in a way that no physical person ever could.

June 27, 2008

Filed under: music»recording»mp3

Thumbs Up

If you love the sound of kalimba as much as I do, you may enjoy the pad I created for CQ's upcoming DTV Transition explainer:


MP3 download

Best soundtrack instrument ever. It's just exotic enough to add interest, but not so strange that it distracts from the video. If I had a set of gamelan samples to mix with it, I'd be a happy man.

April 21, 2008

Filed under: music»recording»sketchpad

Musical Sketchpad, Session Thirteen

Better Off Dead, as made famous by Bad Religion:

Download

Been a while. Give me a chance to explain this one.

Bad Religion's Stranger than Fiction caught my ear again a couple months ago. It's the album with a lot of the classic BR songs on it: Infected, 21st Century Digital Boy, and Incomplete, for starters. But there's an impressive level of songwriting in evidence, with sharp lyrics and chord progressions that--if not incredibly original--are more complicated than they sound, and certainly more involved than punk deserves.

In fact, I like the songs so much, I've had an itch to do the whole lot of them as acoustic, voice-and-bass covers, inspired in part by the sound of the baritone guitar on the most recent Evens CD. "Better Off Dead" just happened to be the first one I picked. I think it'd be a fun project, to cover the album from start to finish this way.

There's not much technique on display here, either in terms of production or musicianship. I experimented with doing some fingerstyle arpeggios, but in the end I just strummed and sang. This mix has barely even been mixed, and has had no EQ or compression or mastering, as far as I can remember. I don't know how it'll sound through your speakers. But I'm pretty happy with the performance, and still oddly taken with the idea--although I won't subject anyone else to it anymore. Just this once.

March 2, 2008

Filed under: music»recording»production

Why Records DO Sound All the Same

There's a little-watched video on Maroon Five's YouTube channel which documents the torturous, tedious process of crafting an instantly forgettable mainstream radio hit. It's fourteen minutes of elegantly dishevelled chaps sitting in leather sofas, playing $15,000 vintage guitars next to $200,000 studio consoles, staring at notepads and endlessly discussing how little they like the track (called Makes Me Wonder), and how it doesn't have a chorus. Even edited down, the tedium is mind-boggling as they play the same lame riff over and over and over again. At one point, singer Adam Levine says: "I'm sick of trying to engineer songs to be hits." But that's exactly what he proceeds to do.

...from Tom "Music Thing" Bramwell's article in Word Magazine.

Every year someone writes an article along these lines--between digital technology, aggressive mastering, and the monolithic industry control of radio, they say, music's all shot to hell and we're all going to die. I mean no disrespect to Tom, who (as always in these articles) raises a lot of points I happen to agree with. But you're preaching to the choir, friends.

A lot of this is just disguised fervor for the good old days of analog, when making music was hard and expensive. That can be safely discounted. For the rest, which basically laments that "commercial" sound, what's there to say? I personally doubt that cheap earbuds are going to end the trend, and frankly high-def sound formats show no sign of taking off. Compression and pop mastering are here to stay.

But look at it this way: The Shins made Chutes Too Narrow in 2003, and no-one would call that a "polished"-sounding record. After Garden State, everyone may well be sick of the album, but the point remains that people are still making music without a stereotypical studio sound. I can name three or four without even trying hard. They're not on the radio, though, and they're not going to be.

In the meantime, berating the music that is on the radio when it's commercial-sounding is a lot like burning yourself on the stove and then getting angry at it for being hot. What did you expect? That's what it's for. If you don't like it, quit sticking your hands in the flames.

February 21, 2008

Filed under: music»recording

Free Rain

Rain Recording, a company that makes computers for audio production, asked if they could use an old post of mine for their "Pro" section. It's up now (with some revision) as the newest addition to the page: In the Garageband. Yes, there is some irony in writing a piece about cheap and free music software for a company that makes a $10k recording workstation, but I guess after spending that kind of money you'd be tempted to cut back elsewhere.

January 10, 2008

Filed under: music»recording»production

Rubber Factory

A little while back, David Byrne had a piece in Wired about the new digital landscape for musicians. He's now published some corrections based on feedback from musicians who say that you can't possibly make a record for nothing, as he claimed.

Well, it's true that he exaggerated, but I'm not sure that his correspondents aren't doing the same.

"While it's true that the laptop recording setup made self-produced recordings worlds easier than before, the simple truth is that laptops alone don't make records. First off, there is the peripheral equipment needed...microphones, stands, cables, pre-amps, sound cards, headphones, speakers, hard-drives, instruments, etc. And while the cost of the aforementioned has cascaded in the past decade, a complete and flexible home studio setup still comes at a price. Then, of course, there is the issue of know-how--recording skills and technique--two incredibly important factors in making a decent sounding recording, and two things that don't come "with the laptop". Lastly, there is mastering, currently hovering (at the low end scale) at around $750-$1,000. Even these moderate costs can make recording out of reach for many bands.

All tolled, in addition to the laptop, a band is looking at between $5,000 - $10,000 in extra costs just to have the ability to record themselves (I am talking about having enough equipment to record a four-piece band live with enough channels to mic a drum-kit). Yes, there are alternatives, rental being one of them. But, that still doesn't account for the skills and technique part of the equation. The only analogy that comes to me is, you can buy a cheap pair of scissors at every corner store, but that doesn't mean everyone (wants to or) should be out there cutting their own hair."

There are a couple of respectful objections I think should be raised: First, rock bands are not the end-all and be-all of home recording. Not everyone needs to simultaneously record a full drum kit with the rest of a four-piece. Not everyone even has a drummer. Many genres of music--techno, industrial, dance, hip-hop, and some of the weirder indie stuff--can easily be done using minimal hardware, recording one track at a time. Even rock and blues can be done on a shoestring: the Black Keys' Thickfreakness was recorded on a Tascam 8-track from the 80s in the drummer's basement, and Rubber Factory--which I told someone the other day is my pick for the top album of the decade--was done in an abandoned building. It's only the obsession with perfect clarity and the "processed" sound that says that you need to do things with lots of tracks and expensive equipment.

Second, the question of mastering seems to me like it's less urgent in these days of shuffled MP3s, and given the emphasis on digital distribution in Byrne's article. How much mastering do you need to put something online? I'm not the most experienced engineer, but I think you can do pretty well with an analyzer, a decent EQ plugin, and a limiter (Kjaerhus gives away their old mastering limiter for free, and I've had good results from it). Most people just aren't listening to music closely enough for it to matter whether you had it professionally mastered.
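And the limiter part isn't magic, either. At its core a peak limiter is just this loop--clamp the gain instantly when a peak would exceed the ceiling, then ease it back up--though real mastering limiters add lookahead and much smarter release curves. A toy sketch, with made-up parameter values:

    import numpy as np

    def brickwall_limit(samples, sr=44100, ceiling=0.95, release=0.05):
        """Keep peaks under the ceiling, recovering gain slowly afterward."""
        coef = np.exp(-1.0 / (release * sr))
        gain = 1.0
        out = np.empty_like(samples)
        for i, x in enumerate(samples):
            if abs(x) * gain > ceiling:
                gain = ceiling / abs(x)       # clamp instantly on a peak
            out[i] = x * gain
            gain = 1.0 - (1.0 - gain) * coef  # drift back toward unity
        return out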

But there are good points to be made about the cost of equipment. I'm lucky enough to scratch my purchasing itch regularly, but most people--particularly many people who want to be "professional musicians"--can't do that. So it occurs to me that although the last thing the world needs is a new social network, there should be a place for musicians to get together and pool their resources for playing and recording. If I own a laptop, and you own an interface, and she owns some drum mikes, and that guy over there owns a decent preamp, it only makes sense for everyone to help each other out. Add some reputation systems to the mix, and see what self-organizes.
