
January 2, 2007

Filed under: music»recording»production

Cubase Tip: ASIO Selection

I just found this the other day when I reinstalled Cubase on my recording laptop. The copy protection for Cubase LE is luckily (and thankfully) nonexistent if you have a working copy somewhere, although I did have to register a couple of DLLs with Windows. But after loading the software, even with the new Firewire interface, sound latency was awful--something like a full second round trip. It's impossible to monitor yourself through software when the delay is that bad, and although there's a direct mode on the interface, it involves opening up a mixer window and fiddling with the inputs. I didn't really want to do that, especially since I know that Ableton Live and Phrazor are perfectly capable of fast software monitoring with this hardware.

It turns out that Cubase LE can manage just fine, but the option is hidden. By default, Steinberg includes a driver that wraps the lowest levels of Windows audio functionality (MME, or maybe DirectX) in an ASIO layer. It works, but it's really slow. I'd known this all along, but I thought it was a restriction built into Cubase LE to encourage upgrades. That'll teach me to be cynical--they've just given the menu a very strange name. To change ASIO drivers in Cubase, choose "Device Setup" from the Devices menu, and then open the "VST Multitrack" tab. There's a pulldown menu that will initially read "ASIO Multimedia Driver"--when I opened it up, there was my Firewire Solo (as well as ASIO4All). Cubase still has trouble operating at the very lowest latency, but I had a comfortable experience running VST plugins on a recording track with 44.1kHz, 16-bit audio and a buffer size of 128 samples. Much better.
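For a sense of scale, buffer size translates to latency with simple arithmetic. Here's a back-of-envelope sketch (my own illustration, not anything from Cubase; real round trips add converter and driver overhead on top):

```python
# Rough audio latency math: one buffer's worth of samples must be
# collected before the software ever sees them.
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Delay contributed by one buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# A 128-sample buffer at 44.1 kHz:
one_way = buffer_latency_ms(128, 44100)
# A round trip through software monitoring needs at least an input
# buffer plus an output buffer:
round_trip = 2 * one_way
print(f"{one_way:.1f} ms one way, at least {round_trip:.1f} ms round trip")
```

Compare that roughly 6 ms floor to the full second the ASIO Multimedia wrapper was delivering, and the difference in monitoring feel is obvious.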

The only reasons I can think of for Steinberg's weird choices here are two-fold. First, they've obviously included the ASIO-MME driver for compatibility, and they don't want to take a chance on auto-selecting the wrong driver. Second, they've hidden the option in the "VST Multitrack" tab (which I had seen, but always ignored) because Cubase started as a sequencer instead of a recording workstation. Latency isn't as important for predetermined MIDI sequences, and from that perspective they might have referred to audio I/O as "multitrack" hardware instead of using more straightforward terminology.

December 4, 2006

Filed under: music»recording»production

Standard Time

In a throwaway line from the introduction to his piece on tempo maps in Cubase SX, Sound on Sound columnist Mark Wherry asks:

It might be an interesting study to see how much music has been written with a fixed tempo of 120 bpm in four/four time over the years, just because this is the starting point presented to the user in almost every sequencer of the last 20 years.

This is a very good question. Another good question, depending on the answer to the first, would be to ask whether 120 bpm (or divisions thereof) has become an unconscious tempo for many musicians precisely because we've heard so much music at that speed since the rise of digital sequencers.

November 20, 2006

Filed under: music»recording»production»post

The Amateur teaches Pro Tools

Apart from producing the podcasts, recording voiceovers, and editing a couple of radio shows, most of my production work at the Bank involves supporting the video editors with their soundtracks. We use Final Cut Pro in the Multimedia Center, and although I'm sure it's a fine tool for video editing it doesn't seem to be very effective for more than the most basic audio work. For one thing, the effects need to be rendered before you can hear them, and I can never actually get any audible changes out of them (I'm probably doing something wrong, but the video people avoid the audio side like vampires in an Italian kitchen, so they're not much help). Soundtrack, the tool that accompanies Final Cut, is all well and good--but it's not really my cup of tea, and none of the editors want to put in the work to learn it.

So with all that in mind, here is a quick set of two tutorials for the common tasks that I perform while working my magic on soundtracks. I need to write a tutorial for my co-workers anyway, so it might as well be now. Also, note that although Pro Tools can work with multi-track audio through the OMF/QT import part of the DV toolkit, you can also work with a stereo .wav of the whole soundtrack--and I often do.

1. Better Vocal Ducking:

When you bring in a voiceover on top of background noise, you want that background to get out of the way so that the vocals can be understood. That usually means lowering the volume. You can go through and manually adjust it using the track automation, but that's a pain and you might miss something. I prefer to do it through plugins. This is a pretty standard part of audio production, and you can google plenty of advice on it--assign a compressor to your background track, sidechain it to the vocals with a medium-high ratio, and its volume will automatically lower whenever the vocal track plays.

The problem with this, as my manager immediately pointed out when I first started using Pro Tools, is that it only ducks the volume right as the vocal comes out, and it sounds more natural if the background can start to fade just a little bit before the vocals actually enter, as if someone had anticipated the vocals. This also preserves the first word of the voiceover--it doesn't get lost in the slight pause before the compressor really kicks in.

I solve this problem, like most of my audio magic, with creative use of sends. Basically, you want to split your vocal track into two directions. Change its output to a bus, say Bus 1, and add a Send to a different bus, Bus 2 perhaps. Use Bus 1 as the background track compressor's sidechain input (the key icon in RTAS plugins will activate sidechaining if it's supported, and let you pick an input), but instead of setting the compressor to the usual settings, you want to give it a moderate attack and a very long release. I usually use the following settings (although I'm quoting from memory, so it may be off a little):

  • Ratio: 4.3:1
  • Soft knee
  • Attack: 153ms
  • Release: 2.5 sec
  • Threshold: -45dB
Now, create an Aux track that takes input from Bus 2 and outputs to your speakers (or wherever). This is the path for the voice that you'll actually hear. Add a delay to it (I use the stock medium delay plugin set to 100% wet and 333ms, which happens to be roughly twice as long as the attack on the compressor).

See what we've done? We've used the vocals to trigger a slow compressor, hopefully creating a fade, while simultaneously adding a delay to the audible vocal so that it'll arrive after the compressor turns down the background. You do need to be careful with timing, obviously, because the timeline display is now 333ms offset from your ears, but if you're after precision you should probably just invest in a plugin that supports look-ahead. This is the cheap way to do it.
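If it helps to see the signal flow as math rather than buses, here's a rough numpy sketch of the same trick--a slow envelope follower driving gain reduction on the background, plus a delay on the audible vocal. This is my own simplified illustration, not anything Pro Tools exposes, and the compressor model is deliberately crude:

```python
import numpy as np

def duck_with_lookahead(bg, vocal, sr, attack_ms=153, release_ms=2500,
                        threshold_db=-45.0, ratio=4.3, delay_ms=333):
    """Sidechain ducking with a poor man's look-ahead: the undelayed
    vocal drives a slow gain envelope on the background, while the
    vocal you actually hear is delayed so the fade lands first.
    Parameter defaults mirror the settings listed above."""
    # One-pole envelope follower on the vocal's absolute level
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(vocal)
    level = 0.0
    for i, x in enumerate(np.abs(vocal)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Gain reduction above threshold, scaled by the compression ratio
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    bg_ducked = bg * (10.0 ** (gain_db / 20.0))
    # Delay the audible vocal so the duck starts before the voice arrives
    pad = int(sr * delay_ms / 1000.0)
    vocal_delayed = np.concatenate([np.zeros(pad), vocal])
    bg_padded = np.concatenate([bg_ducked, np.zeros(pad)])
    return bg_padded + vocal_delayed
```

Feed it a background bed and a dry voiceover at the same sample rate and the mix comes back with the background already fading by the time the (delayed) first word arrives.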

2. Remove a noisy camera/audience:

Here's something to remember about audio: it's easy to add, but not so easy to subtract. Unlike video or images, you can't just cut a noise out, because it's part of the soundwave. Some tools, like Adobe Audition, will let you paint out chunks of the spectrum, but it's still not a perfect solution. And remember, sound is a representation of a physical phenomenon--in a lot of cases, it's the mic capsule or coil moving that is reproduced through your speakers. So physical movement or different sound frequencies can actually mask other sounds, because they're physically moving the mic and changing its interaction with the sound.

That's all very fun and technical, but what it amounts to in real life is that a lot of footage is shot in bad locations, through crappy equipment and sub-optimal mike technique. Maybe a speech was shot using the camera mike at the back of the room instead of plugging into the PA system. Maybe it was done on the move, and there's a lot of wind and crowd noise. Editors want that gone, or at least reduced, so you can hear the subject.

This is relatively easy to do. Remember that most of a voice's content takes place between the frequencies of 100-5000Hz. Outside of that, you might miss some of the sibilant consonants, or low vocal rumble, but you'll be able to understand a person. Also, most electrical and camera noise takes place at the upper and lower limits of the spectrum. So to remove physical handling noise, like bumps, and boomy acoustics, I open up an EQ plugin and set a high-pass filter with a very sharp cutoff at around 180Hz, fine-tuning a little through headphones. I put a low-pass filter at around 6KHz, which minimizes clicks and a lot of tape whirr. If you add these through the AudioSuite menu of Pro Tools, using the preview function to listen before you apply them, you'll end up with a new, processed region that you can export back out for Final Cut.
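The same high-pass/low-pass pair is easy to sketch outside Pro Tools. Here's a minimal SciPy version, assuming a mono track as a numpy array--the 180Hz and 6kHz cutoffs are the ones from above, and the function name is mine:

```python
import numpy as np
from scipy import signal

def clean_speech(audio, sr, low_hz=180.0, high_hz=6000.0):
    """Band-pass a mono track to tame handling rumble below ~180 Hz
    and clicks/hiss above ~6 kHz, leaving the speech band intact.
    A 4th-order Butterworth per edge approximates a sharp cutoff."""
    sos = signal.butter(4, [low_hz, high_hz], btype="bandpass",
                        fs=sr, output="sos")
    return signal.sosfilt(sos, audio)
```

A 60Hz hum or camera-motor whine falls well outside the passband and drops away, while everything in the speech range passes through essentially untouched.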

October 7, 2006

Filed under: music»recording»production

Sing Me Spanish Techno

GDLN World Forum Soundtrack, Take 1

As I said, I had to compose some soundtrack music for the GDLN World Forum this weekend. The only guidance I got was that one of them needed to be a techno remix, and the other needed to be more calm. They also needed a transition between the two. I'm no Fatboy Slim, but I think the results sound pretty good, and it's surprisingly fun doing this kind of grid-based music in Pro Tools.

Everything was created using the XPand! softsynth, including the techno breakbeats--I can't take credit for those, unfortunately, but it's not like most house composers waste a lot of time on their beats, and I was actually trying to get as close to the Amen Break as possible. I'm thinking about adding some audio samples if I have time on Monday. I'd like to get my coworkers to come in, say a few phrases in different languages, and then chop those up on top of the second half.

October 3, 2006

Filed under: music»recording»production

Book Review: Real World Digital Audio, by Peter Kirn

At Create Digital Music, Peter Kirn blogs about all kinds of electronic and computer-based music tools, ranging from newly-announced keyboards to tips on running Ableton Live--generally the bleeding edge of digital sound technology. I like reading CDM not only for the news that applies to my own projects, but also because it's a peek into a musical world where I don't spend much time: that of the visualized DJ/sample-based laptop musician. I've had Peter's book, Real World Digital Audio, for a few months now, and I recommend it for much the same reasons. It's a little biased towards that type of modern electronic artist, but it's also a good reference that beginners can use.

The book starts all the way back at the physics of sound, from compression and rarefaction to harmonic overtones. From there it moves quickly through the basics of converting analog to digital, setting up a studio, and preparing a computer for audio production. From that point, Kirn begins to get more specific on different kinds of digital production, such as loop-based arrangement, MIDI, synthesis, and traditional DAWs. These chapters are pretty comprehensive, especially considering the wide field of different software and situations that are involved. I was pleasantly surprised to see an entire chapter on different types of microphones and miking techniques for a variety of instruments, since I think that's one of the more difficult tasks for an amateur musician. Likewise, the guide to different effects is well-written and logically-sorted, with plenty of illustrations.

The last three chapters of the book are more niche-oriented. They cover creating printed scores, scoring video, and performing live. I hesitate to call these unnecessary, since they fit with the book's theme of being a broad guide to all things digital. But there are areas that I would have preferred to see covered in more depth--more detail about EQ for different instruments, for example. Still, I'm nitpicking about areas that other, analog-oriented guides probably have covered. This is a clear, thoughtful text, and it's made more practical by the inclusion of a DVD containing a load of free software, such as a demo of Ableton Live that can be used for many of the book's examples. Most of them are free downloads if readers search them out, but having them collected is very nice.

In the end, I think Real World Digital Audio is a good introduction to computer-based sound production. Although it's aimed at musicians (and perhaps musicians aspiring to a very particular niche of music-making), most of the text doesn't require a theoretical background. In fact, I'm considering using it as a reference when teaching other colleagues at the World Bank Institute, due mainly to the way it covers both the basics and intermediate topics without talking down to the reader. It's a good starter text, even accounting for the chapters most people will never use.

September 10, 2006

Filed under: music»recording»production

World Music: Outcome

After a little departmental and internal drama, I've pretty much finished the regional jingles for the GDLN World Forum. These will be played as introductions for the various speakers and visiting participants. Basically, I went with a common organ/drum pad that resolves to C major over four chords, then added a melody line played by a different instrument for each region. In the overall conference theme, the instruments will trade off with the melody--I'm still working on doing that gracefully. Considering that almost everything was written in Pro Tools on the XPand! software synth, I think it sounds pretty good. Here are three samples:

Kalimba (Africa)
Guitar (Latin America & Caribbean)
Sitar (South Asia)

The drum loop was actually lifted from the Soundtrack Pro sample library, and then I split it into two parts. One part had the treble dropped, to accent the thud. The other part mixes in a ring modulator set at a fairly low frequency, which creates those mechanical "hoots" and a little bit of a noise pad. It keeps it from sounding too clean or rock n' roll.
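For anyone who hasn't played with one, a ring modulator is just multiplication by a carrier oscillator--which is why a low-frequency setting turns a clean drum loop into something mechanical and hooty. A tiny sketch of the idea (carrier frequency and mix amount are illustrative guesses; the post above doesn't give exact values):

```python
import numpy as np

def ring_mod(audio, sr, carrier_hz=80.0, mix=0.5):
    """Ring modulator sketch: multiply the signal by a low-frequency
    sine carrier, then blend the result back with the dry signal.
    Multiplication replaces each input frequency f with sidebands at
    f + carrier_hz and f - carrier_hz, which is where the metallic,
    inharmonic character comes from."""
    t = np.arange(len(audio)) / sr
    wet = audio * np.sin(2.0 * np.pi * carrier_hz * t)
    return (1.0 - mix) * audio + mix * wet
```

Run a drum loop through it at a low carrier frequency and a partial mix, and you get roughly the kind of noisy pad described above without losing the original transients.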

August 2, 2006

Filed under: music»recording»production

World Music

One of my managers comes up to me the other day and asks me if--now that the Bank has all this audio technology--I could put together some "jingles" for the upcoming GDLN World Forum. Basically, when each region is introduced for a presentation or acknowledgement, they want a short musical sting to play. I suggested that instead of creating one piece or going with commercial music (the licensing of which can be awkward), I could write one 15-second chunk of music with instruments and harmonies from each region, and then we could just solo the regional instrumentation for each group.

I'm a little nervous about the assignment. Not because I can't do it on deadline--the idea of being paid to play music on the software synth isn't exactly painful. I'm more concerned that I'm a random American being asked to put together music that will represent different regions--not even just individual cultures (if there is such a thing), but large areas that might contain many different cultures and ethnic groups.

The potential to be unintentionally offensive is there, and I don't think I'm overthinking it. In other media, musical cues are often used as a shorthand for racial or cultural jokes--introducing a Japanese character with a gong strike, for example, or giving a character from Jamaica a steel drum theme. It's lazy, a bit disrespectful, and it's certainly not something that the World Bank should be using.

So I'm working off a few guidelines, to keep myself from basically producing a parody of an "edgy" sitcom soundtrack. The first is to request samples of appropriate music as templates from regional experts at the Bank. At least then I can point directly at the sample when asked "what were you thinking?" Second, I'm making it a basic policy to try to avoid stereotypical instrument choices--no sitars for South Asia, and I'd like to avoid using drums for Africa. That may not be possible, since sometimes those instruments are stereotyped for a reason (a lot of African music does play up the role of percussion), but I hope it will keep me from producing something that's either chintzy or reduces an entire section of the globe to just one country. Finally, I'm going to take advantage of the diversity here at the World Bank, and get at least a couple of opinions on the piece before I hand it over.

Any other guidelines that I should remember? Travesties to avoid? Anecdotes of other bad musical choices? The comments are there for a reason.

Update: On the other hand, the reaction from other Bank staff tends to be "you're making this into way too big a deal." I've been in meetings all day, literally. Maybe I'm just overanalyzing.

July 18, 2006

Filed under: music»recording»production

Studio B

The new apartment is getting closer to its final unpacked state, and while Belle has taken charge of themes and decorations for most of it, I do get this little corner for my home studio. I've got a giant clock face to mount above it. Groovy.

So here's what I'm tracking with nowadays, from left to right:

  • Laptop loaded with Cubase SE and an assortment of free software and programs. It's all low-end stuff, but I make pretty low-end music, and it's a big improvement over a four track.
  • Tascam US-122 interface running into the laptop for tracking bass and vocals. I didn't have the mike assembled in this picture (it's next to the lava lamp), and I still need another XLR cable to run from the bass amp's DI.
  • Behringer Eurodesk 502 mixer, for my random mixing needs. I don't do this much, but I guess if I needed to reamp through an effect that I don't have in software, this would be a good way to do it.
  • GK400RB amp for the bass. This is my second real bass amp, and I like the sound of it okay--but I really bought it for the volume and the portability. My old 1x15 amp was a nightmare to carry.
  • Unused effects pedals: a Boss Chorus (not the bass version), Digitech Bass Driver (great distortion pedal), and Boss Auto-wah (never really happy with the sound of it). I'm thinking of hooking these three together for a bass synth kind of sound. I used to use the Bass Driver as my full-time distortion, and it really does do stunning imitations of an SVT, a ProCo Rat, or a digital fuzz, but it sucks a nine-volt dry in forty minutes, and I hate plugging in my pedals. And because it does so many things, I always found myself spending more time playing with it than actually making music. I'm more productive with fewer options.
  • Current effects lineup on the floor consists of a Line 6 DL4 for looping, MXR M80 Bass DI, and DOD FX25B Envelope Filter (boosted through a Boss LS-2). The MXR is a really great pedal--it has two preamp channels, one clean and the other distorted. The clean adds a lot of lows and low mids, almost Stingray-esque. The distortion is typically pretty fuzzy, but I've found a setting I like and just leave it there.
  • Rolls DI box, mostly used nowadays at open mikes.
  • NES, waiting for me to pick up a midi keyboard and one of those synth cartridges.
  • Lava lamp. Very important.
  • And of course, the All-Star bass.

July 6, 2005

Filed under: music»recording»production

Music Boxes

I have just spent an hour or so playing with new audio toys. Krystal is a lot of fun, and it actually runs VST plugins (a sore spot with Audacity). This means I can have access to virtual synths and filters that I could never justify buying in hardware. Ditto for the Jesusonic, the blasphemously-named but incredibly versatile multi-effects simulator from the same guys who created Winamp. I haven't had a chance to really get down and experiment with it, but it basically gives you fine access to the equivalent of a pedalboard, complete with custom-built stompboxes, assuming you're willing to do some automation work and hook a laptop into your signal chain. That idea got me playing around again with Hammerhead, the great freeware drum machine.

I don't intend to exploit any of these tools to their full potential.

Earlier today I wrote about how empowering it is to have such powerful studio tools for practically free. As a writer and a bit of a technophile, the thought makes me giddy--a part of me can't wait to hook the bass up to a whole set of midi-controlled filters and start playing "In The Hall Of The Mountain King." The catch to all that power is that it can be a distraction. I've owned hardware multieffects units before, and it's possible to spend weeks just twiddling with the parameters of a few patches, never really being satisfied.

My current philosophy of music says that the fewer options I have, the better I can exploit them. I want the maximum level of flexibility with the least amount of complexity. I formed a lot of this mindset when I was playing live--I had a multieffects unit, which I then replaced with an ingenious but complex pedalboard. Both were too complicated onstage, and there was too strong a possibility that A) something would go hideously wrong and kill the whole signal chain, or B) a mistake would throw the whole thing into confusion, and I'd end up using a clean tone for everything anyway. Singing and playing bass is hard enough without doing a tap-dance to try to fix the result of a badly-aimed kick.

Nowadays, I use only two pedals. One is a 3-channel preamp, which does my clean/thick/distorted sounds. The other is my looper, incorporating a whole set of its own headaches. I miss the other sonic possibilities from my pedal collection (chorus and envelope are most tempting), but keeping it simple has forced me to be a better player. For example, where you play on a bass can have a dramatic effect on the sound. Playing back by the bridge with a little bit of a bend in the note can mimic an envelope for my purposes, and the sound is in my hands instead of on the floor. Moving up on the neck or palm-muting achieves the opposite--a thick, bassier tone with less growl. Combining greater control of my technique with a simpler range of effects (albeit ones that I know inside and out) lets me intuitively build my sonic palette, and I never have to stop the performance to turn a knob or press a button--very useful when building, layering, or exiting loops.

This is not to say that I'm against technology in music. I have unlimited respect for people who have learned to play a studio, as it were. The things a clever engineer can do amaze me, as my admiration for Trent Reznor and Rick Rubin shows. I'm also fascinated by bands like Muse that have incredibly complicated effect racks controlled by midi boards--but they have lots of money and people to figure those connections out for them. They make different music than I do. My vision as an experimental bassist is the equivalent of an electric blues guitarist--dirty, chunky Rock produced by one person. I shouldn't need more than some overdrive and a little EQ to pull that off, and I like the challenge that comes with restriction.

I certainly plan to use the tools I've got. If it records as well as it looks, Krystal will replace Audacity as my default studio. I may mess with Jesusonic for overdubs of sounds I can't create in analog. Just as the Ministry fans have hoped, I'm already thinking of ways to use Hammerhead live, converting the looper back to a delay and creating drum patterns instead of sampling my bass percussion. But these are just added compositional tools, and if they conflict with my vision or my comfort zone I'll toss them right back out. Music should be about the songs, not about the process.

Besides, if I take the time to sit down and worry about my tone again, I'll never get anything written. I'd rather be a musician with an armful of clumsy originals than a technician with a flawless orchestration applied to nothing in particular.

Filed under: music»recording»production

"It's me ma's. She thinks I'm having it cleaned."*

While we're on the topic of music, I saw this note on MusicThing the other day about building a home studio for less than $50. The Thing notes that, assuming you already own a decent computer (one that can see this web site), you probably already have most of what's necessary to record good music. Personally, I find that to be incredibly empowering.

I'll admit right up front that I'm a terrible amateur producer. I don't know the tools very well, I never balance the vocals correctly against the instruments, and I overcompress like crazy. My recording attempts sound like a guy who tossed something together as fast as he could, and there's a good reason for that. But that doesn't mean that it's not possible to put together something really incredible with the equipment I've got--namely, myself, a bass, some pedals, a cheap vocal mic, and an eight-year-old laptop with a buggy sound card. The software I use (Audacity) is free, and I understand there's even better stuff (Krystal) out there for a tiny amount of money (assuming that you're using it for commercial purposes).

There are an awful lot of classic rock albums we now regard as timeless that were done on a four-track tape recorder, where finished portions would have to be "bounced" onto other tracks to make room for more vocals or instruments. The Beatles and the Boss both made great music this way, but you don't have to--Audacity lets you record as many tracks as you want, move bits around, take out parts you don't like... It's more than enough to put together a demo, at least. I'll tell you a secret: only the snobs care if you did it with great technique and expensive equipment, and they won't like your music anyway. My old band got a pretty good CD recorded in my drummer's basement using an old ADAT machine, cheap mics, and a shower for reverb. One of the best albums I've ever heard, the Black Keys' Rubber Factory was, in fact, recorded in a rubber factory using whatever they could get their hands on. It is dirty and imperfect and it rocks like nobody's business.

The only real barriers are time and ambition. It takes time to learn, and it takes ambition to stick with it. It's become much harder to find excuses for my own lack of productivity since I realized that.

* A classic line from The Commitments, when the band's manager pawns an old stuffed bird in exchange for a drum kit.
