this space intentionally left blank

August 3, 2009

Filed under: journalism»new_media

Generation Gap

Although it's the description I use professionally, I'm ambivalent about the term "new media." I worry that it implies a wider gap between print/broadcast and Internet-based journalism than actually exists--the two are more alike than not. But then I see something like Ian Shapira's Washington Post op-ed, and I realize: sometimes, you have to spell these things out.

Shapira is very, very upset that a blog excerpted parts of his story, added commentary, and then linked to the original Post article. No, seriously: he spends 1,900 words complaining that The Internets Stole His Bucket.

My article was ripe fodder for the blogosphere's thrash-and-bash attitude: a profile of a Washington-based "business coach," Anne Loehr, who charges her early-Gen-X/Boomer clients anywhere from $500 to $2,500 to explain how the millennial generation (mostly people in their 20s and late teens) behaves in the workplace. Gawker's story featured several quotations from the coach and a client, and neatly distilled Loehr's biography -- information entirely plucked from my piece. I was flattered.

But when I told my editor, he wrote back: They stole your story. Where's your outrage, man?

They stole your story? That's a bit melodramatic, Anonymous Editor. They quoted chunks of it, summarized the rest with some snarky editorial commentary, and then linked both to the original article and its (badly formatted) sidebar. In doing so, they drove a fair amount of traffic to the Post, something Shapira even admits:
Gawker was the second-biggest referrer of visitors to my story online. (No. 1 was the "Today's Papers" feature on Slate, which is owned by The Post.) Though some readers got their fill of Loehr and never clicked the link to my story, others found their way to my piece only by way of Gawker.

Even if I owe Nolan for a significant uptick in traffic, are those extra eyeballs helping The Post's bottom line?

A: Yes, since it's an ad-supported site. This has been another episode of short answers to stupid questions.

Shapira ends his piece with a weak plea for earlier credit and shorter excerpts, as if Gawker should just put up a link reading "Ian Shapira's Awesome Article at Washington Post" and leave it at that. But between the opening and closing paragraphs, he spends a significant amount of time blaming the Internet for killing journalism. He interviews a lawyer who's trying to get newspapers the ability to sue websites that excerpt their material, and who states, "If you don't change the law to stop this, originators of news reports cannot survive." Yes, legislating success has worked out well for other industries, hasn't it?

There are a lot of reasons why the originators of news reports may be finding it hard to survive, but being quoted in a high-traffic blog like Gawker is not one of them. On the other hand, being the kind of news organization that spends nearly 2,000 words on this kind of whining probably isn't helping your case.

A little while back, one of my managers asked me to define "webby" as a sanity check, after someone had tried to use it as an excuse. It's not a word I'd personally use, I said, but I'd argue it comes down to three things: link to other people, make it easy for them to link to you, and adapt your voice to take advantage of the format. That's basically what "new media" means to me. You still do good journalism, but you realize that it's no longer published in a vacuum. I'm not sure why that's so hard for reporters and editors to understand. But by all means, guys, keep getting angry when people send traffic your way. Let's see how that works out for you.

July 8, 2009

Filed under: journalism»new_media

Your Scattered Congress, Continued

It's that time again: CQ has posted the newest version of its yearly vote studies, ranking legislators on party unity and presidential support. Again, this uses my Flash applets for presenting the tabular data, as well as scatter/distribution graphing.

As far as interesting emergent storylines go, there's not a lot for me to say yet. From the visualization end, I added medians to a couple of the plots but otherwise did relatively little tweaking. The one notable change was an adjustment to the House unity algorithm, due to the score of Rep. Walt Minnick, D-Idaho (and to a lesser extent, Rep. Bobby Bright, D-Alabama). Minnick has a unity score of 40%, the lowest of the House Democrats. As a result, I had to widen the "window" for that graph, which previously had no member with a unity score less than 50%. This had already been done in the Senate, thanks to Sen. Olympia Snowe, R-Maine.
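
For the curious, the windowing change is nothing exotic. Here's a minimal sketch of the idea in ActionScript--the names are hypothetical and the real applet does more, but the gist is to derive the bound from the data rather than hardcoding it:

    // Snap the graph's lower bound down to the nearest 10%, so that
    // Minnick's 40% score stays in frame instead of falling off the chart.
    function unityWindowMin(scores:Array):Number {
        var lowest:Number = 100;
        for each (var s:Number in scores) {
            if (s < lowest) lowest = s;
        }
        return Math.floor(lowest / 10) * 10;
    }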

You may notice some artifacting in the graph so far, particularly on the Democratic presidential support distribution. According to the editors for this data, it's probably due to the low number of votes tallied for 2009 so far, which causes a "clumping" around a few support values. As we accumulate more data and update these numbers, a more natural distribution curve should emerge.

My remaining technical gripes with these graphs, which I haven't had time to correct, are the confusing method of listing members in distribution views and the odd scaling that's used to fit them all in. I suspect they can both be solved by reducing the pixel size in those modes far enough that a 1:1 ratio is reached--no overlapping of values within columns. And I think we're going to take it widescreen, to make that easier--realistically, the whole thing's due for a design overhaul anyway. But in the meantime, I think it continues to work reasonably well, and it's still one of my favorite projects here.
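
To make the 1:1 idea concrete, the sizing math would look something like this--a sketch, assuming we already know how many members fall into each column:

    // Shrink the dots until the fullest column fits the plot height with
    // no overlap, never going below a single pixel.
    function dotSize(columnCounts:Array, plotHeight:Number):int {
        var fullest:int = 1;
        for each (var count:int in columnCounts) {
            if (count > fullest) fullest = count;
        }
        return int(Math.max(1, Math.floor(plotHeight / fullest)));
    }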

May 27, 2009

Filed under: journalism»new_media

SCOTUS Nom nom nom

Are you, like all of DC, enraptured by the Sotomayor nomination? Feel free to keep track of the confirmation process (and compare it to past nominees, both successful and not) using CQ's interactive Supreme Court nomination graphic.

I'll tell you what killed me on this one: fonts. Our new special projects team member used to be the print graphics reporter, and as such she wanted to use the print fonts, like Benton Gothic. Don't get me wrong, Benton Gothic is a really nice font. But embedding fonts in Flash--particularly via code--is not a fun process. To be blunt, it's clumsy and unreliable. Of course, if your computer has the font, Flash will often pick it up. So interactives with embeds have to be tested on a (separate) clean platform, for which I use one of my VMs. This reveals another frustration: text rendering in Flash is incredibly inconsistent across platforms.
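
For anyone who hasn't had the pleasure, embedding by code looks roughly like this--assuming the Flex SDK's [Embed] metadata, with placeholder file and font names rather than our actual setup:

    package {
        import flash.display.Sprite;
        import flash.text.TextField;
        import flash.text.TextFormat;

        public class FontDemo extends Sprite {
            // Compiles the TTF into the SWF. Misspell fontName somewhere
            // downstream and you get silence, not an error message.
            [Embed(source="BentonGothic.ttf", fontName="BentonGothic",
                   mimeType="application/x-font")]
            private var BentonGothic:Class;

            public function FontDemo() {
                var tf:TextField = new TextField();
                tf.embedFonts = true; // forget this and the text simply vanishes
                tf.defaultTextFormat = new TextFormat("BentonGothic", 14);
                tf.text = "Benton Gothic, allegedly.";
                addChild(tf);
            }
        }
    }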

Now, better people than I (read: people who actually care) have commented on the difference between text on Mac, Windows, and Linux. In general, Microsoft respects the pixel grid, while Apple mimics print. Linux, as usual, offers a range of choices that approximate (but don't exactly match) the other two. I should add that while lots of people complain about font rendering on Linux, in my experience it's not that the type engine is bad so much as the fonts themselves are awful. Microsoft has spent a lot of money on great-looking screen fonts, and Apple just licenses classic print fonts, neither of which is easy for free software to match.

Regardless, for whatever reason, Flash seems to piggyback on the host platform's font rendering for its text. This may seem odd, given Adobe's prominence in type-layout software, but I'm guessing it's meant to be "cheap" in terms of runtime size and speed--both factors in Flash's success over Java as a multiplatform client. Now that they're dominant, though, I wish they'd spend a few kB on better font handling. When I look at my interactives on a different OS, the rendering changes don't just mean that the text looks a little different--blurrier or a bit more spidery, depending on your preference. Suddenly, fonts overflow their textfields, or dynamic layouts shift in undesired directions. If I wanted to fight with the text engine, I'd use HTML!
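
One defensive workaround--a sketch, not what's actually in our interactives--is to measure the text at runtime and adjust, instead of trusting any platform's metrics:

    import flash.text.TextField;
    import flash.text.TextFormat;

    // Step the point size down until the string actually fits its field,
    // since the same text measures differently on every platform.
    function fitText(tf:TextField, maxSize:int = 14, minSize:int = 9):void {
        var fmt:TextFormat = tf.getTextFormat();
        for (var size:int = maxSize; size >= minSize; size--) {
            fmt.size = size;
            tf.setTextFormat(fmt);
            if (tf.textWidth <= tf.width - 4) break; // leave a small gutter
        }
    }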

That's not even to discuss the outright bugs. In our scrolling map, there's a "tooltip" consisting of a floating TextField object. It follows the mouse and identifies specific districts, obviating the need to manage 435 labels at a variety of zoom depths. One day I got to spend an hour debugging why the tooltip was simply disappearing in Safari and IE. It turned out to be autosizing incorrectly--which kind of defeats the point of an "autosize" parameter.
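
I won't claim this is exactly our production code--it isn't, and the district hit-testing is stubbed out--but the workaround amounts to sizing the field by hand:

    import flash.events.MouseEvent;
    import flash.text.TextField;

    var tip:TextField = new TextField();
    tip.background = true;
    tip.selectable = false;
    tip.mouseEnabled = false; // don't let the tip swallow its own mouse events
    addChild(tip);

    stage.addEventListener(MouseEvent.MOUSE_MOVE, function(e:MouseEvent):void {
        tip.text = "District lookup goes here"; // stand-in for the real hit test
        tip.width = tip.textWidth + 6;   // manual "autosize," since the real
        tip.height = tip.textHeight + 4; // thing misfires in Safari and IE
        tip.x = e.stageX + 12;
        tip.y = e.stageY + 12;
    });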

Or how about this one: for no apparent reason, adding a TextField object to the display list of a sprite causes the bitmap filters to distort slightly. If you look closely at the Supreme Court graphic above, you'll notice that bars with text inside them (or next to them) are sometimes 1 pixel taller, seemingly because the GlowFilter being used to create an unscaled 1 pixel outline decided to be 1.5 pixels. The problem disappears if you zoom in. Why does this happen? Who knows?
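
For reference, the "outline" in question is just a hard-edged glow, roughly like so (the bar and the values are illustrative):

    import flash.display.Sprite;
    import flash.filters.BitmapFilterQuality;
    import flash.filters.GlowFilter;

    var bar:Sprite = new Sprite();
    bar.graphics.beginFill(0x3366CC);
    bar.graphics.drawRect(0, 0, 120, 12);
    bar.graphics.endFill();
    // Maximum strength and a small blur read as a crisp 1px stroke...
    // until a TextField child nudges it to a pixel and a half.
    bar.filters = [new GlowFilter(0x000000, 1, 2, 2, 255,
                                  BitmapFilterQuality.LOW)];
    addChild(bar);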

Flash 10 includes some low-level improvements to text, which is a good start. But as far as I can tell, they primarily make working with fonts better within a single platform, and are aimed at people creating flexible layouts, like word-processing applications. People like me, who work on graphics-intensive apps like data visualization and games, are probably still out of luck.

April 21, 2009

Filed under: journalism»new_media

The Precision Hack

Yesterday, Jeff Atwood at Coding Horror linked to "Inside the Precision Hack", a blog entry describing the process by which 4chan hackers broke the Time 100 poll. The poll, which is meant to nominate the "world's most influential people," had practically no security built into the voting mechanism. The kids from notorious Internet sewer and discussion board 4chan were able to manipulate it to the point where they could spell out messages, acrostic-style, at the top of the list.

Since Coding Horror is a programming blog run by a guy who's relatively new to web programming, he mainly sees this as a funny way to make a point: look how easy it is to bypass security when it's incompetent! But there's a wider question that ought to be raised: is this level of incompetence actually uncommon in journalism? And as newspapers and other outlets increasingly work through "new media," will they do so securely? What are the risks if they don't? These are relatively simple questions, and ones of self-evident importance. But as journalism conducts its internal debate over "innovation" in reporting, they're not being asked as often as they should be.

So what did Time do wrong? It turns out they made lots of basic mistakes. Votes were submitted in plaintext via URL variables, and the page could be requested with a GET instead of a POST, so innocent people could be enlisted simply by embedding an iframe on an unrelated page. When it became clear that this was skewing the vote, Time added a verification parameter consisting of the URL and a secret code run through an MD5 hash. Unfortunately, it sounds like they left the secret code in the Flash file as a string literal, which is easy to extract with one of the many SWF decompilers out there. These are some pretty weak security measures--a low barrier to entry that made it easy for some relatively unskilled hackers to precisely manipulate Time's poll.
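
Reconstructed loosely (I haven't seen Time's source; this leans on as3corelib's MD5 class and invents the names and values), the verification scheme was something like:

    import com.adobe.crypto.MD5; // as3corelib

    var secret:String = "SECRET-BAKED-INTO-THE-SWF";      // placeholder value
    var voteUrl:String = "/poll/vote?id=1234&rating=1";   // illustrative only
    var key:String = MD5.hash(voteUrl + secret);
    // The request then goes out as voteUrl + "&key=" + key. But since the
    // secret ships inside the SWF, one pass through a decompiler hands an
    // attacker everything needed to forge valid keys.

Anything that ships to the client is, by definition, not a secret.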

I want to make it clear that I'm not bringing this up at Time's expense, as Atwood is (I like Coding Horror, but he's not exactly a crack security researcher). In fact, I sympathize with Time. Security is hard! And expensive! And if you're not used to thinking about it from the very beginning, you're going to screw it up.

But why did it happen? Here's my completely unsubstantiated hunch: they got caught trying to do more with less. News organizations these days are caught between two directives: cut costs, and simultaneously jump onto the Web 2.0 bandwagon. These goals are directly opposed to each other. You can't get the kinds of programmers you need to keep up with Google/Yahoo/Microsoft for cheap. So what happens? Chances are, you take journalists who are a little technically inclined, give them a few books on Ruby on Rails, and ta-da! you've got an "innovation" team. It's not a recipe for tight security.

It doesn't help that the buzz in newsrooms for years has basically been about "hybrid journalists" who are video producers/writers/programmers all at once. Now, I have some respect for that idea. I personally believe in being well-rounded. But it's not always realistic, and more importantly, some things are too important to be left to generalists. Security is one of those things. Not only can poor data security undermine your institutional reputation, but it can be dangerous for your reporting as well.

Take note, for example, of this article from Poynter on data visualizations. Washington Post reporter Sarah Cohen explains how graphing data isn't just useful for external audiences; it can also help reporters zero in on interesting stories, or eliminate stories that actually aren't newsworthy. In fact, she says, the internal usage is probably far greater than the amount that makes it to the web or to print. It's a great explanation of why data visualization is an actual reporting tool--a point that gets lost in the fuss over Twitter and blogging ethics panels.

So newsroom data isn't only meant for public consumption. It's a real source for journalists, particularly in number-heavy beats like public policy or business. And that means that data needs to be trusted. As long as it's siloed away inside the building, that's probably fine. Once it's moved outside and exposed through any kind of API, measures need to be taken to ensure it isn't tampered with in any way. And if it's used for any kind of crowdsourcing (which, to be fair, I have advocated in the past), that goes double.
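
What do those measures look like? At bare minimum, something like a signature the consumer can verify. A minimal sketch, again assuming as3corelib, with the caveat that a real deployment would want a proper HMAC and keys that never touch the client:

    import com.adobe.crypto.MD5; // as3corelib

    // Reject any data payload whose signature doesn't match what we compute.
    function payloadLooksIntact(payload:String, signature:String,
                                sharedKey:String):Boolean {
        return MD5.hash(sharedKey + payload) == signature;
    }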

So am I saying we should back away from opening up our newsrooms to online audiences? Not at all. But we should understand the gravity of the situation first, making sure that resources have been expended commensurate with reputational risk. And let's be honest: while it's great that NPR and the New York Times are making neat API calls and interactive polls available to everyone, maybe that's simply not appropriate--or aligned with the newsroom's primary mission--at smaller organizations.

Journalism has to come first. That journalism has to be trustworthy, down to the data on which it relies. Think of it as an editorial bar that needs to be cleared: if you don't feel like your security is up to the task, perhaps caution is in order. On the other hand, if you can't justify security from the start (as Time clearly couldn't), what you're really saying is that your results don't really matter (Time's certainly shouldn't). In that case, is it really the best use of your time?

March 24, 2009

Filed under: journalism»new_media

The Hard Parts

It's funny, when I first started at CQ, I thought that the most difficult part of the job would be doing quality work on a small staff. Unlike the New York Times or the Washington Post, we can't throw fourteen people at a project, so it often takes longer than I'd like--especially factoring in CQ's well-deserved reputation for fact-checking and accuracy. It's true, lack of resources has been a sticking point at times, but what has surprised me is that it's not the hard part.

For example, today we're launching our new district results maps. Previously, maps like these had been done in a very clumsy manner--literally, each state was a frame on the Flash timeline, with manually placed zoom areas linked to another frame for districts that are very small, like downtown New York City and parts of urban California. This was not only a poor design, but it was practically unmaintainable. This time, with the help of our graphics team, we started from a complete, Flash-native vector map. Then I designed a UI framework that would not only allow Google Maps-style dragging, but also programmatically auto-zooms to states and to small or oddly shaped districts. The result is easier to navigate, looks better, and will be far more adaptable for representing other datasets--it was a piece of cake to take the original House results map and change it to display presidential results. I'm very proud of how it turned out, and feedback has been stellar.
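
The heart of that auto-zoom is nothing more than fitting a bounding box to the viewport. A simplified sketch--hypothetical names, and the production version handles cases this one ignores:

    import flash.display.DisplayObject;
    import flash.display.Sprite;
    import flash.geom.Rectangle;

    // Scale and pan the map so a district's bounding box fills the viewport,
    // with 20% breathing room for small or oddly shaped districts.
    function zoomTo(district:DisplayObject, map:Sprite,
                    viewW:Number, viewH:Number):void {
        var b:Rectangle = district.getBounds(map);
        var scale:Number = Math.min(viewW / (b.width * 1.2),
                                    viewH / (b.height * 1.2));
        map.scaleX = map.scaleY = scale;
        map.x = viewW / 2 - (b.x + b.width / 2) * scale;
        map.y = viewH / 2 - (b.y + b.height / 2) * scale;
    }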

Because we are a small shop, bottlenecks abound, and this map took two weeks from conception to finish. We still have work to do: searching and deep-linking are not yet enabled, and I want to refactor some of it out into a separate library. But still, this is the easy part: we were able to work completely within a known framework of Flash, Ruby, and internal XML. The hard part comes when we decide to place it on the actual website. CQ's publishing system, like many newspapers and magazines, is not geared toward interactive material. It doesn't understand it, and can't embed it--even embedding static images online is cumbersome, as for a long time CQ eschewed graphics in its print daily, and our web system is based on the print pipeline.

The result is a series of workarounds that we are still trying to streamline. Since we can't embed interactives, we end up storing them on other servers, which hurts our traffic numbers. Our search function also can't index items outside the standard text content management system, so we currently have to create pseudo-articles to link to them with metadata. And unless we are careful, such ad-hoc arrangements have a tendency to sprawl across directory structures and servers in such a way that they become impossible to manage in the future. I don't want to give the impression that we're standing still--CQ was a leader in early online journalism, and substantial upgrades to address these issues are in the making--but right now it's a real hassle.

If we do our jobs right, and I think for the most part we have, none of this is ever visible to our readers. But it's awkward and time-consuming from the newsroom's perspective, and we are certainly far from the only publication having these problems. To this day, I can't remember the last time I used a newspaper search box and got the results I wanted (Google finds them just fine, however). Perhaps this is why the industry has entered into such a state of hysteria over the Internet's corrosive influence, but it reflects a fundamental misunderstanding: these aren't indications that journalism fails to work online, but that print formatting fails. Does that seem obvious? Yeah, well: welcome to the conversation.

January 7, 2009

Filed under: journalism»industry

Next Week: Taco Bell Not Actually Mexican

Shorter Washington Post:

Have you ever been to an Asian supermarket? We hadn't! They sell all kinds of different food there, which is not like real American food at all. Apparently it is from wacky countries that are far away from us--and it could be infested with poisonous bacteria. Awesome!

Perhaps I'm reading too much into this, but it might say something about the composition of the newsroom--or about who the Post thinks buys its papers--when the story itself says that these stores are in neighborhoods where Asian Americans make up 20-40% of the population, and that half the store's customer base is not of East Asian descent but includes South Asians, Latin Americans, the mildly inquisitive, and the poor (thus covering everyone except wealthy, sheltered white people)... and yet an article that's basically a catalog of the shelves was still considered "newsworthy" instead of "familiar."

But hey, it's the Internet that's killing journalism, right?

January 5, 2009

Filed under: journalism»new_media

Ushahidi and the War on Gaza

During this most recent flare-up in the Israeli-Palestinian conflict, Al Jazeera has done something interesting: they're tracking incidents and attacks, drawn both from their own reporting and from incident reports (via SMS/Twitter) sent by people in the Gaza area. To power the page, found here, they're using a combination of Microsoft mapping and Ushahidi, the conflict-tracking system developed for reporting in Kenya and used across Africa since then.

Ushahidi's blog has a bit more info about it here. In comments, one of the AJ team members behind the project also has some interesting notes on why it does not have a lot of reader input at this time:

we havent seen much coming in from gaza/israel- i'm assuming thats for a number of reasons:
1) with such a huge amount of activity going on people dont have the time to send out texts- those who are sending out information are sending video/images (if they get a connection) to show the aftermath of a missile strike etc.
2) the networks are going crazy and are very busy, also with the power outage in the area getting hold of people in the conflict zone is difficult- so i'm guessing sending information out is just as difficult

These are, of course, known problems with using new media in conflict situations: even if you can get your portal known widely enough for it to be a priority for those on the ground, you have to hope that the infrastructure is still intact--or, as in Burma, that the government/authorities don't cut access to prevent communication and coordination. I'm not yet aware of a decentralized solution for getting around those kinds of blocks or filtering, although the scattershot approach favored by Chinese dissident bloggers might provide some clues.

Oddly enough, Al Jazeera does not seem to be offering one of Ushahidi's more useful features: the ability to sign up for location-based updates via SMS, email, or RSS feed. They're also not yet offering a timeline for conflict reports, as Ushahidi has done for their Kenyan post-election data. Both of these are a natural fit for a news organization, and if I had to guess, I'd say they're probably in the works for Gaza as Al Jazeera gets the bugs (both technical and practical) worked out.

Technologies like this for grassroots journalism are interesting for two reasons. First, they open up the process of newsgathering to be faster and more widespread--this is the real face of "citizen journalism," not Jeff Jarvis and his cult of ex-media bloggers. Second, they cut out the middleman. Although there are editors and administrators running the system, Ushahidi and systems like it make it possible for people to report to each other on a local basis, while aggregating reporting from paid journalists into the feed. This is being done in the US already, via EveryBlock, which also integrates crime feeds as published by local police.

The degree to which this technology can instigate, supplement, or even replace acts of paid journalism is as yet unclear. I don't think it's the end of the newspaper, if anyone's making that claim, but it clearly has value. I am surprised that it's not yet being used by more small-town papers, which have very small staffs and would probably like to leverage them more effectively. Reporting tools like EveryBlock or Ushahidi aren't just useful for readers, after all: they're also valuable sources of information for reporters--a new take on the tipline.

December 8, 2008

Filed under: journalism»industry

Layoff Stories

In the last week, Gannett (owner of USA Today and a whole host of regional papers) laid off a large chunk of its workforce--about 1,900 people at last count. Gannett Blog has been collecting layoff stories: short notes from those who have been cut loose. They're interesting reading.

I've linked to this before, but it should be highlighted again: Gannett papers participating in the layoffs typically have very healthy profit margins, the highest being the Green Bay Press-Gazette's 42.5%. You can find the cuts for each regional paper here. Green Bay has cut 22 jobs, amounting to 7% of its workforce.

This is, sadly, not out of the ordinary. As the Project for Excellence in Journalism notes, the average newspaper recorded 18.5% pre-tax margins in 2007: "The industry remains profitable, but it has come time to take the 'obscenely' out of that commonplace observation." In a world of news management that's driven by Wall Street stock prices, there's no place for profits that aren't 'obscene.' And newsrooms are taking the fall.

December 1, 2008

Filed under: journalism»industry

Paved with Good Attentions

The word on the street is that the Internet is killing newspapers, and there are two points of view on it: Jeff Jarvis and the new media pundit class see it as cause for rejoicing, while everyone else bemoans the loss of the industry to the filthy, basement-dwelling bloggers. Both are, I suspect, operating from false premises.

(As First Draft points out in Athenae's continued series of "how I'm killing journalism" posts, newspapers are far from dying. They're just not growing at the same rates that the owners would like. Indeed, if anything's killing journalism, it's management.)

But I'm going to play devil's advocate for a minute: I think there's a very plausible way that the Internet actually could kill journalism (as in the public service of reporting), and it would be a real tragedy. Ironically, it would be caused by the advances in ad-driven revenue and reader-tracking made possible by the Internet.

Now, news has long been funded by advertising. And it has mechanisms for keeping ads separated from editorial control--a much-vaunted firewall preventing money from corrupting the reporting process, at least in theory. But advertising online is a fundamentally different proposition from newspaper ads: the former is a pay-per-clickthrough or viewership-based scheme, while the latter was (like its vehicle) more of a broadcast package. Online, as opposed to offline, editors can directly see which stories are getting more attention, more hits, and thus creating more clickthroughs. And in a world where journalists are pushed harder and harder to keep profits high, that's possibly a recipe for trouble.

Because the thing is, the stories that get the hits are not necessarily the ones that journalism-as-a-public-service should be pursuing. People don't always read about foreign affairs, or financial news, or social issues with the same energy that they'll pursue, say, gadget blogs and novelty videos. The hard news can be turned into interesting, compelling stories that get readership commensurate with their importance--but it takes a lot of work. In a physical newspaper, that work has been basically subsidized by the rest of the paper: since everything falls under the same ad revenue package, editors are free to pursue stories that they deem important, such as series about complicated issues or investigative reporting. Split them out, and the picture becomes a lot more complicated.

I'm not saying, of course, that online editors will immediately turn to fluff in order to grab eyeballs. The process is one of slow erosion. When certain stories get disproportionately large visitor numbers (and therefore contribute larger amounts to ad revenue), it's only natural to go back to that well for a similar story. And if the stories that get visitors tend to be lighter and easier to produce than long investigations, even if the cost/benefit equation is roughly equivalent, given a choice between the two it's an easy decision to go with the less work-intensive article. Indeed, it's even easily rationalized: you're just giving the people exactly what they want (or at least what they respond to, which is not quite the same thing). Over time, that adds up. It's roughly the same problem as that faced by television news: when white women in trouble get higher viewer numbers, even without an overt editorial decision, there will be a tendency for more stories about white women in trouble. And now CNN's unwatchable.

Again, these are not new problems. That investigative journalism is expensive and may not pay for itself has been a truism for a long time--but it was never directly provable. Now, it can be quantified. And while journalists are distracted (perhaps rightly so) by the industry's buyouts and complaints about profitability, the question of how these metrics influence journalism's role gets short shrift. It's not helped by publishers who think the way to save their "ailing" paper is to replace back sections with lightweight, locally focused supplements.

This is why philanthropic (or non-profit) models of journalism fascinate me. I think they have the best possible shot of creating good, public service journalism that ignores click-based feedback in a digital format. Of course, they're also vulnerable to funding loss and accusations of populist bias even beyond that of the corporate media. I'd like to think that it's not a case of the devil we know against the devil we know even better, but then I don't write here because of my sunny, optimistic viewpoint.

October 10, 2008

Filed under: journalism»industry

The Dichotomy

I'm still obsessed with This American Life's Giant Pool of Money show. The reason for my obsession is that it won't go away: I see this thing getting press from all kinds of people. Music blogs post it. Game blogs link to it. Of course, every political blog on Earth has mentioned it.

If you're trying to get people to read/view your news site online, which has been on my mind for obvious reasons, you have to look at that and ask: What did they do right? What makes that different? How do we get that kind of traffic and acclaim? Especially if you're a business reporter or editor--when Ira Frakkin' Glass is producing the best financial coverage of the year, you really need to re-evaluate your product.

The obvious tactic is "do good journalism," but there's more to it than that. TGPoM is good journalism, but there's lots of good journalism out there. The mechanics of the piece are not what makes it special. It wasn't researched more thoroughly than the average NPR report, or edited differently from many TAL shows. I believe that the difference between TGPoM and the financial coverage elsewhere is a simple one, but one that illustrates a wider, more fundamental problem with the American press: it was about policy instead of politics.

What I mean by that is that the reporters for the Pool of Money piece worked hard to make the mortgage crisis relatable to listeners. They didn't approach it as bickering between Wall Street and Main Street, and they didn't go for quick partisan soundbites, then call it a day. They answered simple questions: what is this thing? How did it happen? Could it happen to me? What are the real stakes? This sounds like it should be easy to do, but a vast amount of reporting--particularly business and political reporting--doesn't bother to do it. Business coverage is often wonky, detailed stuff aimed at elites. Political journalism has devolved into either he-said-she-said parroting of campaign lines or grossly inflated trivialities. When the two meet, as they did recently, the results are typically disastrous.

Covering policy is hard. It requires expertise, or talking to experts. It requires a journalist to sometimes make a call as to whether one side of an issue is more accurate, likely, or truthful than the other. And more importantly, to do it right, you have to be able to look at an issue and make connections, putting the news into context.

By contrast, covering "politics" is easy. You still have to talk to people, but you don't have to work nearly as hard to understand it, or double-check it, or figure out what it means for the average reader. There's no "translation effort." If you're covering campaigns by politics instead of policy, you don't check whether a given candidate's plan actually does what they say it would. When you fact-check, you only check whether their statements are superficially true or false, not (for example) whether their plans would actually be effective when applied to national policy.

But clearly, the rewards for doing accessible explainer pieces are great. Which is why it amazes me that no-one else seems to have done it for the bailout. The New York Times does not have something along these lines. The Washington Post doesn't. CQ, much to my chagrin and shame, certainly didn't, although our in-depth coverage of the bailout bill was phenomenal. (Actually, someone may have done an explainer. But since there's not a single newspaper in the country with a decent search function, I can't find it if they have, so it might as well not exist.)

Until this weekend, that is. Because then This American Life did it again. They put together Another Frightening Show About the Economy, and it is just as good as the first program. It can stand alone, or it can fill in the gaps. As with the previous episode, after listening I can not only read other coverage and understand it (not, I might add, a minor point), but also teach it to other people. It really is an extraordinary piece of explainer journalism.

There's nothing wrong with reporting the details, or providing journalism for topic experts. I suspect that many business reporters have trouble making the transition to a general audience precisely because most of the time their audience is not general, but an elite: they're reporting for business junkies. Political reporting, similarly, is written both by and for people who thrive on the nasty little details of the process, instead of the people who will actually be affected by its implementation.

And while the business press can possibly be excused (no-one was exactly clamoring for a detailed look at the commercial paper market before it clammed up), there's no excuse for journalists who choose to cover politics instead of policy. The latter--the outcomes that affect all of us in much more direct ways--is vastly important for a functional democracy. Deficiencies there don't just help to explain why the media is considered untrustworthy, and why the industry's revenues are falling. They point to a wider fracture in our civic life. When journalism obsesses over nitpicks in a speech or comment, rather than the details of applying governmental power, everyone loses.
