
March 13, 2013

Filed under: journalism»industry

Pay Me

They always want the writer to work for nothing. And the problem is that there's so goddamn many writers who have no idea that they're supposed to be paid every time they do something, they do it for nothing! ... I get so angry about this, because you're undercut by all the amateurs. It's the amateurs who make it tough for the professionals, because when you act professional, these people are so used to getting it for nothing, and for mooching...

--Harlan Ellison

Last week, Nate Thayer wrote a well-linked post about being asked to write for The Atlantic for free--well, for "exposure," which is free in a funny hat. It's gotten a lot of attention in the journalism community, including a good piece on the economics of web-scale journalism by Atlantic editor Alexis Madrigal.

I read this kind of stuff and think that I have never been happier to find a niche within journalism that makes me marketable. I mean, not that marketable: I had to switch industries when I moved out of DC, after all. But inside the beltway, I didn't have to freelance anymore, and I would have had plenty of options if I decided to leave CQ and head somewhere else. Data journalism was good to me, and I can't imagine having to go back to the scramble of being just a writer again.

But beneath that relief, I feel angry. And the fact that Madrigal can write a well-reasoned piece about why they're asking people to write for free doesn't make me any less angry. The fact that Ta-Nehisi Coates, who I respect greatly, can write about how writing for free launched the best part of his career, doesn't make me feel any less annoyed. I'm getting older but I'm still punk enough that when someone tells me the system is keeping us down, my response isn't to say, "well, I guess that's just how it is." The system needs to change.

Let's be clear: I don't expect writers to make a lot of money. They never have. People don't get into journalism because they expect to be rich. But writing--serious writing, not just randomly blogging on your pet peeves like I do here 90% of the time--is hard work. The long-form pieces that I've done have been drawn-out, time-consuming affairs: research, interviews, collecting notes, writing, rewriting, editing, trimming, and rewriting again. People think that writing is easy, but it's not, and it should be a paid job. (Even when it's not paid, it's not easy: I've been editing this post for three days now.)

As Ellison says, when publications can get the work for free, it makes it really hard to be paid for your writing. I'm not sure I'd phrase it with the same antipathy for "amateurs" (let's be clear: Ellison is a terrifying human being that I happen to agree with in this particular case), but it's certainly true that the glut of people willing to write for free causes a serious problem for those of us who write (or have written) for a living. They're scabs, in the union sense: they take work that should be paid, and drive down the cost of labor (see also: unpaid musicians).

And journalism is an industry increasingly dependent on free writing labor (or, even worse, perpetual unpaid internships instead of paid staff). As Cord Jefferson (in, of all places, Gawker) notes,

All in all, the creative landscape is starting to look more toxic than it's been in our lifetimes: Artists with million-dollar checks in their pockets are telling other artists that they shouldn't expect to get paid; publications are telling writers that they shouldn't expect to get paid, either; and meanwhile everyone wonders why we can't get more diversity in the creative ranks. One obvious way to reverse media's glut of wealthy white people would be to stop making it so few others but wealthy white people can afford to get into media. But in the age of dramatic newsroom layoffs and folding publications, nobody wants to hear that.

When your publishing model depends on people writing for free, there are a lot of people who aren't going to get published. I couldn't afford internships during college, meaning that I had a hard time breaking in--but I was still relatively lucky. I worked in office jobs with flexible hours and understanding bosses. If I wanted to take an early lunch break in order to do a phone interview, I could. I had evenings free to work on writing and research. I could take jobs that paid 10¢ a word, because I had a day job to fall back on. A lot of people don't have that chance, including a disproportionate number of minorities.

It adds insult to injury when you look at some of the people who are published precisely because they could afford internships and writing for free. Sure, it's wrong to base an argument on a few highly-visible outliers. But it's hard not to be a little furious to see the NYT sending good money to Tom Friedman (the obvious travesty), or Roger Cohen, or David Brooks when the industry claims it can't offer new writers recompense. It burns to see The Atlantic insisting that paying people isn't sustainable when they gave Megan McArdle (a hack's hack if there ever was one) a career for years, not to mention running propaganda for the Church of Scientology. If you're going to claim that you're trying as hard as you can to uphold a long-standing journalistic legacy in tough economic times, you'd better make sure your hands are clean before you hold them out in supplication.

I am skeptical, personally, of claims that the industry as a whole can't afford to pay writers. I have heard newsroom financials and profit margins, both for my own employer and for others. The news is no longer a business that prints money, but it remains profitable, as far as I can tell--if not as profitable as management would often like. Perhaps that's not true of The Atlantic: I don't know the details of their balance sheet, although this 2010 NYT article says they made "a tidy profit of $1.8 million this year" and this 2012 article credits them with three years of profitability. That's an impressive bankroll for someone who claims they don't have the budget to pay writers for feature work.

That said, let's accept that I am not an industry expert. It's entirely possible that I'm wrong, and these are desperate times for publications. I can't solve this problem for them. But I can choose a place to stand on my end. I don't work for free, unless it's explicitly for myself under terms that I completely control (i.e., this blog and the others that I fail to maintain as diligently), the same way that I don't take gigs from paying musicians just because I like playing in front of an audience.

Coates may defend working for free, because it got him a guest spot at the publication where he now works. But to me, the most important part of the story is that he got that spot on the strength of his blogging, which drew the attention of other writers and editors. You want exposure? There's nothing wrong with making it for yourself. Please start a blog, and hustle for it like crazy. But don't let other people tell you that it's the same as a paycheck--especially when they're not working for "exposure." They're on salary.

Is there a chance that, as with Coates and so many others, exposure could lead to better gigs? Sure, the same way that a musician might get discovered while playing folk covers at a Potbelly sandwich shop. But it's a lottery, and pointing to successful writers who came up that way ignores the order of magnitude more who wrote for exposure and promptly sank into obscurity. You can't pay your rent with publicity, and you never could. We're professionals, and we should demand to be treated that way.

December 12, 2012

Filed under: journalism»industry

The Platform

Last week, Rupert Murdoch's iPad-only tabloid The Daily announced that it was closing its doors on Thursday, giving it a total lifespan of just under two years. Lots of people have written interesting things about this, because the schadenfreude is irresistible. Felix Salmon makes a good case against its format, while former staffer Peter Ha noted that its publication system was unaccountably terrible. Dean Starkman at CJR believes, perhaps rightly, that it will take more than a Murdoch rag going under to form any real conclusions.

Around the same time, Nieman Lab published a mind-bogglingly silly pitch piece for 29th Street Publishing, a middleman that republishes magazine content as mobile apps. "What if getting a magazine into Apple's Newsstand was as easy as pushing the publish button on a blog?" Nieman asked on Twitter, demonstrating once again that the business side of the news industry will let nothing stand between it and the wrong questions.

The problem publications face is not that getting into Apple's storefront is too hard--it's that they have a perfectly good (cross-platform) publishing system right in front of them in HTML ("as easy as pushing the publish button on a blog," one might say) and they're completely unwilling to find a business model for it other than throwing up their hands and ceding 30% of their income (and control of their future) to a third party in another industry with a completely different set of priorities. (Not to mention the barriers to search, sharing, and portability that apps throw up.)

What publishers need to be doing is finding a way to monetize the content that they've already got and can already publish using tools that are--well, probably not quite as easy as blogging, but undoubtedly far easier than becoming a mobile software developer. One way to do that is with a leaky paywall: it's been a definite success for the NYT, and the Washington Post is considering one. I suspect that when calmer heads prevail, this will become a lot more common. The problem with paywalls is mobile: even if consumers were not conditioned to want "apps," sign-in on mobile is a frustrating user experience problem.

But let's say apps remain a hot topic in news boardrooms. I've been thinking about this for a few days: how could the news industry build a revenue model out of the best of both worlds, with clean mobile HTML deployed everywhere but leveraging the easy payment mechanism of an app store? (That assumes "payment is hard" is actually a problem the industry has--given the NYT's success, I'm not honestly sure that it is.) My best solution takes inspiration from two-factor authentication (which everyone should be using).

My plan goes like this: just like today, you visit the app store on your platform of choice. You download a yearly "subscription key" application, pay for it in the usual way, and then open it. Behind the scenes, the app talks to the content server and generates a one-time password, then opens a corresponding URL in the system's default browser, setting a cookie so that further browser visits will always be signed in--but you as the user don't see any of that. All you see is that the content has been unlocked for you without any sign-in hassle. Next year, you renew your subscription the same way.
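To make that concrete, here's a minimal sketch of what the server side of that handshake might look like, assuming a hypothetical Node/Express backend--the route names, token format, and cookie details are all mine, not anything a real publisher runs:

```javascript
// Sketch of the "subscription key" handshake described above. Assumes Node.js
// with Express; routes, token format, and cookie names are hypothetical.
var express = require("express");
var crypto = require("crypto");

var app = express();
var pendingTokens = {}; // one-time tokens issued to the store app

// 1. The purchased "subscription key" app calls this (proving its purchase
//    however the app store's receipt system allows) and gets a one-time URL.
app.post("/api/subscription-token", function (req, res) {
  var token = crypto.randomBytes(20).toString("hex");
  pendingTokens[token] = { issued: Date.now() };
  res.json({ url: "https://example-paper.com/unlock/" + token });
});

// 2. The app opens that URL in the browser. We burn the token and set a
//    long-lived cookie, so every later visit in that browser is signed in.
app.get("/unlock/:token", function (req, res) {
  var entry = pendingTokens[req.params.token];
  if (!entry || Date.now() - entry.issued > 5 * 60 * 1000) {
    return res.status(403).send("Expired or invalid subscription token.");
  }
  delete pendingTokens[req.params.token]; // single use only
  res.cookie("subscriber", crypto.randomBytes(16).toString("hex"), {
    maxAge: 365 * 24 * 60 * 60 * 1000, // a year, until the next renewal
    httpOnly: true
  });
  res.redirect("/");
});

app.listen(8080);
```

The only part the reader ever sees is step two: the browser opens, already unlocked, and the token can't be replayed.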

In an ideal world, there would be a standard for this that platform authors could implement. Your phone would have one "site key" application (not without precedent), and content publishers could just plug add-on apps into it for both purchasing and authentication. Everyone wins. But of course, that's not a sexy startup idea for milking thousands of dollars from gullible editors. Nor is it helpful for computer companies looking to keep you from leaving their platform: I'm pretty sure an application like this violates Apple's store rules. Personally, that's reason enough for me to consider them unacceptable, because I don't believe the correct response to exploitation is capitulation. That's probably why nobody lets me make business decisions for a major paper.

Assume we can't publish an app: two-factor auth still works in lots of ways that are mobile-friendly, post-purchase. You could visit the website, click a big "unlock" button and be sent a URL via text message, e-mail, Facebook, Twitter, or whatever else you'd like. A site built in HTML and monetized this way works everywhere, instead of locking you into the iPad or another single platform. It lets the publisher, not a third party, retain control of billing and access. And it can be layered onto your existing system, not developed from scratch. Is it absolutely secure? No, of course not. But who cares? As the Times has proven, all you need to do is monetize the people who are willing to pay, not the pirates.

This is just one sane solution that lets news organizations control their own content, and their destiny. Will it happen? Probably not: the platform owners won't let them, and news organizations don't seem to care about having a platform that they themselves own. To me this is a terrible shame: after years of complaining that the Internet made everyone a publisher, news organizations don't seem to be interested in learning that same lesson when the shoe is on the other foot. But perhaps there's an upside: for every crappy app conversion startup funded by desperate magazine companies, there are jobs being created in a recovering economy. Thanks for taking one for the team, journalism.

November 13, 2012

Filed under: journalism»new_media»data_driven

Nate Silver: Not a Witch

In retrospect, Joe Scarborough must be pretty thrilled he never took Nate Silver's $1,000 bet on the outcome of the election. Silver's statistical model went 50 for 50 states, and came close to the precise number of electoral votes, even as Scarborough insisted that the presidential campaign was a tossup. In doing so, Silver became an inadvertent hero to people who (unlike Joe Scarborough) are not bad at math, inspiring a New Yorker humor article and a Twitter joke tag ("#drunknatesilver", who only attends the 50% of weddings that don't end in divorce).

There are two things that are interesting about this. The first is the somewhat amusing fact that Silver's statistical model, strictly speaking, isn't actually that sophisticated. That's not to take anything away from the hard work and mathematical skills it took to create that model, or (probably more importantly) Silver's ability to write clearly and intelligently about it. I couldn't do it, myself. But when it all comes down to it, FiveThirtyEight's methodology is just to track state polls, compare them to past results, and organize the results (you can find a detailed--and quite readable--explanation of the entire methodology here). If nobody has done this before, it's not because the idea was an unthinkable revolution or the result of novel information technology. It's because they couldn't be bothered to figure out how.

The second interesting thing about Silver's predictions is how incredibly hard the pundits railed against them. Scarborough was most visible, but Politico's Dylan Byers took a few potshots himself, calling Silver a possible "one-term celebrity." You can almost smell sour grapes rising from Byers' piece, which presents on the one side Silver's math, and on the other side David Brooks. It says a lot about Byers that he quoted Brooks, the rodent-like New York Times columnist best known for a series of empty-headed books about "the American character," instead of contacting a single statistician for comment.

Why was Politico so keen on pulling down Silver's model? Andrew Beaujon at Poynter wrote that the difference was in journalism's distaste for the unknown--that reporters hate writing about things they can't know. There's an element of truth to that sentiment, but in this case I suspect it's exactly wrong: Politico attacked because its business model is based entirely on the cultivation of uncertainty. A world where authority derives from more than the loudest megaphone is a bad world for their business model.

Let's review, just for a second, how Politico (and a whole host of online, right-leaning opinion journals that followed in its wake) actually work. The oft-repeated motto, coming from Gabriel Sherman's 2009 profile, is "win the morning"--meaning, Politico wants to break controversial stories early in order to work its brand into the cable and blog chatter for the rest of the day. Everything else--accuracy, depth, other journalistic virtues--comes second to speed and infectiousness.

To that end, a lot of people cite Mike Allen's Playbook, a gossipy e-mail compendium of aggregated fluff and nonsense, as the exemplar of the Politico model. Every morning and throughout the day, the paper unleashes a steady stream of short, insider-ey stories. It's a rumor mill, in other words, one that's interested in politics over policy--but most of all, it's interested in Politico. Because if these stories get people talking, Politico will be mentioned, and that increases the brand's value to advertisers and sources.

(There is, by the way, no small amount of irony in the news industry's complaints about "aggregators" online, given the long presence of newsletters like Playbook around DC. Everyone has one of these mobile-friendly link factories, and has for years. CQ's is Behind the Lines, and when I first started there it was sent to editors as a monstrous Word document, filled with blue-underlined hyperlink text, early every morning for rebroadcast. Remember this the next time some publisher starts complaining about Gawker "stealing" their stories.)

Politico's motivations are blatant, but they're not substantially different from any number of talking heads on cable news, which has a 24-hour news hole to fill. Just as the paper wants people talking about Politico to keep revenue flowing, pundits want to be branded as commentators on every topic under the sun so they can stay in the public eye as much as possible. In a sane universe, David Brooks wouldn't be trusted to run a frozen yoghurt stand, because he knows nothing about anything. Expertise--the idea that speaking knowledgeably requires study, sometimes in non-trivial amounts--is a threat to this entire industry (probably not a serious threat, but then they're not known for underreaction).

Election journalism has been a godsend to punditry precisely because it is so chaotic: who can say what will happen, unless you are a Very Important Person with a Trusted Name and a whole host of connections? Accountability has not traditionally been a concern, and because elections hinge on any number of complicated policy questions, this means that nothing is out of bounds for the political pundit. No matter how many times William Kristol or Megan McArdle are wrong on a wide range of important issues, they will never be fired (let's not even start on poor Tom Friedman, a man whose career consists of endlessly sorting the wheat from the chaff and then throwing away the wheat). But FiveThirtyEight undermines that thought process, by saying that there is a level of rigor to politics, that you can be wrong, and that accountability is important.

The optimistic take on this disruption is, as Nieman Journalism Lab's Jonathan Stray argues, that specialist experts will become more common in journalism, including in horse race election coverage. I'm not optimistic, personally, because I think the current state of political commentary owes as much to industry nepotism as it does to public opinion, and because I think political data is prone to intentional obfuscation. But it's a nice thought.

The real positive takeaway, I think, is that Brooks, Byers, Scarborough, and other people of little substance took such a strong public stance against Silver. By all means, let's have an open conversation about who was wrong in predicting this election--and whose track record is better. Let's talk about how often Silver is right, and how often that compares to everyone calling him (as Brooks did) "a wizard" whose predictions were "not possible." Let's talk about accountability, and expertise, and whether we should expect better. I suspect Silver's happy to have that talk. Are his accusers?

February 13, 2012

Filed under: journalism»political

Red Letter Day

Ah, budget day: the most annoying day of a data journalist's year. Even now that I'm no longer covering Congress, it still bugs me a little--except now, instead of being frustrated by the problem of finding stories, I'm just annoyed by the coverage itself. Few serious policy documents create so much noise from so little data.

For those who are unaware, on the night before budget day, each senator or representative places a constituent's tooth under their pillow before going to bed. While they're asleep, dreaming of filibusters and fundraising, the White House Chief of Staff creeps into their bedrooms and takes the tooth away, leaving a gift in return. Oh, the cries of joy when the little congresscritters wake to find a thick trio of paperback budget documents waiting for them!

Casting the budget as a fairy tale isn't as snarky as it might seem, because the president's budget is almost entirely wishful thinking. The executive branch, after all, does not control the purse strings of government--that power lies with the legislature. The budget is valuable in that it sets an agenda and expresses priorities, but any numbers in it are a total pipe dream until the appropriations process finishes. And if you want an example of how increasingly dysfunctional Congress has become, look no further than appropriations.

Although money for the next fiscal year is supposed to be allocated through appropriations bills by October 1st (the start of the federal fiscal year), those bills are increasingly late, often by months. In the meantime, Congress passes what are called "continuing resolutions"--stopgap measures that fund the government at (usually) reduced levels until real funding is passed. You can actually see the delays getting worse in a couple of graphics that my team put together at CQ: first, the number of "bill days" delayed since 1983, and then the number of "bill months" delayed by committee since 1990. Needless to say, this probably isn't helping the federal government run at its most efficient.

The connection between the president's budget and the resulting sausage is therefore tenuous at best (don't even get me started on tracking funds through appropriations itself). Even worse, from the perspective of a data-oriented reporter, is that the numbers in the budget are not static. They are revised multiple times by the White House in the months after release--and not only are they revised, they are often revised retroactively as new economic data comes in and the numbers must be adjusted to fit the actual policy environment. So even if we could talk about budget numbers as though they were "real money," the question remains: which budget numbers? And from when?

During my first couple of years at CQ, around January I would sit down with the Budget Tracker team and the economics editor, and propose a whole series of cool interactive features for budget season. And each time, they would politely and carefully explain all these caveats, which collectively added up to: we could talk about the budget in print, where numbers would not be charted against each other, and we could talk about the ways the budget/appropriations process is broken. But there simply isn't enough solid data to graph or visualize those numbers, since that lends them a visual credibility that they don't actually have.

The result is that I find budget day frustrating, even after leaving the newsroom, because it feels like a failure--something we should have been able to explain to our readers more fully, but couldn't quite grasp ourselves. Simultaneously, I often find coverage by other outlets annoying because they report on the budget as though it's more meaningful than it actually will be, or they'll chart it across visualizations as though imaginary numbers could be compared to each other (there is an element of jealousy to this, no doubt: it must be nice to work in a place where you can get away with a little editorial sloppiness). It's a shame, because the budget itself is not broken. As an indication of what the White House thinks is important for the upcoming year, it's a great resource. But it is not a long-term financial plan, and shouldn't be reported as such.

January 18, 2012

Filed under: journalism»new_media

Your Scattered Congresses

Once more with feeling: today, I'm happy to bring you my last CQ vote study interactive. This version is something special: although it lacks the fancy animations of its predecessor, it offers a full nine years of voting data, and it does so faster and in more detail. Previously, we had only offered data going back to 2009, or a separate interactive showing the Bush era composite scores.

We had talked about this three-pane presentation at CQ as far back as two years ago, in a discussion with the UX team on how they could work together with my multimedia team. Our goal was to lower the degree to which a user had to switch manually between views, and to visually reinforce what the scatter plot represents: a spatial view of party discipline. I think it does a pretty good job, although I do miss the pretty transitions between different graph types.

Technically speaking, loading nine years of votestudy data was a challenge: that's almost 5,000 scores to collect, organize, and display. The source files necessarily separate member biodata (name, district, party, etc) from the votestudy data, since putting the two into the same data structure would bloat the file size from repetition (many members served in multiple years). But keeping them separate causes a lag problem while interacting with the graphic: doing lookups based on XML queries tends to be very slow, particularly over 500K of XML.

I tried a few tricks to find a balance between real-time lookup (slow interaction, quick initial load) and a full preprocessing step (slow initial load, quick interactions). In the end, I went with an approach that processes each year when it's first displayed, adding biodata to the votestudy data structure at that time, and caching member IDs to minimize the lookup time on members who persist between years. The result is a slight lag when flipping between years or chambers for the first time, but it's not enough to be annoying and the startup time remains quick.
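Roughly speaking, the lazy merge looks like the sketch below; the field names are hypothetical stand-ins, not the real CQ data structures.

```javascript
// Process a year of scores the first time it's displayed, merging in biodata
// and caching member lookups so repeat members are cheap in later years.
var memberCache = {};    // memberID -> biodata, shared across all years
var processedYears = {}; // "2010-house" -> merged score objects

function lookupMember(id, bioList) {
  if (!memberCache[id]) {
    // slow path: scan the biodata list (an XML query, in the real version)
    for (var i = 0; i < bioList.length; i++) {
      if (bioList[i].id === id) {
        memberCache[id] = bioList[i];
        break;
      }
    }
  }
  return memberCache[id];
}

function getYear(key, scores, bioList) {
  if (!processedYears[key]) {
    processedYears[key] = scores[key].map(function (score) {
      var bio = lookupMember(score.memberID, bioList);
      score.name = bio.name;
      score.party = bio.party;
      score.district = bio.district;
      return score;
    });
  }
  return processedYears[key];
}
```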

(In a funny side note, working with just the score data is obscenely quick. It's fast enough, in fact, that I can run through all nine years to find the bounds for the unity part of the graph--keeping it consistent from year to year--in less than a millisecond. That's fast enough that I can be lazy and do that before every re-render--as long as I don't need any names. Don't optimize prematurely, indeed.)

The resulting graphic is typical of CQ interactives, in that it's a direct view on our data without a strong editorial perspective--we don't try to hammer a story through here. That said, I think there's some interesting information that emerges when you can look at single years of data going back to 2002:

  • The Senate is generally much more supportive of the president than the House is. While you can't directly compare scores across chambers (because the votes are different), the trend is striking. It's well known that House members tend to be more radical than senators, but I suspect the difference is also procedural: in the House, the leadership controls the agenda much more tightly than in the Senate, which can be held up by filibuster. As a result, the House may vote on bills that would never reach the Senate floor, just because the majority party can force the issue.
  • Although the conventional wisdom on the left since the Gingrich years has been that Republican discipline is stronger for political reasons, I'm not sure that's entirely borne out by these graphics. Party unity over the last nine years appears roughly symmetrical most of the time, while presidential support (and opposition) appears to shift in direct response to the strength of the White House due to popularity and/or election status. 2007-2009 was a particularly strong time for the Democrats in terms of uniting around or against a presidential agenda, for obvious reasons. This year the Republicans rallied significantly, particularly in the House.
  • There is one person who's explicitly taken out of the graphs (and not removed due to lack of participation or other technical reasons). That person is Zell Miller, everyone's favorite Bush-era iconoclast. If you're like me, you haven't thought about Zell Miller in 6 or 7 years, but there he was when I loaded the Senate file for the first time. Miller voted against his party so often that he had ridiculously low scores in 2003 and 2004, resulting in a vast expanse of white space on the plots with one lonely blue dot at the bottom. Rather than let him make everyone too small to click, I dropped him from the dataset as an outlier.
All of this, of course, is just my amateur political analysis. While I'm arguably more informed (possibly too informed!) about congressional practice than the average person, I'm no expert. For that, you may want to check out CQ's always-fantastic editorial graphics on the votestudies, which show in more detail the legislative trends of the last few decades. It's very cool stuff.

Finally, I did mention that this is my last CQ votestudy interactive. It's been a fantastic ride at Congressional Quarterly, and I'm grateful for the opportunities and education I received there. But it's time to move on, and to find something closer to home here in Seattle: at the end of this month, I'll be starting in a new position, doing web development at Big Fish Games. Wish me luck!

November 9, 2011

Filed under: journalism»new_media

Reaction

As the deadlines creep forward for the Joint Special Committee on Deficit Reduction, my team at CQ has put together a package of new and recent debt interactives covering the automatically-triggered budget cuts, the proposals on the table, the schedule set for committee action, and more.

The centerpiece of the package is a "reactive document" showing how the automatic cuts will go into effect if Congress does not pass cuts totalling $1.2 trillion by January 15. A series of sliders set the size of the hypothetical cuts, and the text and diagrams of the document adjust themselves to match. It's a neat idea, and one that's kind of a natural match for CQ: wordy, but still wonky.

Like a lot of people, I encountered the idea of reactive documents through Bret Victor's essay Explorable Explanations. Victor is an ex-Apple UI designer who wants to re-think the way people teach math, and reactive documents are one of the tools he wants to use. His explorations of learning design via reactive documents, such as Up and Down the Ladder of Abstraction, are breathtaking. As he writes,

There's nothing new about scenario modeling. The authors of this proposition surely had an Excel spreadsheet which answered the same questions. But a spreadsheet is not an explanation. It is merely a dataset and model; it cannot be read. An explanation requires an author, to interpret the results of the model, and present them to the reader via language and graphics.

The reactive document integrates spreadsheet-like models into authored text. It can be read at multiple levels, depending on the reader's level of interest. The hurried reader can skim it. The casual reader can read it as-is. The curious reader can adjust the author's scenarios. The engaged reader can explore scenarios of his own devising.

Unlike a spreadsheet, the barrier to exploration here is extremely low -- simply click and drag. This invites casual readers to become engaged and start exploring. It transforms readers from passive to active.

Victor's idea is a clever one, and as someone who often describes interactives using the same "layered reading" mechanism, it appeals to my storytelling sense. I also like that it embraces the original purpose of the web--to present hypertext documents--without sacrificing the rich interactions that browser applications have developed. That said, I'm not entirely convinced that reactive documents like this are actually terribly useful or novel.

The main problem with this method of presenting interactive information is that it's actually really burdensome for the playful user. It's easy to read, but if you change anything, you have to basically either read and process the entire paragraph again, or you have to learn to pick out individual changes and their meaning from a jumble of words. Besides, sometimes words are not a very good description of an effect or process--imagine describing complex machinery only in paragraph form.

Victor also has some examples that avoid this flaw by making the reactive document incorporate diagrams and graphs alongside his formulas. These are great, but they also illustrate the fact that, once you make reactive "documents" more visual and take away the intertextual trickery, they're really just regular interactives. They're stunningly designed, and I'm always in favor of more multimedia, but there's nothing new about them.

This probably comes off as a little more adversarial to the concept of reactive documents than I actually am, most of which is just my rhetorical background leaking out. I think they're neat, and I would guess that Victor himself thinks of them less as a complete solution and more as a different shade in his teaching palette. In some places, they're helpful, in others not so much.

As an Excel enthusiast, though, I do take exception to Victor's description of spreadsheets as something that "cannot be read," with a high barrier to entry. People read and create spreadsheets all the time, although (to my frustration) they often use them as layout tools. But a spreadsheet that's already set up for someone and locked up to prevent mistakes is barely any more difficult to use than his draggable text--the only real difference is the need to type a number. Regular people may find spreadsheet formulas difficult to connect with cells, but those same people are unlikely to be creating Victor's reactive documents either.

Ultimately, I'm wary of claims that any tool is a silver bullet for education or explainer journalism. It's easy to be blinded by slick UX, and to forget that we're basically just re-inventing storytelling tools used by great teachers for centuries. That shouldn't eliminate interactive games and illustrations from our kit. But reading Victor's site, it's easy to give the technology credit for its thought-provoking qualities, when the credit really goes to his lucid, considered reasoning and clear writing (both of which mean that the technology is well-applied). Sadly, there's no script for that.

October 12, 2011

Filed under: journalism»new_media»data_driven

The Big Contract

Recently my team worked on an interactive for a CQ Weekly Outlook on contracts. Government contracting is, of course, a big deal in these economic times, and the government spent $538 billion on contractors in FY2010. We wanted to show people where the money went.

I don't think this is one of our best interactives, to be honest. But it did raise some interesting challenges for us, simply because the data set was so huge: the basic table of all government contracts for a single fiscal year from USA Spending is around 3.5 million rows, or about 2.5GB of CSV. That's a lot of data for the basic version: the complete set (which includes classification details for each contract, such as whether it goes to minority-owned companies) is far larger. When the input files are that big, forget querying them: just getting them into the database becomes a production.

My first attempt was to write a quick PHP script that looped through the file and loaded it into the table. This ended up taking literally ten or more hours for each file--we'd never get it done in time. So I went back to the drawing board and tried using PostgreSQL's COPY command. COPY is very fast, but the destination has to match the source exactly--you can't skip columns--which is a pain, especially when the table in question has so many columns.

To avoid hand-typing 40-plus columns for the table definition, I used a combination of some command line tools, head and sed mostly, to dump the header line of the CSV into a text file, and then added enough language for a working CREATE TABLE command, everything typed as text. With a staging table in place, COPY loaded millions of rows in just a few minutes, and then I converted a few necessary columns to more appropriate formats, such as the dollar amounts and the dates. We did a second pass to clean up the data a little (correcting misspelled or inconsistent company names, for example).
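The real staging table was banged together with command-line tools, but the same trick in a few lines of Node looks roughly like this--the file name, table name, and column handling are illustrative, not the actual USA Spending schema:

```javascript
// Generate a text-typed staging table and a COPY command from a CSV header,
// in the spirit of the head/sed approach described above.
var fs = require("fs");

var csvPath = "contracts_fy2010.csv";

// read just the first chunk of the (multi-gigabyte) file to get the header
var fd = fs.openSync(csvPath, "r");
var buffer = new Buffer(64 * 1024);
var bytes = fs.readSync(fd, buffer, 0, buffer.length, 0);
fs.closeSync(fd);
var header = buffer.toString("utf8", 0, bytes).split("\n")[0];

// naive split: assumes no quoted commas in the header row itself
var columns = header.split(",").map(function (name) {
  return '  "' + name.trim().toLowerCase() + '" text';
});

console.log("CREATE TABLE contracts_staging (\n" + columns.join(",\n") + "\n);");
console.log("\\copy contracts_staging FROM '" + csvPath + "' WITH CSV HEADER;");
```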

Once we had the database in place, and added some indexes so that it wouldn't spin its wheels forever, we could start to pull some useful data, like the state-by-state totals for a basic map. It's not surprising that the beltway bandits in DC, Maryland, and Virginia pull an incredible portion of contracting money--I had to clamp the maximum values on the map to keep DC's roughly $42,000 contract dollars per resident from blowing out the rest of the country--but there are some other interesting high-total states, such as New Mexico and Connecticut.

Now we wanted to see where the money went inside each state: what were the top five companies, funding agencies, and product codes? My initial attempts, using a series of subqueries and count() functions, were tying up the server with nothing to show for it, so I tossed the problem over to another team member and went back to working on the map, thinking I wanted to have something to show for our work. He came back with a great solution--PostgreSQL's PARTITION BY clause, which splits a query's results into groups, combined with the rank() window function for filtering--and we were able to find the top categories easily. A variation on that template gave us per-agency totals and top fives.
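The shape of that query--run here through node-postgres, with placeholder table and column names rather than the real schema--is roughly this; the same pattern, with a different GROUP BY, produces the agency and product-code lists:

```javascript
// Top five vendors per state using rank() OVER (PARTITION BY ...).
// Table and column names are placeholders, not the real contracts schema.
var pg = require("pg");
var client = new pg.Client("postgres://localhost/contracts");

var topVendors =
  "SELECT state, vendor, total FROM (" +
  "  SELECT state, vendor, SUM(amount) AS total," +
  "         rank() OVER (PARTITION BY state ORDER BY SUM(amount) DESC) AS r" +
  "  FROM contracts" +
  "  GROUP BY state, vendor" +
  ") ranked WHERE r <= 5 ORDER BY state, r;";

client.connect(function (err) {
  if (err) throw err;
  client.query(topVendors, function (err, result) {
    if (err) throw err;
    result.rows.forEach(function (row) {
      console.log(row.state, row.vendor, row.total);
    });
    client.end();
  });
});
```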

There are a couple of interesting lessons to be learned from this experience, the most obvious of which is the challenge of journalism at scale. Certain stories, particularly on huge subjects like the federal budget, are simply too big to investigate feasibly without engaging in computer-assisted reporting, and yet they require skills beyond the usual spreadsheet-juggling.

I don't think that's going away. In fact, I think scale may be the defining quality of the modern information age. A computer is just a machine for performing simple operations at incredibly high speeds, to the point where they seem truly miraculous--changing thousands (or millions) of pixels each second in response to input, for example. The Internet expands that scale further, to millions of people and computers interacting with each other. Likewise, our reach has grown with our grasp. It seems obvious to me that our governance and commerce have become far more complex as a result of our ability to track and interact with huge quantities of data, from contracting to high-speed trading to patent abuse. Journalists who want to cover these topics are going to need to be able to explore them at scale, or be reliant on others who can do so.

Which brings us to the second takeaway from this project: in computer-assisted journalism, speed matters. If hours are required to return a query, asking questions becomes too expensive to waste on undirected investigation, and fact-checking becomes similarly burdensome. Getting answers needs to be quick, so that you can easily continue your train of thought: "Who are the top foreign contractors? One of them is the Canadian government? What are we buying from them? Oh, airplane parts--interesting. I wonder why that is?"

None of this is a substitute for domain knowledge, of course. I am lucky to work with a great graphics reporter and an incredibly knowledgeable editor, the combination of which often saves me from embarrassing myself by "discovering" stories in the data that are better explained by external factors. It is very easy to see an anomaly, such as the high level of funding in New Mexico from the Department of Energy, and begin to speculate wildly, while someone with a little more knowledge would immediately know why it's so (in this case, the DoE controls funding for nuclear weapons, including the Los Alamos research lab in New Mexico).

Performing journalism with large datasets is therefore a three-fold problem. First, it's difficult to prepare and process. Second, it's tough to investigate without being overwhelmed. And finally, the sheer size of the data makes false patterns easier to find, requiring extra care and vigilance. I complain a lot about the general state of data journalism education, but this kind of exercise shows why it's a legitimately challenging mix of journalism and raw technical hackery. If I'm having trouble getting good results from sources with this kind of scale, and I'm a little obsessed with it, what's the chance that the average, fresh-out-of-J-school graduate will be effective in a world of big, messy data?

June 22, 2011

Filed under: journalism»new_media»data_driven

Against the Grain

If I have a self-criticism of the work I'm doing at CQ, it's that I mostly make flat tools for data-excavation. We rarely set out with a narrative that we want to tell--instead, we present people with a window into a dataset and give them the opportunity to uncover their own conclusions. This is partly due to CQ's newsroom culture: I like to think we frown a bit on sensationalism here. But it is also because, to a certain extent, my team is building the kinds of interactives we would want to use. We are data-as-playground people, less data-as-theme-park.

It's also easier to create general purpose tools than it is to create a carefully-curated narrative. But that sounds less flattering.

In any case, our newest project does not buck this trend, but I think it's pretty fascinating anyway. "Against the Grain" is a browseable database of dissent on party unity votes in the House and Senate (party unity votes are defined by CQ as those votes where a majority of Republicans and a majority of Democrats took opposing sides on a bill). Go ahead, take a look at it, and then I'd like to talk about the two sides of something like this: the editorial and the technical.

The Editorial

Even when you're building a relatively straightforward data-exploration application like this one, there's still an editorial process in play. It comes through in the flow of interaction, in the filters that are made available to the user, and the items given particular emphasis by the visual design.

Inescapably, there are parallels here to the concept of "objective" journalism. People are tempted to think of data as "objective," and I guess at its most pure level it might be, but from a practical standpoint we don't ever deal with absolutely raw data. Raw data isn't useful--it has to be aggregated to have value (and boy, if there's a more perilous-but-true phrase in journalism these days than "aggregation has value," I haven't heard it). Once you start making decisions about how to combine, organize, and display your set, you've inevitably committed to an editorial viewpoint on what you want that data to mean. That's not a bad thing, but it has to be acknowledged.

Regardless, from an editorial perspective, we had a pretty specific goal with "Against the Grain." It began as an offshoot of a common print graphic using our votestudy data, but we wanted to be able to take advantage of the web's unlimited column inches. What quickly emerged as our showcase feature--what made people say "ooooh" when we talked it up in the newsroom--was to organize a given member's dissenting votes by subject code. What are the policy areas on which Member X most often breaks from the party line? Is it regulation, energy, or financial services? How are those different between parties, or between chambers? With an interactive presentation, we could even let people drill down from there into individual bills--and jump from there back out to other subject codes or specific members.

To present this process, I went with a panel-oriented navigation method, modeled on mobile interaction patterns (although, unfortunately, it still doesn't work on mobile--if anyone can tell me why the panels stack instead of floating next to each other on both Webkit and Mobile Firefox, I'd love to know). By presenting users with a series of rich menu options, while keeping the previous filters onscreen if there's space, I tried to strike a balance between query-building and giving room for exploration. Users can either start from the top and work down, by viewing the top members and exploring their dissent; from the bottom up, by viewing the most contentious votes and seeing who split from the party; or somewhere in the middle, by filtering the two main views through a vote's subject code.

We succeeded, I think, in giving people the ability to look at patterns of dissent at a member and subject level, but there's more that could be done. Congressional voting is CQ's raison d'etre, and we store a mind-boggling amount of legislative information that could be exploited. I'd like to add arbitrary member lookup, so people could find their own senator or representative. And I think it might be interesting to slice dissent by vote type--to see if there's a stage in the legislative process where discipline is particularly low or high.

So sure, now that we've got this foundation, there are lots of stories we'd like it to handle, and certain views that seem clunkier than necessary. It's certainly got its flaws and its oddities. But on the other hand, this is a way of browsing through CQ's vote database that nobody outside of CQ--and hardly anyone inside--has ever had before. Whatever its limitations, it enables people to answer questions they couldn't have asked prior to its creation. That makes me happy, because I think a certain portion of my job is simply to push the organization forward in terms of what we consider possible.

So with that out of the way, how did I do it?

The Technical

"Against the Grain" is probably the biggest JavaScript application I've written to date. It's certainly the best-written--our live election night interactive might have been bigger, but it was a mess of display code and XML parsing. With this project, I wanted to stop writing JavaScript as if it was the poor man's ActionScript (even if it is), and really engage on its own peculiar terms: closures, prototypal inheritance, and all.

I also wanted to write an application that would be maintainable and extensible, so at first I gave Backbone.js a shot. Backbone is a Model-View-Controller library of the type that's been all the rage with the startup hipster crowd, particularly those who use obstinately-MVC frameworks like Ruby on Rails. I've always thought that MVC--like most design patterns--feels like a desperate attempt to convert common sense into jargon, but the basic goal of it seemed admirable: to separate display code from internal logic, so that your code remains clean and abstracted from its own presentation.

Long story short, Backbone seems designed to be completely incomprehensible to someone who hasn't been writing formal MVC applications before. The documentation is terrible, there's no error reporting to speak of, and the sample application is next to useless. I tried to figure it out for a couple of hours, then ended up coding my own display/data layer. But it gave me a conceptual model to aim for, and I did use Backbone's underlying collections library, Underscore.js, to handle some of the filtering and sorting duties, so it wasn't a total loss.

One feature I appreciated in Backbone was the templating it inherits from Underscore (and which they got in turn from jQuery's John Resig). It takes advantage of the fact that browsers will ignore the contents of <script> tags with a type set to something other than "text/javascript"--if you set it to, say, "text/html" or "template," you can put arbitrary HTML in there. I created a version with Mustache-style support for replacing tags from an optional hash, and it made populating my panels a lot easier. Instead of manually searching for <span> IDs and replacing them in a JavaScript soup, I could simply pass my data objects to the template and have panels populated automatically. Most of the vote detail display is done this way.
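A simplified stand-in for that kind of micro-template--not the interactive's actual code--looks something like this:

```javascript
// Pull markup out of a <script> block the browser ignored (because its type
// isn't "text/javascript") and fill in {{tags}} from a hash.
//
// <script type="template" id="vote-detail">
//   <h2>{{title}}</h2>
//   <p>{{date}} &mdash; {{result}}</p>
// </script>

function template(id, data) {
  var source = document.getElementById(id).innerHTML;
  return source.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return (key in data) ? data[key] : "";
  });
}

// populate a panel without hunting down individual <span> IDs
document.getElementById("panel").innerHTML = template("vote-detail", {
  title: "Example Vote",
  date: "Jan. 1, 2011",
  result: "Passed"
});
```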

I also wanted to implement some kind of inheritance to simplify my code. After all, each panel in the interactive shares a lot of functionality: they're basically all lists, most of them have a cascading "close" button, and they trigger new panels of information based on interaction. Panels are managed by a (wait for it...) PanelManager singleton that handles adding, removing, and positioning them within the viewport. The panels themselves take care of instantiating and populating their descendants, but in future versions I'd like to move that into the PanelManager as well and trigger it using custom events.
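Stripped way down--and as illustration, not the real interactive's code--the arrangement looks something like this:

```javascript
// A bare-bones PanelManager: panels register themselves, and closing one
// also closes everything opened after it. Layout logic is simplified.
var PanelManager = (function () {
  var panels = [];
  var container = document.getElementById("panels"); // assumed to exist

  return {
    add: function (panel) {
      panels.push(panel);
      container.appendChild(panel.element);
      this.layout();
    },
    remove: function (panel) {
      var index = panels.indexOf(panel);
      var closed = panels.splice(index, panels.length - index);
      closed.forEach(function (p) {
        container.removeChild(p.element);
      });
      this.layout();
    },
    layout: function () {
      // slide panels into position; the real version also handled viewport width
      panels.forEach(function (p, i) {
        p.element.style.left = (i * 220) + "px";
      });
    }
  };
})();
```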

Unfortunately, out-of-the-box JavaScript inheritance is deeply weird, and it's tangled up in the biggest flaw of the language: terrible variable scoping. I never realized how important scope is until I saw how many frustrations JavaScript's bad implementation creates (no real namespaces! overuse of the "this" keyword! closures over loop values! ARGH IT BURNS).

Scope in JavaScript is eerily like Inception: at every turn, the language drops into a leaky subcontext, except that instead of slow-motion vans and antigravity hotels and Leonardo DiCaprio's dead wife, every level change is a new function scope. With each closure, the meaning of the "this" keyword changes to something different (often to something ridiculous like the Window object), a tendency worsened in a functional library like Underscore. In ActionScript, the use of well-defined Event objects and real namespaces meant I'd never had trouble untangling scope from itself, but in JavaScript it was a major source of bugs. In the end I found it helpful, in any function that uses "this" (read: practically everything you'll write in JavaScript), to immediately cache it in another variable and then only use that variable if possible, so that even inside callbacks and anonymous functions I could still reliably refer to the parent scope.
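In practice the pattern is just this (assuming jQuery and Underscore are loaded; the Panel object is a made-up example, not the interactive's actual class):

```javascript
// Cache "this" in a local variable so callbacks can still reach the object.
function Panel(element) {
  this.element = element;
  this.items = [];
}

Panel.prototype.load = function (url) {
  var self = this; // capture the panel before "this" gets reassigned
  $.getJSON(url, function (data) {
    // inside this callback, "this" is no longer the panel
    self.items = data.items;
    self.render();
  });
};

Panel.prototype.render = function () {
  var self = this;
  _.each(this.items, function (item) {
    self.element.appendChild(document.createTextNode(item.name));
  });
};
```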

After this experience, I still like JavaScript, but some of the shine has worn off. The language has some incredibly powerful features, particularly its first-class functions, that the community uses to paper over the huge gaps in its design. Like Lisp, it's a small language that everyone can extend--and like Lisp, the downside is that everyone has to do so in order to get anything done. The result is a million non-standard libraries re-implementing basic necessities like classes and dependencies, and no sign that we'll ever get those gaps filled in the language itself. Like it or not, we're largely stuck with JavaScript, and I can't quite be thrilled about that.

Conclusions

This has been a long post, so I'll try to wrap up quickly. I learned a lot creating "Against the Grain," not all of it technical. I'm intrigued by the way these kinds of interactives fit into our wider concept of journalism: by operating less as story presentations and more as tools, do they represent an abandonment of narrative, of expertise, or even a kind of "sponsored" citizen journalism? Is their appearance of transparency and neutrality dangerous or even deceptive? And is that really any less true of traditional journalism, which has seen its fair share of abused "objectivity" over the years?

I don't know the answers to those questions. We're still figuring them out as an industry. I do believe that an important part of data journalism in the future is transparency of methodology, possibly incorporating open source. After all, this style of interactive is (obviously, given the verbosity on display above) increasingly complex and difficult for laymen to understand. Some way for the public to check our math is important, and open source may offer that. At the same time, the role of the journalist is to understand the dataset, including its limitations and possible misuses, and there is no technological fix for that. Yet.

April 26, 2011

Filed under: journalism»new_media»data_driven

Structural Adjustment

Here are a few challenges I've started tossing out to prospective new hires, all of which are based on common, real-world multimedia tasks:

  • Pretend you're building a live election graphic. You need to be able to show the new state-by-state rosters, as well as the impact on each committee. Also, you need to be able to show an updated list of current members who have lost their races for reelection. You'll get this data in a series of XML feeds, but you have the ability to dictate their format. How do you want them structured?
  • You have a JSON array of objects detailing state GDP data (nominal, real, and delta) over the last 40 years. Using that data, give me, for each state, a list of the years in which it experienced positive GDP growth. (A sketch of one possible answer follows this list.)
  • The newsroom has produced a spreadsheet of member voting scores. You have a separate XML file of member biographical data--i.e., name, seat, date of birth, party affiliation, etc. How would you transform the spreadsheet into a machine-readable structure that can be matched against the biodata list?
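For the second challenge, here's a minimal sketch of one possible answer, assuming hypothetical field names (state, year, delta) for whatever the real feed would use:

```javascript
// Group years of positive GDP growth by state. Field names are stand-ins.
function positiveGrowthYears(gdpData) {
  var byState = {};
  gdpData.forEach(function (row) {
    if (row.delta > 0) {
      if (!byState[row.state]) byState[row.state] = [];
      byState[row.state].push(row.year);
    }
  });
  // sort each state's list so the years read in order
  for (var state in byState) {
    byState[state].sort(function (a, b) { return a - b; });
  }
  return byState;
}

// positiveGrowthYears([{state: "WA", year: 2004, delta: 2.1}, ...])
// -> { WA: [2004, ...], ... }
```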
What do these have in common? They're aimed at ferreting out the process by which people deal with datasets, not asking them to demonstrate knowledge of a specific programming language or library. I'm increasingly convinced, as we have tried to hire people to do data journalism at CQ, that the difference between a mediocre coder and a good one is that the good ones start from quality data structures and build their program outward, instead of starting with program flow and tacking data on like decorations on a Christmas tree.

I learned this the hard way over the last four years. When I started working with ActionScript in 2007, it was the first serious programming I'd done since college, not counting some playful Excel macros. Consequently I had a lot of bad habits: I left a lot of variables in the global scope, stored data in ad-hoc parallel arrays, and embedded a lot of "magic number" constants in my code. Some of those are easy to correct, but the shift in thinking from "write a program that does X" to "design data structure Y, then write a program to operate on it" is surprisingly profound. And yet it makes a huge difference: when we created the Economic Indicators project, the most problematic areas in our code were the ones where the underlying data structures were badly-designed (or at least, in the case of the housing statistics, organized in a completely different fashion from the other tables).

Oddly enough, I think what caused the biggest change in my thinking was learning to use jQuery. Much like other query languages, the result of almost any jQuery API call is a collection of zero or more objects. You can iterate over these as if they were arrays, but the library provides a lot of functional constructs (each(), map(), filter(), etc.) that encourage users to think more in terms of generic operations over units of data (the fact that those units are expressed in JavaScript's lovely hashmap-like dynamic objects is just a bonus).

I suspect that data-orientation makes for better programmers in any field (and I'm not alone), but I'm particularly interested in it on my team because what we do is essentially to turn large chunks of data (governmental or otherwise) into stories. From a broad philosophical perspective, I want my team thinking about what can be extracted and explained via data, and not how to optimize their loops. Data first, code second--and if concentrating on the former improves the latter, so much for the better.

February 9, 2011

Filed under: journalism»industry

Store Policy

I have argued vociferously in the recent past that the journalistic craze for native clients--an enthusiasm seemingly rekindled by Rupert Murdoch's ridiculous Daily iPad publication--is a bad idea from a technical standpoint. They're clumsy, require a lot of platform-specific work, and they're not exactly burning up the newsstands. It continues to amaze me that, despite the ubiquity of Webkit as a capable cross-platform hypertext runtime, people are still excited about recreating the Multimedia CD-ROM.

But beyond the technical barriers, publishing your news in a walled-garden application market raises some serious questions of professional journalistic ethics. Curation (read: a mandatory, arbitrary approval process) exacerbates the dilemma, but even relatively open app stores are, in my opinion, on shaky ground. These problems emerge along three axes: accountability, editorial independence, and (perhaps most importantly) the ideology of good journalism.

Accountability

One of the hallmarks of the modern web is intercommunication based on a set of simple, high-level protocols. From a system of URLs and HTTP, a whole Internet culture of blog commentary, trackbacks, Rickrolls, mashups, and embedded video emerged. Most recently, Twitter created a new version of the linkblog (and added a layer of indirection via link shortening). For a journalist, this should be exciting: it's a rich soup of comments and community swarming around your work. More importantly, it's a constant source of accountability. What, you thought corrections went away when we went online?

But that whole ecosystem of viral sharing and review gets disconnected when you lock your content into a native client. At least on Android, you can send content to other applications via the powerful Intent mechanism (the iOS situation is much less well-constructed, and I have no idea how Windows Mobile now handles this), but even that has unpredictable results--what are you sharing, after all? A URL to the web version? The article text? Can the user choose? And when it comes to submitting corrections or feedback, native apps default to difficult: of the five major news clients I tried on Android this morning (NPR, CBS, Fox, New York Times, and USA Today), not one of them had an in-app way to submit a correction. Regret the error, indeed.

Editorial Independence

Accountability is an important part of professional ethics in journalism. But so is editorial independence, and in both cases the perception of misbehavior can be even more damaging than any actual foul play. The issue as I see it is: how independent can you be, if your software must be approved during each update by a single, fickle gatekeeper?

As Dan Gillmor points out, selling journalism through an app store is a partnership, and that raises serious questions of independence. Are news organizations less likely to be critical of Google, Apple, and Microsoft when their access to the platform could be pulled from the virtual shelves at any time? Do the content restrictions on both mobile app stores change the stories that news organizations are likely to publish? Will app stores stand behind journalists operating under governments with little press freedom, or will they buckle to a "terms of service" attack? On the web, a paper or media outlet can largely write whatever it wants: physical distribution is so diverse that no single retail entity can really shut you down. But in an app store, you publish at the pleasure of the platform owner--terms subject to revision. That kind of scenario should give journalists pause.

Ideology and Solidarity

Organizing the news industry is like herding cats: it's a cutthroat business traditionally fueled by intra-city competition, and it naturally attracts argumentative, over-critical personality types. But it's time newsrooms started to stick up for the basic ideology of journalism. That means that when the owners of an app store start censoring applications based on content, as happened to political cartoonist Mark Fiore or the Eucalyptus e-book reader, we need to make it clear that we consider that behavior unacceptable--by pulling our apps, refusing to partner for big launch events, and pursuing alternative publication channels.

There's a reason that freedom of the press sits next to speech, religion, and assembly in the First Amendment. It's an important part of the feedback loop between people, events, and government in a democracy. And journalists have traditionally been pretty hardcore about freedom of the press: see, for example, the lawsuit over the publication of the Pentagon Papers, or the very existence of Reporters Without Borders. If the App Store were a country, its ranking for press freedom would be middling at best, and newspapers wouldn't be nearly as eager to jump into bed with it. The fact that these curated markets retain widespread publication support, despite their history of censorship and instability, is a shame for the industry as a whole.

Act, Don't React

Journalists have a responsibility to react against censorship when they see it, but we should also consider going on the offensive. While I don't actually think native news clients make sense when compared to a good mobile web experience, it is still possible to minimize or eliminate some of the ethical concerns they raise, through careful design and developer lobbying.

While it's unlikely that a native application could easily offer the same kind of open engagement as a website, designers can at least address accountability. News clients should offer a way to either leave comments or send corrections to the editors entirely within the application. A side effect of this would be cross-industry innovation in computerized correction tracking and display, something that few publications are really taking advantage of right now.

Simultaneously, journalists should be using their access to tech companies (who love to use newspapers and networks as keynote demos) to push for better policies. This includes more open, uncensored app stores, but it also means pushing for tools that make web apps first-class citizens in an app-centric world, such as:

  • JavaScript APIs for creating bookmarks on device homescreens (with, of course, user confirmation), so that a web application can be "installed" just like native code.
  • Support for "position: fixed" in mobile browsers. It's ridiculous that we still can't create toolbars without using costly DOM manipulation (there's a sketch of the workaround after this list).
  • Better touch events. As PPK documents, the current state of touch events in mobile browsers is in real need of standardization.
There are other items that would be nice to see--access to accelerometer or camera sensors, for example--but these three are what most keep the browser from competing fairly. It's in the best interests of journalists with access to platform developers to push for these improvements for the rest of us. Otherwise, they're complicit in the unethical behaviors of the application stores that they're propping up.
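
To put some code behind that second complaint: without working fixed positioning, a mobile "toolbar" ends up being something like the sketch below (the element id and styling are hypothetical), chasing the viewport from a scroll handler instead of letting the browser pin it with one line of CSS.

```javascript
// The workaround: an absolutely-positioned bar that has to be re-anchored
// to the top of the visible area on every scroll event.
var toolbar = document.getElementById("toolbar"); // hypothetical element
window.addEventListener("scroll", function () {
  toolbar.style.top = window.pageYOffset + "px";
}, false);

// What proper "position: fixed" support would make unnecessary:
//   #toolbar { position: fixed; top: 0; }
```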

We have so many interesting debates surrounding the business of American journalism--paywalls, ad revenue, user-generated content--can't we just call this one off? The HTML document, originally designed to publish academic papers, may be a frustrating technology for rich UIs, but it's perfectly suited for the task of presenting the news. It's as close as you can get to write-once-run-anywhere, making it the cheapest and most efficient option for mobile development. And it's ethically sound! Isn't it time we stood up for ourselves, and as an industry backed a platform that doesn't leave us feeling like we've sold out our principles for short-term gains? Come on, folks: let's leave that to the op-ed writers.
