
April 16, 2015

Filed under: journalism»new_media»data_driven

Empty-handed

We have sent several people from the newsroom, including myself, to journalism conferences over the last few months. Most conferences are about 50% inspirational and 50% crap (tilted heavily crap-wards in the keynotes), but you meet good people and you get to see the nuts and bolts behind the scenes of some of the best interactive news stories published.

It's natural to come back from a conference with a kind of inferiority complex, and equally easy to conclude that we're not making similar rich presentations because we don't have the cool tools that those other (richer, more tech-savvy) newsrooms have. We too, according to this train of thought, need to be coding elaborate visualization generators and complicated new CMS features — or, as Ryan Pitts from Mozilla said to me last weekend at the Society for News Design workshop, "let's not rest until every paper in the country has built its own charting application."

I think better newsroom tech is important, but let's play devil's advocate for a bit with an unpopular hypothesis: developing tools for the editors and reporters at your newspaper is a waste of your time, and a distraction from the journalism you should be doing.

Why a waste of your time? Partly because newsroom tools get a lot less uptake than you probably think they do (certainly less than we'd hope they would). I've written a lot of internal applications in my time, and they've never been particularly popular, because most reporters and editors don't care. They're too busy doing journalism to use your solution (which is as it should be), and they are probably not big on technology anyway (I know a lot of reporters who can't use Excel, which pains me greatly). Creating tools for reporters is, most of the time, attacking the problem at the wrong point.

For many newsrooms, that wasted time will end up being twice as expensive, because development resources are scarce and UI is hard. Building a polished, feature-filled chart generator that the average journalist can use will take at least a couple of programmer months, which is time those developers aren't working on stories and visualizations that readers want. Are you willing to sacrifice that time, especially if you can't guarantee that it'll actually get used? That's a pretty big gamble, unless you have the resources of the New York Times. You're probably better off just going with an off-the-shelf package, or even finding a simpler solution.

I don't think it's a coincidence that, for all the noise people make about the new data journalism startups like Vox and FiveThirtyEight, 99% of their chart output does not come from a fancy tool or a complex interactive: they post JPEGs. And that's fine! No actual reader has ever complained about having to look at a picture of a graph instead of a souped-up vector rendering (in Vox's case, they're too busy complaining that the graph was stolen from someone else, but that's another story). JPEG is a perfectly decent solution when it comes to simply telling the story across the entire web platform — in fact, it's a great embodiment of "do the simplest thing that works," which has served me well as a guiding motto in life.

So, as a rule of thumb: don't build charting libraries. Don't build general-purpose databases. Don't build drag-and-drop slideshows. Leave these things to other people, who have time and energy to build them for a living. Does this mean you shouldn't create tools at all? No, but the target audience should be you, the news developer, and other semi-technical newsroom staff like the web producers. In other words, make technology for the people who will actually use it, and can handle something that's not polished to a mirror sheen.

I believe this is the big strength of web components, and one reason I'm so bullish on them at the Seattle Times. They're not glossy, end-user products, but they strike a great balance between power and accessibility for people with a little technical skill, and they're very fast to build. If the day comes when we do choose to invest in a slicker newsroom app, we can still leverage them, the same way that the NYT's fancy charting tools are all built on the developer-oriented D3 library.

In the meantime, while I would consider an anti-tool stance a "strong opinion weakly held," I think there's a workable philosophy there. These days, I feel two concerns very strongly (outside of my normal news/editorial production, of course): how to get the newsroom to make use of our skills, and how to best use the limited developer resources we have. A "no tools" guideline is not an absolute rule, but it serves as a useful heuristic to weed out the kinds of projects that might otherwise take over our time.

November 13, 2012

Filed under: journalism»new_media»data_driven

Nate Silver: Not a Witch

In retrospect, Joe Scarborough must be pretty thrilled he never took Nate Silver's $1,000 bet on the outcome of the election. Silver's statistical model went 50 for 50 states, and came close to the precise number of electoral votes, even as Scarborough insisted that the presidential campaign was a tossup. In doing so, Silver became an inadvertent hero to people who (unlike Joe Scarborough) are not bad at math, inspiring a New Yorker humor article and a Twitter joke tag ("#drunknatesilver", who only attends the 50% of weddings that don't end in divorce).

There are two things that are interesting about this. The first is the somewhat amusing fact that Silver's statistical model, strictly speaking, isn't actually that sophisticated. That's not to take anything away from the hard work and mathematical skills it took to create that model, or (probably more importantly) Silver's ability to write clearly and intelligently about it. I couldn't do it, myself. But when it all comes down to it, FiveThirtyEight's methodology is just to track state polls, compare them to past results, and organize the results (you can find a detailed--and quite readable--explanation of the entire methodology here). If nobody had done this before, it's not because the idea was an unthinkable revolution or the result of novel information technology. It's because they couldn't be bothered to figure out how.

The second interesting thing about Silver's predictions is how incredibly hard the pundits railed against them. Scarborough was most visible, but Politico's Dylan Byers took a few potshots himself, calling Silver a possible "one-term celebrity." You can almost smell sour grapes rising from Byers' piece, which presents on the one side Silver's math, and on the other side David Brooks. It says a lot about Byers that he quoted Brooks, the rodent-like New York Times columnist best known for a series of empty-headed books about "the American character," instead of contacting a single statistician for comment.

Why was Politico so keen on pulling down Silver's model? Andrew Beaujon at Poynter wrote that the difference was in journalism's distaste for the unknown--that reporters hate writing about things they can't know. There's an element of truth to that sentiment, but in this case I suspect it's exactly wrong: Politico attacked because its business model is based entirely on the cultivation of uncertainty. A world where authority derives from more than the loudest megaphone is a bad world for their business model.

Let's review, just for a second, how Politico (and a whole host of online, right-leaning opinion journals that followed in its wake) actually work. The oft-repeated motto, coming from Gabriel Sherman's 2009 profile, is "win the morning"--meaning, Politico wants to break controversial stories early in order to work its brand into the cable and blog chatter for the rest of the day. Everything else--accuracy, depth, other journalistic virtues--comes second to speed and infectiousness.

To that end, a lot of people cite Mike Allen's Playbook, a gossipy e-mail compendium of aggregated fluff and nonsense, as the exemplar of the Politico model. Every morning and throughout the day, the paper unleashes a steady stream of short, insider-y stories. It's a rumor mill, in other words, one that's interested in politics over policy--but most of all, it's interested in Politico. Because if these stories get people talking, Politico will be mentioned, and that increases the brand's value to advertisers and sources.

(There is, by the way, no small amount of irony in the news industry's complaints about "aggregators" online, given the long presence of newsletters like Playbook around DC. Everyone has one of these mobile-friendly link factories, and has for years. CQ's is Behind the Lines, and when I first started there it was sent to editors as a monstrous Word document, filled with blue-underlined hyperlink text, early every morning for rebroadcast. Remember this the next time some publisher starts complaining about Gawker "stealing" their stories.)

Politico's motivations are blatant, but they're not substantially different from any number of talking heads on cable news, which has a 24-hour news hole to fill. Just as the paper wants people talking about Politico to keep revenue flowing, pundits want to be branded as commentators on every topic under the sun so they can stay in the public eye as much as possible. In a sane universe, David Brooks wouldn't be trusted to run a frozen yoghurt stand, because he knows nothing about anything. Expertise--the idea that speaking knowledgeably requires study, sometimes in non-trivial amounts--is a threat to this entire industry (probably not a serious threat, but then they're not known for underreaction).

Election journalism has been a godsend to punditry precisely because it is so chaotic: who can say what will happen, unless you are a Very Important Person with a Trusted Name and a whole host of connections? Accountability has not traditionally been a concern, and because elections hinge on any number of complicated policy questions, this means that nothing is out of bounds for the political pundit. No matter how many times William Kristol or Megan McArdle are wrong on a wide range of important issues, they will never be fired (let's not even start on poor Tom Friedman, a man whose career consists of endlessly sorting the wheat from the chaff and then throwing away the wheat). But FiveThirtyEight undermines that thought process, by saying that there is a level of rigor to politics, that you can be wrong, and that accountability is important.

The optimistic take on this disruption is, as Nieman Journalism Lab's Jonathan Stray argues, that specialist experts will become more common in journalism, including in horse race election coverage. I'm not optimistic, personally, because I think the current state of political commentary owes as much to industry nepotism as it does to public opinion, and because I think political data is prone to intentional obfuscation. But it's a nice thought.

The real positive takeaway, I think, is that Brooks, Byers, Scarborough, and other people of little substance took such a strong public stance against Silver. By all means, let's have an open conversation about who was wrong in predicting this election--and whose track record is better. Let's talk about how often Silver is right, and how often that compares to everyone calling him (as Brooks did) "a wizard" whose predictions were "not possible." Let's talk about accountability, and expertise, and whether we should expect better. I suspect Silver's happy to have that talk. Are his accusers?

October 12, 2011

Filed under: journalism»new_media»data_driven

The Big Contract

Recently my team worked on an interactive for a CQ Weekly Outlook on contracts. Government contracting is, of course, a big deal in these economic times, and the government spent $538 billion on contractors in FY2010. We wanted to show people where the money went.

I don't think this is one of our best interactives, to be honest. But it did raise some interesting challenges for us, simply because the data set was so huge: the basic table of all government contracts for a single fiscal year from USA Spending is around 3.5 million rows, or about 2.5GB of CSV. That's a lot of data for the basic version: the complete set (which includes classification details for each contract, such as whether it goes to minority-owned companies) is far larger. When the input files are that big, forget querying them: just getting them into the database becomes a production.

My first attempt was to write a quick PHP script that looped through the file and loaded it into the table. This ended up taking literally ten or more hours for each file--we'd never get it done in time. So I went back to the drawing board and tried using PostgreSQL's COPY command. COPY is very fast, but the destination has to match the source exactly--you can't skip columns--which is a pain, especially when the table in question has so many columns.

To avoid hand-typing 40-plus columns for the table definition, I used a combination of command line tools, mostly head and sed, to dump the header line of the CSV into a text file, and then added enough SQL around it to make a working CREATE TABLE statement, with every column typed as text. With a staging table in place, COPY loaded millions of rows in just a few minutes, and then I converted a few necessary columns to more appropriate formats, such as the dollar amounts and the dates. We did a second pass to clean up the data a little (correcting misspelled or inconsistent company names, for example).
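
For what it's worth, here's a rough sketch of that loading strategy as a single Node.js script using the node-postgres client, rather than the shell tools and psql session I actually used; the file paths, table name, and column names are invented for illustration.

    var fs = require("fs");
    var pg = require("pg");

    var client = new pg.Client("postgres://localhost/contracts");

    client.connect(function(err) {
      if (err) throw err;

      // Grab just the CSV header (don't read 2.5GB into memory) and
      // declare every column as text for the staging table.
      var fd = fs.openSync("usaspending_fy2010.csv", "r");
      var buffer = new Buffer(65536);
      fs.readSync(fd, buffer, 0, buffer.length, 0);
      var header = buffer.toString("utf8").split("\n")[0];
      var columns = header.split(",").map(function(name) {
        return '"' + name.trim() + '" text';
      });
      var create = "CREATE TABLE contracts_staging (" + columns.join(", ") + ")";

      // COPY with a server-side path; COPY FROM STDIN works if the file
      // doesn't live on the database machine.
      var copy = "COPY contracts_staging FROM '/data/usaspending_fy2010.csv' " +
        "WITH (FORMAT csv, HEADER true)";

      client.query(create, function(err) {
        if (err) throw err;
        client.query(copy, function(err) {
          if (err) throw err;
          // Convert only the columns we actually query on to real types.
          client.query(
            "ALTER TABLE contracts_staging " +
            "ALTER COLUMN dollarsobligated TYPE numeric " +
            "USING dollarsobligated::numeric",
            function() { client.end(); }
          );
        });
      });
    });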

Once we had the database in place, and added some indexes so that it wouldn't spin its wheels forever, we could start to pull some useful data, like the state-by-state totals for a basic map. It's not surprising that the beltway bandits in DC, Maryland, and Virginia pull an incredible portion of contracting money--I had to clamp the maximum values on the map to keep DC's roughly $42,000 contract dollars per resident from blowing out the rest of the country--but there are some other interesting high-total states, such as New Mexico and Connecticut.

Now we wanted to see where the money went inside each state: what were the top five companies, funding agencies, and product codes? My initial attempts, using a series of subqueries and count() functions, were tying up the server with nothing to show for it, so I tossed the problem over to another team member and went back to working on the map, figuring I should at least have something to show for our work. He came back with a great solution--PostgreSQL's PARTITION BY window clause, which splits the result set into groups (one per state, in our case), combined with the rank() function for filtering--and we were able to find the top categories easily. A variation on that template gave us per-agency totals and top fives.
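
If you haven't used window functions before, the query looked roughly like the following (continuing with the hypothetical client and staging table from the sketch above; the column names are still placeholders): rank recipients within each state by total dollars, then keep the top five.

    var topFive =
      "SELECT state, vendorname, total FROM (" +
      "  SELECT state, vendorname, SUM(dollarsobligated) AS total," +
      "         rank() OVER (PARTITION BY state" +
      "                      ORDER BY SUM(dollarsobligated) DESC) AS r" +
      "  FROM contracts_staging" +
      "  GROUP BY state, vendorname" +
      ") ranked WHERE r <= 5 ORDER BY state, r";

    client.query(topFive, function(err, result) {
      if (err) throw err;
      console.log(result.rows);
    });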

There are a couple of interesting lessons to be learned from this experience, the most obvious of which is the challenge of doing journalism at scale. There are certain stories, particularly on huge subjects like the federal budget, that are too big to investigate feasibly without engaging in computer-assisted reporting, and yet they require skills beyond the usual spreadsheet-juggling.

I don't think that's going away. In fact, I think scale may be the defining quality of the modern information age. A computer is just a machine for performing simple operations at incredibly high speeds, to the point where they seem truly miraculous--changing thousands (or millions) of pixels each second in response to input, for example. The Internet expands that scale further, to millions of people and computers interacting with each other. Likewise, our reach has grown with our grasp. It seems obvious to me that our governance and commerce have become far more complex as a result of our ability to track and interact with huge quantities of data, from contracting to high-speed trading to patent abuse. Journalists who want to cover these topics are going to need to be able to explore them at scale, or be reliant on others who can do so.

Which brings us to the second takeaway from this project: in computer-assisted journalism, speed matters. If hours are required to return a query, asking questions becomes too expensive to waste on undirected investigation, and fact-checking becomes similarly burdensome. Getting answers needs to be quick, so that you can easily continue your train of thought: "Who are the top foreign contractors? One of them is the Canadian government? What are we buying from them? Oh, airplane parts--interesting. I wonder why that is?"

None of this is a substitute for domain knowledge, of course. I am lucky to work with a great graphics reporter and an incredibly knowledgeable editor, the combination of whom often saves me from embarrassing myself by "discovering" stories in the data that are better explained by external factors. It is very easy to see an anomaly, such as the high level of funding in New Mexico from the Department of Energy, and begin to speculate wildly, while someone with a little more knowledge would immediately know why it's so (in this case, the DoE controls funding for nuclear weapons, including the Los Alamos research lab in New Mexico).

Performing journalism with large datasets is therefore a three-fold problem. First, it's difficult to prepare and process. Second, it's tough to investigate without being overwhelmed. And finally, the sheer size of the data makes false patterns easier to find, requiring extra care and vigilance. I complain a lot about the general state of data journalism education, but this kind of exercise shows why it's a legitimately challenging mix of journalism and raw technical hackery. If I'm having trouble getting good results from sources with this kind of scale, and I'm a little obsessed with it, what's the chance that the average, fresh-out-of-J-school graduate will be effective in a world of big, messy data?

June 22, 2011

Filed under: journalism»new_media»data_driven

Against the Grain

If I have a self-criticism of the work I'm doing at CQ, it's that I mostly make flat tools for data-excavation. We rarely set out with a narrative that we want to tell--instead, we present people with a window into a dataset and give them the opportunity to uncover their own conclusions. This is partly due to CQ's newsroom culture: I like to think we frown a bit on sensationalism here. But it is also because, to a certain extent, my team is building the kinds of interactives we would want to use. We are data-as-playground people, less data-as-theme-park.

It's also easier to create general purpose tools than it is to create a carefully-curated narrative. But that sounds less flattering.

In any case, our newest project does not buck this trend, but I think it's pretty fascinating anyway. "Against the Grain" is a browseable database of dissent on party unity votes in the House and Senate (party unity votes are defined by CQ as those votes where a majority of Republicans and a majority of Democrats took opposing sides on a bill). Go ahead, take a look at it, and then I'd like to talk about the two sides of something like this: the editorial and the technical.

The Editorial

Even when you're building a relatively straightforward data-exploration application like this one, there's still an editorial process in play. It comes through in the flow of interaction, in the filters that are made available to the user, and the items given particular emphasis by the visual design.

Inescapably, there are parallels here to the concept of "objective" journalism. People are tempted to think of data as "objective," and I guess at its most pure level it might be, but from a practical standpoint we don't ever deal with absolutely raw data. Raw data isn't useful--it has to be aggregated to have value (and boy, if there's a more perilous-but-true phrase in journalism these days than "aggregation has value," I haven't heard it). Once you start making decisions about how to combine, organize, and display your set, you've inevitably committed to an editorial viewpoint on what you want that data to mean. That's not a bad thing, but it has to be acknowledged.

Regardless, from an editorial perspective, we had a pretty specific goal with "Against the Grain." It began as an offshoot of a common print graphic using our vote study data, but we wanted to be able to take advantage of the web's unlimited column inches. What quickly emerged as our showcase feature--what made people say "ooooh" when we talked it up in the newsroom--was to organize a given member's dissenting votes by subject code. What are the policy areas on which Member X most often breaks from the party line? Is it regulation, energy, or financial services? How are those different between parties, or between chambers? With an interactive presentation, we could even let people drill down from there into individual bills--and jump from there back out to other subject codes or specific members.

To present this process, I went with a panel-oriented navigation method, modeled on mobile interaction patterns (although, unfortunately, it still doesn't work on mobile--if anyone can tell me why the panels stack instead of floating next to each other on both Webkit and Mobile Firefox, I'd love to know). By presenting users with a series of rich menu options, while keeping the previous filters onscreen if there's space, I tried to strike a balance between query-building and giving room for exploration. Users can either start from the top and work down, by viewing the top members and exploring their dissent; from the bottom up, by viewing the most contentious votes and seeing who split from the party; or somewhere in the middle, by filtering the two main views through a vote's subject code.

We succeeded, I think, in giving people the ability to look at patterns of dissent at a member and subject level, but there's more that could be done. Congressional voting is CQ's raison d'etre, and we store a mind-boggling amount of legislative information that could be exploited. I'd like to add arbitrary member lookup, so people could find their own senator or representative. And I think it might be interesting to slice dissent by vote type--to see if there's a stage in the legislative process where discipline is particularly low or high.

So sure, now that we've got this foundation, there are lots of stories we'd like it to handle, and certain views that seem clunkier than necessary. It's certainly got its flaws and its oddities. But on the other hand, this is a way of browsing through CQ's vote database that nobody outside of CQ (and most of the people inside) has ever had before. Whatever its limitations, it enables people to answer questions they couldn't have asked prior to its creation. That makes me happy, because I think a certain portion of my job is simply to push the organization forward in terms of what we consider possible.

So with that out of the way, how did I do it?

The Technical

"Against the Grain" is probably the biggest JavaScript application I've written to date. It's certainly the best-written--our live election night interactive might have been bigger, but it was a mess of display code and XML parsing. With this project, I wanted to stop writing JavaScript as if it was the poor man's ActionScript (even if it is), and really engage on its own peculiar terms: closures, prototypal inheritance, and all.

I also wanted to write an application that would be maintainable and extensible, so at first I gave Backbone.js a shot. Backbone is a Model-View-Controller library of the type that's been all the rage with the startup hipster crowd, particularly those who use obstinately MVC frameworks like Ruby on Rails. I've always thought that MVC--like most design patterns--feels like a desperate attempt to convert common sense into jargon, but the basic goal of it seemed admirable: to separate display code from internal logic, so that your code remains clean and abstracted from its own presentation.

Long story short, Backbone seems designed to be completely incomprehensible to someone who hasn't been writing formal MVC applications before. The documentation is terrible, there's no error reporting to speak of, and the sample application is next to useless. I tried to figure it out for a couple of hours, then ended up coding my own display/data layer. But it gave me a conceptual model to aim for, and I did use Backbone's underlying collections library, Underscore.js, to handle some of the filtering and sorting duties, so it wasn't a total loss.

One feature I appreciated in Backbone was the templating it inherits from Underscore (and which they got in turn from jQuery's John Resig). It takes advantage of the fact that browsers will ignore the contents of <script> tags with a type set to something other than "text/javascript"--if you set it to, say, "text/html" or "template," you can put arbitrary HTML in there. I created a version with Mustache-style support for replacing tags from an optional hash, and it made populating my panels a lot easier. Instead of manually searching for <span> IDs and replacing them in a JavaScript soup, I could simply pass my data objects to the template and have panels populated automatically. Most of the vote detail display is done this way.
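
A stripped-down version of the pattern looks something like this--not our production code, and the element ID, data, and panel variable are invented for the example:

    // The markup side: a script block the browser won't execute.
    //
    //   <script type="text/html" id="vote-detail">
    //     <h3>{{bill}}</h3>
    //     <p>{{description}} ({{date}})</p>
    //   </script>

    // The JavaScript side is little more than a regex replace over the
    // tag's contents, pulling values from a data object.
    function template(id, data) {
      var source = document.getElementById(id).innerHTML;
      return source.replace(/\{\{(\w+)\}\}/g, function(match, key) {
        return (key in data) ? data[key] : "";
      });
    }

    // Populating a panel stops being a hunt for individual <span> IDs.
    panel.innerHTML = template("vote-detail", {
      bill: "H.R. 1234",
      description: "A hypothetical vote",
      date: "1/1/2011"
    });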

I also wanted to implement some kind of inheritance to simplify my code. After all, each panel in the interactive shares a lot of functionality: they're basically all lists, most of them have a cascading "close" button, and they trigger new panels of information based on interaction. Panels are managed by a (wait for it...) PanelManager singleton that handles adding, removing, and positioning them within the viewport. The panels themselves take care of instantiating and populating their descendants, but in future versions I'd like to move that into the PanelManager as well and trigger it using custom events.
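
In skeletal form--and leaving out everything interesting, with the viewport ID and pixel math as placeholders--the arrangement looks roughly like this:

    // One PanelManager owns the list of open panels and their layout.
    var PanelManager = {
      panels: [],
      add: function(panel) {
        this.panels.push(panel);
        document.getElementById("viewport").appendChild(panel.element);
        this.reposition();
      },
      remove: function(panel) {
        // Closing a panel also closes everything opened after it.
        var index = this.panels.indexOf(panel);
        if (index === -1) return;
        this.panels.splice(index).forEach(function(p) {
          p.element.parentNode.removeChild(p.element);
        });
        this.reposition();
      },
      reposition: function() {
        this.panels.forEach(function(p, i) {
          p.element.style.left = (i * 220) + "px";
        });
      }
    };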

Unfortunately, out-of-the-box JavaScript inheritance is deeply weird, and it's tangled up in the biggest flaw of the language: terrible variable scoping. I never realized how important scope is until I saw how many frustrations JavaScript's bad implementation creates (no real namespaces! overuse of the "this" keyword! closures over loop values! ARGH IT BURNS).

Scope in JavaScript is eerily like Inception: at every turn, the language drops into a leaky subcontext, except that instead of slow-motion vans and antigravity hotels and Leonardo DiCaprio's dead wife, every level change is a new function scope. With each closure, the meaning of the "this" keyword changes to something different (often to something ridiculous like the Window object), a tendency worsened in a functional library like Underscore. In ActionScript, the use of well-defined Event objects and real namespaces meant I'd never had trouble untangling scope from itself, but in JavaScript it was a major source of bugs. In the end I found it helpful, in any function that uses "this" (read: practically everything you'll write in JavaScript), to immediately cache it in another variable and then only use that variable if possible, so that even inside callbacks and anonymous functions I could still reliably refer to the parent scope.
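
The pattern in miniature, using an invented Panel type and a jQuery callback:

    Panel.prototype.load = function(url) {
      var self = this; // cache the panel; "self" survives into every closure below
      $.getJSON(url, function(data) {
        // Inside this callback, "this" is no longer the panel--but "self" is.
        self.data = data;
        self.render();
      });
    };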

After this experience, I still like JavaScript, but some of the shine has worn off. The language has some incredibly powerful features, particularly its first-class functions, that the community uses to paper over the huge gaps in its design. Like Lisp, it's a small language that everyone can extend--and like Lisp, the downside is that everyone has to do so in order to get anything done. The result is a million non-standard libraries re-implementing basic necessities like classes and dependencies, and no sign that we'll ever get those gaps filled in the language itself. Like it or not, we're largely stuck with JavaScript, and I can't quite be thrilled about that.

Conclusions

This has been a long post, so I'll try to wrap up quickly. I learned a lot creating "Against the Grain," not all of it technical. I'm intrigued by the way these kinds of interactives fit into our wider concept of journalism: by operating less as story presentations and more as tools, do they represent an abandonment of narrative, of expertise, or even a kind of "sponsored" citizen journalism? Is their appearance of transparency and neutrality dangerous or even deceptive? And is that really any less true of traditional journalism, which has seen its fair share of abused "objectivity" over the years?

I don't know the answers to those questions. We're still figuring them out as an industry. I do believe that an important part of data journalism in the future is transparency of methodology, possibly incorporating open source. After all, this style of interactive is (obviously, given the verbosity on display above) increasingly complex and difficult for laymen to understand. Some way for the public to check our math is important, and open source may offer that. At the same time, the role of the journalist is to understand the dataset, including its limitations and possible misuses, and there is no technological fix for that. Yet.

April 26, 2011

Filed under: journalism»new_media»data_driven

Structural Adjustment

Here are a few challenges I've started tossing out to prospective new hires, all of which are based on common, real-world multimedia tasks:

  • Pretend you're building a live election graphic. You need to be able to show the new state-by-state rosters, as well as the impact on each committee. Also, you need to be able to show an updated list of current members who have lost their races for reelection. You'll get this data in a series of XML feeds, but you have the ability to dictate their format. How do you want them structured?
  • You have a JSON array of objects detailing state GDP data (nominal, real, and delta) over the last 40 years. Using that data, give me, for each state, a list of the years in which it experienced positive GDP growth.
  • The newsroom has produced a spreadsheet of member voting scores. You have a separate XML file of member biographical data--i.e., name, seat, date of birth, party affiliation, etc. How would you transform the spreadsheet into a machine-readable structure that can be matched against the biodata list?

What do these have in common? They're aimed at ferreting out the process by which people deal with datasets, not asking them to demonstrate knowledge of a specific programming language or library. I'm increasingly convinced, as we have tried to hire people to do data journalism at CQ, that the difference between a mediocre coder and a good one is that the good ones start from quality data structures and build their program outward, instead of starting with program flow and tacking data on like decorations on a Christmas tree.

I learned this the hard way over the last four years. When I started working with ActionScript in 2007, it was the first serious programming I'd done since college, not counting some playful Excel macros. Consequently I had a lot of bad habits: I left a lot of variables in the global scope, stored data in ad-hoc parallel arrays, and embedded a lot of "magic number" constants in my code. Some of those are easy to correct, but the shift in thinking from "write a program that does X" to "design data structure Y, then write a program to operate on it" is surprisingly profound. And yet it makes a huge difference: when we created the Economic Indicators project, the most problematic areas in our code were the ones where the underlying data structures were badly-designed (or at least, in the case of the housing statistics, organized in a completely different fashion from the other tables).
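
To make that contrast concrete, here's a contrived before-and-after with invented data:

    // Parallel arrays force every piece of code to remember that index 1
    // in one list corresponds to index 1 in the others.
    var names = ["Smith", "Jones"];
    var parties = ["R", "D"];
    var scores = [87.5, 92.1];

    // Designing the structure first gives you one self-describing unit
    // to pass around, sort, filter, and template.
    var members = [
      { name: "Smith", party: "R", score: 87.5 },
      { name: "Jones", party: "D", score: 92.1 }
    ];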

Oddly enough, I think what caused the biggest change in my thinking was learning to use jQuery. Much like other query languages, the result of almost any jQuery API call is a collection of zero or more objects. You can iterate over these as if they were arrays, but the library provides a lot of functional constructs (each(), map(), filter(), etc.) that encourage users to think more in terms of generic operations over units of data (the fact that those units are expressed in JavaScript's lovely hashmap-like dynamic objects is just a bonus).
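
That mindset is exactly what the GDP exercise above is probing for. A sketch of one possible answer, using ES5 array methods over an invented data shape: filter the collection down to years with positive growth, then group the survivors by state.

    var gdp = [
      { state: "WA", year: 2009, delta: -1.2 },
      { state: "WA", year: 2010, delta: 2.4 },
      { state: "OR", year: 2010, delta: 1.1 }
    ];

    var growthYears = {};
    gdp.filter(function(row) {
      return row.delta > 0;
    }).forEach(function(row) {
      (growthYears[row.state] = growthYears[row.state] || []).push(row.year);
    });
    // growthYears is now { WA: [2010], OR: [2010] }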

I suspect that data-orientation makes for better programmers in any field (and I'm not alone), but I'm particularly interested in it on my team because what we do is essentially to turn large chunks of data (governmental or otherwise) into stories. From a broad philosophical perspective, I want my team thinking about what can be extracted and explained via data, and not how to optimize their loops. Data first, code second--and if concentrating on the former improves the latter, so much the better.

January 5, 2011

Filed under: journalism»new_media»data_driven

Query, But Verify

About a month back, a prominent inside-the-Beltway political magazine ran a story on Tea Party candidates and earmarks, claiming that anti-earmark candidates were responsible for $1 billion in earmarks over 2010. I had just finished building a comprehensive earmark package based on OMB data, so naturally my editor sent me a link to the story and asked me to double-check their math. At first glance, the numbers generally matched--but on a second examination, the article's total double- and triple-counted earmarks co-sponsored by members of the Tea Party "caucus." Adjusting my query to remove non-distinct earmark IDs knocked about $100 million off the total--not really that much in the big picture (the sum still ran more than $900 million), but enough to fall below the headline-ready "more than $1 billion" mark. It was also enough to make it clear that the authors hadn't really understood what they were writing about.
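
The pitfall is easy to reproduce in miniature with invented numbers: summing across a sponsor join credits a shared earmark once per co-sponsor, so the total has to be taken over distinct earmark IDs.

    var earmarks = [
      { id: 101, amount: 500000, sponsors: ["A", "B", "C"] },
      { id: 102, amount: 250000, sponsors: ["A"] }
    ];

    // Naive total: credit the full amount to every sponsor match.
    var naive = 0;
    earmarks.forEach(function(e) {
      e.sponsors.forEach(function() { naive += e.amount; });
    });
    // naive is 1,750,000--the shared earmark counted three times.

    // Corrected total: count each distinct earmark ID exactly once.
    var seen = {}, total = 0;
    earmarks.forEach(function(e) {
      if (!seen[e.id]) {
        seen[e.id] = true;
        total += e.amount;
      }
    });
    // total is 750,000.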

In general, I am in favor of journalists learning how to leverage databases for better analysis, but it's an easy technology to misuse, accidentally--or even on purpose. There's a truism that the skills required to interpret statistics go hand in hand with the skills used to misrepresent them, and nowhere is that more pertinent than in the newsroom. Reporters and editors entering the world of data journalism need to hold onto the same critical skills they would use for any other source, not be blinded by the ease with which they can reach a catchy figure.

That said, journalists would do well to learn about these tools, especially in beats like economics and politics, if only to be able to spot their abuses. And there are three strong arguments for using databases (carefully!) for reporting: improving newsroom mathematical literacy, asking questions at modern scale, and making connections easier.

First, it's no secret that journalists and math are often uneasy bedfellows--a recent Washington Post ombudsman piece explored some of the reasons why numerical corrections are so common. In short: we're an industry of English majors whose eyes cross when confronted with simple sums, and so we tend to take numbers at face value even during the regular copy-editing process.

These anxieties are signs of a deeper problem that needs to be addressed, and there's nothing magical about SQL that will fix them overnight. But I think database training serves two purposes. First, it acclimatizes users to dealing with large sets of numbers, like treating nosocomephobia with a nice long hospital stay. Second, it reveals the dirty secret of programming, which is that it involves a lot of math-like process but relatively little actual adding or subtracting, especially in query languages. Databases are a good way to get comfortable with numbers without having to actually touch them directly.

Ultimately, journalists need to be comfortable with numbers, because they're becoming an institutional hazard. While the state of government (and private-sector) data may still leave a lot to be desired from a programmer's point of view, it's practically flooded out over the last few years, with machine-readable formats becoming more common. This mass of data is increasingly unmanageable via spreadsheet: there are too many rows, too many edge cases, and too much filtering required. Doing it by hand is a pipe-dream. A database, on the other hand, is designed to handle queries across hundreds of thousands of rows or more. Languages like SQL let us start asking questions at the necessary scale.

Finally, once we've gotten over a fear of numbers and begun to take large data sets for granted, we can start using relational databases to make connections between data sets. This synthesis is a common visualization task that is difficult to do by hand--mapping health spending against immigration patterns, for example--but it's reasonably simple to do with a query in a relational database. The results of these kinds of investigations may not even be publishable, but they are useful--searching for correlation is a great jumping-off point for further reporting. One of the best things I've done for my team lately is set up a spare box running PostgreSQL, which we use for uploading, combining, searching, and then outputting translated versions of data, even in static form.
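
As a sketch of what that looks like in practice--with hypothetical table and column names, run from a throwaway Node script against that spare PostgreSQL box--the whole investigation can be a single join:

    var pg = require("pg");
    var client = new pg.Client("postgres://localhost/newsroom");

    var query =
      "SELECT h.county, h.spending_per_capita, i.foreign_born_pct" +
      " FROM health_spending h" +
      " JOIN immigration i ON i.county_fips = h.county_fips" +
      " ORDER BY h.spending_per_capita DESC";

    client.connect(function(err) {
      if (err) throw err;
      client.query(query, function(err, result) {
        if (err) throw err;
        // Eyeball the output for patterns worth reporting out; any
        // correlation here is a starting point for questions, not a finding.
        console.log(result.rows);
        client.end();
      });
    });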

As always when I write these kinds of posts, remember that there is no Product X for saving journalism. Adding a database does not make your newsroom Web 2.0, and (see the example I opened with) it's not a magic bullet for better journalism. But new technology does bring opportunities for our industry, if we can avoid the Product X hype. The web doesn't save newspapers, but it can (and should) make sourcing better. Mobile apps can't save subscription revenues, but they offer better ways to think about presentation. And databases can't replace an informed, experienced editor, but they can give those journalists better tools to interrogate the world.

January 4, 2011

Filed under: journalism»new_media»data_driven

Your Scattered Congress 2010

Once again, I present CQ's annual vote studies in handy visualization form, now updated with the figures for 2010. This version includes some interesting changes from last year:

  • Load times should now be markedly faster. I decoupled the XML parsing pseudo-thread from the framerate by allowing it to run for up to 10ms before yielding back to the VM for rendering. Previously, it processed only a single member and then waited for the next timer tick, which probably meant at least 16ms per member even on machines capable of running much faster. (A rough sketch of the chunking pattern appears after this list.)
  • Clicking on a dot for a single member now loads that member's CQ profile page (subscribers only). Clicking on a dot representing multiple members will bring up a table listing all members, and clicking on one of these rows (or a row in the full Data Table view) will open the profile page in a new window.
  • Tooltips now respect the boundaries of the Flash embed, which makes them a lot more readable.
  • Most importantly, the visualization now collects multiple years in a single graphic, allowing you to actually flip between 2009 and 2010 for comparison. We have plans to add data going back to at least 2003 (CQ's vote studies actually go back more than 50 years, but the data isn't always easy to access). When that's done, you'll be able to visually observe shifts in partisanship and party unity over time.
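
For the curious, the chunking pattern sketched in JavaScript (the real version is ActionScript, and parseMember stands in for the per-member XML handling):

    function parseMembers(members, onDone) {
      var i = 0;
      function chunk() {
        var start = Date.now();
        // Process members until the 10ms budget runs out...
        while (i < members.length && Date.now() - start < 10) {
          parseMember(members[i]);
          i++;
        }
        if (i < members.length) {
          // ...then yield so the display can update, and continue.
          setTimeout(chunk, 0);
        } else if (onDone) {
          onDone();
        }
      }
      chunk();
    }
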
Notably not changed: it's still in Flash. My apologies to the HTML5 crowd, but the idea of rendering and interacting with more than 500 alpha-blended display objects (four-fifths of which may be onscreen at any time), each linked to multiple XML collections, is not something I really consider feasible in cross-browser JavaScript at this time.

The vote studies are one of those quintessentially CQ products: reliable, wonky, and relentlessly non-partisan. We're still probably not doing justice to it with this visualization, but we'll keep building out until we get there. Take a look, and let me know what you think.
