
October 12, 2011

Filed under: journalism»new_media»data_driven

The Big Contract

Recently my team worked on an interactive for a CQ Weekly Outlook on contracts. Government contracting is, of course, a big deal in these economic times, and the government spent $538 billion on contractors in FY2010. We wanted to show people where the money went.

I don't think this is one of our best interactives, to be honest. But it did raise some interesting challenges for us, simply because the data set was so huge: the basic table of all government contracts for a single fiscal year from USA Spending is around 3.5 million rows, or about 2.5GB of CSV. That's a lot of data for the basic version: the complete set (which includes classification details for each contract, such as whether it goes to minority-owned companies) is far larger. When the input files are that big, forget querying them: just getting them into the database becomes a production.

My first attempt was to write a quick PHP script that looped through the file and inserted each row into the table. That ended up taking literally ten or more hours per file--we'd never get it done in time. So I went back to the drawing board and tried PostgreSQL's COPY command. COPY is very fast, but the destination table has to line up with the source file exactly--you can't skip columns in the CSV--which is a pain, especially when the table in question has so many columns.
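
The load itself, once a destination table exists, boils down to a single statement--something like this sketch, with a made-up path and table name:

    -- bulk-load the CSV into a staging table in one shot
    -- (the path and table name here are placeholders, not our actual setup)
    COPY contracts_staging FROM '/data/usaspending_fy2010.csv' WITH CSV HEADER;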

To avoid hand-typing 40-plus columns for the table definition, I used a couple of command line tools (head and sed, mostly) to dump the header line of the CSV into a text file, then added enough boilerplate around it to make a working CREATE TABLE statement, with every column typed as text. With that staging table in place, COPY loaded millions of rows in just a few minutes, and I then converted the handful of columns we actually needed--the dollar amounts and the dates--to more appropriate types. We did a second pass to clean up the data a little (correcting misspelled or inconsistent company names, for example).
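
In SQL terms, the staging table and the follow-up conversions looked roughly like this. The column names below are stand-ins, not the real USA Spending headers, and the real file has far more of them:

    -- staging table: every column starts life as text
    CREATE TABLE contracts_staging (
        piid text,
        vendor_name text,
        vendor_state text,
        funding_agency text,
        product_code text,
        dollars_obligated text,
        signed_date text
        -- ...and the rest of the CSV's 40-plus columns, all as text
    );

    -- after COPY, convert only the columns we actually need to query on
    ALTER TABLE contracts_staging
        ALTER COLUMN dollars_obligated TYPE numeric USING dollars_obligated::numeric,
        ALTER COLUMN signed_date TYPE date USING signed_date::date;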

Once the database was in place and we'd added some indexes so that queries wouldn't spin their wheels forever, we could start to pull some useful data, like the state-by-state totals for a basic map. It's not surprising that the beltway bandits in DC, Maryland, and Virginia pull in an incredible portion of contracting money--I had to clamp the maximum values on the map to keep DC's roughly $42,000 in contract dollars per resident from blowing out the rest of the country--but there are some other interesting high-total states, such as New Mexico and Connecticut.
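
The state rollup itself is a plain aggregate--something along these lines, again with placeholder column names:

    -- an index on the state column (one of several we added)
    CREATE INDEX idx_contracts_state ON contracts_staging (vendor_state);

    -- total contract dollars by state, for the map
    SELECT vendor_state, SUM(dollars_obligated) AS total_dollars
    FROM contracts_staging
    GROUP BY vendor_state
    ORDER BY total_dollars DESC;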

Now we wanted to see where the money went inside each state: what were the top five companies, funding agencies, and product codes? My initial attempts, using a series of subqueries and count() functions, were tying up the server with nothing to show for it, so I tossed the problem over to another team member and went back to working on the map, figuring I should at least have something to show for our work. He came back with a great solution--PostgreSQL's PARTITION BY clause, which splits a query's results into groups for window functions, combined with the rank() function for filtering--and we were able to find the top categories easily. A variation on that template gave us per-agency totals and top fives.
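
Reconstructed with the same placeholder names as above, the shape of that query is: rank vendors within each state partition, then keep the top five.

    -- top five vendors per state, via a window function
    SELECT vendor_state, vendor_name, total_dollars
    FROM (
        SELECT vendor_state,
               vendor_name,
               SUM(dollars_obligated) AS total_dollars,
               rank() OVER (PARTITION BY vendor_state
                            ORDER BY SUM(dollars_obligated) DESC) AS rk
        FROM contracts_staging
        GROUP BY vendor_state, vendor_name
    ) ranked
    WHERE rk <= 5
    ORDER BY vendor_state, rk;

Swap vendor_name for the agency or product code column and you get the other top-five lists from the same template.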

There are a couple of interesting lessons to be learned from this experience, the most obvious of which is the challenge of doing journalism at scale. Certain stories, particularly on huge subjects like the federal budget, are too big to investigate feasibly without computer-assisted reporting, and yet they demand skills beyond the usual spreadsheet juggling.

I don't think that's going away. In fact, I think scale may be the defining quality of the modern information age. A computer is just a machine for performing simple operations at incredibly high speeds, to the point where they seem truly miraculous--changing thousands (or millions) of pixels each second in response to input, for example. The Internet expands that scale further, to millions of people and computers interacting with each other. Likewise, our reach has grown with our grasp. It seems obvious to me that our governance and commerce have become far more complex as a result of our ability to track and interact with huge quantities of data, from contracting to high-speed trading to patent abuse. Journalists who want to cover these topics are going to need to be able to explore them at scale, or be reliant on others who can do so.

Which brings us to the second takeaway from this project: in computer-assisted journalism, speed matters. If hours are required to return a query, asking questions becomes too expensive to waste on undirected investigation, and fact-checking becomes similarly burdensome. Getting answers needs to be quick, so that you can easily continue your train of thought: "Who are the top foreign contractors? One of them is the Canadian government? What are we buying from them? Oh, airplane parts--interesting. I wonder why that is?"

None of this is a substitute for domain knowledge, of course. I am lucky to work with a great graphics reporter and an incredibly knowledgeable editor, the combination of whom often saves me from embarrassing myself by "discovering" stories in the data that are better explained by external factors. It is very easy to see an anomaly, such as the high level of funding in New Mexico from the Department of Energy, and begin to speculate wildly, while someone with a little more knowledge would immediately know why it's so (in this case, the DoE controls funding for nuclear weapons, including the Los Alamos research lab in New Mexico).

Performing journalism with large datasets is therefore a three-fold problem. First, it's difficult to prepare and process. Second, it's tough to investigate without being overwhelmed. And finally, the sheer size of the data makes false patterns easier to find, requiring extra care and vigilance. I complain a lot about the general state of data journalism education, but this kind of exercise shows why it's a legitimately challenging mix of journalism and raw technical hackery. If I'm having trouble getting good results from sources with this kind of scale, and I'm a little obsessed with it, what's the chance that the average, fresh-out-of-J-school graduate will be effective in a world of big, messy data?
