this space intentionally left blank

February 14, 2017

Filed under: journalism»new_media

Quantity of Care

We posted Quantity of Care, our investigation into operations at Seattle's Swedish-Cherry Hill hospital, on Friday. Unfortunately, I was sick most of the weekend, so I didn't get a chance to mention it before now.

I did the development for this piece, and also helped the reporters filter down the statewide data early on in their analysis. It's a pretty typical piece design-wise, although you'll notice that it re-uses the watercolor effect from the Elwha story I did a while back. I wanted to come up with something fresh for this, but the combination with Talia's artwork was just too fitting to resist (watch for when we go from adding color to removing it).

I did investigate one change to the effect, which was to try generating the desaturated artwork on the client, instead of downloading a heavily-compressed pattern image. Unfortunately, doing image processing in JavaScript is too heavy on the main thread, and I didn't have time to investigate moving it into a worker. I'd also like to try doing it from a WebGL shader in the future, since I suspect that's actually the best way to get it done efficiently.
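For reference, the client-side version looked roughly like this — a minimal sketch, assuming a same-origin image (the file name is made up), where the per-pixel loop is exactly the part that ties up the main thread:

    // minimal sketch of client-side desaturation via canvas
    var canvas = document.createElement("canvas");
    var context = canvas.getContext("2d");
    var img = new Image();
    img.onload = function() {
      canvas.width = img.width;
      canvas.height = img.height;
      context.drawImage(img, 0, 0);
      var pixels = context.getImageData(0, 0, canvas.width, canvas.height);
      var data = pixels.data;
      // replace each pixel's RGB with its luminance -- this loop blocks the main thread
      for (var i = 0; i < data.length; i += 4) {
        var gray = data[i] * 0.299 + data[i + 1] * 0.587 + data[i + 2] * 0.114;
        data[i] = data[i + 1] = data[i + 2] = gray;
      }
      context.putImageData(pixels, 0, 0);
    };
    img.src = "artwork.jpg"; // hypothetical path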

In the end, I'm just happy to have worked on something that feels like such an important, powerful investigative story.

October 28, 2016

Filed under: journalism»new_media

How I built the ST3 guide

On election day, voters in the Seattle metro area will need to choose whether or not to approve Sound Transit 3, a $54 billion funding measure that would add miles of light rail and rapid bus transit to the city over the next 20 years. It's a big plan, and our metro editor at the paper wanted to give people a better understanding of what they'd be voting on. So I worked with Mike Lindblom, the Times' transportation reporter, and Kelly Shea in our graphics department to create this interactive guide (source code).

The centerpiece of the guide is the system map, which picks out 12 of the projects that will be funded by ST3. As you scroll through the piece, each project is highlighted and zooms to fill the viewport. These kinds of scrolling graphics have become more common in journalism, in part because of the limitations of phone screens: we can't rely on hover states to prompt users to click, and visual real estate is limited, so it's useful to take advantage of the most natural verb at their fingertips: scroll. I'm not wild about this UI trend, but nobody has come up with a better method yet, and it's relatively easy to implement.
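The mechanism behind that highlighting is straightforward. Here's a simplified sketch of the scroll-handler pattern — not the production code, and the class names are invented:

    // highlight whichever project section currently straddles the middle of the viewport
    var sections = [].slice.call(document.querySelectorAll(".project"));
    var active = null;
    window.addEventListener("scroll", function() {
      var middle = window.innerHeight * 0.5;
      sections.forEach(function(section) {
        var bounds = section.getBoundingClientRect();
        if (bounds.top < middle && bounds.bottom > middle && section != active) {
          if (active) active.classList.remove("active");
          active = section;
          active.classList.add("active");
        }
      });
    });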

The key to making this map work is the use of SVG (scalable vector graphics). SVG is the unloved stepchild of browser images — badly-optimized and only widely supported in the last few years — but it has two important advantages. First, it's an export option in Illustrator, which means that our graphics team (who do most of their work in Illustrator) can generate print and interactive assets at the same time. It's much easier to teach them how to correctly add the metadata I need in a familiar tool than it is to teach the artists an entirely new workflow, especially on a small staff (plus, we can use existing assets in a new way, like this Boeing retrospective that repurposed an old print spread).

Second, unlike raster graphic formats like JPG or PNG, SVG is actually a text-based format that the browser can control the same way that it does the HTML document. Using JavaScript and CSS, it's possible to restyle specific parts of the image, manipulate their position or size, or add/remove shapes dynamically. You can even generate them from scratch, which is why libraries like D3 have been using SVG to do data visualization for years.
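For example, once the SVG is inline in the document, restyling one of its shapes is ordinary DOM work (a toy example — the shape ID and colors are made up):

    // inline SVG responds to the same DOM APIs as the rest of the page
    var line = document.querySelector("svg #link-ballard"); // hypothetical shape ID
    line.style.stroke = "#c00";
    line.setAttribute("stroke-width", "4");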

Unfortunately, while these are great points in SVG's favor, there's a reason I described it in my Cascadia talk as a "box full of spiders." The APIs for interacting with parts of the SVG hierarchy are old and finicky, and they don't inherit improvements that browsers make to other parts of the page. As a way of getting around those problems, I've written a couple of libraries to make common tasks easier: Savage Query is a jQuery-esque wrapper for finding and restyling elements, and Savage Camera makes it easy to zoom and pan around the image in animated sequences. After the election, I'm planning on releasing a third Savage library, based on our <svg-map> elements, for loading these images into the page asynchronously (see my writeup on Source for details).

The camera is what really makes this map work, and it's made possible by a fun property of SVG: the viewBox attribute, which defines the visible coordinate system of a given image. For example, here's an SVG image that draws a rectangle from [10,10] to [90,90] inside of a 100x100 viewbox:
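    <svg width="100" height="100" viewBox="0 0 100 100">
      <!-- viewBox is "minX minY width height" -->
      <rect x="10" y="10" width="80" height="80" />
    </svg>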

If we want to zoom in or out, we don't need to change the position of every shape in the image. Instead, we can just set the viewbox to contain a different set of coordinates. Here's that same image, but now the visible area goes from [0,0] to [500,500]:
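    <svg width="100" height="100" viewBox="0 0 500 500">
      <!-- same rectangle, but the visible area is now 500 units square, so it draws much smaller -->
      <rect x="10" y="10" width="80" height="80" />
    </svg>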

And here's one where we've zoomed in on the lower-right corner, by viewing a 40-unit box starting at [60,60]:
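    <svg width="100" height="100" viewBox="60 60 40 40">
      <!-- a 40-unit window starting at [60,60]: the lower-right corner of the rectangle fills the frame -->
      <rect x="10" y="10" width="80" height="80" />
    </svg>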

Savage Camera was written to make it easy to manipulate the viewbox in terms of shapes, not just raw coordinates. In the case of the ST3 guide, each project description shares an ID with a group of shapes. When the description scrolls into the window, I tell the camera to focus on that group, and it handles the animation. SVG isn't very well-optimized, so on mobile this zoom is choppier than I'd like. But it's still way easier than trying to write my own rendering engine for canvas, or using a slippy map library like Leaflet (which can only zoom to pre-determined levels).
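Stripped of the animation, the core of that camera move is tiny. Here's a sketch of the idea (not Savage Camera's actual code), built on the standard getBBox() method:

    // snap the "camera" to a group of shapes by fitting the viewBox to its bounding box
    var svg = document.querySelector("svg");
    function focusOn(id) {
      var group = svg.querySelector("#" + id);
      var box = group.getBBox(); // bounds in the SVG's own coordinate space
      var pad = 10;
      svg.setAttribute("viewBox", [
        box.x - pad,
        box.y - pad,
        box.width + pad * 2,
        box.height + pad * 2
      ].join(" "));
    }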

This is not the first time that I've built something on this functionality: we've also used it for our Paper Hawks and for visualizing connections between Seattle's impressive women in the arts. But this is the most ambitious use so far, and a great chance to practice working closely with an illustrator as the print graphic was revised and updated. In the future, if we can polish this workflow, I think there's a lot of potential for us to do much more interesting cross-media illustrations.

May 7, 2015

Filed under: journalism»new_media

Mayhem

I'm not sure what it says about Seattle that one of our biggest yearly events is a May Day protest that wreaks havoc across big chunks of downtown. What even competes? The Blue Angels shut down traffic on the bridges once a summer, and there's Seafair downtown, but reception to those is always pretty muted in my experience. International Workers' Day is the big show.

The May Day map I put together to track our reporters has quickly become one of my favorite projects for the Seattle Times. It was real-time, it posed interesting data challenges, and it really exploited our <leaflet-map> element more than anything else we've done so far. While I also wrote a post on it for our dev blog at work, I wanted to call out a couple of other interesting points here.

The most interesting technical detail here is the use of the Twitter streaming API, which delivers nearly instant updates for a search query (on users, geolocation, or keywords). Node is a great fit for this, with the twitter module offering a readable stream that fires events as new items come in. Our scaffolding, on the other hand, is not intended to run as a long-lived process, and I didn't really want to retrofit Grunt into a general-purpose application framework. I ended up writing the Twitter part of the app as a completely separate, continuous Node process, which dumped out its data as a JSON file and started a standard build/deploy in a child process whenever new data arrived.
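The whole arrangement, minus error handling and batching, looked something like this (a sketch, not the production code — the track term, file path, and Grunt task are placeholders):

    var fs = require("fs");
    var spawn = require("child_process").spawn;
    var Twitter = require("twitter");

    var client = new Twitter({ /* keys live in a config file, not checked into Git */ });
    var tweets = [];

    client.stream("statuses/filter", { track: "mayday" }, function(stream) {
      stream.on("data", function(tweet) {
        tweets.push(tweet);
        // dump the latest data, then kick off a standard build/deploy as a child process
        // (in practice you'd want to debounce this instead of deploying per-tweet)
        fs.writeFileSync("data/tweets.json", JSON.stringify(tweets));
        spawn("grunt", ["publish"], { stdio: "inherit" });
      });
      stream.on("error", console.error);
    });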

To store the tweets from the stream, the application uses a SQLite3 database, since that's the easiest way to query and update data. A static data store like this is not something that we've used on projects before, and I don't know if I'd re-use it again. Using SQLite itself is always a pleasure, but reliance on a local database means that I couldn't just clone the project from home and update it when I wanted to change the coloring on Saturday morning. Using cloud storage, like Google Sheets, has a lot of advantages for distributed and remote development.
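The storage side is nearly as small with the node sqlite3 bindings (again a sketch — this schema is invented):

    var sqlite3 = require("sqlite3");
    var db = new sqlite3.Database("tweets.db");

    db.run("CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, user TEXT, body TEXT, approved INTEGER)");

    // INSERT OR REPLACE keeps re-running the stream idempotent
    function save(tweet) {
      db.run(
        "INSERT OR REPLACE INTO tweets (id, user, body, approved) VALUES (?, ?, ?, 0)",
        [tweet.id_str, tweet.user.screen_name, tweet.text]
      );
    }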

Working with Twitter itself is an interesting problem, because it's clear that the company has no real coherent plan for outside developers. Over the last few years, the API for user access has been increasingly limited and broken as Twitter tried to drive third-party clients (which don't show ads and don't make money) out of existence. On the other hand, if you are building a Twitter bot, which our map effectively is, it remains a pretty useful and effective service for pub/sub communication. I'm not sure it says very much about Twitter's strategy that they'll let bots run wild while ordinary people are locked into a client monoculture, but that's honestly the least of my frustrations with them at this point.

All that said, I would personally use this stack again in a heartbeat. Twitter is not the highest social traffic source for the Seattle Times, but almost all of our reporters use it anyway, and it's much nicer to program against than Facebook. The impending dilemma is whether (or when) Twitter will decide to switch to a "curated" (read: algorithmically-tampered) stream a la Facebook's timeline. When that happens, its value to me as a news developer drops basically to nothing, because I won't be able to guarantee message delivery any more.

Which brings me to the most boring but probably most profound lesson of this project: we need a better build server. The May Day map ran on a box in the office we've affectionately dubbed "Cronda," which also currently tests our traffic alert application and previously powered the Seahawks fan map. In each of those cases, we've jury-rigged together a solution for pulling the latest source code and running builds at regular intervals (the cron Grunt task), but it's not optimal. We can't check on those builds remotely, or restart them if something goes wrong.

At some point, we'll probably move our builds from Cronda to an EC2 box that we can access remotely, but that doesn't honestly solve the problem — it just makes it less fragile. Eventually, I think we'll need to look into a real build monitor like Jenkins, which can automate deployments, track error logs, and respond to queries in our team chat. I'm not entirely looking forward to that, since it feels like a very heavyweight solution, but the more complex our applications get, the more a little up-front rigor will pay off.

April 16, 2015

Filed under: journalism»new_media»data_driven

Empty-handed

We have sent several people from the newsroom, including myself, to journalism conferences over the last few months. Most conferences are about 50% inspirational and 50% crap (tilted heavily crap-wards in the keynotes), but you meet good people and you get to see the nuts and bolts behind the scenes of some of the best interactive news stories published.

It's natural to come back from a conference with a kind of inferiority complex, and equally easy to conclude that we're not making similar rich presentations because we don't have the cool tools that those other (richer, more tech-savvy) newsrooms have. We too, according to this train of thought, need to be coding elaborate visualization generators and complicated new CMS features — or, as Ryan Pitts from Mozilla said to me last weekend at the Society for News Design workshop, "let's not rest until every paper in the country has built its own charting application."

I think better newsroom tech is important, but let's play devil's advocate for a bit with an unpopular hypothesis: developing tools for the editors and reporters at your newspaper is a waste of your time, and a distraction from the journalism you should be doing.

Why a waste of your time? Partly because newsroom tools get a lot less uptake than you probably think they do (certainly less than we'd hope they would). I've written a lot of internal applications in my time, and they've never been particularly popular, because most reporters and editors don't care. They're too busy doing journalism to use your solution (which is as it should be), and they are probably not big on technology anyway (I have a lot of reporters who can't use Excel, which pains me greatly). Creating tools for reporters is, most of the time, attacking the problem at the wrong point.

For many newsrooms, that wasted time will end up being twice as expensive, because development resources are scarce and UI is hard. Building a polished, feature-filled chart generator that the average journalist can use will take at least a couple of programmer months, which is time those developers aren't working on stories and visualizations that readers want. Are you willing to sacrifice that time, especially if you can't guarantee that it'll actually get used? That's a pretty big gamble, unless you have the resources of the New York Times. You're probably better off just going with an off-the-shelf package, or even finding a simpler solution.

I don't think it's a coincidence that, for all the noise people make about the new data journalism startups like Vox and FiveThirtyEight, 99% of their chart output does not come from a fancy tool or a complex interactive: they post JPEGs. And that's fine! No actual reader has ever complained about having to look at a picture of a graph instead of a souped-up vector rendering (in Vox's case, they're too busy complaining that the graph was stolen from someone else, but that's another story). JPEG is a perfectly decent solution when it comes to simply telling the story across the entire web platform — in fact, it's a great embodiment of "do the simplest thing that works," which has served me well as a guiding motto in life.

So, as a rule of thumb: don't build charting libraries. Don't build general-purpose databases. Don't build drag-and-drop slideshows. Leave these things to other people, who have time and energy to build them for a living. Does this mean you shouldn't create tools at all? No, but the target audience should be you, the news developer, and other semi-technical newsroom staff like the web producers. In other words, make technology for the people who will actually use it, and can handle something that's not polished to a mirror sheen.

I believe this is the big strength of web components, and one reason I'm so bullish on them at the Seattle Times. They're not glossy, end-user products, but they strike a great balance between power and accessibility for people with a little technical skill, and they're very fast to build. If the day comes when we do choose to invest in a slicker newsroom app, we can leverage them anyway, the same way that the NYT's fancy charting tools are all built on the developer-oriented D3 library.

In the meantime, while I would consider an anti-tool stance a "strong opinion weakly held," I think there's a workable philosophy there. These days, I feel two concerns very strongly (outside of my normal news/editorial production, of course): how to get the newsroom to make use of our skills, and how to best use the limited developer resources we have. A "no tools" guideline is not an absolute rule, but it serves as a useful heuristic to weed out the kinds of projects that might otherwise take over our time.

March 11, 2015

Filed under: journalism»new_media

Template Trouble

About nine months ago, I made the first check-in on the Seattle Times news app template. Since that time, it's been at the heart of pretty much everything we've done at the Times, ranging from big investigative projects to Super Bowl coverage to dog name analysis. We've adapted it to form the basis of our web component stack, and made a version that automates Leaflet map creation. It's been a pretty great tool, used by news apps developers, producers, and graphics team members alike.

That said, I think in digital journalism we often talk about our tools in glowing terms, but we discuss their downsides far less often. So let's be honest with ourselves: I love this scaffolding, but it's not perfect. It has issues. And I think those issues say interesting things, not only about the template itself, but also about newsroom culture and the challenges of creating tools that can operate there.

  • The templating situation can be confusing. Since it's all JavaScript, it's sometimes hard for scaffolding users to keep track of what runs during the build and what runs on the client. Generally, we use a different library in each scenario (Lo-Dash during builds, ICanHaz or DoT in the browser), but it can still be odd for people who are used to a language split — and worse for those with little or no programming experience (see the sketch after this list).
  • Deployment could be better. This really has less to do with our scaffold, and more to do with the environment in which it operates. We don't have great CMS integration, because the hooks don't exist. And we have to keep credentials in a separate file (which isn't checked into Git), because many of our users can't update environment variables on their own machines. We're also still trying to figure out what belongs in Git: should Google Sheets go in there? What about their ID numbers?
  • Default styles came late. It was great when the paper launched its new responsive site last month, because it meant we finally had reasonable defaults to build on. The news app scaffolding had previously left styling up to project authors, and the result is that we're not nearly consistent enough. We have a fine line to walk between "build everything in" and "provide flexibility" — what's good for the main site may not be good for us.
  • Along those same lines, the new CMS offers us a better, responsive layout, but it also took away a lot of flexibility. The result is that we're probably overusing the news app template to compensate. While I think it's great that we have a place for generating unconventional pages, I'm not wild about effectively creating a parallel content system on S3 whenever we need a small amount of control over the page.
  • Old apps are locked to old dependencies. Like any good Node package, the news app template loads its dependencies on a per-project basis. But I've been tinkering with this framework for eight months now, and several things have changed radically (most notably, a switch from RequireJS to Browserify). Stepping back into old projects often requires a bit of code archaeology to figure out where everything used to live.
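To illustrate that first point about templating, here's the kind of split that trips people up — a contrived example in which both templates produce the same markup, but one runs in Node during the build and the other runs in the browser:

    // build time (Node): Lo-Dash templates, evaluated while generating the page
    var _ = require("lodash");
    var html = _.template("<h1><%= headline %></h1>")({ headline: "Election results" });

    // client time (browser): ICanHaz templates, rendered on demand from a script tag like
    // <script id="tooltip" type="text/html"><h1>{{headline}}</h1></script>
    var rendered = ich.tooltip({ headline: "Election results" });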

What are the common threads here? While you could point to the static page approach as being part of the issue, I actually think what causes a lot of these problems is that the intended audience for the news app template is both broad and narrow. It's broad in that its users range from novice journalists to experienced developers (and, indirectly, non-technical editors and reporters feeding data into Google Sheets). It's narrow in that the actual production still requires a high level of technical comfort: familiarity with the command line, new kinds of tooling, and some ability to roll with unexpected bugs.

This is a tough, and self-contradictory, audience for a visualization toolkit. It's not, however, out of character for a general-purpose dev framework. And indeed, when we talk about app scaffolds from any news organization (not just The Seattle Times), that's what they are. They're written to be fast, to be portable, and to generate static files, because those are our priorities as deadline-driven journalists. They also sit at one extreme of the range of newsroom tools, with hand-built news apps at that end and pre-built widgets at the other. I'm not really worried about where the template lives on that range, and I'm certainly not planning on reducing the complexity — I think it's at a sweet spot right now. But I do worry about the ways that it (and our CMS) fit into newsroom culture.

At the Times, as in many newsrooms, the online presence is largely run by "producers," who curate the stories on the home page and handle the print-to-digital transition (it's not the same role as a "producer" in software development). This work is complicated and highly skilled, because news CMSes are generally terrible. The web production staff also often work on projects that would, in print, fall under page design: building complex HTML presentations for special stories. This isn't because they're trained designers: producers are often younger, and while it's not entry-level work, it's close. They end up doing this work because trained, HTML-fluent designers are rare, and because nobody else in the newsroom bothers to learn web design.

As a result, we end up in a funny situation: the only people in the newsroom who really understand the web are the producers. Editors and reporters are discouraged from becoming more technically savvy because the workflow is print-first, and the CMS is so intimidating. Meanwhile, producers rarely become editors or reporters because the newsroom can't afford to lose their skills. There's a tremendous gap in newsroom culture between people who produce the content, and people who actually understand the medium in which that content is consumed. While the tooling is not entirely responsible for that, it is a contributing factor.

I think the challenge we face, as newsroom developers, is to stay aware of that gap and vigilant about its causes. Tools like the news app template are important, because they speed up our work and the work of other technical people. But they don't obviate the need for better, web-first publishing systems — something that can help diffuse web thinking from a producer-only skill into something available throughout the newsroom.

January 30, 2015

Filed under: journalism»new_media

Bowled Over

While we've still got plenty of interesting projects in the works, the Seahawks rampage into the Super Bowl has pretty much taken over the News Apps budget at the Seattle Times this week. As a result, we've got some interesting interactives you might have seen:

  • The Seahawks personality quiz and I-90 trivia quiz were put together by a new member of the team, and proved extremely popular. I suspect we'll be working on a <pop-quiz> web component soon enough as a result.
  • Top 12 Moments of the Seahawks season was a super-fast project that I tossed together this week, collecting the favorite plays of our knowledgeable football pundits along with some of our best sports photos.
  • (update: Jan. 30) Super Bowl Pick'em improves on our serverless infrastructure from the fan map to let people make their own score predictions (and read predictions by columnists and celebrities) even while hosting on S3.
  • (update: Jan. 30) Super Bowl Matchups is another scroll-oriented presentation, this time focusing on comparisons between the coaches, quarterbacks, and cornerbacks who are playing on Sunday. After this, I don't want to see another parallax layout until July.

Additionally, you may have seen that the Seahawks have launched I'm In, a fan map that bears a suspiciously close resemblance to our Hawks fan map. Personally, I prefer to think that since imitation is the sincerest form of flattery, a page like that is just Richard Sherman's way of letting me know how great I am.

More to come, obviously, as the road to the Super Bowl continues! Or, as Marshawn Lynch would say, "Yeah."

November 26, 2014

Filed under: journalism»new_media

Ex Post Facto

I had hoped, personally, that we were past the app craze in newsrooms, particularly since the New York Times (eternally the canary in the coalmine for the rest of the industry) started killing off its unsuccessful subscription apps. But there's a sucker born every minute, and this time it's the Washington Post, which is launching a special Kindle app, the main goal of which seems to be to remind everyone that Jeff Bezos bought the Post last year:

The app, which was designed to reduce the noise of the Web to something as streamlined as a print publication, will be automatically added to certain Kindle Fire tablets as part of a software update. It will feature two editions each day, at 5 a.m. and 5 p.m. Eastern time, when the company believes it will reach the most readers.

Two things stick out in that paragraph: first, the "noise of the Web," as though that's a thing that exists in and of itself and not a product of newspaper websites being largely assembled at the whims of a huge number of competing interests (advertising, editorial, IT, etc). If your web site is noisy, it's because you made it that way, and maybe you should fix it instead of launching a new platform.

Second, two editions? In a world of twenty-four-hour online news, someone's making a digital news publication that updates (with exceptions for breaking events) twice a day? That's not a strategy for reaching readers; it's a sop to a print-oriented workflow that has to produce distinct physical "products" instead of a stream of content. It's not like they have to lay out a page, so what's the point? Why make people wait?

I expect this thing to go the way of The Daily within a year, quietly killed when the Post announces some new shiny object, probably. Of course, as a long-time web partisan, I think launching another native journalism app is a silly move anyway. The reasons for this are well-rehearsed and familiar: ease of production, greater audience reach, and creating a single path for content. But ultimately what makes native news apps fail is that they can't interoperate with other services the way the web can.

A lot of ink has been spilled about the NYTimes innovation report, for better or worse, but one of the big takeaways for me was the graph on page 23, comparing home page visitors to page views. The Times has seen no real drop-off in overall traffic, but the number of people seeing the home page has dropped by half over the last two years alone. And the reason is simple: most people don't go looking for journalism anymore. It finds them instead, when stories are shared through Facebook, Twitter, and (less and less) RSS/Atom feeds.

Whether you're thinking of your app as a new home page or as a new publishing platform entirely, this trend seems equally grim — a choice between apathy and obscurity. It's probably possible, somehow, to make an app share to Facebook or Twitter. But it's never going to be as quick, as smooth, or as easy as sharing to those services via a simple URL. As much as anything else, this dooms native news apps from the start: if users can't share your content, it might as well be stored in a sealed vault. If you make the app share a web link as a workaround, everyone ends up on the site anyway, so why bother creating the app in the first place?

(Incidentally, this is why the line tossed around by some pundits that "native apps are too on the web, because they use HTTP" is nonsense. Does your native app have a front-facing URL? Can I link someone to a specific page in your app? No? Then it's not on the web.)

Don't get me wrong: I'm not necessarily sanguine about this state of affairs. The increasing role of social media in discovery and spread of journalism is worrying, from the silencing effects to the loss of control for publishers. One day I'd like to think we'll be out from underneath Facebook's thumb, or anyone else seeking to wall off the web until it pays up. We need better solutions for that problem, ones that don't make us sharecroppers on anyone else's land.

Meanwhile, however, this is the world we live in: the social networks dominate, and ultimately they run on URLs, not on binary blobs stored in a native bundle. Publishing two gimmicky "editions" a day through a fancy app, on a device that relatively few people use, is not going to change that anytime soon. If you want people to read your news, it had better be on the (sharable, linkable, endlessly flexible) web.

November 5, 2014

Filed under: journalism»new_media

Election Elements

You might have heard that there was an election this last week. Like every news organization, The Seattle Times had a live results page, powered by a Node-based scraper. It did pretty well: we had no glitches with pulling results, and the response has been solid. It also generated the source data for the print edition. Oh, and we put bunting on the front page, which is not something you get to do every day.

Behind the scenes, however, that results page has another interesting feature: as far as I'm aware, it's the first use of Web Components (at least, the custom elements part) in production by a news organization. Each of the Washington maps on the page is a custom-built <svg-map> element, which handles loading the image document and provides a set of convenience methods for manipulating the map once it's available.

SVG is one of those technologies that I really want to like, but has always been a total pain to actually use. It's an annoying format to author, doesn't seem to actually save any space compared to bitmap images, and has a ton of edge cases even in "standard" browsers (for example, Chrome will forget the state of an SVG document inside an object tag if that tag or its parents are set to display: none). Wrapping it up in a component that would manage its lifecycle and quirks for me just seemed like a no-brainer.

To create the component, I used Andrea Giammarchi's registerElement() shim instead of Polymer's polyfill layer — Giammarchi's script only shims the custom element portion of Web Components, but it works all the way back to IE9 and (more importantly) is only 2KB. On top of that, I used RSVP.js to create a quick shared cache for SVG source documents, ICanHaz for my templating, and a custom module called Savage to do SVG class/style manipulation.
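Registration through the shim uses the v0 custom elements API, which looks roughly like this (a simplified sketch, not the actual <svg-map> source — loadSVG() stands in for the RSVP-backed document cache):

    // v0 custom element registration, as supported by the registerElement() shim
    var proto = Object.create(HTMLElement.prototype);

    proto.createdCallback = function() {
      var src = this.getAttribute("src");
      // fetch the SVG source (through the shared cache) and inject it into the element
      loadSVG(src).then(function(svg) {
        this.appendChild(svg);
      }.bind(this));
    };

    document.registerElement("svg-map", { prototype: proto });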

From the outside, however, you don't need to know any of that. Instead, the interface is simple:

  1. Write a map element into the page, with a src attribute pointing to the SVG file you want to load. Put your tooltip template inside the tag.
  2. Attach callbacks to the element's ready promise to run code once the image is on the page.
  3. Use the eachPath method on the element to do painting, and set the onhover callback to pass in data for templating in the tooltip.
Using these maps, in other words, is basically just like using a regular element, if regular elements had a DOM API that wasn't written by psychopaths. All their complexity is tucked away inside, and what they present externally is clean, simple, and self-contained. The map element does 80% of what Pro Publica's Landline does, and I'd argue it does it better.
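Concretely, using the element looks something like this — the attribute and method names come from the list above, but the exact signatures and the results data are my invention:

    <svg-map src="washington-counties.svg">
      <div class="tooltip">{{county}}: {{votes}}</div>
    </svg-map>

    <script>
    var map = document.querySelector("svg-map");
    map.ready.then(function() {
      // paint each county path, then wire up tooltip data for hovers
      map.eachPath(function(path) {
        path.setAttribute("class", results[path.id] > 0.5 ? "yes" : "no");
      });
      map.onhover = function(path) {
        return { county: path.id, votes: results[path.id] };
      };
    });
    </script>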

As a developer, I'm really excited by the potential of these new custom elements. Although I had used them at ArenaNet for building the new Guild Wars 2 trading post, those were used to create tight integration with the in-game interface, and only needed to work in a single browser. This is the first time I've used them in a wider ecosystem, and they worked like a charm.

But as a library consumer, and particularly as a harried newsroom dev, I think web components have a tremendous potential to make complex behavior way easier to build and train for. Take the aforementioned Landline, for example: wouldn't it be nice to simply include a script tag (or an HTML import) and then be able to write <landline-map> tags into the page, with an attribute pointing to a CSV or a Google Sheet containing the necessary data? Or consider Pym, NPR's responsive iframe library that's so great I forked and rewrote big chunks of it. Right now, using Pym on the parent page requires including the script, adding a dummy element, and then initializing the script — why shouldn't it just be <pym-embed> instead?

Distributing libraries not as modules or loose scripts, but as chunks of new HTML functionality, has the potential to radically change how we create new content on the web in the future. Newsrooms, which are always under pressure and often consume "pre-made" tools for interactive elements like timelines and galleries, are a perfect use-case for Web Components. After this election experience, I'm planning to lean heavily on them whenever possible, and I'm hoping other people will as well.

September 2, 2014

Filed under: journalism»new_media

Fanning the Flames

Over the weekend we soft-launched our Seahawks Fan Map project. It's a follow-up on last year's model, which was built on a Google Fusion Table map. The new one is better in almost every way: it lets you locate yourself based on GPS, provides autocomplete for favorite players, and clusters markers instead of throwing 3,000 people directly onto the map. It's also built using a couple of interesting technical choices: Google Apps Script and "web components" in jQuery.

Apps Script

Like my other news apps, the fan map is a static JavaScript application built on our news template. It ships all its data and code up to S3, and has no dynamic component. So how do we add new people to the map, if there's no server backing it up? When you fill out the fan form, where does your data go?

The answer, as with many newsrooms, is heavy use of Google Sheets as an ad-hoc CMS. I recently added the ability for our news app scaffolding to pull from Sheets and cache the data locally as JSON, which is then available to the templating and build tasks. Once every few minutes, a cron job runs on a machine in our newsroom, grabs the latest data, and uploads a fresh copy to the cloud. Anyone can use a spreadsheet, so it's easy for editors and writers to update the data or mark a row as "approved," "featured," or "blocked."

Getting data from the form into the sheet is a more interesting answer. Last year's map embedded a Google Forms page, which is the source of many of its UI sins: they can't be styled, they don't offer any advanced form elements, and they can't be made responsive. Nobody really likes Google Forms, but they're convenient, so people use them all the time. We started from that point on this project, but kept running into features that we really wanted (particularly browser geolocation) that they wouldn't support, so I went looking for alternatives.

Most people use Google Apps Script as a JavaScript equivalent for Excel macros, but it has a little-known and extremely useful feature: a script can be published as a "web app" if it exposes a doGet() function to handle incoming requests. From that function, it can use any of the existing Apps Script APIs, including access to spreadsheets and (even better) the Maps geocoder. The resulting endpoint isn't fully CORS-compliant, but it's good enough for JSONP, making it possible to write a custom form and still submit to Sheets for storage. I've posted a sample of our handler code in this gist.
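The handler in that gist boils down to something like this — a condensed sketch with the validation stripped out; the spreadsheet ID and field names are placeholders, while LockService, Maps, SpreadsheetApp, and ContentService are all standard Apps Script services:

    // Apps Script "web app" endpoint: geocode the submission, append it to the sheet,
    // and answer as JSONP so a custom form on another domain can call it
    function doGet(e) {
      var lock = LockService.getPublicLock();
      lock.waitLock(30000); // serialize writes so simultaneous users don't clobber rows

      var geocoded = Maps.newGeocoder().geocode(e.parameter.address);
      var coords = geocoded.results[0].geometry.location;

      var sheet = SpreadsheetApp.openById("SPREADSHEET_ID").getActiveSheet();
      sheet.appendRow([e.parameter.name, e.parameter.address, coords.lat, coords.lng]);

      lock.releaseLock();

      return ContentService
        .createTextOutput(e.parameter.callback + "(" + JSON.stringify({ ok: true }) + ")")
        .setMimeType(ContentService.MimeType.JAVASCRIPT);
    }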

Combining the web endpoint for Apps Script with our own custom form gave us the best of both worlds. I could write a form that had pretty styling, geolocation, autocomplete, and validation, but it could still go through the same Google Docs workflow that our newsroom likes. Through the API, I could even handle geocoding during form submission, instead of writing a separate build step. The speed isn't great, but it's not bad either: most of the request time is spent getting a lock on the spreadsheet to keep simultaneous users from overwriting each other's rows. Compared to setting up (and securing) a server backend, I've been very happy with it, and we'll definitely be using this for other news apps in the future.

Web jQuery Components

I'm a huge fan of Web Components and the polyfills built on top of them, such as Polymer and Angular. But for this project, which does not involve putting data directly into the DOM (it's all filtered through Leaflet), Angular seemed like overkill. I decided that I'd try to use old-school technology, with jQuery and ICanHaz templates, but packaged in a component-like way. I used AMD to wrap each component up into a module, and dropped them into the markup as classes attached to placeholder elements.

The result, I think, is mixed. You can definitely build components using jQuery — indeed, I'm very happy with how readable and clean these modules are compared to the average jQuery library — but it's not particularly well-suited for the task. The resulting elements aren't very well encapsulated, don't respond to attribute values or changes, and must manually handle data binding and events in a way that Polymer and Angular safely abstract away. Building those capabilities myself, instead of just using a library that provides them, doesn't make much sense. If I were starting over (or as I consider the additional work we'll do on this map), it's very tempting to switch out my jQuery components for Angular directives or Mozilla's X-Tags.

That said, I'm glad I gave it a shot. And if you can't (or are reluctant to) switch away from jQuery, I'd recommend the following strategies, sketched in code after the list:

  • Use a build process like AMD or Browserify that lets you write your element templates as .html files and bundle them into your components, then load those modules from your main scripts.
  • Delegate your event listeners, which forces you to write self-contained code that can handle re-templating, instead of attaching them directly.
  • Only communicate with external code via the data- attributes, in order to enforce encapsulation. Data attributes will automatically populate jQuery's internal storage, so they're a great way to feed your state at startup, but they're not bound two-way: you'll have to update them manually to reflect internal changes.
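Put together, a component module following those rules might look like this (a schematic example — the template, selectors, and data keys are invented):

    // AMD module for a self-contained jQuery "component"
    define(["jquery", "ich", "text!./fan-card.html"], function($, ich, template) {

      // register the bundled .html file as an ICanHaz template
      ich.addTemplate("fanCard", template);

      return function FanCard(element) {
        var $el = $(element);

        // seed state from data- attributes at startup
        var state = { player: $el.data("player") };

        function render() {
          $el.html(ich.fanCard(state));
          // data- attributes aren't bound two-way: reflect internal changes manually
          $el.attr("data-player", state.player);
        }

        // a delegated listener survives re-templating, since it's bound to the container
        $el.on("click", ".player-name", function() {
          state.player = $(this).text();
          render();
        });

        render();
      };
    });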

Still to come

The map you see today is only the first version — this is one of the few news projects I plan to maintain over an extended period. As we get more information, we'll add shaded layers for some of the extra questions asked on the form, so that you can see the average fan "lifespan" per state, or find out which players are favorites in countries around the world. We'll also feature people with great Seahawks stories, giving them starred icons on the map that are always displayed. And we'll use the optional contact info to reach out to a "fan of the week," making this both a fun interactive and a great reporting tool. I hope you enjoy the map, and if you're a Seahawks fan, I'll see you there!

August 4, 2014

Filed under: journalism»new_media

Angular Momentum

This week, my interactive work for the Seattle Times examines the bidding wars that are part and parcel of being one of the fastest-growing cities in the country. It's got everything you need to be horrified by your local real estate market: high prices, short days-on-market, and a search function to see how dire it is next door. It is also the third or fourth interactive that I've built with Angular this year (source code here). There aren't a lot of people building news apps with Angular, which I find amazing: if your goal is to surface data on a deadline, I'd argue it's the best option out there.

Let's review what Angular brings to the table. At the most basic level, it's a library for doing two things:

  • Data-binding: Any data that you attach to the Angular scope will be used to update the page, and vice-versa — if users change the page (via form elements or other inputs), it'll automatically update your data.
  • Custom HTML: Angular directives let you create new HTML elements and behaviors, so that instead of loading a plugin for an auto-completion input, you can just write <auto-complete>.

As a news developer, this means that building visualizations based on data can be incredibly fast: you attach it to the scope, annotate your HTML, and you're all set. A smart, sortable table is less than 100 lines of code, and uses regular JavaScript objects instead of "collections" or other heavy classes. Meanwhile, directives keep your HTML clean and free of "div soup," and the use of "virtual DOM" means that updates are incredibly fast.
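For a sense of scale, here's the skeleton of that kind of sortable table — a minimal sketch, assuming Angular 1.x with ng-app="bidding" on a parent element; the module, controller, and data names are invented:

    <table ng-controller="HomesController">
      <thead>
        <tr>
          <!-- clicking a header sets the sort key; the orderBy filter does the rest -->
          <th ng-click="sortBy('address')">Address</th>
          <th ng-click="sortBy('price')">Sale price</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="home in homes | orderBy:sortKey:reverse">
          <td>{{ home.address }}</td>
          <td>{{ home.price | currency }}</td>
        </tr>
      </tbody>
    </table>

    angular.module("bidding", []).controller("HomesController", function($scope) {
      $scope.homes = homeData; // plain JavaScript objects, no special collection classes
      $scope.sortBy = function(key) {
        $scope.reverse = ($scope.sortKey == key) && !$scope.reverse;
        $scope.sortKey = key;
      };
    });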

By contrast, when I look at code written in D3 (seemingly the most popular library for doing news visualizations), I see an entirely different set of priorities:

  • Elements and styles are written in the JavaScript code, instead of in the HTML/CSS where you'd expect them to be.
  • Values are loaded into the page via lengthy chained functional expressions, which makes them harder to inspect than the Angular scope (see the example after this list).
  • Performance in non-Chrome browsers tends to be terrible, because D3 spends a huge amount of time rewriting the DOM directly instead of batching its changes and abstracting the document away (and because of SVG, which is a dog in Firefox and IE).
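For contrast, here's a trivial bar chart in idiomatic D3 v3, where the markup and styling live entirely inside one long chain (the data is invented):

    var data = [4, 8, 15, 16, 23, 42];

    d3.select("body").append("svg")
        .attr("width", 420)
        .attr("height", data.length * 20)
      .selectAll("rect")
        .data(data)
      .enter().append("rect")
        .attr("y", function(d, i) { return i * 20; })
        .attr("width", function(d) { return d * 10; })
        .attr("height", 18)
        .style("fill", "steelblue");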

After years of debugging spaghetti code in jQuery, this design seems both familiar and ominous, particularly the lack of templating and the long call chains. I've written my fair share of apps this way, and they tend to sprawl out into an unstructured, unmaintainable mess. That may not be a problem for the New York Times, which has more budgetary and development resources than I'll ever have. But as the (for now) only developer in the Seattle Times newsroom, I need to be able to respond instantly to feedback from designers, editors, and reporters. One of my favorite things to hear is "we didn't expect a change so fast!" Angular gives me the agility I need to iterate rapidly, try things out, and discard what doesn't work in favor of what does.

Speed and structure are good reasons to use Angular in a newsroom, but there's another, less obvious incentive. Angular is basically training wheels for Web Components: although it lacks the Shadow DOM, it includes equivalents for custom elements and HTML imports. It's a short hop from Angular to libraries like Polymer, and from there to a whole world of deadline-friendly tooling and reuse. Make no mistake, this is the future of web development, and it can't get here soon enough: I'd love to be able to simply send off an <interactive-feature> tag to the web producers, and I imagine they'd appreciate it too. The Google Web Components tags would be a similar godsend.

For me, this makes using Angular a no-brainer. It's fast, it's effective, it's great for visualizations, and it's forward-thinking. It shocks me that more people haven't seen its advantages — but then, given the way that most newsroom hackers seem to think of the browser as "that embarrassing thing that loads my server code," it probably shouldn't be surprising.
