this space intentionally left blank

November 5, 2014

Filed under: journalism»new_media

Election Elements

You might have heard that there was an election this last week. Like every news organization, The Seattle Times had a live results page, powered by a Node-based scraper. It did pretty well: we had no glitches with pulling results, and the response has been solid. It also generated the source data for the print edition. Oh, and we put bunting on the front page, which is not something you get to do every day.

Behind the scenes, however, that results page has another interesting feature: as far as I'm aware, it's the first use of Web Components (at least, the custom elements part) in production by a news organization. Each of the Washington maps on the page is a custom-built <svg-map> element, which handles loading the image document and provides a set of convenience methods for manipulating the map once it's available.

SVG is one of those technologies that I really want to like, but that has always been a total pain to actually use. It's an annoying format to author, it doesn't seem to actually save any space compared to bitmap images, and it has a ton of edge cases even in "standard" browsers (for example, Chrome will forget the state of an SVG document inside an object tag if that tag or its parents are set to display: none). Wrapping it up in a component that would manage its lifecycle and quirks for me just seemed like a no-brainer.

To create the component, I used Andrea Giammarchi's registerElement() shim instead of Polymer's polyfill layer — Giammarchi's script only shims the custom element portion of Web Components, but it works all the way back to IE9 and (more importantly) is only 2KB. On top of that, I used RSVP.js to create a quick shared cache for SVG source documents, ICanHaz for my templating, and a custom module called Savage to do SVG class/style manipulation.
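The shared cache is the simplest part of that stack. Here's a rough sketch of the idea, assuming only that RSVP.js is loaded; the function and variable names are mine for illustration, not the actual module's:

    var cache = {};
    var getSVG = function(url) {
      // each URL gets a single shared promise for its request
      if (!cache[url]) {
        cache[url] = new RSVP.Promise(function(resolve, reject) {
          var xhr = new XMLHttpRequest();
          xhr.open("GET", url);
          xhr.responseType = "document"; // parse the SVG into a DOM for us
          xhr.onload = function() { resolve(xhr.response) };
          xhr.onerror = reject;
          xhr.send();
        });
      }
      return cache[url];
    };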

From the outside, however, you don't need to know any of that. Instead, the interface is simple:

  1. Write a map element into the page, with a src attribute pointing to the SVG file you want to load. Put your tooltip template inside the tag.
  2. Attach callbacks to the element's ready promise to run code once the image is on the page.
  3. Use the eachPath method on the element to do painting, and set the onhover callback to pass in data for templating in the tooltip (see the sketch below).
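In practice, that looks something like the following sketch. The markup and method calls follow the description above, but the template syntax and the leader() and tally() data lookups are invented for illustration:

    <svg-map src="washington-counties.svg">
      <!-- tooltip template lives inside the tag -->
      <p class="tooltip-line">{{county}}: {{votes}} votes</p>
    </svg-map>

    var map = document.querySelector("svg-map");
    map.ready.then(function() {
      // paint each county once the SVG document is in the page
      map.eachPath(function(path) {
        path.setAttribute("class", "county " + leader(path.id));
      });
      // hand data back to be templated into the tooltip
      map.onhover = function(path) {
        return { county: path.id, votes: tally(path.id) };
      };
    });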

Using these maps, in other words, is basically just like using a regular element, if regular elements had a DOM API that wasn't written by psychopaths. All their complexity is tucked away inside, and what they present externally is clean, simple, and self-contained. The map element does 80% of what ProPublica's Landline does, and I'd argue it does it better.

As a developer, I'm really excited by the potential of these new custom elements. Although I had used them at ArenaNet for building the new Guild Wars 2 trading post, those were used to create tight integration with the in-game interface, and only needed to work in a single browser. This is the first time I've used them in a wider ecosystem, and they worked like a charm.

But as a library consumer, and particularly as a harried newsroom dev, I think web components have tremendous potential to make complex behavior way easier to build and train people on. Take the aforementioned Landline, for example: wouldn't it be nice to simply include a script tag (or an HTML import) and then be able to write <landline-map> tags into the page, with an attribute pointing to a CSV or a Google Sheet containing the necessary data? Or consider Pym, NPR's responsive iframe library that's so great I forked and rewrote big chunks of it. Right now, using Pym on the parent page requires including the script, adding a dummy element, and then initializing the script — why shouldn't it just be <pym-embed> instead?

Distributing libraries not as modules or loose scripts, but as chunks of new HTML functionality, has the potential to radically change how we create new content on the web in the future. Newsrooms, which are always under pressure and often consume "pre-made" tools for interactive elements like timelines and galleries, are a perfect use-case for Web Components. After this election experience, I'm planning to lean heavily on them whenever possible, and I'm hoping other people will as well.

October 21, 2014

Filed under: journalism»investigation

Loaded with lead

I'm very proud to say that "Loaded with lead," a Seattle Times investigation into the ways that gun ranges poison their customers and workers, went live this weekend. I worked on all four interactives for this project, as well as doing the header design and various special effects. We'll have a post up soon on the developer blog about those headers, but what I'd like to talk about today is one particular graphic — specifically, the string-of-pearls chart from part 2.

The data underlying the pearl chart is a set of almost 300 blood tests. These are not all tests taken by range workers in Washington, just the ones that had to be reported after exceeding the safe threshold of 10 micrograms per deciliter. Although we know who some of the tested workers are, most of them are identified only by an anonymous patient ID and the name of their employer. My first impulse was to simply toss the data into a scatter chart, but as is often the case, that first impulse proved ill-advised:

  • Although the tests span a ten-year period, for any given employer or employee they tend to cluster in a much narrower timeframe, which makes it tough to look at any series without either losing the wider context or making it impossible to see the individual tests.
  • There aren't that many workers, or even many ranges, with lots of test results. It's hard to draw a trend when the filtered dataset might be composed of only a handful of points. And since within a range the tests might be from one worker or from many, they can't really be meaningfully compared.
  • Since these tests are only of workers that exceeded the safe limit, even those trends that can be graphed do not tell a good story visually: they usually show a high exposure, followed by gradually lowered lead levels. The impression given is that gun ranges are becoming safer, but the truth is that workers with hazardous blood lead levels undergo treatment and may be removed from the high-lead environment, resulting in lowered test results but not necessarily a lead-free workplace. It's one of those graphs that's "technically correct," but is actually misleading.

Talking with reporters, what emerged was that the time dimension was not really important to this dataset. What was important was to show that there was a repeated pattern of negligence: that these ranges posted high numbers repeatedly, over long periods of time (in several cases, more than five years). Once we discard a strict time axis, a lot more interesting options open up to us for data visualization.

One way to handle this would be with a traditional box and whiskers plot, which shows the median and variation within a statistical set. Unfortunately, box plots are also wonky and weird-looking for most readers, who are not statisticians and would not know a quartile if it offered them a grilled cheese sandwich. So one prototype stripped the box plot down to its simplest form — probably too simple: I rendered a bar spanning the lowest to highest test result for each gun range, with individual test results marked as lines inside that bar.

This version of the plot was visually interesting, but it had flaws. It made it easy to see the general level of blood tests found at each range, and to compare gun ranges against each other, but it didn't show concentration. Since a single tick mark was shown within the bar no matter how many test results fell at a given level, there was little visual difference between two employers with the same range of test results, even if one employer's results clustered at the top of that range and the other's at the bottom. We needed a way to show not only the level, but also the distribution, of results.

Given that the chart was already basically a number line, with a bar drawn from the lowest to the highest test result, I removed the bar and replaced the tick marks with circles that were sized to match the number of test results at each amount. Essentially, this is a histogram, but I liked the way that the circles overlapped to create "blobs" around areas of common test results. You can immediately see where most of the tests fall for each employer, but you don't lose sight of the overall picture (which in some cases, like the contractors working outside of a ventilation hood at Wade's, can be horrific — almost three times the amount considered dangerous by the CDC). I'm not aware of anyone else who's done this kind of chart before, but it seems too simple for me to be the first to think of it.
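If you're curious, the sizing logic behind those circles fits in a few lines. This sketch assumes an array of test values; the drawing helpers (drawCircle, xScale) and the constants are invented stand-ins, not the chart's actual code:

    var bins = {};
    tests.forEach(function(result) {
      var level = Math.round(result); // micrograms per deciliter
      bins[level] = (bins[level] || 0) + 1;
    });
    for (var level in bins) {
      // scale by the square root, so circle area (not radius) tracks count
      var radius = baseRadius * Math.sqrt(bins[level]);
      drawCircle(xScale(level), rowY, radius);
    }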

I'd like to take a moment here to observe that pretty much all data visualization comes down to translating information into a form that our visual systems are evolved to quickly understand. There's a great post on how that translation functions here, with illustrations that show where each arrangement sits on a spectrum of perceived accuracy and meaning. It's not rocket science, but I think it's a helpful perspective: I'm just trying to trick your visual cortex into absorbing a table's worth of data at a glance.

But what I've been trying to stress in the newsroom from this example is less technical, and more about how much effective digital journalism comes from the simple process of iteration and self-evaluation. We shouldn't expect to come up with a brilliant interactive on the first try every time, or even any of the time. I think the string-of-pearls is a great example of that, going from a visualization that was confusing and overly broad to a more focused graphic statement, thanks to a lot of evolution and brainstorming. It was exhausting work, but it's become my favorite of the four visualizations for this project, and I'm looking forward to tweaking it for future stories.

October 8, 2014

Filed under: culture»internet

Serious (Pony) Business

There's nothing I'm going to write this week that's as moving, as horrifying, or as important as Kathy Sierra's take on trolls and being trolled. Sierra, who was seriously harassed and abused by Andrew "weev" Auernheimer, writes about how awful this harassment was, how common it is, and how devastating it was to see Auernheimer's history of sociopathic trolling dismissed after his (admittedly flawed) prosecution for "hacking" AT&T.

But you all know what happened next. Something something something horrifically unfair government case against him and just like that, he becomes tech's "hacktivist hero." He now had A Platform not just in the hacker/troll world but in the broader tech community I was part of. And we're not just talking stories and interviews in Tech Crunch and HuffPo (and everywhere else), but his own essays in those publications. A tech industry award. His status was elevated, his reach was broadened. And for reasons I will never understand, he suddenly had gained not just status and Important Friends, but also "credibility".

Did not see that coming.

But hard as I tried to find a ray of hope that the case against him was, somehow, justified and that he deserved, somehow, to be in prison for this, oh god I could not find it. I could not escape my own realization that the case against him was wrong. So wrong. And not just wrong, but wrong in a way that puts us all at risk. I wasn't just angry about the injustice of his case, I had even begun to feel sorry for him. Him. The guy who hates me for lulz. Guy who nearly ruined my life. But somehow, even I had started to buy into his PR. That's just how good the spin was. Even I mistook the sociopath for a misunderstood outcast. Which, I mean, I actually knew better.

And of course I said nothing until his case was prosecuted and he’d been convicted, and there was no longer anything I could possibly do to hurt his case. A small group of people — including several of his other personal victims (who I cannot name, obviously) asked me to write to the judge before his sentencing, to throw my weight/story into the "more reasons why weev should be sent to prison". I did not. Last time, for the record, I did NOTHING but support weev’s case, and did not speak out until after he’d been convicted.

But the side-effect of so many good people supporting his case was that more and more people in tech came to also... like him. And they all seemed to think that it was All Good as long as they punctuated each article with the obligatory "sure, he’s an ass" or "and yes, he's a troll" or "he's known for offending people" (which are, for most men, compliments). In other words, they took the Worst Possible Person, as one headline read, and still managed to reposition him as merely a prankster, a trickster, a rascal. And who doesn’t like a "lovable scoundrel"?

The whole post is well worth reading before Sierra possibly takes it offline. And who could blame her? Even as she was writing it, the #gamergate movement on Twitter was literally harassing women out of their homes, while still insisting (in an echo of weev's "hacktivism") that it's all about "ethics in journalism."

For most, that's an obvious smokescreen — a way to cloak their behavior in respectability while planning their next attack. For those who claim to actually believe it, that's a declaration that writing about videogames is more important than hurting women in the worst possible ways. And there's the problem: you don't get to claim the moral high ground when you got there through the pain and suffering of other people.

October 3, 2014

Filed under: tech»web

WebGL and beyond

We came very close to using WebGL for a Seattle Times special report that will come out next week. Now that iOS 8 has shipped with support for WebGL, albeit in an unstable and slightly buggy form, it's common enough that I felt comfortable using it (with a scaled-down 2D fallback) for our audience. In the end, we went with a different design language and shelved the WebGL experiments, but the experience has left me very excited about the potential for mainstream usage.

It's probably easier to understand why WebGL is exciting by looking at what the regular 2D canvas does badly. 2D canvas is terrible at combining or masking its rendering functions (globalCompositeOperation in particular is dog-slow in Firefox). It doesn't give users easy access to the image data directly, which is frustrating in a bitmap-based drawing API. It doesn't like changing colors or styles frequently. But its biggest weak point is actually that it's tied so closely to JavaScript, which is a single-threaded language running in the browser event loop. The more pixels you touch in detail, the slower it gets.

WebGL, by contrast, is great at blending, filtering, and masking. But most importantly, WebGL code moves most (if not all) of the math-heavy graphics code you'd normally write in JavaScript — scaling and transforms, patterns, and color — over to the GPU. Your graphics card is a massive parallel-processing machine, so all your drawing occurs simultaneously, not sequentially. You can alter every pixel in the frame, if you want, and it'll barely take any more time than if you change only a few.

Once I spent a little time writing some simple shaders, I realized that there's a whole range of experiences you can write in WebGL that simply aren't possible on a 2D canvas. I could shift the colors or custom-filter an image on a per-pixel basis. I wrote a dust simulation that animated thousands of motes on a low-end machine, even with the physics still running in JavaScript. I even created a faux-3D effect a la Depthy, by displacing each pixel by the value of a second texture's lightness and the mouse position.
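To give a sense of scale: a per-pixel color shift is only a few lines of fragment shader (GLSL), and it runs simultaneously for every pixel on screen. This is a sketch in the spirit of those experiments, with invented uniform names, not the production code:

    precision mediump float;
    uniform sampler2D u_image; // the source texture
    uniform float u_shift;     // 0 = untouched, 1 = fully swizzled channels
    varying vec2 v_texCoord;

    void main() {
      vec4 color = texture2D(u_image, v_texCoord);
      // blend between the original color and a channel-rotated version
      gl_FragColor = vec4(mix(color.rgb, color.gbr, u_shift), color.a);
    }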

None of these experiments involved 3D math in any way. They're not spinning teapots, or Unreal Engine demos, or elaborate parallax effects. I suspect that the real value of WebGL isn't going to be from any of those things. It's going to be the fact that it gives the web platform the free-drawing capability of canvas, but uncoupled from the JavaScript execution model that it's been shackled to.

There's an obvious parallel here, which is the first two major versions of Android. Because it was designed to run on low-end hardware, Android drew all its UI via software until 3.0 (and hardware acceleration didn't become widespread until 4.0). The resulting lag was never as bad as critics claimed, but it did mean that a lot of Android looked and felt a bit utilitarian. You wouldn't see something like Material Design emerge until the system supported using the GPU for rendering ordinary UI.

It's not a coincidence that Google's moving to Material Design on both Android and the web. Its design language — a smoothly-animated world of flat, geometric shapes — is attractive, but more importantly it's well-matched to the kinds of flat, geometric shapes that can be animated fluidly in a browser, using the 3D acceleration that's already built into the composition layer. Web Components will give developers a way to package those elements up, and make them reusable. Flexbox makes their layouts scalable and responsive.

But for the web platform to move forward, we need more than just a decent look and feel. We need the ability to write the kinds of applications that people insist that it can't run. WebGL is a step in that direction: graphics with near-native speed and capability, instantly deployed and paired with a surprisingly powerful UI toolkit. The kinds of apps and experiences we can write on the web, for a mainstream and mobile audience, just got a lot bigger. And I for one am looking forward to pushing those boundaries as much as I can.

September 23, 2014

Filed under: gaming»media

Press Play

I have started, and then failed to finish, three posts on the #GamerGate nonsense, in which a gang of misogynists led by 4Chan have attempted to hound Anita Sarkeesian and Zoe Quinn off the Internet for daring to be women with opinions about video games. There's very little insightful you can say about this, because they don't have any real arguments for someone to engage, and also because they're dumb as toast. At some point, however, someone decided that they'd use "ethics in journalism" as a catchphrase for their trolling.

What they mean by that is anyone's guess. This Vox explainer does its best to extract an explanation, but other than some noise about "objectivity," there aren't any concrete demands, and the links to various arguments are hilariously silly. One claims that there's a difference between "journalist" and "blogger" based on some vague measure of competence (read: the degree to which you agree with it), which veterans of the "blogger ethics panel" meme circa 2005 will enjoy. Frankly, the #GamerGate movement's concept of journalism is itself pretty fuzzy, and tough to debate. As a journalist with actual newsroom experience, I think there are a few things that we should clear up.

Game journalism, isn't

Real journalists make phone calls. They dig into stories, find other viewpoints, and perform fact-checking. It's not glamorous work, which may be why reporters are so prone to self-mythologizing (a tendency I'm not immune to), but it is hard and often tedious. It's also, in its best moments, confrontational. There's an old saying: journalism is publishing what someone doesn't want to be published — everything else is just public relations. Some of that is just more myth-making, but it's also true.

When's the last time you read "gaming news" that had multiple sources? That actively investigated wrongdoing in the industry? That had something critical to say about more than how a single game played? You can probably think of exceptions, but that's what they are: exceptions. The vast majority of what gamers call "journalism" isn't anything like real reporting.

This isn't unusual, or even wrong. It's pretty typical for trade press, particularly in the entertainment industry. After all, there's only so combative you can be when you're dependent on cooperation with game studios and publishers in order to have anything to write about. I don't expect hard-hitting investigations from Bass Player or The A.V. Club either. It's not journalism, but it still has value.

#GamerGate doesn't understand the difference

Unfortunately, as far as anyone can tell, the cries of "ethics in journalism" actually translate to a desire for more press releases and PR, like in the halcyon days of Nintendo Power. That's probably comforting for a lot of people, because PR is inherently more comforting than critical thought, but it means what they want and what they claim they want are very different things.

There's a kind of irony in calling for "objectivity" in gaming press, where the dominant mode of writing is through previews and reviews. People making this call aren't asking for actual objectivity, because that wouldn't make sense — what's an "objective" review? One that can definitively state that yes, the game exists? It's a code word for "tell me about the graphics, and the genre, and leave any pesky context out of it."

That isn't much of a review, frankly. It's the kind of thinking that gives four stars to Triumph of the Will because the cinematography is groundbreaking, no matter what the content might have been. Incidentally, the author of the definitive — if satirical — Objective Game Reviews site has a really nice post about this.

We have met the enemy, and he is us

Here's what's funny about #GamerGate and its muddled, incoherent demands for journalism: by all accounts, the person who actually meets a lot of their criteria is none other than Anita Sarkeesian, the person they loathe the most. I mean, think about it:
  • About half of her videos are just unaltered clips from games — that's what makes them so powerful, since they speak for themselves. And what's more objective than plain footage?
  • She states her biases right out of the gate. The YouTube channel is called "Feminist Frequency," so people know what they're getting into.
  • She's funded by people who paid for her Kickstarter, not gaming companies or PR firms. There's no corporate influence.

It's exactly what they're asking for! And they hate it! It's almost as though, protests to the contrary, this isn't about journalism at all. As if there's actually an agenda being pushed that's more about forcing women and alternative viewpoints out. Imagine that.

September 2, 2014

Filed under: journalism»new_media

Fanning the Flames

Over the weekend we soft-launched our Seahawks Fan Map project. It's a follow-up on last year's model, which was built on a Google Fusion Table map. The new one is better in almost every way: it lets you locate yourself based on GPS, provides autocomplete for favorite players, and clusters markers instead of throwing 3,000 people directly onto the map. It's also built using a couple of interesting technical choices: Google Apps Script and "web components" in jQuery.

Apps Script

Like my other news apps, the fan map is a static JavaScript application built on our news template. It ships all its data and code up to S3, and has no dynamic component. So how do we add new people to the map, if there's no server backing it up? When you fill out the fan form, where does your data go?

The answer, as with many newsrooms, is heavy use of Google Sheets as an ad-hoc CMS. I recently added the ability for our news app scaffolding to pull from Sheets and cache the data locally as JSON, which is then available to the templating and build tasks. Once every few minutes, a cron job runs on a machine in our newsroom, grabs the latest data, and uploads a fresh copy to the cloud. Anyone can use a spreadsheet, so it's easy for editors and writers to update the data or mark a row as "approved," "featured," or "blocked."

Getting data from the form into the sheet has a more interesting answer. Last year's map embedded a Google Forms page, which is the source of many of its UI sins: those forms can't be styled, they don't offer any advanced form elements, and they can't be made responsive. Nobody really likes Google Forms, but they're convenient, so people use them all the time. We started from that point on this project, but kept running into features that we really wanted (particularly browser geolocation) that they wouldn't support, so I went looking for alternatives.

Most people use Google Apps Script as a JavaScript equivalent for Excel macros, but they have a little-known but extremely useful feature: an Apps Script can be published as a "web app" if it has a doGet() function to handle incoming requests. From that function, it can use any of the existing Apps Script APIs, including access to spreadsheets and (even better) the Maps geocoder. The resulting endpoint isn't fully CORS-compliant, but it's good enough for JSONP, making it possible to write a custom form and still submit to Sheets for storage. I've posted a sample of our handler code in this gist.
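The shape of the handler ends up looking roughly like this. It's a condensed, hypothetical version (the real code is in the gist above, and the sheet ID and column layout here are placeholders):

    function doGet(e) {
      var lock = LockService.getScriptLock();
      lock.waitLock(10000); // keep simultaneous submissions from colliding
      var sheet = SpreadsheetApp.openById("SHEET_ID").getSheetByName("Fans");
      // geocode on the way in, instead of in a separate build step
      var geo = Maps.newGeocoder().geocode(e.parameter.address);
      var spot = geo.results[0].geometry.location;
      sheet.appendRow([e.parameter.name, e.parameter.address, spot.lat, spot.lng]);
      lock.releaseLock();
      // no CORS, but JSONP works: wrap the response in the caller's callback
      var body = e.parameter.callback + "(" + JSON.stringify({status: "ok"}) + ")";
      return ContentService.createTextOutput(body)
        .setMimeType(ContentService.MimeType.JAVASCRIPT);
    }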

Combining the web endpoint for Apps Script with our own custom form gave us the best of both worlds. I could write a form that had pretty styling, geolocation, autocomplete, and validation, but it could still go through the same Google Docs workflow that our newsroom likes. Through the API, I could even handle geocoding during form submission, instead of writing a separate build step. The speed isn't great, but it's not bad either: most of the request time is spent getting a lock on the spreadsheet to keep simultaneous users from overwriting each other's rows. Compared to setting up (and securing) a server backend, I've been very happy with it, and we'll definitely be using this for other news apps in the future.

Web jQuery Components

I'm a huge fan of Web Components and the libraries built on their ideas, such as Polymer and Angular. But for this project, which does not involve putting data directly into the DOM (it's all filtered through Leaflet), Angular seemed like overkill. I decided that I'd try to use old-school technology, with jQuery and ICanHaz templates, but packaged in a component-like way. I used AMD to wrap each component up into a module, and dropped them into the markup as classes attached to placeholder elements.

The result, I think, is mixed. You can definitely build components using jQuery — indeed, I'm very happy with how readable and clean these modules are compared to the average jQuery library — but it's not particularly well-suited for the task. The resulting elements aren't very well encapsulated, don't respond to attribute values or changes, and must manually handle data binding and events in a way that Polymer and Angular safely abstract away. Building those capabilities myself, instead of just using a library that provides them, doesn't make much sense. If I were starting over (or as I consider the additional work we'll do on this map), it's very tempting to switch out my jQuery components for Angular directives or Mozilla's X-Tags.

That said, I'm glad I gave it a shot. And if you can't (or are reluctant to) switch away from jQuery, I'd recommend the following strategies:

  • Use a module system like AMD or Browserify that lets you write your element templates as .html files and bundle them into your components, then load those modules from your main scripts.
  • Delegate your event listeners, which forces you to write self-contained code that can handle re-templating, instead of attaching them directly (see the sketch after this list).
  • Only communicate with external code via the data- attributes, in order to enforce encapsulation. Data attributes will automatically populate jQuery's internal storage, so they're a great way to feed your state at startup, but they're not bound two-way: you'll have to update them manually to reflect internal changes.
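Put together, those strategies look something like this sketch, where the selectors and the "mapForm" ICanHaz template are hypothetical:

    var element = $(".fan-form");
    // initial state arrives through data-state="..." on the placeholder
    var state = element.data("state");
    // the delegated handler is bound to the component root, so it survives
    // re-templating of everything inside
    element.on("click", ".submit", function() {
      state.submitted = true;
      element.html(ich.mapForm(state)); // re-render from the template
      // data- attributes aren't bound two-way: reflect changes manually
      element.attr("data-state", JSON.stringify(state));
    });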

Still to come

The map you see today is only the first version — this is one of the few news projects I plan to maintain over an extended period. As we get more information, we'll add shaded layers for some of the extra questions asked on the form, so that you can see the average fan "lifespan" per state, or find out which players are favorites in countries around the world. We'll also feature people with great Seahawks stories, giving them starred icons on the map that are always displayed. And we'll use the optional contact info to reach out to a "fan of the week," making this both a fun interactive and a great reporting tool. I hope you enjoy the map, and if you're a Seahawks fan, I'll see you there!

August 28, 2014

Filed under: tech»education

Process Over Programs

This fall, I'll be teaching ITC 210 at Seattle Central College, which is the capstone class of the web development program there. It's taught as a combined class with WEB 210 (the designer's capstone). The last time I taught this course, it didn't go particularly well: although the goal is for students to implement a WordPress site for a real-world client, many of them weren't actually that experienced with the technology.

More importantly, they had never been taught any of the development methods that let teams work together efficiently. I suggested some of the basics — using source control, setting tasks, and using a "waterfall" structure — but I didn't require them, which was a mistake. Under pressure, students fell back on improvised strategies, and many of them ended up in a crunch as a result.

For the upcoming quarter, I plan to remedy those mistakes. But to do so, it's helpful to look at the web development program from a macro level. What is it that we're trying to do here, and what should this capstone class actually mean to students?

Although the name has changed, Seattle Central is still very much a community college, and this is very much a trade program. We need to focus on practical job skills, not on CS theory. And so while the faculty are still working on many of the details, one of our goals for curriculum redesign was to create a simple progression between the three web applications classes: first teach basic programming in ITC 240, followed by an MVC framework in ITC 250, and finish the process with a look at development processes (agile, waterfall, last-minute panic, etc.) in ITC 260. By the end, students should feel like they can take a project from start to finish as part of a team in an organized fashion.

Of course, just because that's what our intentions were doesn't mean that it's working out that way. These changes are large shifts in the SCC curriculum, and like steering an Oldsmobile, those take time. So while it would be nice to assume that students have been through the basics of project management by the time that they reach the capstone, I can't count on it — and even then, they probably won't have put it to practice in teams, since the prior classes are individually-graded.

To bring this back to ITC 210, then, we have two problems. First, students don't know how to manage development, because they've spent most of their time just learning how to code. Second, the structure of the class hasn't historically encouraged them to develop those skills. Assignments on the development side tend to be based around the design milestones, which makes their workload "lumpy": a lot of waiting for design resources, followed by an intense, panicky burst at the end. This may sometimes be an accurate picture of the job, but it's a terrible class experience. Ideally, we want the developers to be working constantly throughout the quarter.

So here's my new plan: this year, ITC 210 will be organized for students around a series of five agile sprints, just like any real-world coding project. At the start of each sprint, they'll assign time and staff to tasks, and at the end of each sprint they'll do a retrospective to help determine their velocity. Grades will be largely organized around documentation of this process. During the last sprint, they'll pick up another team's site and file bugs against it as QA, while fixing the bugs that are filed against them.

This won't entirely smooth out the development process — devs will still be bottlenecked on design work from time to time — but it will make it clear that I expect them to be working the entire time on laying groundwork. It'll also familiarize them with the ways that real teams coordinate their efforts, and it will force them to fit into a common workflow instead of fragmenting into a million angry swarms of random methodology.

I tend to make fun of programmers for thinking that they're the only ones who can invent a workflow, but it's easy to forget that coordinating a team is hard, and nobody comes by it naturally. I made that mistake last time around, and although we scraped by, there were times when it was rough. This quarter, I'm not giving students a choice: they'll work like a regular software team, or they'll fail the course. It may seem harsh, but I think it'll pay off for them when it comes time to do this for a living.

August 11, 2014

Filed under: movies»commentary»superhero

12 Percent of a Plan

I would love to have been in the meeting where someone pitched Guardians of the Galaxy. "We're going to take all the good will you've built up through the Marvel comic movie franchise, and then spend it on a space movie with characters that nobody really knows, one of whom is a heavily-armed raccoon." And then even weirder, it worked: Guardians is pretty good. Maybe it tells when it should show a little too often, but it never stopped me from enjoying myself. It's got a great soundtrack, good writing, well-done special effects, and most importantly, a really watchable cast.

Of course, this has been the case with most of the Marvel movies. I mean, let's be honest about, say, the Thor franchise, which has been a fun pair of movies considering that they're composed almost entirely of gibberish: Norse gods with British accents (who are actually aliens) fighting against elves and ice trolls (who are also aliens)! The whole thing is completely incoherent, but nobody cares because of the casting: everybody onscreen is good-looking, compulsively charming, and clearly having fun with a very silly premise.

But there's one thing that's been bugging me about the Marvel flicks, including Guardians, which is their endings. Namely, that they've all got the same one: the bad guys summon/control/take over a huge flying object, which immediately crashes headlong into a city.

  • The Avengers: Aliens crash giant worms into New York City.
  • Thor: The Dark World: Elves crash giant spaceship into England.
  • Captain America: The Winter Soldier: Nazis crash giant aircraft into the Potomac.
  • Guardians of the Galaxy: Lee Pace crashes giant accordion into alien planet.

Explosions follow, while the heroes rush to tackle the portal/controller/big bad at the wheel. Lots of buildings fall over in the process, and people run through the streets while looking up and behind them (oddly enough, hardly anyone ever trips). Lance Mannion refers to it as the "obligatory ad for the video game," and while that's harsh it's not inaccurate, because it does feel a little bit (between the overused CGI and the framing) like watching someone else play God of War. A lot of money went into it, and someone's clearly having a good time, but it's not necessarily you.

And to be clear, Marvel's not the only company writing screenplays this way. Star Trek: Into Darkness, for example, was a movie that committed every sin in the screenwriting book (and then added a few) but arguably the worst part was the meaningless and cruel spaceship crash at its climax. Over at Fox, X-Men: Days of Future Past has Magneto tossing an airborne baseball stadium at the White House. Huge flying objects are the new glass jail cell.

The problem with these pyrotechnics isn't just that they're repetitive and tasteless (although they are both), it's that they're ineffective: by destroying a huge chunk of a city, the writers are aiming for huge cinematic stakes, but by portraying it through a wide-angle lens (and, in PG-rated films, refusing to show any of the resulting bodies and carnage) there's no sense of drama. It's just computer-generated buildings falling over in the distance: who cares?

For a movie like Guardians, which has no real point other than to be a fun space adventure, it's bad enough that there's a good fifteen minutes of watching architecture instead of the characters that are the real draw. But it's more acutely frustrating for something like The Winter Soldier, which spends its first 90 minutes referencing the modern surveillance state, a tricky and subtle political problem. And nothing says "tricky" and "subtle" like sending three flying aircraft carriers through a building in Washington, DC.

Maybe that's expecting a bit much from a huge media property with a multi-year cinema domination plan. Marvel wants to put people into seats twice a year, and if that means making the same movie over and over again, that's fine. It's certainly never stopped anyone else (see also: Transformers and Harry Potter). If they're not even sure they can make a movie starring a woman, the chances they'll mess with the formula are pretty slim.

But look at it this way: now you know when you can take a bathroom break without missing anything. I figure you can stay away until the end of the credits, at which point you'll learn which comic-book movie will drop a giant metal object on Paris next year. If we're all very lucky, it'll be Squirrel Girl's turn eventually.

August 4, 2014

Filed under: journalism»new_media

Angular Momentum

This week, my interactive work for the Seattle Times examines the bidding wars that are part and parcel of being one of the fastest growing cities in the country. It's got everything you need to be horrified by your local real estate market: high prices, short days-on-market, and a search function to see how dire it is next door. It is also the third or fourth interactive that I've built with Angular this year (source code here). There aren't a lot of people building news apps with Angular, which I find amazing: if your goal is to surface data on a deadline, I'd argue it's the best option out there.

Let's review what Angular brings to the table. At the most basic level, it's a library for doing two things:

  • Data-binding: Any data that you attach to the Angular scope will be used to update the page, and vice-versa — if users change the page (via form elements or other inputs), it'll automatically update your data.
  • Custom HTML: Angular directives let you create new HTML elements and behaviors, so that instead of loading a plugin for an auto-completion input, you can just write <auto-complete> (see the sketch below).
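A bare-bones version of that <auto-complete> directive might look like the following; the module name, scope binding, and template are all just for illustration:

    var app = angular.module("newsApp", []);
    app.directive("autoComplete", function() {
      return {
        restrict: "E", // usable as an element: <auto-complete options="players">
        scope: { options: "=" },
        // a real version would generate a unique datalist ID per instance
        template:
          '<input list="choices">' +
          '<datalist id="choices">' +
            '<option ng-repeat="item in options" value="{{item}}">' +
          '</datalist>'
      };
    });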

As a news developer, this means that building visualizations based on data can be incredibly fast: you attach it to the scope, annotate your HTML, and you're all set. A smart, sortable table is less than 100 lines of code, and uses regular JavaScript objects instead of "collections" or other heavy classes. Meanwhile, directives keep your HTML clean and free of "div soup," and because Angular only touches the parts of the DOM whose bound values have actually changed, updates are fast.
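Here's the skeleton of that kind of sortable table, registered against the same hypothetical module as above (the column fields are invented):

    <table ng-controller="TableController">
      <tr>
        <th ng-click="sortBy('price')">Price</th>
        <th ng-click="sortBy('days')">Days on market</th>
      </tr>
      <tr ng-repeat="row in rows | orderBy:sortField:reverse">
        <td>{{row.price}}</td>
        <td>{{row.days}}</td>
      </tr>
    </table>

    angular.module("newsApp").controller("TableController", function($scope) {
      $scope.rows = window.listingData; // plain JavaScript objects
      $scope.sortField = "price";
      $scope.sortBy = function(field) {
        // clicking the current column flips the sort direction
        $scope.reverse = ($scope.sortField == field) && !$scope.reverse;
        $scope.sortField = field;
      };
    });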

By contrast, when I look at code written in D3 (seemingly the most popular library for doing news visualizations), I see an entirely different set of priorities:

  • Elements and styles are written in the JavaScript code, instead of in the HTML/CSS where you'd expect them to be.
  • Values loaded into the page are done via lengthy chained functional expressions, which makes them harder to inspect compared to the Angular scope.
  • Performance in non-Chrome browsers tends to be terrible, because D3 spends a huge amount of time rewriting the DOM directly instead of batching its changes and abstracting the document away (and because of SVG, which is a dog in Firefox and IE).

After years of debugging spaghetti code in jQuery, this design seems both familiar and ominous, particularly the lack of templating and the long call chains. I've written my fair share of apps this way, and they tend to sprawl out into an unstructured, unmaintainable mess. That may not be a problem for the New York Times, which has more budgetary and development resources than I'll ever have. But as the (for now) only developer in the Seattle Times newsroom, I need to be able to respond instantly to feedback from designers, editors, and reporters. One of my favorite things to hear is "we didn't expect a change so fast!" Angular gives me the agility I need to iterate rapidly, try things out, and discard what doesn't work in favor of what does.

Speed and structure are good reasons to use Angular in a newsroom, but there's another, less obvious incentive. Angular is basically training wheels for Web Components: although it lacks the Shadow DOM, it includes equivalents for custom elements and HTML imports. It's a short hop from Angular to libraries like Polymer, and from there to a whole world of deadline-friendly tooling and reuse. Make no mistake, this is the future of web development, and it can't get here soon enough: I'd love to be able to simply send off an <interactive-feature> tag to the web producers, and I imagine they'd appreciate it too. The Google Web Components tags would be a similar godsend.

For me, this makes using Angular a no-brainer. It's fast, it's effective, it's great for visualizations, and it's forward-thinking. It shocks me that more people haven't seen its advantages — but then, given the way that most newsroom hackers seem to think of the browser as "that embarrassing thing that loads my server code," it probably shouldn't be surprising.

July 30, 2014

Filed under: random»linky

<link rel="post">

  • When I reviewed Questlove's Mo Meta Blues, my main complaint was that the parts I really enjoyed — in-depth looks at musical history from his deep record-diving perspective — were too few and far between. So while I'm late reposting it, I have to say I really enjoyed this six-part series of articles for Vulture on "How Hip-Hop Failed Black America."
  • Marijn Haverbeke, author of the CodeMirror editor, Tern parser, and any number of other cool JavaScript projects, has released the second edition of Eloquent JavaScript, which now includes a lot more detail on the browser and NodeJS. If this had existed two years ago, I probably wouldn't have written my own textbook.
  • I'm a middling-good fighting game fan, so I knew much of the material, but I really enjoyed Patrick Miller's free guide to fighting games. For all that they appeal to button-mashing, there's a lot that goes into high-level gameplay, and Miller does a good job of covering the progression.
  • If you're in journalism and like what I've done at the Times so far (what little of it has gone public), you may want to check out my new project: a repository of tutorials for JavaScript and journalism. I've started with a guide to quick sortable tables with Angular, but I'll be following up with information on web scraping, canvas, browser performance, and more.
  • Finally, development on Caret has basically slowed to — if not a halt — a slow drip of updates. However, thanks to some setup work and a helpful overseas coder, it's now available in both English and Russian. I feel so international now. If you'd like to contribute another language, you don't have to know very much JavaScript at all — just enough to be able to convert the existing English text.
