this space intentionally left blank

January 22, 2015

Filed under: music»tools

Whitman

Whitman is a simple sampler (womp womp) written for modern web browsers. Built in Angular, it uses the WebAudio API to load and play sound files via a basic groovebox interface. You can try a demo on GitHub Pages. I put Whitman together for my dad's elementary school classes, so it's pretty simple by design, but it was a good learning experience.

The WebAudio API is not the worst new interface I've ever seen in a browser, but it's pretty bad. Some of its problems are just weird: for example, audio nodes are one-shot, and have to be created with new each time that you want to play the sound, which seems like a great way to trigger garbage collection and cause stuttering. Loading audio data is also kind of obnoxious, but at least you only have to do it once. I really wanted to be able to save the audio files in local storage so that they'd persist between refreshes, but getting access to the buffer (at least from the console/debugger) was oddly difficult, and eventually I just gave up.
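To be concrete about the one-shot issue, here's a minimal sketch of the pattern (mine, not Whitman's actual code; the file name is made up). The decoded audio sticks around, but every playback needs a brand-new source node:

  var context = new (window.AudioContext || window.webkitAudioContext)();
  var drumBuffer; // the decoded audio persists; the nodes don't

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "kick.mp3", true);
  xhr.responseType = "arraybuffer";
  xhr.onload = function() {
    context.decodeAudioData(xhr.response, function(decoded) {
      drumBuffer = decoded;
    });
  };
  xhr.send();

  function play() {
    // source nodes are one-shot: create, connect, start, discard
    var source = context.createBufferSource();
    source.buffer = drumBuffer;
    source.connect(context.destination);
    source.start(0);
  }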

But parts of it are genuinely cool, too. The API is built around wiring together nodes as if they were synthesizer components — an oscillator might get hooked up to a low-pass filter, then sent through a gain node before being mixed into the audio context — which feels pleasantly flexible. I'd like to put together a chiptune tracker with it. Support is decent, too, with the mobile browsers I care about (Safari and Chrome) already on board, and IE support on the way.
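The wiring model is easier to see in code than in prose. This is just a minimal sketch of my own, not anything from Whitman — a square wave run through a low-pass filter and a gain node:

  var context = new (window.AudioContext || window.webkitAudioContext)();

  var osc = context.createOscillator();
  osc.type = "square";
  osc.frequency.value = 220; // A3

  var filter = context.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = 800; // tame the buzzy upper harmonics

  var gain = context.createGain();
  gain.gain.value = 0.5;

  // patch the "cables": oscillator -> filter -> gain -> speakers
  osc.connect(filter);
  filter.connect(gain);
  gain.connect(context.destination);

  osc.start();
  osc.stop(context.currentTime + 1); // let it ring for a second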

The most surprising thing about Whitman is that it ended up being entirely built on web tech. When I started the project, I expected to move it over to a Chrome App at some point (it'll be taught on Chromebooks). There are still some places where that would have been nice (file retention, better support for saving data and synchronization), but for the most part it wasn't necessary at all. Believe it or not, you can pretty much write a basic audio app completely on the web these days, which is amazing.

In the parts where there is friction, it feels like a strong argument in favor of the Extensible Web. Take saving files, for example: without a "File Writer" object, Whitman does it by creating a link with a download attribute, base64-encoding the file into the href, and then programmatically clicking it when the user goes to save. That's a pretty crappy solution, because browsers only expose data URIs to create files. We need something lower-level, that can ask for permission to write locally, outside of a sandbox (especially now that the File System API is dead in the water).
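Stripped down, that save hack looks something like this — a sketch of the approach described above, not Whitman's actual code:

  function saveFile(filename, base64Data) {
    var a = document.createElement("a");
    a.href = "data:application/octet-stream;base64," + base64Data;
    a.download = filename; // the download attribute supplies the file name
    document.body.appendChild(a);
    a.click(); // programmatically "click" the link to trigger the download
    document.body.removeChild(a);
  }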

January 15, 2015

Filed under: music»recording

Ponographic

This week, Neil Young finally made the dreams of heavy-walleted audiophiles a reality by releasing the PonoPlayer, a digital audio player that's specifically made for lossless files recorded at 192KHz. Basically, it plays master recordings, as opposed to the downsampled audio that ends up on CDs (or, god forbid, those horrible MP3s that all the kids are listening to these days). It's been a while since I've written about audio or science, so let's talk about why the whole idea behind Pono — or indeed, most audiophile nattering about sample rate — is hogwash.

To understand sample rates, we need to back up and talk about one of the fundamental theories of digital audio: the Nyquist limit, which says that in order to accurately record and reproduce a signal, you need to sample at least twice as often as the frequency of that signal. Above the limit, the sampler doesn't record often enough to preserve the variation of the wave, and the input "wraps around" the limit. The technical term for this is "aliasing," because the sampled wave becomes indistinguishable from a lower-frequency waveform. Obviously, this doesn't sound great: at a 10KHz sample rate, a 9KHz audio signal would wrap around and play in the recording as 1KHz — a transition in scale roughly the same as going from one end of the piano to the other.
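To make the arithmetic concrete, here's a quick illustrative helper — mine, not anything from Pono's materials — that folds a too-high frequency back below the Nyquist limit:

  function aliasedFrequency(signal, sampleRate) {
    var nyquist = sampleRate / 2;
    var folded = signal % sampleRate;
    // anything past the limit reflects back down below it
    return folded > nyquist ? sampleRate - folded : folded;
  }

  aliasedFrequency(9000, 10000);  // 1000 — a 9KHz tone plays back as 1KHz
  aliasedFrequency(22050, 44100); // 22050 — right at the CD limit, no folding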

To solve this problem, when digital audio came of age with CDs, engineers did two things. First, they put a filter in front of the sample input that filters out anything above the Nyquist limit, which keeps extremely high-frequency sounds from showing up in the recording as low-frequency noises. Secondly, they selected a sample rate for playback that would be twice the frequency range of normal human hearing, ensuring that the resulting audio would accurately represent anything people could actually hear. That's why CDs use 44.1KHz sampling: it gives you signal accuracy at up to 22.05KHz, which is frankly generous (most human hearing actually drops off sharply at around 14KHz). There's not very much point in playback above 44.1KHz, because you couldn't hear it anyway.

There's a lot of misunderstanding of how this works among people who consider themselves to be audiophiles (or musicians). They look at something like the Nyquist limit and what they see is information that's lost: filtered out before sampling, then filtered again when it gets downsampled from the high-resolution Pro Tools session (which may need the extra sample data for filtering and time-stretching). But truthfully, this is a glass-half-full situation. Sure, the Nyquist limit says we can't accurately record above 1/2 the sample rate — but on the other hand, below that limit accuracy is guaranteed. Everything that people can actually hear is reproduced in CD-quality audio.

This isn't to say that the $400 you'll pay for a PonoPlayer is a total scam. Although the digital-analog converter (DAC) inside it probably isn't that much better than the typical phone headphone jack, there are lots of places where you can improve digital audio playback with that kind of budget. You can add a cleaner amplifier, for example, so that there's less noise in the signal. But for most people, will it actually sound better? Not particularly. I think it's telling that one of their testimonials compares it to a high-end turntable — vinyl records having a notoriously high noise floor and crappy dynamic range, which is the polar opposite of what Pono's trying to do. You'd probably be better off spending the money on a really nice set of headphones, which will make a real difference in audio quality for most people.

I think the really interesting question raised by Pono is not the technical gibberish on their specifications page (audiophile homeopathy at its best), but rather the question of why: why is this the solution? Neil Young is a rich, influential figure, and he's decided that the industry problem he wants to solve is MP3 bitrates and CD sampling, but why?

I find Young's quest for clarity and precision fascinating, in part, because the rock tradition he's known for has always been heavily mediated and filtered, albeit in a way that we could generously call "engineered" (and cynically call "dishonest"). A rock recording is literally unnatural. Microphones are chosen very specifically for the flavor that they bring to a given instrument. Fake reverb is added to particular parts of the track and not to others, in a way that's not at all like live music. Don't even get me started on distortion, or the tonal characteristics of recording on magnetic tape.

The resulting characteristics that we think of as a "rock sound" are profoundly artificial. So I think it's interesting — not wrong, necessarily, but interesting — that someone would spend so much time on recreating the "original form" (their words) of music that doesn't sound anything like its live performance. And I do question whether it matters musically: one of my favorite albums of all time, the Black Keys' Rubber Factory, is a cheaply-produced and badly-mastered recording of performances in an abandoned building. Arguably Rubber Factory might sound better as MP3 than it does as the master, but the power it has musically has nothing to do with its sample rate.

(I'd still rather listen to it than Neil Young, too, but that's a separate issue.)

At the same time, I'm not surprised that a rock musician pitched and sold Pono, because it seems very much of that genre — trying to get closer to analog sound, because it came from an age of tape. These days, I wonder what would be the equivalent "quality" measurement for music that is deeply rooted in digital (and lo-fi digital, at that). What would be the point of Squarepusher at 192KHz? How could you remaster the Bomb Squad, when so much of their sound is in the sampled source material? And who would care, frankly, about high-fidelity chiptunes?

It's kind of fun to speculate if we'll see something like Pono in 20 years aimed at a generation that grew up on digital compression: maybe a hypertext hyperaudio player that can connect songs via the original tunes they both sample, and annotate lyrics for you a la Rap Genius? 3D audio, that shifts based on position? Time-stretching and resampling to match your surroundings? I don't know, personally. And I probably won't buy it then, either. But I like to think that those solutions will be at least more interesting than just increasing some numbers and calling it a revolution.

December 16, 2014

Filed under: journalism»professional

Sell Block

This week The Seattle Times published a piece I've been looking forward to all year: Sell Block details the way that the Washington correctional industries have fallen down on the promises that they made over three decades ago. I did the logo design for this series, after daydreaming at the bus stop one day, and I also put together the maps for it.

From a development perspective, this project is also noteworthy as a test case for the custom elements that I've been pushing hard at the Times. Both maps are embedded through <responsive-frame> elements, which have supplanted our use of Pym.js, and the statewide map was built on top of <leaflet-map>. Custom elements continue to be a phenomenal way to develop and deploy web apps. In fact, I have an article up on Source, the OpenNews blog, about how we used them in the elections and in this project, which also calls for more news developers to adopt them.

November 26, 2014

Filed under: journalism»new_media

Ex Post Facto

I had hoped, personally, that we were past the app craze in newsrooms, particularly since the New York Times (eternally the canary in the coalmine for the rest of the industry) started killing off its unsuccessful subscription apps. But there's a sucker born every minute, and this time it's the Washington Post, which is launching a special Kindle app, the main goal of which seems to be to remind everyone that Jeff Bezos bought the Post last year:

The app, which was designed to reduce the noise of the Web to something as streamlined as a print publication, will be automatically added to certain Kindle Fire tablets as part of a software update. It will feature two editions each day, at 5 a.m. and 5 p.m. Eastern time, when the company believes it will reach the most readers.

Two things stick out in that paragraph: first, the "noise of the Web," as though that's a thing that exists in and of itself and not a product of newspaper websites being largely assembled at the whims of a huge number of competing interests (advertising, editorial, IT, etc). If your web site is noisy, it's because you made it that way, and maybe you should fix it instead of launching a new platform.

Second, two editions? In a world of twenty-four hour online news, someone's making a digital news publication that updates (with exceptions for breaking events) twice a day? That's not a strategy for reaching readers, it's a sop to a print-oriented workflow that has to produce distinct physical "products" instead of a stream of content. It's not like they have to lay out a page, so what's the point? Why make people wait?

I expect this thing to go the way of The Daily within a year, quietly killed when the Post announces some new shiny object, probably. Of course, as a long-time web partisan, I think launching another native journalism app is a silly move anyway. The reasons for this are well-rehearsed and familiar: ease of production, greater audience reach, and creating a single path for content. But ultimately what makes native news apps fail is that they can't interoperate with other services the way the web can.

A lot of ink has been spilled about the NYTimes innovation report, for better or worse, but one of the big takeaways for me was the graph on page 23 of home page visitors compared to page views. The Times has seen no real drop-off in overall traffic, but the number of people seeing the home page has dropped by half over the last two years alone. And the reason for this is simple: most people don't go looking for journalism anymore. It finds them instead, when stories are shared through Facebook and Twitter and (less and less) RSS/Atom feeds.

Whether you're thinking of your app as a new home page or as a new publishing platform entirely, this trend seems equally grim — a choice between apathy or obscurity. It's probably possible, somehow, to make an app share to Facebook or Twitter. But it's never going to be as quick, as smooth, or as easy as sharing to those services via a simple URL. As much as anything else, this dooms native news apps from the start: if users can't share your content, it might as well be stored in a sealed vault. If you make the app share a web link as a workaround, everyone ends up on the site anyway, so why bother creating the app in the first place?

(Incidentally, this is why the line tossed around by some pundits that "native apps are too on the web, because they use HTTP" is nonsense. Does your native app have a front-facing URL? Can I link someone to a specific page in your app? No? Then it's not on the web.)

Don't get me wrong: I'm not necessarily sanguine about this state of affairs. The increasing role of social media in discovery and spread of journalism is worrying, from the silencing effects to the loss of control for publishers. One day I'd like to think we'll be out from underneath Facebook's thumb, or anyone else seeking to wall off the web until it pays up. We need better solutions for that problem, ones that don't make us sharecroppers on anyone else's land.

Meanwhile, however, this is the world we live in: the social networks dominate, and ultimately they run on URLs, not on binary blobs stored in a native bundle. Publishing two gimmicky "editions" a day through a fancy app, on a device that relatively few people use, is not going to change that anytime soon. If you want people to read your news, it had better be on the (sharable, linkable, endlessly flexible) web.

November 12, 2014

Filed under: tech»web

grunt-init component

Last week, I wrote a little bit about using custom elements for our election pages. Being able to interact with SVG maps using a simple DOM interface, while still annoying (it's SVG, after all), is miles more pleasant than actually using the tags directly. At the end of that post, I recommended that newsrooms thinking about creating new JavaScript libraries look into Web Components — or at least custom elements. This week, I've got a way to make good on that pitch.

Similar to our news app template, I've put together a Grunt scaffolding for creating bundled custom elements, including HTML templating and CSS, all in a single standalone file. It's our component template — or, as I like to call it, the Poor Journalist's Polymer.

As with the app template, I'm developing the component scaffolding by building projects with it and then integrating the improvements back in. The first is a responsive-frame element that serves as a smaller, easier-to-use replacement for NPR's Pym. I like Pym, and I've used it in several projects now, but it's a little buggy and the setup process is cumbersome. In contrast, the custom elements don't require any JavaScript skills: just include the script to start using them on the page, and they'll connect up with the child elements on the other side of the iframe automatically.
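On the parent page, the result looks roughly like this — the attribute name is my best guess at the element's API, so treat it as a sketch rather than documentation:

  <script src="responsive-frame.js"></script>
  <responsive-frame src="https://example.com/graphic/"></responsive-frame>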

My second testbed project is a Leaflet map element that uses custom HTML to set the map configuration without ever writing a line of JSON (unless you really want to). It's intended to make mapping simple and fast for web producers, while still offering plenty of power for people like me who just want the boilerplate out of the way. Leaflet's a great candidate for this kind of declarative approach, and I think this is a really promising demo for the power of custom elements.
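In use, the declarative version might look something like this — the tag and attribute names here are hypothetical, not the element's documented interface:

  <leaflet-map lat="47.6" lng="-122.33" zoom="8">
    <map-marker lat="47.61" lng="-122.33">Seattle</map-marker>
  </leaflet-map>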

For standalone components like these, the template seems to be working well. I haven't yet solved the problem of easily embedding them in highly opinionated news apps, due to the way that dependencies are handled. It's useful for custom elements to be able to bundle their CSS and other assets into their package, similar to the way that HTML imports and shadow root offer embedded styles, but that means they may not integrate well into projects that already have their own build system. As far as I can tell, the best solution for now will probably be to load the packages from Bower and require() the standalone files from its build directory, which should work with whatever module system you like.
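In practice, that can be as simple as a single line (the path here is illustrative, not a documented layout):

  require("./bower_components/responsive-frame/dist/responsive-frame.js");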

But to be clear, the component template isn't really intended to solve those problems. Its goal is to simplify and modernize the kinds of scripts that, even now, people tend to solve with a jQuery plugin. I'd like to change that, so that more newsrooms produce reusable HTML elements instead of JavaScript spaghetti code. If you build something interesting with the component template, or if it inspires you to make your own, please let me know!

November 5, 2014

Filed under: journalism»new_media

Election Elements

You might have heard that there was an election this last week. Like every news organization, The Seattle Times had a live results page, powered by a Node-based scraper. It did pretty well: we had no glitches with pulling results, and the response has been solid. It also generated the source data for the print edition. Oh, and we put bunting on the front page, which is not something you get to do every day.

Behind the scenes, however, that results page has another interesting feature: as far as I'm aware, it's the first use of Web Components (at least, the custom elements part) in production by a news organization. Each of the Washington maps on the page is a custom-built <svg-map> element, which handles loading the image document and provides a set of convenience methods for manipulating the map once it's available.

SVG is one of those technologies that I really want to like, but has always been a total pain to actually use. It's an annoying format to author, doesn't seem to actually save any space compared to bitmap images, and has a ton of edge cases even in "standard" browsers (for example, Chrome will forget the state of an SVG document inside an object tag if that tag or its parents are set to display: none). Wrapping it up in a component that would manage its lifecycle and quirks for me just seemed like a no-brainer.

To create the component, I used Andrea Giammarchi's registerElement() shim instead of Polymer's polyfill layer — Giammarchi's script only shims the custom element portion of Web Components, but it works all the way back to IE9 and (more importantly) is only 2KB. On top of that, I used RSVP.js to create a quick shared cache for SVG source documents, ICanHaz for my templating, and a custom module called Savage to do SVG class/style manipulation.

From the outside, however, you don't need to know any of that. Instead, the interface is simple:

  1. Write a map element into the page, with a src attribute pointing to the SVG file you want to load. Put your tooltip template inside the tag.
  2. Attach callbacks to the element's ready promise to run code once the image is on the page.
  3. Use the eachPath method on the element to do painting, and set the onhover callback to pass in data for templating in the tooltip. (A rough sketch of all three steps follows below.)
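Concretely — and hypothetically, since the exact signatures aren't documented here and "results" stands in for whatever data you're painting with — the markup side looks like:

  <svg-map src="washington-counties.svg">
    <div class="tooltip">{{ name }}: {{ percent }}%</div>
  </svg-map>

and the script side looks like:

  var map = document.querySelector("svg-map");
  map.ready.then(function() {
    // paint each region once the SVG document is on the page
    map.eachPath(function(path) {
      path.setAttribute("class", results[path.id].winner);
    });
    // hand data to the tooltip template on hover
    map.onhover = function(path) {
      return results[path.id];
    };
  });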
Using these maps, in other words, is basically just like using a regular element, if regular elements had a DOM API that wasn't written by psychopaths. All their complexity is tucked away inside, and what they present externally is clean, simple, and self-contained. The map element does 80% of what Pro Publica's Landline does, and I'd argue it does it better.

As a developer, I'm really excited by the potential of these new custom elements. Although I had used them at ArenaNet for building the new Guild Wars 2 trading post, those were used to create tight integration with the in-game interface, and only needed to work in a single browser. This is the first time I've used them in a wider ecosystem, and they worked like a charm.

But as a library consumer, and particularly as a harried newsroom dev, I think web components have a tremendous potential to make complex behavior way easier to build and train for. Take the aforementioned Landline, for example: wouldn't it be nice to simply include a script tag (or an HTML import) and then be able to write <landline-map> tags into the page, with an attribute pointing to a CSV or a Google Sheet containing the necessary data? Or consider Pym, NPR's responsive iframe library that's so great I forked and rewrote big chunks of it. Right now, using Pym on the parent page requires including the script, adding a dummy element, and then initializing the script — why shouldn't it just be <pym-embed> instead?
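To be clear about what I'm imagining — neither of these elements exists as written, so the markup is pure speculation:

  <script src="landline-element.js"></script>
  <landline-map src="results.csv"></landline-map>

  <script src="pym-embed.js"></script>
  <pym-embed src="https://example.com/child-page/"></pym-embed>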

Distributing libraries not as modules or loose scripts, but as chunks of new HTML functionality, has the potential to radically change how we create new content on the web in the future. Newsrooms, which are always under pressure and often consume "pre-made" tools for interactive elements like timelines and galleries, are a perfect use-case for Web Components. After this election experience, I'm planning to lean heavily on them whenever possible, and I'm hoping other people will as well.

October 21, 2014

Filed under: journalism»investigation

Loaded with lead

I'm very proud to say that "Loaded with lead," a Seattle Times investigation into the ways that gun ranges poison their customers and workers, went live this weekend. I worked on all four interactives for this project, as well as doing the header design and various special effects. We'll have a post up soon on the developer blog about those headers, but what I'd like to talk about today is one particular graphic — specifically, the string-of-pearls chart from part 2.

The data underlying the pearl chart is a set of almost 300 blood tests. These are not all tests taken by range workers in Washington, just the ones that had to be reported after exceeding the safe threshold of 10 micrograms per deciliter. Although we know who some of the tested workers are, most of them are identified only by an anonymous patient ID and the name of their employer. My first impulse was to simply toss the data into a scatter chart, but as is often the case, that first impulse proved ill-advised:

  • Although the tests are taken from a ten-year period, for any given employer/employee the tests tend to be more compact in timeframe, which makes it tough to look at any series without either losing the wider context, or making it impossible to see the individual tests.
  • There aren't that many workers, or even many ranges, with lots of test results. It's hard to draw a trend when the filtered dataset might be composed of only a handful of points. And since within a range the tests might be from one worker or from many, they can't really be meaningfully compared.
  • Since these tests are only of workers that exceeded the safe limit, even those trends that can be graphed do not tell a good story visually: they usually show a high exposure, followed by gradually lowered lead levels. The impression given is that gun ranges are becoming safer, but the truth is that workers with hazardous blood lead levels undergo treatment and may be removed from the high-lead environment, resulting in lowered test results but not necessarily a lead-free workplace. It's one of those graphs that's "technically correct," but is actually misleading.

Talking with reporters, what emerged was that the time dimension was not really important to this dataset. What was important was to show that there was a repeated pattern of negligence: that these ranges posted high numbers repeatedly, over long periods of time (in several cases, more than five years). Once we discard a strict time axis, a lot more interesting options open up to us for data visualization.

One way to handle this would be with a traditional box and whiskers plot, which shows the median and variation within a statistical set. Unfortunately, box plots are also wonky and weird-looking for most readers, who are not statisticians and would not know a quartile if it offered them a grilled cheese sandwich. So one prototype stripped the box plot down to its simplest form — probably too simple: I rendered a bar spanning the lowest to the highest test result for each gun range, with individual test results marked as lines inside that bar.

This version of the plot was visually interesting, but it had flaws. It made it easy to see the general level of blood tests found at each range, and compare gun ranges against each other, but it didn't show concentration. Since a single tick mark was shown within the bar no matter how many test results fell at a given level, there was little visual difference between two employers with the same range of test results, even if one employer's results mostly sat at the top of the range and the other's clustered at the bottom. We needed a way to show not only the level, but also the distribution, of results.

Given that the chart was already basically a number line, with a bar drawn from the lowest to the highest test result, I removed the bar and replaced the tick marks with circles that were sized to match the number of test results at each amount. Essentially, this is a histogram, but I liked the way that the circles overlapped to create "blobs" around areas of common test results. You can immediately see where most of the tests fall for each employer, but you don't lose sight of the overall picture (which in some cases, like the contractors working outside of a ventilation hood at Wade's, can be horrific — almost three times the amount considered dangerous by the CDC). I'm not aware of anyone else who's done this kind of chart before, but it seems too simple for me to be the first to think of it.
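The underlying transformation is simple enough to sketch. This is illustrative code, not the Times' production graphic: group the test results by blood-lead level, then size a circle at each level by how many results landed there.

  function pearlChart(tests) {
    // tests is an array of blood-lead levels (micrograms per deciliter) for one employer
    var counts = {};
    tests.forEach(function(level) {
      counts[level] = (counts[level] || 0) + 1;
    });
    // one circle per distinct level, positioned on the number line and
    // sized by its count (the scaling factors here are arbitrary)
    return Object.keys(counts).map(function(level) {
      return {
        x: Number(level),
        r: 3 + 2 * Math.sqrt(counts[level])
      };
    });
  }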

I'd like to take a moment here to observe that pretty much all data visualization comes down to translating information into a form that our visual systems are evolved to quickly understand. There's a great post on how that translation functions here, with illustrations that show where each arrangement sits on a spectrum of perceived accuracy and meaning. It's not rocket science, but I think it's a helpful perspective: I'm just trying to trick your visual cortex into absorbing a table's worth of data at a glance.

But what I've been trying to stress in the newsroom from this example is less technical, and more about how much effective digital journalism comes from the simple process of iteration and self-evaluation. We shouldn't expect to come up with a brilliant interactive on the first try every time, or even any of the time. I think the string-of-pearls is a great example of that, going from a visualization that was confusing and overly broad to a more focused graphic statement, thanks to a lot of evolution and brainstorming. It was exhausting work, but it's become my favorite of the four visualizations for this project, and I'm looking forward to tweaking it for future stories.

October 8, 2014

Filed under: culture»internet

Serious (Pony) Business

There's nothing I'm going to write this week that's as moving, as horrifying, or as important as Kathy Sierra's take on trolls and being trolled. Sierra, who was seriously harassed and abused by Andrew "weev" Auernheimer, writes about how awful this harassment was, how common it is, and how devastating it was to see Auernheimer's history of sociopathic trolling dismissed after his (admittedly flawed) prosecution for "hacking" AT&T.

But you all know what happened next. Something something something horrifically unfair government case against him and just like that, he becomes tech's "hacktivist hero." He now had A Platform not just in the hacker/troll world but in the broader tech community I was part of. And we're not just talking stories and interviews in Tech Crunch and HuffPo (and everywhere else), but his own essays in those publications. A tech industry award. His status was elevated, his reach was broadened. And for reasons I will never understand, he suddenly had gained not just status and Important Friends, but also "credibility".

Did not see that coming.

But hard as I tried to find a ray of hope that the case against him was, somehow, justified and that he deserved, somehow, to be in prison for this, oh god I could not find it. I could not escape my own realization that the case against him was wrong. So wrong. And not just wrong, but wrong in a way that puts us all at risk. I wasn't just angry about the injustice of his case, I had even begun to feel sorry for him. Him. The guy who hates me for lulz. Guy who nearly ruined my life. But somehow, even I had started to buy into his PR. That's just how good the spin was. Even I mistook the sociopath for a misunderstood outcast. Which, I mean, I actually knew better.

And of course I said nothing until his case was prosecuted and he’d been convicted, and there was no longer anything I could possibly do to hurt his case. A small group of people — including several of his other personal victims (who I cannot name, obviously) asked me to write to the judge before his sentencing, to throw my weight/story into the "more reasons why weev should be sent to prison". I did not. Last time, for the record, I did NOTHING but support weev’s case, and did not speak out until after he’d been convicted.

But the side-effect of so many good people supporting his case was that more and more people in tech came to also... like him. And they all seemed to think that it was All Good as long as they punctuated each article with the obligatory "sure, he’s an ass" or "and yes, he's a troll" or "he's known for offending people" (which are, for most men, compliments). In other words, they took the Worst Possible Person, as one headline read, and still managed to reposition him as merely a prankster, a trickster, a rascal. And who doesn’t like a "lovable scoundrel"?

The whole post is well worth reading before Sierra possibly takes it offline. And who could blame her? Even as she was writing this, the #gamergate movement on Twitter was literally harassing women out of their homes, and yet (in an echo of weev's "hacktivism") it still insists that it's all about "ethics in journalism."

For most, that's an obvious smokescreen — a way to cloak their behavior in respectability while planning their next attack. For those who claim to actually believe it, that's a declaration that writing about videogames is more important than hurting women in the worst possible ways. And there's the problem: you don't get to claim the moral high ground when you got there through the pain and suffering of other people.

October 3, 2014

Filed under: tech»web

WebGL and beyond

We came very close to using WebGL for a Seattle Times special report that will come out next week. Now that iOS 8 has shipped with support for WebGL, albeit in an unstable and slightly buggy form, it's common enough that I felt comfortable using it (with a scaled-down 2D fallback) for our audience. In the end, we went with a different design language and shelved the WebGL experiments, but the experience has left me very excited about the potential for mainstream usage.

It's probably easier to understand why WebGL is exciting by looking at what the regular 2D canvas does badly. 2D canvas is terrible at combining or masking its rendering functions (globalCompositeOperation in particular is dog-slow in Firefox). It doesn't give users easy access to the image data directly, which is frustrating in a bitmap-based drawing API. It doesn't like changing colors or styles frequently. But its biggest weak point is actually that it's tied so closely to JavaScript, which is a single-threaded language running in the browser event loop. The more pixels you touch in detail, the slower it gets.

WebGL, by contrast, is great at blending, filtering, and masking. But most importantly, WebGL code moves most (if not all) of the math-heavy graphics code you'd normally write in JavaScript — scaling and transforms, patterns, and color — over to the GPU. Your graphics card is a massive parallel-processing machine, so all your drawing occurs simultaneously, not sequentially. You can alter every pixel in the frame, if you want, and it'll barely take any more time than if you change only a few.

Once I spent a little time writing some simple shaders, I realized that there's a whole range of experiences you can write in WebGL that simply aren't possible on a 2D canvas. I could shift the colors or custom-filter an image on a per-pixel basis. I wrote a dust simulation that animated thousands of motes on a low-end machine, even with the physics still running in JavaScript. I even created a faux-3D effect a la Depthy, by displacing each pixel by the value of a second texture's lightness and the mouse position.
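For a sense of scale, the per-pixel color shifting above boils down to a fragment shader of just a few lines. This one is a sketch of mine, not the code from those experiments, and it skips the vertex shader and WebGL setup; the GPU runs it once for every pixel, all in parallel:

  var fragmentSource = [
    "precision mediump float;",
    "uniform sampler2D image;",
    "uniform float shift;", // 0.0 = original colors, 1.0 = fully rotated channels
    "varying vec2 texCoord;",
    "void main() {",
    "  vec4 color = texture2D(image, texCoord);",
    "  gl_FragColor = vec4(mix(color.rgb, color.gbr, shift), color.a);",
    "}"
  ].join("\n");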

None of these experiments involved 3D math in any way. They're not spinning teapots, or Unreal Engine demos, or elaborate parallax effects. I suspect that the real value of WebGL isn't going to be from any of those things. It's going to be the fact that it gives the web platform the free-drawing capability of canvas, but uncoupled from the JavaScript execution model that it's been shackled to.

There's an obvious parallel here, which is the first two major versions of Android. Because it was designed to run on low-end hardware, Android drew all its UI via software until 3.0 (and hardware acceleration didn't become widespread until 4.0). The resulting lag was never as bad as critics claimed, but it did mean that a lot of Android looked and felt a bit utilitarian. You wouldn't see something like Material Design emerge until the system supported using the GPU for rendering ordinary UI.

It's not a coincidence that Google's moving to Material Design on both Android and the web. Its design language — a smoothly-animated world of flat, geometric shapes — is attractive, but more importantly it's well-matched to the kinds of flat, geometric shapes that can be animated fluidly in a browser, using the 3D acceleration that's already built into the composition layer. Web Components will give developers a way to package those elements up, and make them reusable. Flexbox makes their layouts scalable and responsive.

But for the web platform to move forward, we need more than just a decent look and feel. We need the ability to write the kinds of applications that people insist it can't run. WebGL is a step in that direction: graphics with near-native speed and capability, instantly deployed and paired with a surprisingly powerful UI toolkit. The kinds of apps and experiences we can write on the web, for a mainstream and mobile audience, just got a lot bigger. And I for one am looking forward to pushing those boundaries as much as I can.

September 23, 2014

Filed under: gaming»media

Press Play

I have started, and then failed to finish, three posts on the #GamerGate nonsense, in which a gang of misogynists led by 4Chan have attempted to hound Anita Sarkeesian and Zoe Quinn off the Internet for daring to be women with opinions about video games. There's very little insightful you can say about this, because they don't have any real arguments for someone to engage, and also because they're dumb as toast. At some point, however, someone decided that they'd use "ethics in journalism" as a catchphrase for their trolling.

What they mean by that is anyone's guess. This Vox explainer does its best to extract an explanation, but other than some noise about "objectivity," there aren't any concrete demands, and the links to various arguments are hilariously silly. One claims that there's a difference between "journalist" and "blogger" based on some vague measure of competence (read: the degree to which you agree with it), which veterans of the "blogger ethics panel" meme circa 2005 will enjoy. Frankly, the #GamerGate movement's concept of journalism is itself pretty fuzzy, and tough to debate. As a journalist with actual newsroom experience, I think there are a few things that we should clear up.

Game journalism, isn't

Real journalists make phone calls. They dig into stories, find other viewpoints, and perform fact-checking. It's not glamorous work, which may be why reporters are so prone to self-mythologizing (a tendency I'm not immune to), but it is hard and often tedious. It's also, in its best moments, confrontational. There's an old saying: journalism is publishing what someone doesn't want to be published — everything else is just public relations. Some of that is just more myth-making, but it's also true.

When's the last time you read "gaming news" that had multiple sources? That actively investigated wrongdoing in the industry? That had something critical to say about more than how a single game played? You can probably think of exceptions, but that's what they are: exceptions. The vast majority of what gamers call "journalism" isn't anything like real reporting.

This isn't unusual, or even wrong. It's pretty typical for trade press, particularly in the entertainment industry. After all, there's only so combative you can be when you're dependent on cooperation with game studios and publishers in order to have anything to write about. I don't expect hard-hitting investigations from Bass Player or The A.V. Club either. It's not journalism, but it still has value.

#GamerGate doesn't understand the difference

Unfortunately, as far as anyone can tell, the cries of "ethics in journalism" actually translate to a desire for more press releases and PR, like in the halcyon days of Nintendo Power. That's probably comforting for a lot of people, because PR is inherently more comforting than critical thought, but it means what they want and what they claim they want are very different things.

There's a kind of irony in calling for "objectivity" in the gaming press, where the dominant modes of writing are previews and reviews. People making this call aren't asking for actual objectivity, because that wouldn't make sense — what's an "objective" review? One that can definitively state that yes, the game exists? It's a code word for "tell me about the graphics, and the genre, and leave any pesky context out of it."

That isn't much of a review, frankly. It's the kind of thinking that gives four stars to Triumph of the Will because the cinematography is groundbreaking, no matter what the content might have been. Incidentally, the author of the definitive — if satirical — Objective Game Reviews site has a really nice post about this.

We have met the enemy, and he is us

Here's what's funny about #GamerGate and its muddled, incoherent demands for journalism: by all accounts, the person who actually meets a lot of their criteria is none other than Anita Sarkeesian, the person they loathe the most. I mean, think about it:
  • About half of her videos are just unaltered clips from games — that's what makes them so powerful, since they speak for themselves. And what's more objective than plain footage?
  • She states her biases right out of the gate. The YouTube channel is called "Feminist Frequency," so people know what they're getting into.
  • She's funded by people who paid for her kickstarter, not gaming companies or PR firms. There's no corporate influence.

It's exactly what they're asking for! And they hate it! It's almost as though, protests to the contrary, this isn't about journalism at all. As if there's actually an agenda being pushed that's more about forcing women and alternative viewpoints out. Imagine that.
