
November 23, 2010

Filed under: tech»activism

The Console Model Is a Regressive Tax on Creativity

This weekend I dropped in on the second day of the 2010 DC PubCamp, an "unconference" aimed at public media professionals. I went for the mobile app session, where I got to meet another NPR Android developer and listen to an extremely belligerent nerd needlessly browbeat a bunch of hapless, confused program managers. But I stuck around after lunch for a session on video gaming for marginalized communities, hosted by Latoya Peterson of Racialicious. You can see the slides and Latoya's extensive source list here.

The presentation got sidetracked a bit early on during a discussion of internet cafes and gender in Asia, but for me the most interesting part was when Latoya began talking about the Glitch Game Testers, a project run by Betsy DiSalvo at the Georgia Institute of Technology. DiSalvo's program aims to figure out why there are so few African Americans, and specifically African American men, in the tech industry, and to encourage those kids to engage more with technology.

The researchers found several differences between the play patterns of DiSalvo's black students and those of their white peers: the minority gamers began playing younger, tended to play more with their families and less online, viewed gaming as a competition, and were less likely to use mods or hacks. These differences in play (which, as Latoya noted, are not simply racial or cultural, but also class-based) result in part from the constraints of gaming on a console. After all, a console is one shared family resource hooked up to another (the television), meaning that kids can't sit and mess with it on their own for hours. Consoles don't easily run homebrew code, so they don't encourage experimentation with programming.

Granted, these can be true of a PC as well. I didn't have my own computer until high school, and my parents didn't really want me coding on the family Gateway. But I didn't have to leave the computer when someone else wanted to watch television, and I was always aware (for better or worse, in those DOS/Win 3.1 days of boot disks and EMS/XMS memory) that the computer was a hackable, user-modifiable device. Clearly, that was a big advantage for me later on in life. In contrast, console gamers generally don't learn to think of software as mutable--as something they themselves could experiment with and eventually make a career at.

It's hopelessly reductionist, of course, to say that consoles cause the digital divide, or that they're even a major causal factor compared to problems of poverty, lack of role models, and education. But I think it's hard to argue that the console model--locked-down, walled-garden systems running single-purpose code--doesn't contribute to the problem. And it's worrisome that the big new computing paradigm (mobile) seems determined to follow the console path.

Set aside questions of distribution and sideloading just for the sake of argument, and consider only the means of development. As far as I'm aware, no handheld since the original DragonBall-powered PalmOS devices has allowed users to write a first-class application (i.e., one given equal placement in the shell, with full or nearly-full OS access) on the device itself. At the very least, you need to have another device--a real, open computer--to compile for the target machine, which may be burden enough for many people. In some cases, you may also need to pay a yearly fee and submit a lot of financial paperwork to the manufacturer in order to get a digitally-signed certificate.

I think it's safe to say that this is not an arrangement that's favorable to marginalized communities. It wouldn't have been favorable to me as a kid, and I come from a relatively advantaged background. In terms of both opportunity cost and real-world cost, the modern smartphone platform is not a place where poor would-be developers can start developing their skills. As smartphones become more and more the way people interact with computers and the Internet, a trend like this would largely restrict self-taught tech skills among the white and the wealthy.

The one wild-card against this is the web. We're reaching the point where all platforms are shipping with a decent, hackable runtime in the form of an HTML4/5-compatible browser. That's a pretty decent entry point: I don't personally think it's as accessible as native code, and there are still barriers to entry like hosting costs, but JS/HTML is a valid "first platform" these days. Or it could be, with one addition: the all-important "view source" option. Without that, the browser's just another read-only delivery channel.

I think it's self-evident why we should care about having more minorities and marginalized groups working in technology. It's definitely important to have them in the newsroom, to ensure that we're not missing stories from a lack of diversity in viewpoint. And while it's true that a huge portion of the solution will come from non-technological angles, we shouldn't ignore the ways that the technology itself is written in ways that reinforce discrimination. Closed code and operating systems that are actively hostile to hacking are a problem for all of us, but they hit marginalized communities the hardest, by eliminating avenues for bootstrapping and self-education. The console model is a regressive tax on user opportunity. Let's cut it out.

November 11, 2010

Filed under: fiction»reviews»banks_iain

Surface Detail

Surface Detail is not the worst Culture novel that Iain Banks has written--that dubious honor is reserved for Excession, which meandered through an underwhelming tour of AI politics--but it's pretty close, and for many of the same reasons. I suspect it's a problem of scale: Banks writes fantastic micro-level scheming in both his science fiction and "literary" books, but he seems to lose the thread when he tries to translate that into galactic-level politics, especially for a civilization as ridiculously over-powered as the Culture. As a result, both Excession and this most recent book suffer from a chronic lack of action, and the characters aren't given enough urgency to make up the difference.

But at least this time it's better assembled, without Excession's random plotting and character introductions. Surface Detail's A-plot concerns an indentured slave named Lededje, who's killed by her owner during an escape attempt only to wake up again aboard a Culture ship light-years away. Understandably, Lededje wants revenge--something which, for obvious reasons, her rescuers frown upon. It's a nice way of introducing another shade of grey to the Culture's supposedly-benevolent interference in other civilizations: because Lededje's killer is powerful and wealthy within his society, the Culture won't help bring him to justice, because that would cause an unpredictable shakeup of the planetary order. At the very least they need a good excuse, even if they have to make one themselves--hence the scheming, courtesy of the famed Special Circumstances department.

Banks wraps Lededje's journey with a secondary, loosely-connected plotline regarding virtual Hells: the dark side of the Singularity's "nerd rapture," they're the result of mind-state digitization technology in the hands of religious zealots. Of course, not everyone is thrilled with the idea of VR programs dedicated to eternal torment. The pro- and anti-Hell sides decide to contest their fate by holding a virtual war (the Culture is anti-Hell, of course, but abstains from the conflict for some reason). So on one hand, the book follows a soldier named Vatueil, who's fighting (he thinks) for the anti-Hell side. On the other, it watches a journalist who becomes trapped in one of the Hells during an undercover investigation, and ends up exploring more of their nature than she expected.

These virtual segments give Banks a place to stretch out and indulge himself: his "war over Hell" takes place in simulated scenarios ranging from creatures living in the core of a gas giant to Bolo-like sentient tanks. Likewise, his Hell is a nasty piece of gothic engineering, all torture and despair. Whenever he dips into virtuality, it's always a surprise. Unfortunately, it's also too vaguely described to get an idea of the stakes or what victory means in any given scenario, and it tends to kill the novel's momentum.

So that's the general idea of Surface Detail: 600+ pages of people struggling along in increasingly clever but implausible virtual environments, and Lededje slowly making her way back to her home planet for an all-too-short vengeance. It's a funny book in parts, an imaginative book in others, but not an eventful (or ultimately, satisfying) book. And in a setting as generous as the Culture, that's a tremendous shame.

Maybe it's impossible to do real societal intrigue and plotting in a Culture book. Previous books treated the highest levels of Special Circumstances almost as distant and meddlesome gods: the inscrutable missions assigned in Use of Weapons, devastating near-genocide in Look to Windward, and (most brilliantly) the manipulative, nested stratagems in Player of Games. Attempting to give readers too much insight into the Minds running the Culture seems to either undercut the omniscience Banks grants them, or it leaves the main characters entirely powerless, or both. There needs to be a delicate balance between deus and machina--in Surface Detail, he unfortunately doesn't have the mix quite right.

November 5, 2010

Filed under: gaming»software»torment

Death Tax

For every form of media, there are certain works that are considered essential for cultural literacy: albums, books, or films that are so influential or important to the development of the art form that a well-rounded critic should at least have glanced at them. You don't have to like them, but they're part of the zeitgeist. The same is true for gaming, I think. Maybe it hasn't had its version of The Wire yet, but there's no arguing that there are certain canonical games that you're supposed to have played.

Is Planescape: Torment one of those titles? Many people would probably say yes. I'm not sure, but I do know that I feel guilty for quitting it. Not enough to keep struggling through it, unfortunately, but guilty nonetheless. Parts of Torment are still brilliant--they make it obvious why so many people speak of the game in such reverent tones. But those pieces are wrapped in a design that has aged poorly (and it wasn't much to write home about even then).

Let's get the positives out of the way first. More than anything else, Torment's writing is fantastic. It has to be decent, since the game's graphics are crude (evocative, but crude), and there are no cutscenes or close-up shots (everything takes place from a 3/4 perspective). But the writers turned that limitation into a legitimate strength: the world and characters they describe are bizarre, comical, tragic, and rich. Even in the first few hours, they toss out more ideas than most games contain in their entirety: an underground town of well-adjusted undead, sorcery made of blood and thorns, and a main character whose body is a gnarled mess of tattoos and scars. It's hard to imagine how someone could create the kind of imagery in polygons that they accomplish with a little prose, particularly given the technology of the day.

The other thing Torment does right is to completely ignore conventional wisdom on death and experience for an RPG of the time. The Nameless One cannot die by conventional means. This makes for some fun story moments--rummaging around inside your own body for items, waking up in the morgue, using your own severed arm as a club--and if he's killed in combat, he wakes up a few feet away. As someone who hates dying in an RPG and realizing that my last save was two hours ago, I think this is brilliant. I also think it's brilliant that making clever dialog and story choices earns an order of magnitude more experience than fighting. That's a clear declaration of what's important in Torment: story, not swordplay.

But if they were willing to undermine that much of the traditional RPG design, it is simply beyond me why they didn't jettison the rest. If you're going to remove the punishment of combat death, not to mention making it largely unrewarding to fight in the first place, why keep it around at all? Why make me struggle with inventory and healing? Obviously their heart wasn't in it, but they couldn't bring themselves to anger the nerds by dropping it completely.

It's clever to make dialog count for extra experience points as an incentive. It's hateful, on the other hand, to abuse that incentive by restricting dialog choices based on the character's attribute scores. At that point, you're punishing the player for thinking that their choices during the game are meaningful, when really it was the first decision they made--assigning points during character creation--that determines success. The result is profoundly, deeply frustrating: I met a riddling skeleton, for example, but I'm not even given a chance to solve his riddles, because my scores have already determined that I'm not smart enough.

That was pretty much the point where I closed the game and put the disc away for the foreseeable future, incidentally.

If only this game had been made a few years later (when RPGs started to become a more fluid genre), or a few years earlier (when adventure games were still profitable). If only it weren't shackled to the Baldur's Gate-derived, AD&D-centric Infinity Engine, or maybe if it could have been made by an indie team willing to shoulder a few more risks. There's a fantastic SCUMM-style puzzler somewhere in Torment, but it's buried under mountains of systems and cruft. As it is, I know why this game is important. I know why people like it. But I can't bring myself to start it up again.

October 26, 2010

Filed under: meta»blosxom

The PHP Version

About a year back, Mile Zero started to seriously drag in terms of performance, taking more than two seconds to render the page. The problem seemed to be a combination of things: the CGI interface was slow, it didn't run under mod_perl, and I had accumulated a vast number of posts it was having to sift through--which, given that my Blosxom CMS uses the file system as its database, meant lots of drive I/O.

Since I needed to dip my feet into server-side coding anyway, I rewrote Blosxom in PHP. There are a few PHP versions of the script online, but they seemed like a hassle to install, and none of them had support for the plugins I was using--it was almost easier to just do it myself. The result was faster, smaller, and proved to be a great first programming project. Since it's also proved basically stable over the last year, I've decided to go ahead and post the source code in case anyone else wants it (consider it released under the WTF Public License). I suspect the market for file-based blogging scripts is fairly small at this point, but you never know.

HOW IT WORKS

Essentially, both versions of Blosxom work the same way: they recurse through the contents of your blog folder, looking for text files with a certain extension (.txt by default) and building a list. Then they sort the list by reverse-chron date and insert the contents of the first N entries into a set of templates (head, foot, story, and date). Using a REST-like URL scheme, you can change templates (helpful for mobile or RSS) or filter entries by subfolder. It's primitive, but it's also practically unhackable, and it's an awesome way to blog if you like text files. Turns out that I like text files a lot.
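The core loop is simple enough to sketch in a few lines. This is an illustration of the idea in Python rather than PHP (the function and variable names here are invented, not taken from my script):

```python
import os
import time

def collect_entries(datadir, extension=".txt"):
    """Recurse through the blog folder, returning (mtime, path) pairs
    sorted newest-first -- the file system acting as the database."""
    entries = []
    for root, _dirs, files in os.walk(datadir):
        for name in files:
            if name.endswith(extension):
                path = os.path.join(root, name)
                entries.append((os.path.getmtime(path), path))
    entries.sort(reverse=True)  # reverse-chron by modification time
    return entries

def render_story(path, mtime):
    """First line of the file is the entry title; the rest is the body."""
    with open(path) as f:
        title = f.readline().strip()
        body = f.read()
    date = time.strftime("%B %d, %Y", time.localtime(mtime))
    return '<h2>%s</h2>\n<p class="date">%s</p>\n%s' % (title, date, body)
```

Slice the first N entries off that list, pour each one through the story template between the head and foot templates, and you have a blog.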

Original Blosxom boasted an impressive plugin collection, which it implemented via a package system: plugins exposed a function for each stage of the page assembly process where they wanted to get involved, and the main script would call out to them during those actions, passing in various parameters depending on the task. This being Perl, the whole thing was a weird approximation of object-oriented code that looked like a string of constant cartoon profanity.

PHP provided, I think, better tools. So a plugin for my new version of Blosxom does three things: it sets up its class definition, which should include the appropriate methods for its type, as well as any class or static properties it might need, then it instantiates itself, and finally adds itself to one of several global arrays by plugin type. During execution, the main script iterates through these arrays at the proper time, calling each plugin object's processing method in turn. At least, that's how it works in theory. In practice, I've only implemented plugins for entry text manipulation, because that's all I needed. But the pattern should carry forward without problems to other parts of the process, although you might want to rename the existing process() API method to something more specific, like processEntry(). That way a single plugin could register to handle multiple stages of rendering.
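The registration pattern looks roughly like this, sketched in Python instead of PHP (all of the class and function names below are invented for illustration):

```python
# One global registry per plugin type, mirroring the PHP script's arrays.
entry_plugins = []

class EntryPlugin:
    """Base type for plugins that transform entry text."""
    def process(self, text):
        return text

class CurlyQuotes(EntryPlugin):
    """A toy entry plugin: curl the first pair of straight quotes."""
    def process(self, text):
        return text.replace('"', "\u201c", 1).replace('"', "\u201d", 1)

# Each plugin instantiates itself and adds itself to the right registry.
entry_plugins.append(CurlyQuotes())

def run_entry_plugins(text):
    """The main script iterates the registry at render time, calling
    each plugin object's processing method in turn."""
    for plugin in entry_plugins:
        text = plugin.process(text)
    return text
```

Because each plugin is just an object in a list, adding or removing one is a single line, and the main script never needs to know which plugins exist.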

ENOUGH OF THAT, HOW DO I INSTALL IT?

Just copy the script to a publicly-accessible directory, and edit the configuration variables to point it toward your content directory. The part that tends to be confusing is the $datadir variable, which needs to be set to your internal server path (what you see if you log in via FTP or SSH), not the external URL path.

Next, you'll need to set up your templates. For each flavor, Blosxom loads a series of template files and inserts your content. These files are:

  • content_type.flavor - Allows you to mess with the HTTP content-type, if you really want to.
  • head.flavor - Everything that comes before your entries
  • date.flavor - Format for the date subhead
  • story.flavor - Template for each story, including its title and metadata
  • foot.flavor - Everything that comes after your entries
  • 404.flavor - What to display instead of an entry if no content is found
In each of these templates, you simply write standard HTML, inserting a placeholder variable where the actual content will go. This process is identical to the original Blosxom theming system, including most of the supported variable names.
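For instance, a story template might look something like the following (the variable names here approximate Blosxom's conventions; they're not the exact set my script supports):

```html
<!-- story.flavor: rendered once per entry -->
<div class="entry">
  <h2>$title</h2>
  $body
  <p class="meta">Filed under: $path | $date</p>
</div>
```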

At that point, when you put text files in the data directory, they'll be assembled into blog entries based on the file modification time. The first line becomes the title of the entry. You can categorize entries by putting them in subfolders, or subfolders of subfolders, and then appending the path after the Blosxom script URL.

I've included a couple of entry plugins, as well, just to show how they generally work. One is a comment counter for the old Pollxn comment system that I still use--the CGI script works fine for comments, but the Perl code can't interface with the new PHP script to say how many comments there are on any given entry. The other is a port of the directorybrowse plugin, which creates the little broken-up paths at the end of each entry, so people can jump up to a different level of the category hierarchy. They're short and mostly self-explanatory.

LESSONS LEARNED

At this point, I've been blogging on Blosxom, either the original or this custom version, for slightly more than five years. During that time, I've had all the dates wiped out during a bad server transition, I've moved hosts two or three times, and I've tweaked the site constantly. I think the level of effort is comparable to people I know on more traditional blogging platforms like Wordpress or Movable Type. Of course, the art of writing online isn't really about the tools. But there are some ways that Blosxom has its own quirks--for better and for worse.

The big hassle has been the folder system, especially for a personal blog like this one, where I may ramble across any number of loosely-connected topics from day to day. Basing a taxonomy on folders means that posts can't span multiple categories--I can't have something that's both /journalism and /gaming, for example, which is unfortunate when writing about something like incentive systems for news sites. And once it's been created, you're pretty much stuck with a category, since most of the old links will target the old folder. There aren't many reasons I would want to switch to a database CMS, but the ability to categorize by tags tops the list.

On the other hand, there's something to be said as a writer for the flat-file approach. It has an immediacy to it that a database layer can't duplicate. I don't have to sign into an admin page, visit the "create post" section, type my code into one of those text-mangling rich editing forms, select "publish," and then watch it validate and republish the whole blog. I just open a text editor and start typing, and when I save it somewhere, it's live. Working this way is great for eliminating distractions and obstacles. There's no abstraction between my fingers and the end product.

And while working via individual files is probably less safe or reliable compared to a SQL store, it benefits from easy hackability. I don't have to understand the CMS schema to fix anything that goes wrong, or add new features, or make wide changes. If I decided to change where my linked images are located tomorrow, I'd have all the power of UNIX's text-obsessed command line at my fingertips for propagating those changes. For an organization, that'd be insane. But it works pretty well as long as it's just me tinkering around on the server in my spare time.

Blosxom also ended up being a pretty decent content framework when I recoded my portfolio earlier this year as a single-page jQuery interactive. I tweaked a couple of themes, added a client URL parameter and a teaser plugin, and within a day had it serving up the desired HTML snippets in response to my AJAX calls, while still providing a low-fidelity version for JavaScript-disabled browsers. I'm sure you could do the same kind of thing with a serious CMS like Drupal or Django, but I don't know that I could have done it as quickly or simply.

I don't recommend that anyone else try running a web page this way. But as a learning experience, writing your own tiny server framework serves pretty well. It's a good challenge that covers the broadest parts of Internet programming--file access, data structures, sorting, filtering, caching, HTTP requests, and output. And hey, it works for me. Maybe it'll work for you too.

October 18, 2010

Filed under: journalism»new_media

How J-Schools Are Failing New Media 101

If you're interested in working in data-driven journalism, or you know someone who is, my team at CQ is hiring. You can check out the listing at Ars. For additional context, this opening is for the server-side/database role on the team--someone who can set up a database for a reporting project, mine it for relevant data, and then present that information to either the newsroom or the public as a modern, standard-compliant web page.

To be honest, we're having a really difficult time filling this position. It's an odd duck: we need someone who's comfortable with computer science-y stuff like data structures and SQL, but also someone who can apply those skills towards journalism, which has its own distinct character traits: news sense, storytelling, and a peculiar tendency to pull at intellectual loose ends. A tough combination to begin with, even without taking into account the fact that anyone with both aptitudes can probably make a lot more money with the former than with the latter. So let's add a third requirement: they've got to be a true believer about what we do here.

As far as I can tell, the most reliable way to get someone with these three traits is to start with a journalist, then teach them how to code. In theory, that should be exactly what happens in a journalism school's "new media" or "interactive" program. And yet my experience with graduates of these MA programs is that they're woefully unprepared for the job my team is trying to do.

I should note here, I think, that I never attended J-school myself. GMU didn't have a journalism program, and I ended up in a different specialization in the communication department anyway. So it's possible that I'm a little bitter, given that I had to work my way into the news business via extensive freelancing, entry-level web production, and a lot of bloody-minded persistence. But I think my gripes are reasonable, and they're shared with coworkers from more traditional journalistic backgrounds.

Here's the crux of the problem, as I see it: programs in new media journalism are still teaching the Internet in the context of traditional print or television news, which stalls their graduates in two ways. First, it means the programs approach online media as outsiders, teaching classes in "blogging for journalists" or "media website design" as if they were alien artifacts to be unpuzzled instead of the native publishing platform for a whole generation now. It's the web, people: it's not going anywhere, and it's not something you should have to spend a semester introducing to your students. A whole class on blogging isn't education--it's coddling.

Second, these schools seem to be too focused on specific technologies or platforms instead of teaching rudimentary, generalizable computer engineering. There are classes on Flash, or on basic HTML, or using a given blog platform--and those are all good skills to have, but they're not sufficient. What we really need are people who know the general principles behind those skills: how do you structure data effectively for the story? How do you debug something? What's object-oriented design? Technology moves so fast in this business, someone without those fundamentals won't be able to keep up with the pace of change we need to maintain.

Maybe I'm just hardcore, but when I look at something like the Medill Graduate Curriculum (just to pick on someone at random), the interactive track looks lightweight to me. There's a lot of emphasis on industry inside baseball ("How 21st Century Media Works" or "Building Networked Audiences"), and not nearly enough on getting your hands dirty. "Digital Frameworks for Reporting" is only taught in DC? (Are government websites not available in Chicago?) "Database Reporting" is an optional elective? Not a single class taken from the graduate or undergraduate computer science curriculum, like "Fundamentals of Computer Programming I?" It looks to me like a program where you could emerge as a valuable data journalist, but it's just as likely that you'd be another Innovation Editor. And trust me, the world does not need any more of those.

I sympathize with the people who have to design these programs, I really do. The web is a big topic to cover. And worse, it's hard to teach people how to think critically--to understand how they think, instead of just telling them what to think--but good programming has a lot in common with that level of metacognition. For the kind of data journalism we're trying to do at CQ, you've got to at least be able to think a little like a programmer, a little like a journalist, and a little like something new. If you think you can do that, we'd love to hear from you.

October 5, 2010

Filed under: journalism»new_media

CQ Economy Tracker

I don't know how long this'll be available to the general public, so take a look while you can: CQ Economy Tracker (formerly the Economic Indicators project) is now live. It's the product of more than a year of off-and-on development, and I'm thrilled to finally have it out in the wild.

Economy Tracker collects six big economic data sets (GDP, inflation, employment and labor, personal income and savings, home sales and pricing, and foreclosure rates) across the national, regional, and state levels, extended back as far as we could get data--sometimes almost a hundred years. The data is graphed, mapped, available in a sortable table, and also made available as Excel spreadsheets. As far as we're aware, we're the only organization that's collecting all of this information and putting it together in one easy-to-read package. It's a great resource for our own reporters when they go looking for vetted economic data, as well as a handy tool for readers.

But more than that, Economy Tracker has been my team's bid to prove some fundamental ideas about data journalism. The back end is a fairly simple PHP/PostgreSQL database, with the emphasis on A) making it easy for non-technical reporters to update by accepting Excel spreadsheets in a very tolerant way, and B) returning results in the web-standard JSON format for consumption by either Flash or JavaScript. The current dashboard applet is a full-service showcase for the collection, but using a standards-based API, it should be easy for my team to build new visualizations based on our economic data--including smaller, single-purpose widgets or mash-ups with political or demographic data--or for our customers and readers to do so.
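To illustrate the idea (this is an invented sample, not the actual CQ schema or data), a JSON response for a single indicator might look something like:

```json
{
  "indicator": "unemployment_rate",
  "level": "state",
  "region": "VA",
  "series": [
    { "date": "2009-12", "value": 7.0 },
    { "date": "2010-01", "value": 7.2 }
  ]
}
```

Because both ActionScript and JavaScript can parse a payload like that natively, the same endpoint can feed the Flash dashboard today or a lightweight HTML widget tomorrow.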

I think the last few years have shown how this strategy--building a news API for both internal and external use--has had real benefits for the newsrooms that have boldly led the way, like NPR and the New York Times. Not only does it engage the segment of the audience that's willing to dig into their data (free publicity!), but it grants newsroom developers a fleetness of foot that's hard to beat. It's a lot easier, for example, for NPR to turn on a dime and toss off a tablet-optimized website, or create a new native mobile client, because their content is already mostly decoupled from presentation and available in a machine-readable format. That's kind of a big deal, especially as we wait to see how this whole mobile Internet thing is going to shake out.

Whether or not this approach takes off, I'm enormously proud of the work that my team has done on this project. It's been a massive undertaking: building our own custom graphing framework, creating an internal event scheme for coordinating the two panels (pick a year on the National pane and it synchronizes with the Regional/State pane, and vice versa), and figuring out how to remain responsive while still displaying up to 40,000 rows of labor statistics (a combination of caching and delayed processing). Most importantly, the Economy Tracker stands as a monument to a partnership between the multimedia team, researchers, and our economics editor, in the best tradition of CQ journalism.

September 28, 2010

Filed under: music»performance»dance

Battle Royale


(inspired by, via, in the style of Run DMC)

This Saturday, October 2nd, I'll be taking part in the Crafty Bastards breakdance battles in Adams Morgan, DC. People have often asked me, since I started breaking, when they can come see a performance. Unfortunately, b-boys and b-girls don't really do recitals, and most battles are held in odd locations with a $15+ cover charge. But early on I attended Crafty Bastards--a free, outdoor, family-friendly battle held in conjunction with a craft fair--and decided that I'd try to make it my first formal battle, where friends and family could come watch.

So on Saturday, I'll be battling alongside b-girl KT B as Steak and Cake Crew. The competition starts around 2pm, at the Marie Reed Learning Center in Adams Morgan. In addition to myself, there will be a range of amazing local b-boys and b-girls performing incredible acts of rhythm, power, and coordination. DJ Stylus Chris will be playing funk, soul, and old-school hip-hop for the event. Also, there's a craft fair, if you're into that kind of thing.

I am nervous as all get out, people. I'm spending most of this week in last-ditch practice mode. But I have modest goals: get out there, have some fun, and not embarrass myself. If you're in the area, come on out and say hello!

September 21, 2010

Filed under: journalism»new_media

We Choose Both

So you're a modern digital media company, and you want to present some information online. The fervor around Flash has died down a little bit--it started showing up on phones and somehow that wasn't the end of the world, apparently--but you're still curious about the choice between HTML and Flash. What technology should you use for your slideshow/data visualization/brilliant work of explainer journalism? Here's my take on it: choose both.

You don't hear this kind of thing much from tech pundits, because tech pundits are not actually in the business of effectively communicating, and they would prefer to pit all technologies against each other in some kind of far-fetched, traffic-generating deathmatch. But when it comes to new media, my team's watchword is "pragmatism." We try to pick the best tools for any given project, where "best" is a balance between development speed, compatibility, user experience, and visual richness. While it's true, for example, that you can often create the same kind of experience in HTML5 as in Flash, both have strengths and weaknesses. And lately we've begun to mix the two together within a single package--giving us the best of both worlds. It's just the most efficient way to work, especially on a team where the skillsets aren't identical from person to person.

What follows are some of the criteria that we use to pick our building blocks. None of these are set in stone, but we've found that they offer a good heuristic for creating polished experiences under deadline. And ultimately that--not some kind of ideological browser purity test--is all we care about.

Animation and Graphics

Long story short, if it has an animation more complicated than jQuery.slideDown(), we use Flash. HTML animation has become more and more sophisticated, but it's still not as smooth as Flash's 2D engine. More importantly, performance can vary widely from browser to browser: what runs brilliantly in Chrome is going to chug along in IE or (to a lesser extent) Firefox. One of the big advantages of Flash is that speed is relatively constant between browsers, even on expensive operations like BitmapFilters and alpha transparency.

Likewise, anything that involves generating arbitrary shapes and moving them around a canvas is a strong candidate for Flash. This is especially true for any kind of graphing or for flashy bespoke UIs. It's possible to create some impressive things with CSS and HTML, especially if you throw caution to the wind and use HTML5's canvas tag, but it's slower and requires a lot more developer time to get polished results across browsers. A lot of this comes down to the APIs that ActionScript exposes. Once you've gotten used to having a heavily-optimized 2D display tree and event dispatcher, it's hard to go back--and there's definitely no way I'm going to try to train a team of journalists how to push and pop canvas transformations.
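To show what that push-and-pop bookkeeping amounts to: canvas keeps a stack of transform states that you save and restore by hand. This toy version models the stack without a real drawing context, purely to illustrate the pattern--it's a sketch, not production code:

```javascript
// Toy model of the canvas save()/restore() transform stack.
// A real CanvasRenderingContext2D does this internally; modeling it
// here keeps the example self-contained.
function createToyContext() {
  var stack = [];
  var state = { x: 0, y: 0, rotation: 0 };
  return {
    save: function () {
      // push a copy of the current transform state
      stack.push({ x: state.x, y: state.y, rotation: state.rotation });
    },
    restore: function () { state = stack.pop(); },
    translate: function (dx, dy) { state.x += dx; state.y += dy; },
    rotate: function (r) { state.rotation += r; },
    getState: function () { return state; }
  };
}

var ctx = createToyContext();
ctx.save();            // remember the untouched origin
ctx.translate(50, 20); // move the origin to draw a shape...
ctx.rotate(Math.PI);   // ...at an angle...
ctx.restore();         // ...then pop back to where we started
```

Every shape you draw means another round of that manual stack discipline--which is exactly the chore Flash's display tree handles for you.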

Text

On the other hand, if we're looking for the best text presentation, we go with HTML every time. While it's true that Flash supports a wider range of embedded fonts, those fonts have been tricky to debug properly, and Flash's text handling otherwise has always left a lot to be desired: it anti-aliases poorly, doesn't wrap or reflow well, and is trapped in the embed window regardless of length. Also, its CSS implementation is weird and frustrating, to say the least. Even if our text is originally loaded in Flash, we increasingly toss it over to HTML via ExternalInterface for rendering.

Where this really becomes a painful issue is when dealing with tabular data. Flash's DataGrid component is orders of magnitude faster than JavaScript when it comes to sorting, filtering, and updating large datasets, but it comes with a lot of limitations: rows must be uniform in height, formatting is wonky, and nobody's happy with the mousewheel behavior. If you're a genius in one runtime or the other, you can mitigate a lot of its weaknesses with clever hacks, but who has the time? We usually make our choice based on size: anything up to a couple hundred rows goes into HTML, and everything else gets the Flash treatment.
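That size-based rule of thumb boils down to a few lines of JavaScript. The threshold and function names here are illustrative, not our production code:

```javascript
// Route a dataset to a rendering path by row count -- illustrative only.
var HTML_ROW_LIMIT = 200; // "a couple hundred rows"

function chooseRenderer(rows) {
  return rows.length <= HTML_ROW_LIMIT ? "html" : "flash";
}

// Small tables can just become plain HTML markup...
function renderHtmlTable(rows) {
  var markup = rows.map(function (row) {
    return "<tr><td>" + row.join("</td><td>") + "</td></tr>";
  });
  return "<table>" + markup.join("") + "</table>";
}

// ...while big ones would be handed off to a Flash DataGrid.
var small = [["2009", "9.3%"], ["2010", "9.6%"]];
var big = new Array(5000); // stand-in for a large dataset
```

The nice thing about isolating the decision in one function is that when the JavaScript engines catch up, the threshold is the only thing that has to change.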

Speed

In some cases, particularly in the new JIT-enabled browser VMs, JavaScript may already be faster than ActionScript. But the key is "some cases," since most browsers are not yet running those kinds of souped-up interpreters. In my experience, heavy number-crunching works better in Flash--to the extent that it should be done on the client at all. We try to handle most of our computational work on the server side in PHP and SQL, where the computation can be done once and the results cached. For something like race ratings, this works pretty well. In the rare cases that we do need to burn a lot of cycles on the client side, Flash is often the best way to get it done without script timeouts in older browsers.

I also think Flash is easier to optimize, but that probably has to do with my level of experience, and we don't usually make decisions based on voodoo optimization techniques. My personal take is that client-side speed is only a priority if it impacts responsiveness, which is primarily a UX problem. We have run into problems with delays in response to user input on both technologies, and the solution is less about raw speed and more about giving good user feedback. We also use strategies like lazy loading and caching no matter where we're coding--they're just good practice.
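Since lazy loading and caching come up no matter which runtime we're coding in, here's the basic shape of the pattern in JavaScript--a sketch with the slow data call stubbed out, and every name hypothetical:

```javascript
// Wrap an expensive lookup so each key is only fetched once.
// The loader function is a stand-in for a real server round-trip.
function createCachedLoader(loader) {
  var cache = {};
  return function (key) {
    if (!(key in cache)) {
      cache[key] = loader(key); // only hit the slow path on a miss
    }
    return cache[key];
  };
}

var requests = 0;
var getRatings = createCachedLoader(function (race) {
  requests++; // pretend this is a slow database/server call
  return { race: race, rating: "tossup" };
});

getRatings("OH-Senate");
getRatings("OH-Senate"); // second call comes straight from the cache
```

In practice the loader would be asynchronous, but the cache-on-first-touch logic is the same either way.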

XML and JSON

This is another minor factor, since we're in control (usually) of our own data formats here, but it's worth considering if all else is equal. Flash has excellent native XML support, but its JSON library (from Adobe's core library package) proved slow for us when loading more than a few thousand rows from a database. JavaScript obviously has good JSON support, but I always dread using it for XML. We've gradually started moving to JSON for both, because we're trying to set a good example for web API design at CQ, and it seems like the lesser of two evils.

It should be noted that one of the primary roles of XML and JSON in the browser is feeding AJAX-style web apps, and Flash does have a real advantage in this area: it can make cross-domain HTTP requests in any browser (provided the remote server opts in with a crossdomain.xml policy file), as opposed to JavaScript's heavy-handed same-origin sandboxing.
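One caveat worth knowing: those cross-domain requests only work if the remote server publishes a policy file at its root. A minimal example looks like the following (the domain here is hypothetical):

```xml
<?xml version="1.0"?>
<!-- Served from the remote host as /crossdomain.xml -->
<cross-domain-policy>
  <allow-access-from domain="*.cq.com" />
</cross-domain-policy>
```

So "cross-domain in all browsers" really means "cross-domain to any server that has agreed to it," which covers most public data APIs but not arbitrary sites.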

Code Reuse

There are doubtless tools and techniques for building reusable JavaScript components and APIs, but at the end of the day it's just been easier to do for our Flash/Flex projects. The combination of namespaces, traditional object inheritance, and a more consistent API means that it's easier to get my team members up to speed, and we now have a small library of reusable ActionScript components for graphing, slideshows, mapping, and data display. So far, my experience is that when we build a Flash project, if done properly, the code ends up being pretty portable by default. Mastering reusable JavaScript, on the other hand, seems to require deep knowledge of things like closures and scope, and those don't come easy to most journalists-turned-coders.
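Closures and scope being the sticking point, this is the kind of pattern reusable JavaScript tends to demand: a factory whose state is private to each instance. The names here are made up for illustration:

```javascript
// Factory/module pattern: each call returns an independent widget
// whose internal state lives in a closure, invisible from outside.
function createSlideshow(slides) {
  var index = 0; // private -- only reachable through the methods below
  return {
    next: function () {
      index = (index + 1) % slides.length;
      return slides[index];
    },
    current: function () { return slides[index]; }
  };
}

// Two instances on the same page don't step on each other's state.
var showA = createSlideshow(["intro", "chart", "credits"]);
var showB = createSlideshow(["one", "two"]);
showA.next(); // advances showA only
```

It's not hard once it clicks, but "a function that returns functions that remember variables from a call that already ended" is a real conceptual hurdle compared to an ActionScript class with private members.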

I really can't overstate how important this is for our team. Like most newsroom multimedia teams, we're understaffed relative to the workload we'd really like to have. We don't really want to sink time into one-off projects, so any time we have a chance to recycle code, we take it. An additional bonus is that we can build these reusable components to fit the CQ look and feel, and it's easier to pitch a presentation to an editor if we can point to something similar we've done in the past.

Video

Video is an interesting case, and one that's representative of a mature approach to new media planning. I would say that we use a lot of JavaScript to place video on the page--but that video is typically a Flash embed from YouTube or a content delivery network. We're a long way away from a world of pure video tags.

In general, my time at B-SPAN taught me this about online video: if you're not a video hosting company, you should be hiring someone else to take care of it for you. Video is too high-bandwidth, too high-maintenance, and too finicky for non-experts to be managing it. And I think the HTML5 transition only proves that to be the case in the browser as well. Vimeo and Brightcove (just to pick two) will earn their money by working out ways for you to upload one file and deliver it via <video> or Flash on a per-browser basis, freeing you up to worry about the bigger picture.

Mobile

Mobile is, of course, where this whole controversy got started, but I think most of the debate revolves around a straw man. Current mobile devices restrict your use of hover events (no more tooltips!), they limit the screen to a tiny keyhole view, and they require UI elements to be much larger for finger-friendliness. That's true for HTML and Flash both. The idea that HTML5 interactives can present a great experience on both desktop and mobile browsers without serious alterations is ridiculous--you're going to be doing two versions anyway if you want decent usability. So while it depends on your situation, I don't think of this as a Flash vs. HTML5 question. It's more like a desktop vs. mobile question, and the vast majority of our visitors still come in through a desktop browser, so that's generally what we design for.

That said, here's my prediction: Flash on Android is good enough, and is going to be common enough in a year or two, that I can easily see it being used on mobile sites going forward. Apple probably won't budge on their stance, meaning that Flash won't be quite as ubiquitous as it is on the desktop. But if small teams like mine find ourselves in a situation where Flash is a much better choice for the desktop and a sizeable chunk of smartphones, it won't be unusual--or unreasonable--to make that trade-off.

Powers of Two

But really, why should anyone have to choose either Flash or HTML5? I mean, isn't the ability to mix and match technologies a key part of modern, Web 2.0 design? In a day and age where you've got servers written in LISP, PHP, Ruby, C, and .Net all talking to each other, sometimes on the same machine, doesn't it seem a little old-fashioned to be a purist about the front-end? Whatever happened to "use the right tool for the right job?"

The key is to understand that you can choose both--that ActionScript and HTML actually make a great combination. By passing data across the ExternalInterface bridge, you can integrate Flash interactives directly into your JavaScript. Flash can transfer text out to be displayed via HTML. JavaScript can pass in data to be graphed, or can provide accessible controls for a rich media SWF component. If you code it right, ActionScript even provides a great drop-in patch for HTML5 features like <canvas>, <video>, and <audio> in older browsers.
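The JavaScript half of that bridge is just plain function calls in both directions. Here's a sketch with the SWF object stubbed out so the wiring is visible--ExternalInterface is the real Flash API, but every name below is hypothetical:

```javascript
// JS side of a Flash/HTML handoff. In a real page, `swf` would be the
// <object>/<embed> element; a stub keeps this sketch self-contained.
var displayedText = "";

// Called *by* Flash (via ExternalInterface.call) to push text out to HTML.
function showCaption(text) {
  displayedText = text; // in a real page: captionElement.innerHTML = text
}

// Methods the SWF would expose via ExternalInterface.addCallback,
// stubbed here so the calling pattern is runnable.
var swf = {
  setGraphData: function (rows) { this.rows = rows; }
};

swf.setGraphData([[2008, 4.2], [2009, 7.8]]); // JS hands data in to Flash...
showCaption("Unemployment, 2008-2009");        // ...Flash hands text out to HTML
```

Once both sides speak through a couple of functions like these, the Flash piece becomes just another component on the page rather than a walled-off rectangle.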

The mania for "pure HTML" reminds me of the people in the late 90's who had off-grey websites written in Courier New "because styling is irrelevant, the text is the only thing that matters." If Flash has a place on the page, we're going to use it. We'll try to use it in a smart way, mixing it into an HTML-based interactive to leverage its strengths and minimize its weaknesses. But it'd be crazy to make more work for ourselves just because it's not fashionable to code in ActionScript these days. Leave that for the dilettantes--we're working here.

September 8, 2010

Filed under: fiction»reviews

Spilled Ink

Zero History, by William Gibson

As with any author, I have favorite William Gibson titles, as well as books I've enjoyed but never felt a need to revisit. Zero History, however, is the first Gibson novel I've found myself actively disliking for most of its length.

The third part of a loose trilogy by an author who seems to write trilogies by accident as much as anything else, Zero History follows relatively close on the heels of 2007's Spook Country. It centers on Hollis Henry, ex-rock singer and freelance journalist, and an ex-junkie named Milgrim, both of whom are recruited by eccentric PR tycoon Hubertus Bigend to locate an underground clothing designer (known only as the "Gabriel Hounds"). Bigend wants to do this for several reasons, partly because he's envious of their vague and trendy marketing strategy, but mostly because he wants to get into the business of designing military uniforms for the US, and he'd like the Hounds to do it for him.

In the right hands, this plotline is the material for a dark farce, but Gibson insists on writing it straight-faced. Worse, he spends most of the book stalled out in endless circular conversations. Over and over, it seems, Hollis and/or Milgrim meet with a possible lead on the enigmatic designer, fail to make any progress, and return to Bigend to give him the bad news and receive a new assignment. Lather, rinse, repeat, until finally Gibson seems to realize that he's gone 250 pages without any real action and kicks off an admittedly exciting climax: a hostage exchange involving flying drones and ubiquitous surveillance. Even then, it's peculiarly passive--viewed primarily through remote cameras--and is only the top layer of a market manipulation scheme that is described as monumentally important, but never explained or detailed.

These are not, granted, new criticisms for Gibson. He's never been able to write a convincing ending (the book's closing connection to Pattern Recognition is at best unjustifiable, and at worst entirely gratuitous), he likes his Macguffins elusive, and he often leaves the real plot events (not to mention their resolution, such as it is) in the background, while his protagonists toil over some small part of the greater plan. Unlike his past books, however, Zero History can't quite achieve escape velocity, perhaps because the stakes are so low, and the characters so slightly motivated. Why should we care whether or not a rich Belgian ad agency can find someone to make fashionable army pants? Especially when the agency is run by someone as aggressively bland as Bigend, whose only role is to fund the plotline for arbitrary reasons, and whose "eccentric" personality is limited to wearing obnoxiously-colored suits?

Over the entire trilogy, but particularly in Zero History, Gibson has joined the ranks of science fiction authors (see also: Doctorow and Sterling) who seem to believe that the world has become sufficiently weird that merely documenting it qualifies as genre fiction. This shift from sci-fi to techno-thriller is not kind to Gibson's style of writing, which has always been evocative rather than technically-detailed. In this new subgenre--blog-punk? tweet noir?--authors have traded in their worldbuilding for exhaustive trivia. All this real-world gadgetry has to be explained and infodumped to establish its real-world credibility, turning these novels into little more than collections of nerdy ephemera. For me, they become a distracting game of "guess the source" (a little John Robb here, a little Wired Magazine there, perhaps), constantly jerking me out of the narrative.

Besides, maybe it's just me and my particular pet peeves, but there's a lot here that seems tuned to the wavelength of the modern techno-hipster: a precious preoccupation with design, an exhaustive catalog of name brands, and a steady stream of shiny objects that reads like a random selection from BoingBoing or Valleywag (quadcopter drones, the OpenMoko Neo, steampunk hotels). Everyone has an iPhone, which they're constantly stroking or pinching or otherwise fondling via a near-sexual verb choice. Twitter features prominently. All it needs to complete the stereotype is a pair of skinny jeans and a bad haircut. This is a disappointingly mundane turn from the author who first envisioned the vast neon vistas and chrome origami of Neuromancer's cyberspace.

Zero History carries a lot of thematic similarities to another Gibson trilogy-ender, All Tomorrow's Parties, in that both try to describe some kind of grand paradigm shift between the real and the virtual. But in the latter, the protagonists were blessed with data-crunching abilities verging on magical realism, and a real technological transition (toward nanotech production) was taking place. Here, when side characters suddenly begin vaguely describing Bigend's marketing firm as "about to become exponentially bigger" during the book's climax, it comes across as a crutch--an author who doesn't know how to raise the stakes except by telling the audience that they're higher.

It's not all bad, I guess. Gibson still has a deft hand with dialog, and he has a few great characters up his sleeve, like Hollis's perpetually furious ex-drummer Heidi (unlike many of his colleagues, Gibson can pass a Bechdel test) and a surly, profane Eastern European computer repairman. The writing is less stylized, but also less distracting than Spook Country, where almost every chapter ended with a choppy, zen-like pronouncement. And when his eye for detail works, like the descriptions of a secret hotel in London, it's as gorgeous as ever.

Kraken, by China Mieville

Kraken, in contrast, is a playful throwback for China Mieville, returning to the kind of politically-aware, Gaiman-esque urban fantasy that he first wrote in King Rat and later indulged in his YA novel, Un Lun Dun. Since then, Mieville's been overdue for something less grim than his usual fare, and the result is a big, fun shaggy dog story. It's filled with dubious sorcery, religion collectors, and LOLspeak. Also, it's about the end of the world, in a way. Mieville treats apocalypses something like a grade-schooler's birthday party: what if two of them were thrown on the same day? Which one gets attended, and which gets left with a lot of uneaten ice cream cake?

So here's a biologist named Billy Harrow, whose career highlight to date is having preserved a giant squid specimen for the London Natural History Museum. Billy goes in to work one day, only to find that the squid has been neatly stolen from its tank, without a single clue left behind, and Billy's being investigated by the Fundamentalist and Sect-Related Crime Unit. In short order, he's pulled into a mess of competing conspiracies, including a group of devout kraken worshippers and (in a kind of reverse-Yakuza twist) a vicious mobster tattoo.

Mieville likes to play with genre, and urban fantasy is basically defined by its tension between belief systems--namely, the mundane world and the secret history. This is, of course, inherently ridiculous: you can barely go three pages without a violation of natural law in the average Dresden Files book--they're more like natural suggestions at that point--so urban fantasy simply replaces the old rules with a new set of extra special rules, which exist as "reality" until the author amends them to get around a difficult plot point. Kraken, as Mieville tends to do, stages a sly critique of this dynamic via excess: all the secret histories get a chance at the table--all of them that he can think of, that is, and that's quite a few, ranging from bizarre cults to television shows--but that doesn't mean they all get to be the history:

Vardy swung back his chair and looked at her with some queasy combine of dislike, admiration and curiosity. "Really? That's what it stems from, is it? You've got it all sorted out, have you? Faith is stupidity, is it?"

Collingswood cocked her head. Are you talking to me like that, bro? She couldn't read his head-texts, of course, not those of a specialist like Vardy.

"Oh believe me, I know the story," he said. "It's a crutch, isn't it? It's a fairy tale. For the weak. It's stupidity. See, that's why you'll never bloody be good enough for this job, Collingswood." He waited as if he'd said too much, but she waved her hand, Oh do please carry the fuck on. "Whether you agree with the bloody predicates or not, Constable Collingswood, you should consider the possibility that faith might be a way of thinking more rigorously than the woolly bullshit of most atheists. It's not an intellectual mistake." He tapped his forehead. "It's a way of thinking about all sorts of other things, as well as itself. The Virgin birth's a way of thinking about women and about love. The ark is a far more bloody logical way of thinking about the question of animal husbandry than the delightful ad hoc thuggery we've instituted. Creationism's a way of thinking I am not worthless at a time when people were being told and shown they were. You want to get angry about that bloody admirable humanist doctrine, and why would you want to blame Clinton. But you're not just too young, you're too bloody ignorant to know about welfare reform."

They stared at each other. It was tense, and weirdly slightly funny.

"Yeah but," Collingswood said cautiously. "Only, it's not totally admirable, is it, given that it's total fucking bollocks."

They stared some more.

"Well," Vardy said. "That is true. I would have to concede that, unfortunately." Neither of them laughed, but they could have done.

And that's your argument for rationalism, by way of a book about squid gods. Honestly, with the playing field wide open like this, Kraken gets a little overstuffed at times. Mieville's clearly enjoying himself, stewing together all the ideas and pop cultural references he no doubt couldn't use in either Bas-Lag or The City and the City, but there are a few times toward the end when the double-crosses and twists become more exhausting than confusing.

But hey: it's about time that someone tried to bring some intelligence to a sub-genre that's the pulp of our age, isn't it? When the bookshelves are groaning under the weight of mopey vampires, brooding werewolves, and the sexy men and women who love/kill them, isn't it nice that someone can step in and say "well, this is a bit ridiculous, so let's see how far it can go"? If it sometimes wanders on its way up to 11, maybe it abuses the italics a little bit and has more fun with squid puns than is strictly necessary... well, speaking personally, that's a price I'm willing to pay.

September 2, 2010

Filed under: gaming»software»final_fantasy

Press A

Literally the second thing you see after booting up Final Fantasy XIII, immediately following the Square-Enix logo, is a message asking you to "Press any button to continue." This is before you get to the title screen, mind you--before you have even mentally registered that the game could be asking you for input. It ambushes you, frankly. I thought it was a joke at first. It's not. The reward for pressing any button--for me that's the A button, being an XBox gamer by way of Nintendo, instead of whatever wacky "continue" button location Sony started using for the Playstation--is another OK-only dialog asking you to pick a location for your saved games. I don't have a memory card or anything in my XBox, so there's only one possible storage location.

That's three button presses, and no actual choices, in the first minute. First fifteen seconds, if you've seen this before and just hammer your way through it.

The Final Fantasy games have never been about open worlds and nonlinear choice, but they've at least maintained the illusion that the player has options. The thirteenth outing drops all those pretensions. It combines save points with shops and upgrade stations, so there are no side trips. It puts levelling up right in the pause menu. As of the ninth chapter (out of 13), every level is practically a straight-ahead corridor, with a handy automap that reminds you which way to run in case you forget. It is, in other words, lots of button presses, and no actual choices.

This extends to the new fight system as well, which features no small amount of hot A button action, usually to select "auto-battle" for a single character (the others are controlled by the AI). Eventually, Square introduces a "Paradigm" strategy layer on top of all the auto-battling, where you get to choose between different roles (tank, healer, mage, etc.) for party characters, but even granting that complication this is a game that my dog could probably play, if I could just train him to press the big green button on the fighting stick. And then he could play Tekken, too, which would be good for a laugh.

I've played a fair amount of this game while on a week's vacation, in between dance practice and dog walks, and at times it almost seems like satire. But it's Square, so of course it's hopelessly self-important. The writing's incoherent, the characters are shallow, the voice acting is sometimes flat, and the cosmology is vastly overcomplicated. If it were any more deadpan, we'd have to check for rigor mortis. On the other hand, I'm still playing and will probably finish this weekend, so it must be doing something right. Not that it would take much: bear in mind, I watch low-budget SciFi channel movies for fun.

I think what fascinates me about FF XIII is the ornery throwback quality of it all. In ruthlessly trimming everything about the game down to the very core of JRPG-ness, Square has made the game more streamlined--easing players gently from one barely-distinguishable fight to the next with only the occasional video clip to separate them--but also made it clear how little their conception of a "video game" has evolved. For all its sound and fury, the result is about two menus (and oh, how Square loves their menus still) away from Chrono Trigger.

There are some great games in my collection that you couldn't have done on a Super NES. Rock Band and Guitar Hero wouldn't work without higher-capacity media. Sands of Time really needs 3D to sell its acrobatic puzzles. And it's hard to imagine Burnout without the hyper-realistic, slow-motion car crashes. But there's very little in this Final Fantasy, apart from the admittedly-gorgeous art direction, that wouldn't play equally well in 16-bits or less.

And so ultimately, FF XIII occupies a weird space. It's clearly an incredibly expensive game in terms of production values. It's a continuation of one of the most well-respected video game franchises in existence. It's a certain amount of fun to play. And yet, if I were a complete stranger to gaming culture, I have no idea how I would react to this odd combination of lavish graphics, active time battles, and simple menu trees--a distillation of old-school RPG mechanics in a shiny new shell. I suppose I'd just have to press the A button, until it told me to stop.
