On Black Friday, while the rest of Virginia storms their local retailers for loss-leader merchandise, Belle and I will pile into the car with our pets and start a week-long drive across the country to our new home in Seattle. The move is the realization of a long dream of ours: to get as far away from Washington, DC as was humanly possible.
I kid! Mostly. We won't miss the political atmosphere, the terrible public transit system, or the sweltering DC summers. But we will be far away from our family and friends, and there are some parts of DC that have grown on me. Here are a few things about the city that I will, in all honesty, miss:
As the deadlines creep forward for the Joint Special Committee on Deficit Reduction, my team at CQ has put together a package of new and recent debt interactives covering the automatically-triggered budget cuts, the proposals on the table, the schedule set for committee action, and more.
The centerpiece of the package is a "reactive document" showing how the automatic cuts will go into effect if Congress does not pass cuts totalling $1.2 trillion by January 15. A series of sliders set the size of the hypothetical cuts, and the text and diagrams of the document adjust themselves to match. It's a neat idea, and one that's kind of a natural match for CQ: wordy, but still wonky.
Like a lot of people, I encountered the idea of reactive documents through Bret Victor's essay Explorable Explanations. Victor is an ex-Apple UI designer who wants to re-think the way people teach math, and reactive documents are one of the tools he wants to use. His explorations of learning design via reactive documents, such as Up and Down the Ladder of Abstraction, are breathtaking. As he writes,
There's nothing new about scenario modeling. The authors of this proposition surely had an Excel spreadsheet which answered the same questions. But a spreadsheet is not an explanation. It is merely a dataset and model; it cannot be read. An explanation requires an author, to interpret the results of the model, and present them to the reader via language and graphics.
The reactive document integrates spreadsheet-like models into authored text. It can be read at multiple levels, depending on the reader's level of interest. The hurried reader can skim it. The casual reader can read it as-is. The curious reader can adjust the author's scenarios. The engaged reader can explore scenarios of his own devising.
Unlike a spreadsheet, the barrier to exploration here is extremely low -- simply click and drag. This invites casual readers to become engaged and start exploring. It transforms readers from passive to active.
Victor's idea is a clever one, and as someone who often describes interactives using the same "layered reading" mechanism, it appeals to my storytelling sense. I also like that it embraces the original purpose of the web--to present hypertext documents--without sacrificing the rich interactions that browser applications have developed. That said, I'm not entirely convinced that reactive documents like this are actually terribly useful or novel.
The main problem with this method of presenting interactive information is that it's actually really burdensome for the playful user. It's easy to read, but if you change anything, you either have to re-read and re-process the entire paragraph, or you have to learn to pick individual changes and their meaning out of a jumble of words. Besides, sometimes words are not a very good description of an effect or process--imagine describing complex machinery only in paragraph form.
Victor also has some examples that avoid this flaw by making the reactive document incorporate diagrams and graphs alongside his formulas. These are great, but they also illustrate the fact that, once you make reactive "documents" more visual and take away the in-text trickery, they're really just regular interactives. They're stunningly designed, and I'm always in favor of more multimedia, but there's nothing new about them.
This probably comes off as more adversarial toward reactive documents than I actually am; most of that is just my rhetorical background leaking out. I think they're neat, and I would guess that Victor himself thinks of them less as a complete solution and more as a different shade in his teaching palette. In some places they're helpful, in others not so much.
As an Excel enthusiast, though, I do take exception to Victor's description of spreadsheets as something that "cannot be read," with a high barrier to entry. People read and create spreadsheets all the time, although (to my frustration) they often use them as layout tools. But a spreadsheet that's already set up for someone and locked up to prevent mistakes is barely any more difficult to use than his draggable text--the only real difference is the need to type a number. Regular people may find spreadsheet formulas difficult to connect with cells, but those same people are unlikely to be creating Victor's reactive documents either.
Ultimately, I'm wary of claims that any tool is a silver bullet for education or explainer journalism. It's easy to be blinded by slick UX, and to forget that we're basically just re-inventing storytelling tools used by great teachers for centuries. That shouldn't eliminate interactive games and illustrations from our kit. But reading Victor's site, it's easy to give the technology credit for its thought-provoking qualities, when the credit really goes to his lucid, considered reasoning and clear writing (both of which mean that the technology is well-applied). Sadly, there's no script for that.
Between our upcoming move and our recent wedding, it's not a great month for deep thoughts. So let's talk about something much, much shallower: Batman: Arkham City.
The going question, since it was raised by Film Crit Hulk, is "how sexist is Arkham City?" And the answer is, as it sadly tends to be in these discussions, "really sexist." But honestly, I think that's as much a product of lazy writing this time around as it is of the developers' misogyny.
Let's be clear: there is one, and only one, reason that I like Batman, and that's the cartoon series that ran from 1992 to 1995. Striking a balance between Frank Miller's "Dark Knight" and the camp silliness of the Adam West TV show (tilted toward the former as much as a kid's show could be), it presented a version of the characters that was smart and well-shaded. It also introduced the "two voices" gimmick for Batman and Bruce Wayne, retconned several villains to be more interesting, and brought us Mark Hamill as the Joker (not to mention creating Harley Quinn as his codependent partner-in-crime--a relationship, incidentally, that Arkham City also fails to capture). That's impressive work for something that aired between "Tiny Toons" and "Freakazoid."
Arkham Asylum, the previous Batman game, was written by one of the animated series' head writers, Paul Dini, and it borrowed a lot from the show's reinvention of the character. To a fan of the show, even despite the "realistic" art direction, it felt like the animated series tie-in I would have wanted as a kid. But after the first five hours of Arkham City, I had to look it up online to see if the staff from Asylum had even been involved. In comparison, the new game's premise is wildly silly, the dialog is clunky, and Batman's actions veer inconsistently back and forth to meet the demands of the plot (such as it is, being a tedious stream of fetch-quests and scripted blackouts). Where's the humor? The wit? The arresting set-pieces? Why is Batman so grumpy?
A general air of forced macho grittiness is typified by Robin's cameo partway through the game's second act, when he saves Batman during a rooftop ambush. The two immediately get into a petty, ego-driven shouting match for no apparent reason, which comes across as incredibly resentful on Batman's part given that Robin just knocked a ninja off his throat. When the Boy Wonder seems to be the more mature of the Dynamic Duo, you may want to reconsider your script.
Now, I'm not trying to excuse or minimize the sexism that exists in Arkham City. If anything, it's the opposite. In contrast to those who argue that the sexism ruins a good game, I'd say instead that the sexism simply puts the insulting cherry on top of a badly-written sundae. I mean, seriously? It's bad enough that they couldn't write a funny Joker this time around, they've got to stack it high with misogyny to boot?
(The fact that laziness and misogyny go hand in hand also says something about the tolerance for sexism in the game development community. After all, this is an industry where the art director for Deus Ex: Human Revolution felt perfectly comfortable standing in front of a public audience and describing his philosophy of female character design as, essentially, people he'd like to have sex with. It's an atmosphere only Michael Bay could love.)
The general critical consensus seems to be that such terrible writing is particularly shameful because it's a great game, but I'm honestly not that impressed with it mechanically. Arkham City is set up as a Metroid-style progression, where new gadgets open up previously-visited portions of the map. Most games of this type start out with the main character de-powered, but City gives Batman most of his gadgets from the first game. As a result, it just feels cluttered and game-y: ice grenades that create floating platforms and a zap-gun for powering doors don't feel like Batman, World's Greatest Detective. They feel like they wandered in from Zelda in order to justify a sequel.
The same thing applies to the combat, which was one of the defining high points of Arkham Asylum. The foundation is still there, but they've crammed in extra enemy types that each require a flow-breaking special combo to counter. The worst of these are the shielded enemies, who take forever to dispatch because you can't land more than a single hit on them at a time, and have a tendency to crowd in during uncancelable animation frames to knock Batman out of his combo. It's an endlessly frustrating design, compounded by the awkward controls and the fact that few (if any) of the bat-gadgets do anything demonstrably helpful during combat (or out of it, really). Meanwhile the new open-world city--which is a genuine evolution--prioritizes these imbalanced brawls over Asylum's tense stalking arenas.
Part of the danger of sequels is that they exist in an entangled state with their predecessors. A great sequel--to pick an on-topic example, Nolan's The Dark Knight--makes previous entries look better, especially if it can weave in and question their themes. Arkham City isn't all bad. I finished it (granted, it's not very long). But it's definitely a disappointment, and one that reflects badly on its inspiration. This isn't the Batman I admired as a kid anymore, because what City tries to fix about him wasn't broken.
This weekend, Belle and I got married. It was a small, homemade, personal kind of wedding. Among the crafts we made was a "photo booth" consisting of my laptop running a bit of custom ActionScript and a box of very silly props. Unfortunately, we didn't have power and I'm not sure I had the screen saver all set up correctly, so some people may not have had a chance to take their pictures with it before the batteries died. If you'd like to give it a shot, feel free to download and install the Official Nerds Get Hitched Photo Booth (works on Mac or Windows, requires Adobe AIR). It'll take four pictures of you, then save them to a folder on your desktop. There's no quit button (it's a kiosk), but you can hit Escape to leave full-screen mode and then close the window.
This is a great case of what ActionScript (and the AIR platform that Adobe built out of Flex/Flash) does well: pair a better version of JavaScript with a comprehensive runtime library for exceptionally fast, easy multimedia production. I procrastinated on this project like crazy, but in the end it only took a few hours of real work to put together. Sure, we could have paid for software to do the same thing, but if it's this simple, why bother? And this way it's all under our control--it looks and acts exactly the way we wanted.
Anyway, feel free to install our photo booth and send us a picture. And thanks to everyone who came out to our little wedding celebration!
Recently my team worked on an interactive for a CQ Weekly Outlook on contracts. Government contracting is, of course, a big deal in these economic times, and the government spent $538 billion on contractors in FY2010. We wanted to show people where the money went.
I don't think this is one of our best interactives, to be honest. But it did raise some interesting challenges for us, simply because the data set was so huge: the basic table of all government contracts for a single fiscal year from USA Spending is around 3.5 million rows, or about 2.5GB of CSV. And that's just the basic version: the complete set (which includes classification details for each contract, such as whether it goes to minority-owned companies) is far larger. When the input files are that big, forget querying them: just getting them into the database becomes a production.
My first attempt was to write a quick PHP script that looped through the file and loaded it into the table. This ended up taking literally ten or more hours for each file--we'd never get it done in time. So I went back to the drawing board and tried using PostgreSQL's COPY command. COPY is very fast, but the destination has to match the source exactly--you can't skip columns--which is a pain, especially when the table in question has so many columns.
To avoid hand-typing 40-plus columns for the table definition, I used a couple of command-line tools, mostly head and sed, to dump the header line of the CSV into a text file, then added enough language around it for a working CREATE TABLE statement, with every column typed as text. With a staging table in place, COPY loaded millions of rows in just a few minutes, and then I converted the few necessary columns--the dollar amounts and the dates--to more appropriate types. We did a second pass to clean up the data a little (correcting misspelled or inconsistent company names, for example).
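For the record, the end of that process looks roughly like the sketch below. The table name, column names, and file path are all placeholders (the real extract has 40-plus columns), but the shape is the same: an all-text staging table matching the CSV header, a single COPY, and type conversions only where we needed them.

```sql
-- Staging table generated from the CSV header; every column starts as text.
-- (Placeholder names -- the real USA Spending extract has 40-plus columns.)
CREATE TABLE contracts_staging (
    unique_transaction_id text,
    agency_name           text,
    vendor_name           text,
    vendor_state          text,
    vendor_country        text,
    dollars_obligated     text,
    signed_date           text
    -- ...and so on for the remaining columns
);

-- One COPY pulls the whole CSV in at once, instead of millions of row-by-row INSERTs.
COPY contracts_staging FROM '/data/fy2010_contracts.csv' CSV HEADER;

-- Convert only the columns we actually need in a real type.
ALTER TABLE contracts_staging
    ALTER COLUMN dollars_obligated TYPE numeric USING dollars_obligated::numeric,
    ALTER COLUMN signed_date TYPE date USING signed_date::date;
```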
Once we had the database in place, and added some indexes so that it wouldn't spin its wheels forever, we could start to pull some useful data, like the state-by-state totals for a basic map. It's not surprising that the beltway bandits in DC, Maryland, and Virginia pull in an incredible portion of contracting money--I had to clamp the maximum values on the map to keep DC's roughly $42,000 in contract dollars per resident from blowing out the rest of the country--but there are some other interesting high-total states, such as New Mexico and Connecticut.
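The aggregates themselves are nothing fancy. Against the placeholder schema sketched above, the map data boils down to a sum per state, plus indexes on the columns we filter and group by in later drill-downs:

```sql
-- Index the columns used for filtering in the later drill-down queries.
CREATE INDEX contracts_state_idx  ON contracts_staging (vendor_state);
CREATE INDEX contracts_agency_idx ON contracts_staging (agency_name);

-- State-by-state totals for the map.
SELECT vendor_state AS state,
       SUM(dollars_obligated) AS total_dollars
FROM contracts_staging
GROUP BY vendor_state
ORDER BY total_dollars DESC;
```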
Now we wanted to see where the money went inside each state: what were the top five companies, funding agencies, and product codes? My initial attempts, using a series of subqueries and count() functions, were tying up the server with nothing to show for it, so I tossed the problem over to another team member and went back to working on the map, so that I'd at least have something to show for our work. He came back with a great solution--PostgreSQL's PARTITION BY clause, which splits a result set into groups, combined with the rank() window function for filtering--and we were able to find the top categories easily. A variation on that template gave us per-agency totals and top fives.
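A minimal sketch of that approach, again using the placeholder column names from above: rank() numbers each vendor within its state's partition by total dollars, and an outer query keeps only the top five.

```sql
-- Top five vendors by contract dollars within each state.
SELECT state, vendor_name, total_dollars
FROM (
    SELECT vendor_state AS state,
           vendor_name,
           SUM(dollars_obligated) AS total_dollars,
           rank() OVER (PARTITION BY vendor_state
                        ORDER BY SUM(dollars_obligated) DESC) AS state_rank
    FROM contracts_staging
    GROUP BY vendor_state, vendor_name
) ranked
WHERE state_rank <= 5
ORDER BY state, total_dollars DESC;
```

Swapping the partition and grouping columns gives the per-agency and per-product-code versions of the same top-five list.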
There are a couple of interesting lessons to be learned from this experience, the most obvious of which is the challenge of doing journalism at scale. Certain stories, particularly on huge subjects like the federal budget, are simply too big to investigate without engaging in computer-assisted reporting, and yet they require skills beyond the usual spreadsheet-juggling.
I don't think that's going away. In fact, I think scale may be the defining quality of the modern information age. A computer is just a machine for performing simple operations at incredibly high speeds, to the point where they seem truly miraculous--changing thousands (or millions) of pixels each second in response to input, for example. The Internet expands that scale further, to millions of people and computers interacting with each other. Likewise, our reach has grown with our grasp. It seems obvious to me that our governance and commerce have become far more complex as a result of our ability to track and interact with huge quantities of data, from contracting to high-speed trading to patent abuse. Journalists who want to cover these topics are going to need to be able to explore them at scale, or be reliant on others who can do so.
Which brings us to the second takeaway from this project: in computer-assisted journalism, speed matters. If a query takes hours to return, asking questions becomes too expensive to waste on undirected investigation, and fact-checking becomes similarly burdensome. Getting answers needs to be quick, so that you can easily continue your train of thought: "Who are the top foreign contractors? One of them is the Canadian government? What are we buying from them? Oh, airplane parts--interesting. I wonder why that is?"
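To make that concrete: against the placeholder schema sketched above, the first question in that chain is one short aggregate, and on an indexed table it comes back fast enough to keep the conversation going.

```sql
-- Top foreign vendors by total contract dollars (placeholder column names).
SELECT vendor_name, vendor_country,
       SUM(dollars_obligated) AS total_dollars
FROM contracts_staging
WHERE vendor_country <> 'USA'   -- 'USA' is a guess at how the source codes domestic vendors
GROUP BY vendor_name, vendor_country
ORDER BY total_dollars DESC
LIMIT 10;
```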
None of this is a substitute for domain knowledge, of course. I am lucky to work with a great graphics reporter and an incredibly knowledgeable editor, a combination that often saves me from embarrassing myself by "discovering" stories in the data that are better explained by external factors. It is very easy to see an anomaly, such as the high level of funding in New Mexico from the Department of Energy, and begin to speculate wildly, while someone with a little more knowledge would immediately know why it's so (in this case, the DoE controls funding for nuclear weapons, including the Los Alamos research lab in New Mexico).
Performing journalism with large datasets is therefore a three-fold problem. First, it's difficult to prepare and process. Second, it's tough to investigate without being overwhelmed. And finally, the sheer size of the data makes false patterns easier to find, requiring extra care and vigilance. I complain a lot about the general state of data journalism education, but this kind of exercise shows why it's a legitimately challenging mix of journalism and raw technical hackery. If I'm having trouble getting good results from sources with this kind of scale, and I'm a little obsessed with it, what's the chance that the average, fresh-out-of-J-school graduate will be effective in a world of big, messy data?
When it comes to Deus Ex, I'm a contrarian: I think the second game was far better than the first, which was an ugly, buggy, tedious mess. And having finished Deus Ex: Human Revolution, I'd say it's probably the best of the three, assuming you skip its bizarre racial stereotypes. That's not just because the mechanics are better--although they are--or because the engine no longer looks like a bad Dark Forces mod. What I find most praiseworthy about Human Revolution is the way it actually engages with science fiction on a level deeper than laser swords and nano-babble.
Fundamentally, this is a game about progress. The developers use transhumanism and human enhancement (not to mention stabbing people with your robot arm-swords) as proxies for the ways that innovation interacts with class, with government, and with culture. This is all pretty standard fare for sci-fi, but it's something few games set in a science fiction world bother to raise. You don't see Gears of War dwelling on the morality of war, or Portal (for all its genius) drawing explicit lines to our relationship with science. Whatever annoyances it might have, I really respect Human Revolution for grabbing a big concept and taking it seriously.
This thoughtfulness extends all through the art design, which is genuinely great--probably the best since Mirror's Edge, in the way that it's both striking and still very much a video game. The visual theme that Eidos Montreal reportedly wanted to emphasize was Rembrandt, which means there's a lot of grainy, gold light bathing the scenes, outlined in clean digital polygons for interactions. The character animation during dialog could be sharper, but the visual worldbuilding is very thorough, and there are a couple of setpieces (like the all-white room late in the game) that are quietly impressive.
The attention to visual detail extends to the costuming, which really carries the Renaissance theme. But this is also a game about people merging with machines, and so mixed in with the capes and the ruffled collars are garments made with a kind of "low-polygon model" structure of tessellated triangles--as if some future fashion designer will be inspired by Battle Arena Toshinden. Which is not, honestly, at all implausible, and is a pleasant change from the usual dystopian leather fetish. Even the body armor worn by the soldiers evokes a combination of iron plate and corsetry. Also nice: Adam Jensen's obligatory black trenchcoat is topped by shiny black velvet shoulder panels in a floral pattern, which I think is what all the hip cyborg messiahs are wearing this season.
There's a long history of games that compete visually based on fidelity and/or horsepower, like every id title ever. And then there are games that go for highly-stylized rendering methods, like Team Fortress 2 or Wind Waker. Human Revolution operates somewhere between the two: it's a mostly-realistic engine, even one that's a little bit behind the times, being used to render a realistic world with a strong editorial style. It has a fashion sense, so to speak, one that helps to pull together its theme and world. I think that's part of why it feels so much more cohesive than the generic cyberpunk of the previous two.
But does it ultimately succeed in making a statement? It's one thing to raise provocative questions, but another to actually pose an argument. I think the real shame is that Human Revolution gets held back at the last moment by being a Deus Ex title, meaning that it privileges pointless choice over point of view. Late in the game--late enough that it's comically irrelevant to the plot--two characters make their pitches for and against regulation of human enhancement technology. Reach the very end (this is no spoiler) and you'll be given the option of picking one of those plans, or two other equally-unsubtle choices, all of which are literally just a button-press away from your final save point. It is, just as with the original games, entirely cosmetic and consequence-free.
The problem is not that the developers needed to pick a side, but that the final choice feels needlessly reductionist. It comes after hours of stories that examine the costs and benefits of progress from all angles: exploitation of workers, addiction, medical advances, relationships, and scientific ethics. Human Revolution does a surprisingly good job of presenting these with nuance and depth, and then asks you to pass judgement on the whole issue in the most biased way. In contrast, Bioshock set up its political and economic dilemmas, stewed them with a set of rich characters (goofy final boss aside), and then just left them there for you, an approach that's substantially less insulting than "Press 1 to exalt Ayn Rand's values of selfishness, press 2 to embrace socialist altruism..."
In the end, that's why I suspect that RPS's John Walker was right to say that this is smartly-made by smart people, but it's not a smart game. Mechanically, it's sound: I enjoyed playing it much more than I ever thought I'd like a Deus Ex game. It looks great. It presents a complex world filled with interesting situations. And then it undermines much of that credibility--not all, but a large majority--by reverting to Choose Your Own Adventure in the name of nostalgia. This, fellow gamers, is why we can't have nice things.
This Saturday is Crafty Bastards 2011, DC's annual craft fair and b-boy battle. Two years ago, it was one of the first battles I attended, and last year it was my first public b-boy battle, so I have a soft spot for the event. I'm not entering this year, but I thought it would be a good time to write a little bit about what I've been doing lately, dance-wise.
In February, I joined Urban Artistry as a performer and part of the operations team (helping on the web sites, mostly). Over the past year, that role has grown somewhat, and I'm now the company's Director for Interactive Media. It's been a great experience to help UA grow, even in small ways, and I'm pretty proud of that work.
In the meantime, I've still been working on b-boying, popping, and strutting. The latter, a popping style from San Francisco, is something that I really enjoy: it has a lot of exaggerated gestures, which work well with my height, and it can be performed in stunning group routines. One of the inventors of strutting, Pop Tart, even came to Soul Society to judge and teach a workshop.
Dancing has also re-kindled my interest in playing bass. I've been doing a few open mics lately after class on Thursdays, practicing with other company members, and trying my hand at new genres. Whether the two skills are directly reinforcing each other, I'm not sure. But I do find it interesting that I "hear" music differently from people with a pure dance background: I tend to pick out individual instruments more than they do, for one thing, possibly just because I know which sounds go with what. It's not better or worse, but it is different, and I'd like to learn to listen from either "perspective" at will.
It's kind of ironic that all this is coming together now, as Belle and I get ready to move to Seattle before the end of the year. The dance community here may not have completely turned around my opinion of the city, but it's done more than anything else to open my eyes to a more vital side of DC. Leaving it behind will be hard.
I spend a lot of time at work straddling four programming languages: PHP, SQL, JavaScript, and ActionScript. Many of our projects use at least three of these, if not all four. Yet while there's certainly some degree of domain-specific knowledge in each, there's more technique shared between them, floating off in the indefinite space of "software engineering."
Granted, I didn't study computer science in college. I had done some programming before and didn't really want anything to do with it professionally--I wanted to work for the Travel Channel! So when I fell into doing data journalism for CQ, a job that's halfway between storytelling and interactive coding, I knew there were skills where I was probably behind. And now that I feel like I'm relatively up to speed on the languages themselves, I want to catch back up on some of what I missed, starting with various low-level data structures.
The result is Typedefs, a simple blog where, in each entry, I pick an item from Wikipedia's list of data structures, implement it in JavaScript, and then explain how I did it and provide a quick demonstration. So far, I've done linked lists (the old classic), AA trees, and heaps. Next I want to try a chunk-based file, like PNG, and also a trie or bloom filter for text lookup.
I can already tell that working through these examples has been good for me--not because I expect to implement a lot of AA trees (in my experience, that's pretty rare), but because building these structures gives me a better understanding of how languages actually work, and a wider range of algorithms for solving other problems. The mechanics of a heap, for example, define a set of interesting ways to use arrays for storage and processing. AA trees really force you to examine the implications of pass-by-reference and pass-by-value. Linked lists are always a good experiment in miniature API design. As bite-sized, highly-technical exercises, they give me a chance to stretch my skills without having to build a full-sized JavaScript application.
These posts are also intended to leverage a truism: that the best way to learn is to teach someone else. Writing the blog as a teaching tool for other people forces me to organize my thoughts into a logical, coherent narrative: what are the foundations of this structure? What's it actually doing during this algorithm? Why is this useful? When the goal is to educate, I can't just get away with refactoring someone else's example. I need to know how and why it's built that way--and that knowledge is probably more useful than the example itself.
Before I get to the mini-reviews of my (mostly) Kindle reading recently, I want to talk about something that's undoubtedly very stupid: books based on video games.
Crysis: Legion caught my eye, not because I care (or even know very much about) the game it's based on, but because it's written by Peter Watts. Watts wrote Blindsight, one of the most unnerving books about first contact, and the Rifters trilogy, the world's best underwater contagion disaster novel. He writes cerebral, hard science fiction that draws heavily on his background as a marine biologist. Watts is not, in other words, the guy you immediately imagine as the best candidate to write a book based on a game about robot-suited marines repeatedly shooting aliens in the head.
And sure enough, he can't entirely rescue it. Watts tries his best--a running subplot about cognitive prostheses manages to be both creepy and darkly funny--but in the end, it's tied to the plot of the game, and that plot just isn't very good.
At least, it's not very good for a book. For all I know it's fine for a game. But Legion really illustrates how storytelling shifts between these mediums, and not always for the better on the interactive side of things. A game plot is subject to game mechanics: the verbs available to the player are the actions available to the character, and a satisfying experience comes from giving the player new ways to apply those verbs in increasingly complicated or involved circumstances.
So (I'm gathering from the book, granted) in Crysis 2, players can shoot things, they can flip switches, and they can assign energy to a set of suit abilities, such as defense or stealth. These actions are put to use in a series of firefights, directed by secondary characters who tell the player where to go, culminating in set-pieces where he or she has to fight through an alien mechanism to shut it down. For a game, that's plenty (as an FPS, in fact, it's already relying on a vast collection of behavior that players have learned). But it's a frustratingly passive, tedious experience for long-form print fiction, no matter how it's dressed up in an internal monologue and a series of interstitial reports from other points of view.
It doesn't have to be, of course. Just as a movie adaptation of a book has differences due to the change in medium, it's not unreasonable to expect that you could novelize a game. Nor is it intrinsically shameful: people draw their inspiration from all kinds of places (see also: Pirates of the Caribbean, Wicked, or the first Myst novel, none of which are "fine art" but still manage to be perfectly competent entertainment). But you can't do it by narrating the action. Pick a new character, expand the plot, do something unpredictable for heaven's sake.
With that out of the way, here are some of the other books I've read since my last set of reviews.
The Heroes is typical Joe Abercrombie: dark, slightly nihilistic fantasy tinged with gallows humor. It's the kind of thing that undercuts Sady Doyle's recent critique of George R. R. Martin--particularly the part where she describes fantasy literature as an "impulse to revisit an airbrushed, dragon-infested Medieval Europe." Abercrombie, even more than Martin, is not offering any pretense of airbrushing or of a desire to revisit anything. His generic fantasy setting is a miserable place, and his characters know it, which is part of what makes The Heroes so good--it's a careful deconstruction of the kind of chivalry porn that has, admittedly, made up a respectable chunk of genre fiction. As such, it's probably best appreciated by people who know something about the context, and who don't mind an unhappy ending or three.
Richard Kadrey's Kill the Dead is a perfect example of how not to write a sequel. I read the previous book, Sandman Slim, about a year ago, and thought it was a competent (if not exceptional) urban fantasy. That means I've had a year to forget almost everything about Kadrey's universe, and yet Kill the Dead does absolutely nothing to remind the reader about any of the characters, creations, or events of its preceding volume. I spent the entire first 100 pages asking "who? what, again?" and then looking for spoilers online. Combine that with a so-so zombie plot, and this is eminently skippable stuff.
Black Superheroes, Milestone Comics, and Their Fans is especially interesting given that Milestone--the minority-owned studio launched in the '90s--was rolled into the larger DC universe as part of their recent reboot. Jeffrey Brown's study of Milestone in the context of black comic book heroes and comic book fans ranges back to the blaxploitation era, and while it's probably not saying anything incredibly new, it's worthwhile to read a critical take on how the company was received, how it grew, and what that means for more diverse media. Whether or not Milestone's values will be able to survive under DC's leadership, we'll have to wait and see.
Wait, did George R. R. Martin actually release A Dance with Dragons this year? Most of the reviews I've read were positive, but I suspect that was mostly relief that it was actually published, because I thought this was a noticeably mediocre installment in the series. Despite the high page count, almost nothing happens--most of it is taken up by travelling and below-average court intrigues. Maybe that's to be expected: it's a middle book, after all, and those are sometimes more about setup than resolution. But it's certainly made me a lot less interested in continuing when Martin finally finishes book #6.
Voodoo Histories: The Role of the Conspiracy Theory in Shaping Modern History, by David Aaronovitch, is another book that never quite achieves liftoff. Aaronovitch sets out to find a grand unified theory of why we create conspiracy theories, and the role they play in culture. But to do so, he drags the reader through a long series of conspiracies-as-case-studies. The result is big on history, not terribly strong on argument. Perhaps it's ironic, but I want a little bit more point-of-view and personality from my academic study of conspiracy myths.
In Unlocking the Clubhouse: Women in Computing, Jane Margolis and Allan Fisher examine why, exactly, the gender imbalance in high-tech occupations emerged and persists. They trace it back along three lines: family treatment of technology, "imposter" syndrome, and a hostile male culture in computing. The last few chapters detail a program that the authors put together to try to address the problem. Since it was published in 2001, a lot of the information inside has seeped into wider public awareness, but this is still a really good book on how women are turned away from tech trades, and what teachers and employers should do to reduce that effect. Speaking as someone working with a team of male and female data journalists, it's definitely a shame to lose 50% of our potential talent before the conversation even begins.
I've come to the conclusion that I'm just not really into Ian McDonald, and The Dervish House is no exception. McDonald's schtick is near-singularity cyberpunk set in developing countries, as if he's setting out to push Gibson's observation about the uneven distribution of the future as far as he can. I'm glad someone's writing science fiction that's not set in the USA--this time it's Turkey--and I like the books well enough, but I don't love them. That said, Dervish House's combination of financial scams, mellified men, and virally-induced religion manages to be a fun read, jam-packed with ideas and intersecting plotlines. It's good stuff; it's just not my cup of tea.
I bought two books by South African writer Lauren Beukes recently. Zoo City is the better of the two: an urban fantasy in which criminals are inexplicably saddled with an animal familiar they have to care for. The main character, Zinzi, is a former journalist (with a sloth) who's hired for a missing persons case--a MacGuffin that doesn't last long. It's a noir-ish book, and an unromantic one, but I like how it edges up to Magical Realism without stepping into full-blown preciousness. Moxyland is more traditional dystopian science fiction, with the now-obligatory alternate reality game plot point. Although there are some clever touches in there--the strandbeest-like bio-art and the Ebola variant used for crowd control--it's hard for me to get past the parts that borrow too heavily from contemporaneous fashions like gamification without feeling like I'd rather just open up my RSS feeds.
Half-Made World? More like "half-written book," ba-dum-bum. Felix Gilman's bizarre pastiche reminds me a little bit of Miéville's Iron Council--it's a Western that's set... elsewhere, for lack of a better word--but in the end it just stops: either it's a setup for a sequel, or Gilman forgot how an ending is supposed to work. I like the idea of catching the ordinary people of his faux-Wild West between the Gun (representing the darkest parts of the gunslinger myth) and the Line (a malignant bureaucracy bent on manifest destiny via train), but the book is long on description and short on actual action, which I find incredible. It's like Gilman set out to write Weird Fiction in the least squeamish, visceral way possible, the point of which I can't possibly understand.
I don't know if Janet Reitman's Inside Scientology is the definitive account of L. Ron Hubbard's Ponzi-scheme-turned-cult, but it's pretty good. Reitman briefly covers Hubbard's childhood, his biography (and his attempts at self-aggrandizement), and his role in the religion's founding and early growth. During the last half of the book, she turns to the modern Scientology organization, with special attention paid to Lisa McPherson, a member who died while under Scientology's care due to gross medical negligence and abuse. Reitman aims for plain-spoken objectivity throughout her telling of the organization's history, but even that is damning enough. She ends the book on an ambiguous note with a look at the next generation of Scientologists, which I found surprisingly refreshing: it provides a glimpse of the mundane humanity underneath one of the world's most bizarre dogmas.
After a month of much weeping and gnashing of teeth, my venerable laptop is back from out-of-warranty repair. That month was the longest I've been without a personal computer in years, if not decades, but I've survived--giving me hope for when our machine overlords rise up and replace Google Reader with endless reruns of The Biggest Loser.
After the second day, I felt good. In fact, I started to think that computing without a PC was doable. I do a lot of work "in the cloud," after all: I write posts and browse the web using Nano and Lynx via terminal window, and my social networks are all mobile-friendly. Maybe I'd be perfectly happy just using my smartphone for access--or anything that could open an SSH session, for that matter! Quick, fetch my Palm V!
Unfortunately, that was just the "denial" stage. By the second week of laptoplessness, my optimism had faded and I was climbing the walls. It's a shame, but I don't think cloud computing is there yet. As I tried to make it through a light month of general tasks, I kept running into barriers to acceptance, sorted into three categories: the screen, the keyboard, and the sandbox.
Trying to do desktop-sized tasks in a browser immediately runs into problems on small-screen devices. It's painful to interact with desktop sites in an area that's under 13" across. It's even more annoying to lose half of that space for a virtual keyboard. When writing, the inability to fit an entire paragraph onscreen in a non-squint font makes it tougher to write coherently. I probably spend more time zooming in and out than actually working.
But more importantly, the full-screen application model is broken for doing real work. That sounds like nerd snobbery ("I demand a tiling window manager for maximum efficiency snort"), but it's really not. Consider a simple task requiring both reading and writing, like assembling a song list from a set of e-mails. On a regular operating system, I can put those two tasks side by side, referring to one in order to compile the other. But on today's smartphones, I'm forced to juggle between two fullscreen views, a process that's slow and clumsy.
There's probably no good way to make a multi-window smartphone. But existing tablets and thin clients (like Chromebooks) are also designed around a modal UI, which makes them equally impractical for tasks that involve working between two documents at the same time. The only company that seems to be even thinking about this problem is Microsoft, with its new Windows 8 shell, but that's still deep in development--it won't begin to influence the market for at least another year, if then.
I bought a Thinkpad in the first place because I'm a sucker for a good keyboard, but I'm not entirely opposed to virtual input schemes. I still remember Palm's Graffiti--I even once wrote a complete (albeit terrible) screenplay in Graffiti. On the other hand, that was in college, when I was young and stupid and spending a lot of time in fifteen-passenger vans on the way to speech tournaments (this last quality may render the previous two redundant). My patience is a lot thinner now.
Input matters. A good input method stays out of your way--as a touch-typist, using a physical keyboard is completely effortless--while a weaker input system introduces cognitive friction. And what I've noticed is that I'm less likely to produce anything substantial using an input method with high friction. I'm unlikely to even start anything. That's true of prose, and more so for the technical work I do (typical programming syntax, like ><{}[];$#!=, is truly painful on a virtual keyboard).
Proponents of tablets are often defensive about the conventional wisdom that they're oriented more toward consumption and less toward creativity. They trumpet the range of production software available for making music and writing text on a "post-PC" device, and they tirelessly champion any artist who uses one to make something (no matter how many supporting devices were also involved). But let's face it: these devices are--like anything--a collection of tradeoffs, and those tradeoffs almost invariably make it more difficult to create than to consume. Virtual keyboards make it harder to type, interaction options are limited by the size of the screen, input and output can't be easily expanded, and touch controls are imprecise at best.
Sure, I could thumb-type a novel on my phone. I could play bass on a shoebox and a set of rubber bands, too, but apart from the novelty value it's hard to see the point. I'm a pragmatist, not a masochist.
I almost called this section "the ecosystem," but to be honest it's not about a scarcity of applications. It's more about process and restriction, and the way the new generation of mobile operating systems is designed.
All these operating systems, to a greater or lesser extent, are designed to sandbox application data from each other, and to hide the hierarchical file system from the user. Yes, Android allows you to access the SD card as an external drive. But the core applications, and most add-on applications, are written to operate with each other at a highly-abstracted level. So you don't pick a file to open, you select an image from the gallery, or a song from the music player.
As an old PalmOS user and developer, from back in the days when they had monochrome screens and ran on AAAs, this has an unpleasantly familiar taste: Palm also tried to abstract the file system away from the user, by putting everything into a flat soup of tagged databases. Once you accumulated enough stuff, or tried to use a file across multiple applications, you were forced to either A) scroll through an unusably lengthy list that might not even include what you want, or B) run normal files through an external conversion process before they could be used. The new metaphors ('gallery' or 'camera roll' instead of files) feel like a slightly hipper version of the PalmOS behavior that forced me over to Windows CE. We're just using Dropbox now instead of Hotsync, and I fail to see how that's a substantial improvement.
Look, hierarchical file systems are like democracy. Everyone agrees that they're terrible, but everything else we've tried is worse. Combined with a decent search, I think they can actually be pretty good. It's possible that mobile devices can't support them in their full richness yet, but that's not an argument that they've gotten it right--it shows that they still have a lot of work to do. (The web, of course, has barely even started to address the problem of shared resources: web intents might get us partway there, one day, maybe.)
When I was in high school, Java had just come out. I remember talking with some friends about how network-enabled, platform-independent software would be revolutionary: instead of owning a computer, a person would simply log onto the network from any device with a Java VM and instantly load their documents and software--the best of thin and thick clients in one package.
Today's web apps are so close to that idea that it kind of amazes me. I am, despite my list of gripes, still optimistic that cloud computing can take over much of what I do on a daily basis, if only for environmental reasons. My caveats are primarily those of form-factor: it's technically possible for me to work in the cloud, but the tools aren't productive, given the recent craze for full-screen, touch-only UIs.
Maybe that makes me a holdover, but I think it's more likely that these things are cyclical. It's as though on each new platform, be it mobile or the browser, we're forced to re-enact the invention of features like windows, multitasking, application management, and the hard drive. Each time, we start with the thinnest of clients and then gradually move more and more complexity into the local device. By the time "the cloud" reaches a level where it's functional, will it really be that different from what I'm using now?