
January 29, 2013

Filed under: fiction»reviews

Machine Vision

Gun Machine, by Warren Ellis

Warren Ellis is a writer of a particular style, which can be polarizing. When it's good--as in Transmetropolitan, his frighteningly amusing riff on Hunter-Thompson-meets-Futurama political journalism--it's very good, but there are other times when it comes across as bluster. His first novel, Crooked Little Vein, was a good example: over its 200-odd pages the schtick wore thin, and the catalog of American fetish weirdness that was left (while funny) wasn't enough to carry it.

His second book, Gun Machine, surprises on two levels. The first is that it pulls back substantially from Ellis' usual over-the-top dialog style. It still surfaces for comedic effect (Ellis gets a lot of mileage out of two manic CSI technicians), but most of the prose is more restrained, or written in another, Ludlum-like style entirely. This gives the book something Crooked Little Vein never really had: dynamics. The characters have room to breathe, and become a lot more sympathetic, when they're not all shouting in the same voice.

The other surprise is that the book actually works as the mystery-thriller it appears to be, since I was expecting something less traditional. It follows John Tallow, a New York City cop coasting along on his partner's good graces when said partner is shot, an event that simultaneously reveals a room full of purloined firearms arranged in complicated patterns--the "gun machine" of the title. Alternating chapters follow the room's owner, a schizophrenic killer for hire who's been working for influential New Yorkers over several decades.

It's not like Ellis is a stranger to mystery stories, or to conspiracy theories, but his usual M.O. tends to be more scattershot in scope, sprinkled liberally with Internet-age trivia. Gun Machine eschews this in favor of a lot of Manhattan history, and its relatively subdued narrative voice gives Ellis a chance to explore Tallow's gradual re-engagement with the world as he becomes more caught up in the case. It's a more thoughtful, sympathetic approach than I expected, in the best possible way.

Gun Machine has a few sections where it bogs down, and where it stretches across the line of plausibility, although it tends to skip past these deftly enough that they don't stand out until you stop and think about them later. But overall, it's a crackling little piece of genre fiction, paired with just enough in the way of characterization and unexpected turns to keep you turning pages without actually feeling guilty about it.

Rise of the Videogame Zinesters, by Anna Anthropy

Someone had to write this book. It was really just a question of who would get there first: someone from the maker/craft culture, or someone like Anthropy, a cranky member of the independent game design community. Zinesters is a book about democratizing gaming: the idea that anyone should be able to write a video game, the same way that anyone can paint a picture or write a story.

If video games can be art, what does "outsider art" in that medium look like? Where are the subversive messages? And how do we give a canvas to more people--people who aren't young, white men? For Anthropy, the creator of several adamantly non-mainstream works like Dys4ia and Mighty Jill Off, these are not idle questions. So her goals are two-fold: to explain the ways that games can be made more accessible, and (more importantly) to convince readers that making them is something they should want to do.

I'm not sure it succeeds at the latter (to avoid tying her book too closely to any given tool, Anthropy basically lists a number of entry-level game engines and then gives readers a pep talk), but the former is extremely well done. Starting from the definition of a game as "an experience created by rules," she uses that as a jumping-off point to examine game design, its relationship to society, and "folk games."

Rise of the Videogame Zinesters is a very, very short book, and it often reads as a collection of blog posts instead of a single work, but it's an impressive tour of gaming at the margins of culture. If her argument has a weakness, it's contained in the title: considering the fading of zines (eclipsed by Internet blogging, itself now in decline), are there other models for DIY game creation that might need to be examined? How do independent games compare with indie films, or hackerspaces, or crafting?

It's not that Anthropy is wrong to pick zines as a starting point--given her emphasis on LGBT culture and fast/cheap creation, it's appropriate--but there are a lot of other creative philosophies that would be interesting to consider, and they might result in very different interactive experiences. It may not be Anthropy's responsibility (or interest--it's a very personal series of essays) to present those, but in a book this brief, it couldn't hurt. As it is, we catch only a glimpse in her impressive citations, from the open-ended (ZZT) to the surrealist (La La Land 2) to the meta (Execution). The introduction to this diversity of gaming is intriguing, and it's a little disappointing when the corresponding analysis is relatively thin.

January 16, 2013

Filed under: tech»education

Git Busy

There's an ongoing discussion I'm having with the other instructors at Seattle Central Community College on how to improve our web development program. I come from a slightly different perspective than many of them, having worked in large organizations my whole career (most of the other instructors are freelancers). So there are places where we agree the program can do better (updating its HTML/CSS standards, reordering the PHP classes) and places where we differ (I'm strongly in favor of merging the design and development tracks). But there's one change that I increasingly think would make the biggest difference to any budding web developer: learning source control.

Source control management (SCM) is important in and of itself. I use it all the time for projects that I'm personally working on, so that I can track my own changes and work across a number of different machines. It's not impossible to be hired without SCM skills, but it is a mark against a potential employee, because no decent development team works without it. And using some kind of version control system is essential to participating in the modern open-source community, which is the number one way I advise students to get concrete experience for their resumes.

But more importantly, tools like Git and Subversion are gateways to the wider developer ecosystem. If you can clone a repo in Git, then you now have a tool for deployments, and you stop developing everything over FTP (local servers become much more convenient). Commits can be checked for validity before they go into the repo, so now you may be looking at automated code parsers and linting tools. All of these are probably going to lead you to other trendy utilities, like preprocessors and live reload. There's a whole world of people working on the web developer experience, creating workflows that didn't exist as recently as two or three years ago, and source control serves as a good introduction to it.
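To make that "commits can be checked before they land" idea concrete, here's a minimal sketch of a Git pre-commit hook written as a Node script (any executable can be a hook, so JavaScript is fair game). The jshint command and the JavaScript-only filter are assumptions for illustration, not part of any particular curriculum or project:

#!/usr/bin/env node
// .git/hooks/pre-commit -- a hypothetical sketch, not from any real project.
// Lints the staged JavaScript files and cancels the commit if the linter complains.
// Assumes a "jshint" command-line tool is installed and on the PATH.
var exec = require("child_process").exec;

exec("git diff --cached --name-only --diff-filter=ACM", function (err, stdout) {
  if (err) {
    console.error("Could not read the staged file list:", err.message);
    process.exit(1);
  }
  var files = stdout.split("\n").filter(function (name) {
    return /\.js$/.test(name);
  });
  if (!files.length) process.exit(0); // nothing to lint, allow the commit

  exec("jshint " + files.join(" "), function (lintErr, lintOut) {
    if (lintErr) {
      console.error(lintOut);
      console.error("Lint errors found -- commit aborted.");
      process.exit(1); // a nonzero exit blocks the commit
    }
    process.exit(0);
  });
});

Drop something like that into a repo's hooks directory, make it executable, and every commit gets a free, automatic code review--exactly the kind of habit I'd like students to pick up early.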

The objection I often hear is that instructors don't have time to keep up with everything across the entire web development field. Whether or not that's a valid complaint (and I feel strongly that it isn't), it's just not that hard to get started with version control these days. Git for Windows installs a small Bash shell with a repo-aware prompt. GitHub's client for Windows doesn't handle the advanced features, but it covers the 90% use case very well with a friendly UI. The Tortoise family of shell extensions covers Git, SVN, and Mercurial on Windows. Learning to create a repo and commit files has never been easier--everything after that is gravy.

Last quarter, I recommended that teams coordinate their WordPress themes via GitHub, and gave a quick lecture on how to use it. The few teams that took me up on it considered it a good experience, and I had several others tell me they wish they'd used it (instead of manually versioning their work over Dropbox). This quarter, I'm accepting homework via repo instead of portal sites, if students want--it'll make grading much easier, if nothing else. But these are stopgap, rearguard measures.

What SCCC needs, I think, is a class just on tooling, covering source control, preprocessing, and scripting. A class like this would serve as a stealth introduction to real-world developer workflows, from start (IDEs and test-driven development) to finish (deployment and build scripts). And it would be taught early enough (preferably concurrent with or right after HTML/CSS) that any of the programming courses could take it as a given. New developers would see a real boost in their value, and I honestly think that many experienced developers might also find it beneficial. Plus it'd be great training for me--I'm always looking for a good reason to dig deeper into my tools. Now I just have to convince the administration to give it a shot.

January 10, 2013

Filed under: tech»mobile

Four

I used a Nexus One as my smartphone for almost three years, pretty much since it was released in 2010. That's a pretty good testimonial. The N1 wasn't perfect--it was arguably underpowered even at release, and held back for upgrades by the pokey video chip and small memory--but it was good enough. When Google announced the Nexus Four, it was the first time I really felt like it was worth upgrading, and I've been using one for the last couple of months.

One big change, pardon the pun, is just the size of the thing: although it's thinner, the N4 is almost half an inch wider and taller than my old phone (the screen is a full diagonal inch larger). The N1 had a pleasant density; between the size and the glass backing, the N4 feels less secure in your hand, and at first it doesn't seem like you're getting much for the extra size. Then I went back to use the N1 for something, and the virtual keyboard keys looked as small as kitten teeth. I'm (tentatively) a fan now. Battery life is also better than the N1, although I had to turn wifi on to stop Locale from keeping the phone awake constantly.

I think it's a shame they ditched the trackball after the Nexus One, too. Every time I need to move the cursor just a little bit, pick a small link on a non-mobile web page, or play a game that uses software buttons, I really miss that trackball. Reviewers made fun of it, but it was regularly useful (and with Cyanogen, it doubled as a second power button).

The more significant shift, honestly, was probably going from Android 2.3 to 4.2. For the most part, it's better where Android was already good: notifications are richer, switching tasks is more convenient, and most of the built-in applications are less awful (the POP e-mail client is still a disaster). Being able to run Chrome is usually very nice. Maps in particular really benefits from a more powerful GPU. Running old Android apps can be a little clunky, but I mostly notice that in K-9 Mail (which was not a UX home run to begin with). The only software feature that I really miss is real USB mass storage--you can still get to internal storage, but it mounts as a multimedia device instead of a disk, which means that you can't reliably run computer applications from the phone drive.

There is always a lot of hullabaloo online around Android upgrades, since many phones don't get them. But my experience has been that most of it doesn't really matter. Most of my usage falls into a few simple categories, none of which were held back by Android 2.3:

  • Reading social networks
  • Browsing web sites
  • Checking e-mail
  • Retrieving passwords or two-factor auth keys
  • Occasionally editing a file with DroidEdit
I'm notoriously frugal with the apps I install, but even so I think the upgrade problem for end-users is overrated. 4.0 is nice, and I'm happy to have it, but putting a quad-core chip behind 2.3 would have done most of what I need. Honestly, a quality browser means even the built-in apps are often redundant. Google Maps in Chrome for Android is a surprisingly pleasant experience. Instagram finally discovered the web last year. Google Calendar has been better on the web than on Android for most of its existence. You couldn't pay me enough to install the Facebook app anymore.

Compared to its competitors, Android has always been designed to be standalone. It doesn't rely on a desktop program like iTunes to synchronize files, and it doesn't really live in a strong ecosystem the way that Windows Phone does--you don't actually need a Google Account to use one. It's the only mainstream mobile platform where installing applications from a third party is both allowed and relatively easy, and where files and data can transfer easily between applications in a workflow. Between the bigger phone size (or tablets) and support for keyboards/mice, there's the possibility that you could do real work on a Nexus 4, for certain definitions of "real work." I think it would still drive me crazy to use it full-time. But it's gradually becoming a viable platform (and one that leaves ChromeOS in kind of an awkward place).

So sure, the Nexus 4 is a great smartphone. For the asking price ($300) it's a real value. But where things get interesting is that Android phones that aren't quite as high-powered or premium-branded (but still run the same applications and OS, and are still easily as powerful as laptops from only a few years ago) are available for a lot less money. This was always the theory behind Nokia's smartphones: cheap but powerful devices that could be "computers" for the developing world. Unfortunately, Symbian was never going to be hackable by people in those countries, and then Nokia started to fall apart. In the meantime, Android has a real shot at doing what S60 wanted to do, and with a pretty good (and still evolving) open toolkit for its users. I still think that's a goal worth targeting.

January 3, 2013

Filed under: gaming»software»hotline_miami

REM

During one of those 24-hour colds, when I curl up under every blanket in the house and just wait for the fever to break, I often lose track of reality. It's not like I hallucinate. But as I drift in and out of consciousness with my body temperature far above normal, the line blurs between dreaming and my rational mind, which means I find myself thinking quite seriously about things that are either entirely absurd, or which never actually happened. It's the closest I get to doing drugs.

It may just be that I was playing it after recovering from a cold during the holidays, but Hotline Miami often feels like it comes from a similar place (fever or drugs, take your pick). Although it pays homage to Drive with its setting, violence, and a selection of trippy electronic dance tunes, Hotline adds a gloss of unreality: heavy filtering (including a subtle screen tilt), an increasingly unreliable narrator, and astonishing sound design. The darker half of the soundtrack leans heavily on synth drones, distorted bass, and indistinct vocal echoes, walking a precise line between captivating and terrifying.

So it is atmospheric. But in the wake of Newtown it is difficult to talk about Hotline Miami without talking about violence, since it is also a game about brutal, sickening violence. Dressed up in a retro 16-bit facade, the blood and gore is made more abstract, and thus more palatable, but that's a bit of a cheat, isn't it? The NRA recently blamed video games for school shootings, drawing on such contemporary examples as Mortal Kombat and Natural Born Killers, and while that's obviously laughable (and more than a little disgusting) it's hard to take the moral high ground when a prospective game of the year for many people involves beating anonymous mobsters to death with a crowbar.

Part of the problem is that Hotline Miami is and isn't about those things. Someone playing the game isn't sitting at a computer plotting murder--they're primarily thinking about navigating space, line of sight, and the AI's predictable response. Most violent video games are only superficially violent: mechanically they're just button presses and spatial awareness. That's not an excuse, but it does explain why gamers get so huffy about the accusations of immorality. It also raises the question: if these games aren't actually about death and destruction, then why all the trappings?

In the case of Hotline Miami, there's a studied juvenile quality to the whole affair. It's the interactive version of some smart-but-disengaged stoner's doodling on their high school chemistry notebook. It's gross because its influences are gross, and because gross things are fun to draw, and because chemistry is boring, dude. This accounts for some of the feverishness as well, since it taps into the same powerful imaginative impulse that we have as kids and mostly lose when we have to start paying our own rent.

It's not a bad thing for Hotline Miami to draw on those influences, or for it to be ultra-violent. There's a place for ugly, childish things in our cultural stew: I don't think you get Death Proof without Saw or Dead Alive. I like the game. But it bothers me a little that its violence is so unremarkable, and that it wants to use self-awareness as an excuse or an explanation. Using excess to criticize gaming culture was old with Splatterhouse (another up-to-the-minute reference from the NRA, there). So since we don't have a lot of variety in video game narratives, maybe we should stop letting "bloodthirsty" pass for "profound."

December 12, 2012

Filed under: journalism»industry

The Platform

Last week, Rupert Murdoch's iPad-only tabloid The Daily announced that it was closing its doors on Thursday, giving it a total lifespan of just under two years. Lots of people have written interesting things about this, because the schadenfreude is irresistible. Felix Salmon makes a good case against its format, while former staffer Peter Ha noted that its publication system was unaccountably terrible. Dean Starkman at CJR believes, perhaps rightly, that it will take more than a Murdoch rag going under to form any real conclusions.

Around the same time, Nieman Lab published a mind-bogglingly silly pitch piece for 29th Street Publishing, a middleman that republishes magazine content as mobile apps. "What if getting a magazine into Apple's Newsstand was as easy as pushing the publish button on a blog?" Nieman asked on Twitter, demonstrating once again that the business side of the news industry will let nothing stand between it and the wrong questions.

The problem publications face is not that getting into Apple's storefront is too hard--it's that they have a perfectly good (cross-platform) publishing system right in front of them in HTML ("as easy as pushing the publish button on a blog," one might say) and they're completely unwilling to find a business model for it other than throwing up their hands and ceding 30% of their income (and control of their future) to a third party in another industry with a completely different set of priorities. (Not to mention the barriers to search, sharing, and portability that apps throw up.)

What publishers need to be doing is finding a way to monetize the content that they've already got and can already publish using tools that are--well, probably not quite as easy as blogging, but undoubtedly far easier than becoming a mobile software developer. One way to do that is with a leaky paywall: it's been a definite success for the NYT, and the Washington Post is considering one. I suspect that when calmer heads prevail, this will become a lot more common. The problem with paywalls is mobile: even if consumers were not conditioned to want "apps," sign-in on mobile is a frustrating user experience problem.

But let's say apps remain a hot topic in news boardrooms. I've been thinking about this for a few days: how could the news industry build a revenue model out of the best of both worlds, with clean mobile HTML deployed everywhere but leveraging the easy payment mechanism of an app store? (That assumes "payment is hard" is actually a problem the industry has--given the NYT's success, I'm honestly not sure that it is.) My best solution takes inspiration from two-factor authentication (which everyone should be using).

My plan goes like this: just like today, you visit the app store on your platform of choice. You download a yearly "subscription key" application, pay for it in the usual way, and then open it. Behind the scenes, the app talks to the content server and generates a one-time password, then opens a corresponding URL in the default system browser, setting a cookie so that further browser visits will always be signed in--but you as the user don't see any of that. All you see is that the content has been unlocked for you without any sign-in hassle. Next year, you renew your subscription the same way.
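As a sanity check on the plumbing, here's a rough sketch of what the content server's half of that handshake could look like, written with Node and Express purely for illustration. The /token and /unlock URLs, the in-memory token store, and the cookie name are all hypothetical, not a spec:

// A speculative sketch of the server side of the "subscription key" flow.
// Express, the URL names, and the cookie are illustrative choices only.
var express = require("express");
var crypto = require("crypto");
var app = express();

// One-time passwords issued to the store-purchased app, mapped to an expiry time.
// A real system would keep these in a database, tied to the purchase receipt.
var pendingTokens = {};

// Called by the subscription-key app right after purchase: mint a one-time password.
app.post("/token", function (req, res) {
  var token = crypto.randomBytes(16).toString("hex");
  pendingTokens[token] = Date.now() + 5 * 60 * 1000; // valid for five minutes
  res.send(token);
});

// The app opens this URL in the system browser: trade the one-time password
// for a long-lived cookie, so every later browser visit is already signed in.
app.get("/unlock/:token", function (req, res) {
  var expires = pendingTokens[req.params.token];
  if (!expires || expires < Date.now()) {
    res.statusCode = 403;
    return res.send("That unlock link has expired.");
  }
  delete pendingTokens[req.params.token]; // strictly one-time use
  res.cookie("subscriber", "1", { maxAge: 365 * 24 * 60 * 60 * 1000 });
  res.redirect("/"); // the reader lands on the unlocked site
});

app.listen(8000);

None of this is hard, which is rather the point: the moving parts are an app store receipt, a one-time password, and a cookie.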

In an ideal world, there would be a standard for this that platform authors could implement. Your phone would have one "site key" application (not without precedent), and content publishers could just plug add-on apps into it for both purchasing and authentication. Everyone wins. But of course, that's not a sexy startup idea for milking thousands of dollars from gullible editors. Nor is it helpful for computer companies looking to keep you from leaving their platform: I'm pretty sure an application like this violates Apple's store rules. Personally, that's reason enough for me to consider them unacceptable, because I don't believe the correct response to exploitation is capitulation. That's probably why nobody lets me make business decisions for a major paper.

Assume we can't publish an app: two-factor auth still works in lots of ways that are mobile-friendly, post-purchase. You could visit the website, click a big "unlock" button and be sent a URL via text message, e-mail, Facebook, Twitter, or whatever else you'd like. A site built in HTML and monetized this way works everywhere, instead of locking you into the iPad or another single platform. It lets the publisher, not a third party, retain control of billing and access. And it can be layered onto your existing system, not developed from scratch. Is it absolutely secure? No, of course not. But who cares? As the Times has proven, all you need to do is monetize the people who are willing to pay, not the pirates.

This is just one sane solution that lets news organizations control their own content, and their destiny. Will it happen? Probably not: the platform owners won't let them, and news organizations don't seem to care about having a platform that they themselves own. To me this is a terrible shame: after years of complaining that the Internet made everyone a publisher, news organizations don't seem to be interested in learning that same lesson when the shoe is on the other foot. But perhaps there's an upside: for every crappy app conversion startup funded by desperate magazine companies, there are jobs being created in a recovering economy. Thanks for taking one for the team, journalism.

December 5, 2012

Filed under: gaming»software»xcom

XCOM

Why is it all capitalized? That's what I want to know. XCOM isn't an acronym for something--presumably it stands for Extraterrestrial Combat (or Command?)--so shouldn't it be XCom? I guess that doesn't look as good on the posters. Maybe they should add an exclamation point. (Or a dash, according to the purists. Luckily, having never played the original, I'm not really interested in purity.)

There aren't a lot of games that I finish and then immediately start over. Mass Effect 2 was probably the last example--I did two straight playthroughs, and possibly started a third, just because the basic mechanics were so solid and enjoyable. XCOM might be just as catchy, even though I didn't expect it to be. Here are three things that surprised me the first time through:

I didn't think I'd get so attached to my squad. People talk about doing this in the old X-COM, being genuinely upset when a soldier bit the dust, and I just figured those people were crazy. But about halfway through the game, letting Col. Zahara "Werewolf" Mabuza die just stopped being acceptable. The nicknames must have a lot to do with it. I knew every nickname on my squad, especially the ones that got funnier as they got more panic-prone ("Padre," indeed).

XCOM gets a lot of mileage out of only a few maps. I think I saw in an interview that there are only 30 or so maps in XCOM, which is not a lot considering the hundreds of encounters in a typical game. Partly, the maps are just well-designed: just starting out in a different space and direction is enough to make many of the UFO capture maps completely disorienting. But they're also partially randomized, meaning that you never entirely develop a single cover strategy for each map. Add in the day/night filters, and it feels like a lot more content than it actually is.

Everything is short. Six soldiers means that you're done with a turn in roughly 60 seconds. A mission in XCOM takes, at most, 30 minutes. Between missions, you pick your research tasks and your engineering projects and then you hit the big "GO FAST" button in Mission Control and see how far you get before the next invasion. Sometimes a movie plays--they're all skippable, as are all the little interstitial animations (launching a fighter, landing the SkyRanger, etc). Everything in the game is made with the understanding that you Should Not Wait, a convenient side effect of which is that it's compulsively playable.

It's not a particularly profound game. It's not even particularly well-made--bugs pop up all over. Even with the tutorial, I restarted the game twice trying to figure out how to keep everything balanced, which is pretty hardcore. But it's so consistently fun that those problems don't halt the experience. I never really got the Halo philosophy of "30 seconds of fun" because I find Halo to be a boring, frat-boy knockoff of better shooters, but XCOM pulls it off.

November 21, 2012

Filed under: gaming»perspective

Bundled Up

The fourth Humble Bundle for Android is wrapping up today: if you like games and charity, it's a ridiculously good deal, even if you don't own an Android device--everything works on Windows, Mac, and Linux as well. Although it turns the Nexus 4 into a toasty little space heater, it would be worth it just to get Waking Mars, the loopy botany platformer I've been playing for a couple of days now.

If nothing else, I like that the Humble Bundle proves that it's still feasible to sell software the old-fashioned way: by putting up a website and taking orders yourself. Digital retailers like Steam or the various mobile platform stores are all well and good (the Bundle comes with Steam keys, which I usually use to actually download the games), but a lot of my favorite gaming memories come from this kind of ad-hoc distribution. I don't want to see it die, and I think it would be bad for independent developers if it did.

In the last few months, people like Valve's Gabe Newell and Mojang's Markus Persson have raised concerns about where Windows is going. Since the PC has been the site of a lot of really interesting experimentation and independent development over the last few years, Microsoft's plan to shut down distribution of Metro-style applications on Windows 8, except through a centralized store that they own, is troubling. At the same time, a lot of people have criticized that perspective, saying that these worries are overblown and alarmist.

There may be some truth to that. But I think the fact that the Humble Bundle is, across the three or four mobile platforms in popular use, only available on Android should tell us something. Why is that? Probably because Google's OS is the only one where developers can handle their own distribution and updates, without having to get approval from the platform owner or fork over a 30% surcharge. That fact should make critics of Newell and Persson think twice. Can the Humble Bundle (one of the most successful and interesting experiments since the shareware catalogs I had in the 80s) and similar sales survive once traditional computing moves to a closed distribution model? It looks to me like the answer is no.

November 13, 2012

Filed under: journalism»new_media»data_driven

Nate Silver: Not a Witch

In retrospect, Joe Scarborough must be pretty thrilled he never took Nate Silver's $1,000 bet on the outcome of the election. Silver's statistical model went 50 for 50 states, and came close to the precise number of electoral votes, even as Scarborough insisted that the presidential campaign was a tossup. In doing so, Silver became an inadvertent hero to people who (unlike Joe Scarborough) are not bad at math, inspiring a New Yorker humor article and a Twitter joke tag ("#drunknatesilver", who only attends the 50% of weddings that don't end in divorce).

There are two things that are interesting about this. The first is the somewhat amusing fact that Silver's statistical model, strictly speaking, isn't actually that sophisticated. That's not to take anything away from the hard work and mathematical skills it took to create that model, or (probably more importantly) Silver's ability to write clearly and intelligently about it. I couldn't do it, myself. But when it all comes down to it, FiveThirtyEight's methodology is just to track state polls, compare them to past results, and organize the results (you can find a detailed--and quite readable--explanation of the entire methodology here). If nobody has done this before, it's not because the idea was an unthinkable revolution or the result of novel information technology. It's because they couldn't be bothered to figure out how.

The second interesting thing about Silver's predictions is how incredibly hard the pundits railed against them. Scarborough was most visible, but Politico's Dylan Byers took a few potshots himself, calling Silver a possible "one-term celebrity." You can almost smell sour grapes rising from Byers' piece, which presents on the one side Silver's math, and on the other side David Brooks. It says a lot about Byers that he quoted Brooks, the rodent-like New York Times columnist best known for a series of empty-headed books about "the American character," instead of contacting a single statistician for comment.

Why was Politico so keen on pulling down Silver's model? Andrew Beaujon at Poynter wrote that the difference was in journalism's distaste for the unknown--that reporters hate writing about things they can't know. There's an element of truth to that sentiment, but in this case I suspect it's exactly wrong: Politico attacked because its business model is based entirely on the cultivation of uncertainty. A world where authority derives from more than the loudest megaphone is a bad world for their business model.

Let's review, just for a second, how Politico (and a whole host of online, right-leaning opinion journals that followed in its wake) actually work. The oft-repeated motto, coming from Gabriel Sherman's 2009 profile, is "win the morning"--meaning, Politico wants to break controversial stories early in order to work its brand into the cable and blog chatter for the rest of the day. Everything else--accuracy, depth, other journalistic virtues--comes second to speed and infectiousness.

To that end, a lot of people cite Mike Allen's Playbook, a gossipy e-mail compendium of aggregated fluff and nonsense, as the exemplar of the Politico model. Every morning and throughout the day, the paper unleashes a steady stream of short, insider-y stories. It's a rumor mill, in other words, one that's interested in politics over policy--but most of all, it's interested in Politico. Because if these stories get people talking, Politico will be mentioned, and that increases the brand's value to advertisers and sources.

(There is, by the way, no small amount of irony in the news industry's complaints about "aggregators" online, given the long presence of newsletters like Playbook around DC. Everyone has one of these mobile-friendly link factories, and has for years. CQ's is Behind the Lines, and when I first started there it was sent to editors as a monstrous Word document, filled with blue-underlined hyperlink text, early every morning for rebroadcast. Remember this the next time some publisher starts complaining about Gawker "stealing" their stories.)

Politico's motivations are blatant, but they're not substantially different from any number of talking heads on cable news, which has a 24-hour news hole to fill. Just as the paper wants people talking about Politico to keep revenue flowing, pundits want to be branded as commentators on every topic under the sun so they can stay in the public eye as much as possible. In a sane universe, David Brooks wouldn't be trusted to run a frozen yoghurt stand, because he knows nothing about anything. Expertise--the idea that speaking knowledgeably requires study, sometimes in non-trivial amounts--is a threat to this entire industry (probably not a serious threat, but then they're not known for underreaction).

Election journalism has been a godsend to punditry precisely because it is so chaotic: who can say what will happen, unless you are a Very Important Person with a Trusted Name and a whole host of connections? Accountability has not traditionally been a concern, and because elections hinge on any number of complicated policy questions, this means that nothing is out of bounds for the political pundit. No matter how many times William Kristol or Megan McArdle are wrong on a wide range of important issues, they will never be fired (let's not even start on poor Tom Friedman, a man whose career consists of endlessly sorting the wheat from the chaff and then throwing away the wheat). But FiveThirtyEight undermines that thought process, by saying that there is a level of rigor to politics, that you can be wrong, and that accountability is important.

The optimistic take on this disruption is, as Nieman Journalism Lab's Jonathan Stray argues, that specialist experts will become more common in journalism, including in horse race election coverage. I'm not optimistic, personally, because I think the current state of political commentary owes as much to industry nepotism as it does to public opinion, and because I think political data is prone to intentional obfuscation. But it's a nice thought.

The real positive takeaway, I think, is that Brooks, Byers, Scarborough, and other people of little substance took such a strong public stance against Silver. By all means, let's have an open conversation about who was wrong in predicting this election--and whose track record is better. Let's talk about how often Silver is right, and how often that compares to everyone calling him (as Brooks did) "a wizard" whose predictions were "not possible." Let's talk about accountability, and expertise, and whether we should expect better. I suspect Silver's happy to have that talk. Are his accusers?

October 31, 2012

Filed under: tech»web

Node Win

As I've been teaching Advanced Web Development at SCCC this quarter, my role is often to be the person dropping in with little hints of workflow technique that the students will find helpful (if not essential) when they get out into real development positions. "You could use LESS to make your CSS simpler," I say, with the zeal of an infomercial pitchman. Or: "it will be a lot easier for your team to collaborate if you're working off the same Git repo."

I'm teaching at a community college, so most of my students are not wealthy, and they're not using expensive computers to do their work. I see a lot of cheap, flimsy-looking laptops. Almost everyone's on Windows, because that's what cheap computers run when you buy them from Best Buy. My suggestion that a Linux VM would be a handy thing to have is usually met with puzzled disbelief.

This makes my students different from the sleek, high-profile web developers doing a lot of open-source work. The difference is both cultural (they're being taught PHP and ASP.net, which are deeply unsexy) and technological. If you've been to a meetup or a conference lately, you've probably noticed that everyone's sporting almost exactly the same setup: as far as the wider front-end web community is concerned, if you're not carrying a newish MacBook or a Thinkpad (running Ubuntu, no doubt), you might as well not exist.

You can see some of this in Rebecca Murphey's otherwise excellent post, A Baseline for Front End Developers, which lists a ton of great resources and then sadly notes:

If you're on Windows, I don't begin to know how to help you, aside from suggesting Cygwin. Right or wrong, participating in the open-source front-end developer community is materially more difficult on a Windows machine. On the bright side, MacBook Airs are cheap, powerful, and ridiculously portable, and there's always Ubuntu or another *nix.

Murphey isn't trying to be mean (I think it's remarkable that she even thought about Windows when assembling her list--a lot of people wouldn't), but for my students a MacBook Air probably isn't cheap, no matter what its price-to-performance ratio might be. It could be twice, or even three times, the cost of their current laptop (assuming they have one--I have some students who don't even have computers, believe it or not). And while it's not actually that hard to set up many of the basic workflow tools on Windows (MinGW is a lifesaver), or to set up a Linux VM, it's clearly not considered important by a lot of open source coders--Murphey doesn't even know how to start!

This is why I'm thrilled about Node.js, which added a Windows version about a year ago. Increasingly, the kinds of tools that make web development qualitatively more pleasant--LESS, RequireJS, Grunt, Yeoman, Mocha, etc.--are written in pure JavaScript using Node. If you bring that to Windows, you also bring a huge amount of tooling to people you weren't able to reach before. Now those people are not only better developers, but they're potential contributors (which, in open source, is basically the difference between a live project and a dead one). Between Node.js and GitHub creating a user-friendly Git client for the platform, it's a lot easier for students with lower incomes to keep up with the state of the art.
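For a sense of what that tooling looks like in practice, here's a minimal Gruntfile sketch of the kind of Node-based build step mentioned above. The file paths are made up, and while grunt-contrib-less and grunt-contrib-watch are real plugins, treat the exact configuration as illustrative rather than canonical:

// Gruntfile.js -- a small, hypothetical example of Node-based web tooling.
// The paths are invented; the point is that this runs the same way from a
// Windows command prompt as it does on a MacBook or an Ubuntu laptop.
module.exports = function (grunt) {
  grunt.initConfig({
    // compile LESS source down to plain CSS
    less: {
      site: {
        files: { "css/site.css": "less/site.less" }
      }
    },
    // recompile automatically whenever a stylesheet changes
    watch: {
      styles: {
        files: ["less/**/*.less"],
        tasks: ["less"]
      }
    }
  });

  grunt.loadNpmTasks("grunt-contrib-less");
  grunt.loadNpmTasks("grunt-contrib-watch");

  // running plain "grunt" performs a one-time build
  grunt.registerTask("default", ["less"]);
};

Because the whole chain is just JavaScript installed through npm, a student on a cheap Windows laptop gets the same workflow as someone on a MacBook Air.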

I'm not wild about the stereotype that "front-end" means a Mac and a funny haircut, personally. It bothers me that, as a web developer, I'm "supposed" to be using one platform or another--isn't the best thing about rich internet applications the fact that we don't have to take sides? Isn't a diverse web community stronger? I think we have a responsibility to increase access to technology and to the Internet, not focus our efforts solely on a privileged few.

We should be worried when any monoculture (technological or otherwise) takes over an industry, and exclusive tools or customs can serve as warning signs. So even though I don't love Node's API, I love that it's a web language being used to build web tools. It means that JavaScript is our bedrock, as Alex Russell once noted. That's what we build on. If it means that being a well-prepared front-end developer is A) more cross-platform and B) more consistent from top to bottom, it means my students aren't left out, no matter what their background. And that makes me increasingly happy.

October 24, 2012

Filed under: fiction»reviews»banks_iain

The Hydrogen Sonata

I believe there are two kinds of Iain Banks readers: those who are in it for the plot, and those who are looking for spectacle. Banks does both tremendously well, but hardly ever in the same book, which means that reviews are invariably split between people who thought his most recent novel was amazing and those who found it merely very good.

I tend towards plot, myself. I think Banks is at his best when he keeps the scale small, and finds ways to twist and undermine his setting of high-tech, post-scarcity, socialist space dwellers, the Culture. Nobody does huge, mind-boggling scenes like him, but at those galaxy-spanning scales (and when starring the near-omniscient AIs that run the Culture) it's hard to feel like there's much at stake. My favorites, like Matter or Player of Games, combine the large and the small convincingly, hanging the outcome of huge events on the shoulders of fallible, comprehensible characters.

But for his last two books, Banks has tended more towards the huge-explosions-in-strange-places side of things. 2010's Surface Detail spun up a war in virtual Hells that spilled into reality, and now (with The Hydrogen Sonata), he's taken a look at a civilization trying to reach closure, even while long-kept secrets keep pushing up into the light.

I re-read Surface Detail this week, and I like it a bit more than I did the first time around. I still think it suffers from a lack of agency surrounding too many of its characters, who end up simply as pawns being ferried around to each major plot point, but I'll admit that those characters are charming, and the idea of the Hells--virtual worlds set up to punish people even after religion is technically obsolete--is thornier than it first appears.

The Hydrogen Sonata has a lot of the same issues: the events of its plot, while fascinating, are ultimately of dubious importance, and it's not entirely clear if any of the characters actually have real influence on anything that happens. But to its credit, the events of THS are so diverting, you almost don't care. This is Iain Banks doing spectacle at a level he hasn't really tried since Excession, and to a surprising degree it works. It's widescreen science fiction, and he's clearly having fun writing it.

The book opens as the Gzilt, one of the original co-founders (but not members) of the Culture, have decided to leave the material plane and "sublime" to a higher order of existence. Just as they're counting down, however, representatives of another sublimed civilization contact a Gzilt ship, hinting that they may have planted the seeds of Gzilt religion eons ago (and thus prevented them from joining the Culture when they had the chance). This sets off turmoil in the local government, and a gang of Culture ships recruits one former Gzilt military officer, named Vyr Cossont, to hunt down the oldest living survivor of the civilization's founding for a first-hand account of events.

There's not much actual mystery to be had here--Banks telegraphs how things are going to end up pretty quickly. But the fun is in the oversized set pieces being tossed around one after another, from the "Girdlecity" (a giant, elevated metropolis wrapped all the way around a planet's equator) to the hapless group of insects who conduct bee-like dances with their spacecraft while waiting to scavenge on the remains of the sublimed worlds. There's a Last Party being thrown by one rich Gzilt before the subliming that continually tops itself in extravagance. I was also tickled by Cossont's quest to play the titular composition on an instrument called the "Antagonistic Undecagonstring," which means she ends up lugging a bulky and inconvenient music case around the galaxy despite herself (as a bassist, I sympathize).

But while it's enjoyable enough, playing with these toys that Banks assembles, it's hard to shake the feeling that it's all a bit lightweight. The Culture has been set up in these books as tremendously powerful, almost omnipotent--it's run, if that could be said of decentralized anarchosocialists, by AI Minds at the helm of massive, powerful starships, far outclassing any of the other civilizations in the book. When there's a question of how events will turn out, it often reduces to "can ship X reach destination Y in an amount of time defined by the author?" which is not very dramatically satisfying. Like Excession, my least favorite Culture book, much of The Hydrogen Sonata takes place in catty infodumps between the Minds--these can be funny, but they can also read like you've wandered into someone else's e-mail thread by mistake.

Still, for people who are die-hard Culture fans like me, we'll take what we can get--even if I'd rather see more plot and less spectacle. Books like The Hydrogen Sonata flesh out a rich, funny, dark universe that Banks has been building for 25 (!) years now. It's good to visit, if only to point and enjoy the sights.
