I have argued vociferously in the recent past that the journalistic craze for native clients--an enthusiasm seemingly rekindled by Rupert Murdoch's ridiculous Daily iPad publication--is a bad idea from a technical standpoint. They're clumsy, require a lot of platform-specific work, and they're not exactly burning up the newsstands. It continues to amaze me that, despite the ubiquity of WebKit as a capable cross-platform hypertext runtime, people are still excited about recreating the Multimedia CD-ROM.
But beyond the technical barriers, publishing your news in a walled-garden application market raises some serious questions of professional journalistic ethics. Curation (read: a mandatory, arbitrary approval process) exacerbates the dilemma, but even relatively open app stores are, in my opinion, on shaky ground. These problems emerge along three axes: accountability, editorial independence, and (perhaps most importantly) the ideology of good journalism.
Accountability
One of the hallmarks of the modern web is intercommunication based on a set of simple, high-level protocols. From a system of URLs and HTTP, a whole Internet culture of blog commentary, trackbacks, Rickrolls, mashups, and embedded video emerged. Most recently, Twitter created a new version of the linkblog (and added a layer of indirection via link shortening). For a journalist, this should be exciting: it's a rich soup of comments and community swarming around your work. More importantly, it's a constant source of accountability. What, you thought corrections went away when we went online?
But that whole ecosystem of viral sharing and review gets disconnected when you lock your content into a native client. At least on Android, you can send content to other applications via the powerful Intent mechanism (the iOS situation is much less well-constructed, and I have no idea how Windows Mobile now handles this), but even that has unpredictable results--what are you sharing, after all? A URL to the web version? The article text? Can the user choose? And when it comes to submitting corrections or feedback, native apps default to difficult: of the five major news clients I tried on Android this morning (NPR, CBS, Fox, New York Times, and USA Today), not one of them had an in-app way to submit a correction. Regret the error, indeed.
Editorial Independence
Accountability is an important part of professional ethics in journalism. But so is editorial independence, and in both cases the perception of misbehavior can be even more damaging than any actual foul play. The issue as I see it is: how independent can you be, if your software must be approved during each update by a single, fickle gatekeeper?
As Dan Gillmor points out, selling journalism through an app store is a partnership, and that raises serious questions of independence. Are news organizations less likely to be critical of Google, Apple, and Microsoft when their apps could be pulled from the virtual shelves at any time? Do the content restrictions on both mobile app stores change the stories that they're likely to publish? Will app stores stand behind journalists operating under governments with little press freedom, or will they buckle to a "terms of service" attack? On the web, a paper or media outlet can write largely whatever it wants. Physical distribution is so diverse that no single retail entity can really shut you down. But in an app store, you publish at the pleasure of the platform owner--terms subject to revision. That kind of scenario should give journalists pause.
Ideology and Solidarity
Organizing the news industry is like herding cats: it's a cutthroat business traditionally fueled by intra-city competition, and it naturally attracts argumentative, over-critical personality types. But it's time that newsrooms start to stick up for the basic ideology of journalism. That means that when the owners of an app store start censoring applications based on content, as happened to political cartoonist Mark Fiore or the Eucalyptus e-book reader, we need to make it clear that we consider that behavior unacceptable--by pulling our own apps, refusing to partner for big launch events, and pursuing alternative publication channels.
There's a reason that freedom of the press is included next to speech, religion, and assembly in the Bill of Rights' First Amendment. It's an important part of the feedback loop between people, events, and government in a democracy. And journalists have traditionally been pretty hardcore about freedom of the press: see, for example, the lawsuit over the publication of the Pentagon Papers, as well as the entirety of Reporters Without Borders. If the App Store were a country, its press-freedom ranking would be middling at best, and newspapers wouldn't be nearly as eager to jump into bed with it. The fact that these curated markets retain widespread publication support, despite their history of censorship and instability, is a shame for the industry as a whole.
Act, Don't React
Journalists have a responsibility to react against censorship when they see it, but we should also consider going on the offensive. While I don't actually think native news clients make sense when compared to a good mobile web experience, it is still possible to minimize or eliminate some of the ethical concerns they raise, through careful design and developer lobbying.
While it's unlikely that a native application could easily offer the same kind of open engagement as a website, designers can at least address accountability. News clients should offer a way to either leave comments or send corrections to the editors entirely within the application. A side effect of this would be cross-industry innovation in computerized correction tracking and display, something that few publications are really taking advantage of right now.
Simultaneously, journalists should be using their access to tech companies (who love to use newspapers and networks as keynote demos) to push for better policies. This includes more open, uncensored app stores, but it also means pushing for tools that make web apps first-class citizens in an app-centric world, such as:
We have so many interesting debates surrounding the business of American journalism--paywalls, ad revenue, user-generated content--can't we just call this one off? The HTML document, originally designed to publish academic papers, may be a frustrating technology for rich UIs, but it's perfectly suited for the task of presenting the news. It's as close as you can get to write-once-run-anywhere, making it the cheapest and most efficient option for mobile development. And it's ethically sound! Isn't it time we stood up for ourselves, and as an industry backed a platform that doesn't leave us feeling like we've sold out our principles for short-term gains? Come on, folks: let's leave that to the op-ed writers.
Urban Artistry's International Soul Society Festival just launched its website, with HTML and JavaScript by yours truly. Mockups for the page were ready, but the designer had to take another job before he could actually build it, so I volunteered to pick up where he left off. We'll be updating the site regularly as the date gets closer, and there are links to various social media channels if you'd like to subscribe.
If you're in the area when April rolls around, this is something to put on your calendar. Soul Society is one of the biggest and best events for urban dance and music that DC has to offer. It's family-friendly, with workshops and events for all levels, and the talent on display--judges, guests, and competitors alike--is going to be off the charts this year, with popping, breaking, and all-styles battles. Check it out!
I tell everyone they should have Firebug or its equivalent installed, and know how to use it. It's invaluable if you're designing a page and want to test something, if you want to do some in-page scripting, or if you just want to examine a page's source for ideas (or for hidden items). But most importantly, you can use it to fix your stupid, unreadable, over-styled web page.
The development of HTML5 means that browsers have gotten more powerful, more elaborate, and more interactive. It also means that they can be annoying in new and subtle ways. Back in the day, page authors used <blink> and <marquee> to create eye-catching elements on their flat gray canvas. Nowadays, thanks to pre-made CMS templates, the web superficially looks better, but it's not necessarily easier to read. Take three examples:
Even worse are the people who have realized you can give the shadow an offset of zero pixels. If the shadow is dark, this ends up looking like the page got wet and all the ink has run. If it's a lighter shadow, you've got a poor man's text glow. Remember how classy text glow was when you used it on everything in Photoshop? Nobody else does either.
I'm not an expert in typesetting or anything, but the effect of these changes--besides sometimes giving Comic Sans a run for its ugly font money--is to throw me out of my browsing groove, and force me to re-acquire a grip on the text with every link to a custom page. If I'm not expecting it, and the font is almost the same as a system font, it looks like a display error. Either way, it's jarring, and it breaks the feeling that the Internet is a common space. Eventually, we'll all get used to it, but for now I hate your custom fonts.
It's no wonder, in an environment like this, that style-stripping bookmarklets like Readability caused such a sensation. There's a fine line between interactive design and overdesign, and designers are crossing it as fast as they can. All I ask, people, is that you think before getting clever with your CSS and your scripts. Ask yourself: "if someone else simulated this effect using, say, a static image, would I still think it looked good? Or would I ask them what Geocities neighborhood they're from?" Take a deep breath. And then put down the stylesheet, and let us read in peace.
Wet is one of those cases where there are interesting things to say about a game that is not, in itself, actually very interesting. I feel much the same way about Kill Bill, one of Wet's obvious inspirations: there's a lot of very good commentary on the films, and they serve as a vast trivia nexus for aficionados, but as actual movies they still bore me senseless.
There was a lively comment thread on The Border House a little while back, when Wet protagonist Rubi Malone was included in a list of "disappointing characters." The conversation went something like this:
To be fair, those are better games with better production values, and that makes it a lot easier to ignore their sins and suffer through their cutscenes, much the same way that someone could enjoy superhero movies while still remaining aware of their numerous philosophical shortcomings. Gears of War may be a gynophobic, racist power fantasy, but it's a polished game that's painstakingly animated and (despite a paper-thin plot) features good writing and well-directed voice acting. Eliza Dushku, on the other hand, seems to be a very nice actress stuck in "menacing femme fatale" roles after her stint on Dollhouse. As Rubi, she's stunt-cast into a role for which she's not particularly well-suited, represented onscreen by a jittery marionette, and apparently not given much direction. Even Jennifer "the real Commander Shepard" Hale would have trouble selling the character under those circumstances.
So it doesn't help that Wet is mechanically and technically poor. The controls are imprecise (although I do like the guns-akimbo aiming mechanism) and the slow-motion feels half-baked. Its main gimmick is that it looks and feels like '70s exploitation cinema--all film grain and blood spurts. This is another callback to Tarantino (or more accurately, to Grindhouse co-director Robert Rodriguez and Planet Terror). But part of the pleasure of watching Grindhouse's double feature was the painstaking craftsmanship put to the service of cheap, disposable cinema--it functioned as both an example of, and a tribute to, its subject matter (it doesn't hurt that Death Proof is some of Tarantino's best work). When the game looks cheap because it is cheap, the joke is ruined.
Wet doesn't quite manage a perfect mimicry of celluloid, but more importantly there's no artfulness to it. In his review of Kill Bill Vol. 1, Roger Ebert noted that "for [Tarantino], all shots in a sense are references to other shots -- not particular shots from other movies, but archetypal shots in our collective moviegoing memories." In contrast, Wet is a game that features overcooked settings like a Hong Kong temple and a British mansion, but it doesn't have anything to say about them--they're just there. Same for the vintage concession stand ads that play between levels, or the obligatory smashable crates: there's nothing about these inclusions that's more than surface deep, so they never transcend cliche.
I do find the idea of "grindhouse" in games fascinating. For one thing, it's interesting to see one medium satirize another (see also: the use of video game culture in the Scott Pilgrim comic, and then again--in completely different ways--in the film). On the other hand, there's already a lo-fi gaming aesthetic for developers to call upon for self-parody. Nobody's done this better in the past few years than the original No More Heroes--an overstuffed melange of 8-bit graphics, hideously tiled textures, ridiculous boss fights, and Star Wars jokes. It wasn't a better game than Wet, really, but it had a sense of perspective, and that made a world of difference.
So where does that leave Wet? Unrecommended, certainly. But maybe that's what makes it useful for criticism. In better games, the violence and aggression of the main characters gets buried under a gloss of high production values and the well-worn cliche of Yet Another Space Marine. Maybe it takes a game like Wet--a game that gender-swaps the main character, that controls like Tomb Raider crossed with Tony Hawk--to make it a little more obvious just how much we accept the mediocre in interactive narratives.
Like I said, it's not a very good game. But it is, from the right point of view, interesting despite itself.
Tim Ferriss was a real-world griefer before real-world griefing was cool. Before Anonymous was putting epileptic kids into seizures, DDoSing the Church of Scientology, and harassing teenage girls for no good reason whatsoever, Ferriss (through sheer force of narcissism) had already begun gaming whatever system he could get his hands on. And now he writes books about it. The question you should be asking yourself, as you read this tongue-in-cheek New York Times review of Ferriss's "four-hour workout" book, is: did he write this to actually teach people his idiosyncratic health plan? Or (more likely) is it just the newest way Ferriss has decided to grief the world, via the NYT bestseller list?
Griefing, of course, is the process of exploiting the rules of an online community to make its members miserable. Griefers are the people who join your team in online games, and then do everything possible to sabotage your efforts. It's a malevolent version of the "munchkin" from old-school RPGs, where a player tries to find loopholes in the rules, except that griefers aren't playing to win--they're playing to get a reaction, which is much easier. The key is in the balance--a griefer or munchkin is looking to maximize impact while minimizing effort. That's basically what Ferriss is doing: he power-games various external achievements, like kickboxing or tango, not for their own sake, but to boost his own self-promotional profile.
The problem with writing about reputation griefers like this guy is, for them, there really is no such thing as bad publicity. They want you to hate them, as long as it boosts their search ranking. And there are an awful lot of people out there following similar career plans--maybe not as aggressively, almost certainly not as successfully, but they're certainly trying. They may not realize that they're griefing, but they are. Affiliate marketers? Griefing. Social networking 'gurus' who primarily seem to be networking themselves? Griefing. SEO consultants? Totally griefing.
Like a zen student being hit with a stick, I achieved enlightenment once I looked at the situation this way: it's the Internet equivalent of being a celebrity for celebrity's sake. Or, perhaps more accurately, griefing provides a useful framework for understanding and responding to pointless celebrities elsewhere. Maybe this is one way that the Internet, for all its frustrations and backwardness and self-inflicted suffering, can make us better people.
The one thing I've learned, from years of "Something Is Wrong On The Internet," is that the key to dealing with griefers--whether it's a game of Counterstrike, Tim Ferriss, or the vast array of pundits and shock jocks--is right there in the name. They benefit from getting under your skin, when you treat them as serious business instead of something to be laughed off. As Belle and I often say to each other, you can always recognize people who are new to the dark side of the Internet's ever-flowing river of commentary by the gravity they assign to J. Random Poster. We laugh a little, because we remember when we felt that way (sometimes we still do), before we learned: it takes two people to get trolled. Don't let them give you grief.
About a month back, a prominent inside-the-Beltway political magazine ran a story on Tea Party candidates and earmarks, claiming that anti-earmark candidates were responsible for $1 billion in earmarks over 2010. I had just finished building a comprehensive earmark package based on OMB data, so naturally my editor sent me a link to the story and asked me to double-check their math. At first glance, the numbers generally matched--but on a second examination, the article's total double- and triple-counted earmarks co-sponsored by members of the Tea Party "caucus." Adjusting my query to remove non-distinct earmark IDs knocked about $100 million off the total--not really that much in the big picture (the sum still ran more than $900 million), but enough to fall below the headline-ready "more than $1 billion" mark. It was also enough to make it clear that the authors hadn't really understood what they were writing about.
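For those curious about the mechanics, the problem and the fix both come down to how the join is written. The table and column names below are hypothetical (I'm sketching the shape of the queries, not reproducing ours), but the difference between summing rows and summing distinct earmarks is the whole story:

-- Joining earmarks to their sponsors yields one row per (earmark, sponsor) pair,
-- so an earmark co-sponsored by three caucus members gets summed three times:
SELECT SUM(e.amount)
FROM earmarks e
JOIN sponsorships s ON s.earmark_id = e.earmark_id
JOIN members m ON m.member_id = s.member_id
WHERE m.tea_party_caucus = TRUE;

-- Summing over distinct earmark IDs counts each earmark exactly once,
-- no matter how many caucus members signed on to it:
SELECT SUM(amount)
FROM earmarks
WHERE earmark_id IN (
    SELECT DISTINCT s.earmark_id
    FROM sponsorships s
    JOIN members m ON m.member_id = s.member_id
    WHERE m.tea_party_caucus = TRUE
);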
In general, I am in favor of journalists learning how to leverage databases for better analysis, but it's an easy technology to misuse, accidentally--or even on purpose. There's a truism that the skills required to interpret statistics go hand in hand with the skills used to misrepresent them, and nowhere is that more pertinent than in the newsroom. Reporters and editors entering the world of data journalism need to hold onto the same critical skills they would use for any other source, not be blinded by the ease with which they can reach a catchy figure.
That said, journalists would do well to learn about these tools, especially in beats like economics and politics, if only to be able to spot their abuses. And there are three strong arguments for using databases (carefully!) for reporting: improving newsroom mathematical literacy, asking questions at modern scale, and making connections easier.
First, it's no secret that journalists and math are often uneasy bedfellows--a recent Washington Post ombudsman piece explored some of the reasons why numerical corrections are so common. In short: we're an industry of English majors whose eyes cross when confronted with simple sums, and so we tend to take numbers at face value even during the regular copy-editing process.
These anxieties are signs of a deeper problem that needs to be addressed, and there's nothing magical about SQL that will fix them overnight. But I think database training serves two purposes. First, it acclimatizes users to dealing with large sets of numbers, like treating nosocomephobia with a nice long hospital stay. Second, it reveals the dirty secret of programming, which is that it involves a lot of mathematical thinking but relatively little actual adding or subtracting, especially in query languages. Databases are a good way to get comfortable with numbers without having to touch them directly.
Ultimately, journalists need to be comfortable with numbers, because numbers are becoming an occupational hazard. While the state of government (and private-sector) data may still leave a lot to be desired from a programmer's point of view, the amount of it has practically exploded over the last few years, with machine-readable formats becoming more common. This mass of data is increasingly unmanageable via spreadsheet: there are too many rows, too many edge cases, and too much filtering required. Doing it by hand is a pipe dream. A database, on the other hand, is designed to handle queries across hundreds of thousands of rows or more. Languages like SQL let us start asking questions at the necessary scale.
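To make that concrete, here's a minimal sketch of the kind of question I mean. The table and column names are hypothetical again, but a single statement like this will happily chew through a few hundred thousand rows and do all of the arithmetic itself:

-- Total earmark dollars and counts by state for fiscal 2010, biggest first,
-- keeping only states that cleared $10 million. The filtering, grouping, and
-- summing that would take days in a spreadsheet happens in one pass.
SELECT state,
       COUNT(*)    AS earmark_count,
       SUM(amount) AS total_amount
FROM earmarks
WHERE fiscal_year = 2010
GROUP BY state
HAVING SUM(amount) > 10000000
ORDER BY total_amount DESC;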
Finally, once we've gotten over a fear of numbers and begun to take large data sets for granted, we can start using relational databases to make connections between data sets. This synthesis is a common visualization task that is difficult to do by hand--mapping health spending against immigration patterns, for example--but it's reasonably simple to do with a query in a relational database. The results of these kinds of investigations may not even be publishable, but they are useful--searching for correlation is a great jumping-off point for further reporting. One of the best things I've done for my team lately is set up a spare box running PostgreSQL, which we use for uploading, combining, searching, and then outputting translated versions of data, even in static form.
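As a rough sketch of what such a join looks like (the table names here are made up, and real data would need more careful keying than a simple state column), two independently collected datasets line up in a single statement once they share a key:

-- Pair per-capita health spending with foreign-born population share, state by
-- state, as a quick first look at whether the two track each other at all.
SELECT h.state,
       h.spending_per_capita,
       i.foreign_born_pct
FROM health_spending h
JOIN immigration_stats i ON i.state = h.state
ORDER BY h.spending_per_capita DESC;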
As always when I write these kinds of posts, remember that there is no Product X for saving journalism. Adding a database does not make your newsroom Web 2.0, and (see the example I opened with) it's not a magic bullet for better journalism. But new technology does bring opportunities for our industry, if we can avoid the Product X hype. The web doesn't save newspapers, but it can (and should) make sourcing better. Mobile apps can't save subscription revenues, but they offer better ways to think about presentation. And databases can't replace an informed, experienced editor, but they can give those journalists better tools to interrogate the world.
Once again, I present CQ's annual vote studies in handy visualization form, now updated with the figures for 2010. This version includes some interesting changes from last year:
The vote studies are one of those quintessentially CQ products: reliable, wonky, and relentlessly non-partisan. We're still probably not doing justice to it with this visualization, but we'll keep building out until we get there. Take a look, and let me know what you think.
Obligatory Scotty reference aside, voice recognition has come a long way, and it's becoming more common: just in my apartment, there's a Windows 7 laptop, my Android phone, and the Kinect, each of which boasts some variation on it. That's impressive, and helpful from an accessibility standpoint--not everyone can comfortably use a keyboard and mouse. Speaking personally, though, I'm finding that I use it very differently on each device. As a result, I suspect that voice control is going to end up like video calling--a marker of "futureness" that we're glad to have in theory, but rarely leverage in practice.
I tried using Windows voice recognition when I had a particularly bad case of tendonitis last year. It's surprisingly good for what it's trying to do, which is to provide a voice-control system to a traditional desktop operating system. It recognizes well, has a decent set of text-correction commands, and two helpful navigation shortcuts: Show Numbers, which overlays each clickable object with a numerical ID for fast access, and Mouse Grid, which lets you interact with arbitrary targets using a system right out of Blade Runner.
That said, I couldn't stick with it, and I haven't really activated it since. The problem was not so much the voice recognition quality, which was excellent, but rather the underlying UI. Windows is not designed to be used by voice commands (understandably). No matter how good the recognition, every time it made a mistake or asked me to repeat myself, my hands itched to grab the keyboard and mouse.
The system also (and this is very frustrating, given the extensive accessibility features built into Windows) has a hard time with applications built around non-standard GUI frameworks, like Firefox or Zune--in fact, just running Firefox seems to throw a big monkey wrench into the whole thing, which is impractical if you depend on it as much as I do. I'm happy that Windows ships with speech recognition, especially for people with limited dexterity, but I'll probably never have the patience to use it even semi-exclusively.
On the other side of the spectrum is Android, where voice recognition is much more limited--you can dictate text, or use a few specific keywords (map of, navigate to, send text, call), but there's no attempt to voice-enable the entire OS. The recognition is also done remotely, on Google's servers, so it takes a little longer to work and requires a data connection. That said, I find myself using the phone's voice commands all the time--much more than I thought I would when the feature was first announced for Android 2.2. Part of the difference, I think, is that input on a touchscreen feels nowhere near as speedy as a physical keyboard--there's a lot of cognitive overhead to it that I don't have when I'm touch-typing--and the expectations of accuracy are much lower. Voice commands also fit my smartphone usage pattern: answer a quick query, then go away.
Almost exactly between these two is the Kinect. It's got on-device voice recognition that no doubt is based on the Windows speech codebase, so the accuracy's usually high, and like Android it mainly uses voice to augment a limited UI scheme, so the commands tend to be more reliable. When voice is available, it's pretty great--arguably better than the gesture control system, which is prone to misfires (I can't use it to listen to music while folding laundry because, like the Heart of Gold sub-etha radio, it interprets inadvertent movements as "next track" swipes). Unfortunately, Kinect voice commands are only available in a few places (commands for Netflix, for example, are still notably absent), and a voice system that you can't use everywhere is a system that doesn't get used. No doubt future updates will address this, but right now the experience is kind of disappointing.
Despite its obvious flaws, the idea of total voice control has a certain pull. Part of it, probably, is the fact that we're creatures of communication by nature: it seems natural to use our built-in language toolkit with machines, instead of having to learn abstractions like the keyboard and mouse, or even touch. There may be a touch of the Frankenstein to it as well--being able to converse with a computer would feel like A.I., even if it were a lot more limited. But the more I actually use voice recognition systems, the more I think this is a case of not knowing what we really want. Language is ambiguous by its nature, and computers are already scary and unpredictable for a lot of people. Simple commands for a direct result are helpful. Beyond that, it's a novelty, and one that quickly wears out its welcome.
Scare y'all quicker than a mean ol' goblin.
And popping with Ryan:
It may be hard for non-musicians--or even non-loopers--to understand how big a deal Mobius can be. You have to understand that, much more than other effects (and I've tried my share), looping is like learning a whole new instrument, and each looper brings its own set of constraints to the table that you have to learn to work around. For years, the gold standard was the Gibson EDP, but it was A) expensive, and B) discontinued. Then along comes some guy with a complete software emulation that anyone with a decent soundcard can use for free. Oh, and it's scriptable, so you can rewire the ins and outs to your heart's content (I made mine control like my beloved Line 6 DL-4). That's no small matter. Every now and then, I almost talk myself into picking up a netbook just to run Mobius and a few pedal VSTs again, it's that good.
One of Belle's favorite hobbies is to take a personality test (such as the Myers-Briggs) once every couple of months. She makes me take the same test, and then she reads our results aloud. The description for her type never explicitly says "finds personality test results comforting," but it probably should. I'm skeptical of the whole thing, frankly, but then someone with my personality type would be.
I found myself thinking about profiles after having a conversation with a friend about the appeal of Diablo (or lack thereof). I understand the theory behind the Diablo formula--combining the random reward schedule of an MMO with a sense of punctuated but constant improvement--but games based on this structure (Torchlight, Borderlands) leave me almost entirely unmoved.
For better or worse, game design increasingly leverages psychological trickery to keep players interested. I think Jonathan Blow convincingly argues that this kind of manipulation is ethically suspect, and that it displays a lack of respect for the player as a human being. But perhaps it's also an explanation for why Diablo doesn't click for me while other people obsess over it: we've got different personality profiles.
A Myers-Briggs-style profile for game design strikes me as kind of a funny idea. So as a thought exercise, here's a quick list I threw together of personality types, focused mainly on the psychological exploits common in game design. I figure most people--and most games--have a mix of these, just in larger or smaller proportions. Some of them may even overlap a little.
There's probably a good way to simplify these, or sort them into a series of binaries or groups, if you wanted to make it more like a legitimate personality quiz. Still, looking over this list, I do feel like it's better at describing my own tastes than a simple list of genres. I think I rank high for Audience, Mechanic, and Buttonmasher, and low for Storyteller, Completionist, and Grinder--makes sense for someone who loves story-driven FPS and action-RPGs, but generally dislikes open-world games and dungeon crawlers.
Such a list certainly helps to describe how I approach any given title: concentrating more on getting through the narrative and learning the quirks of the system, less on grabbing all the achievements or experimenting with the environment. I almost wish reviewers ranked themselves on a system like this--it'd make it a lot easier to sort out whether my priorities sync with theirs.
In general, I agree with Blow: the move toward psychological manipulation as a part of game design is, at best, something to be approached with great caution. At worst, it's actually dangerous--leading to the kinds of con-artistry and unhealthy addiction found in Farmville and (to a lesser extent) WoW. I don't think we can eliminate these techniques entirely, because they're part of what makes gaming unique and potentially powerful. But it would probably be a good idea to understand them better, and to explain them in a way that people can easily learn to recognize, similar to the way we teach kids about advertising appeals now. After all, as other sectors adopt "gamification," industry-standard psychological manipulation is only going to get more widespread.