
December 29, 2010

Filed under: tech»i/o

Keyboard. How quaint.

Obligatory Scotty reference aside, voice recognition has come a long way, and it's becoming more common: my apartment alone contains a Windows 7 laptop, my Android phone, and the Kinect, each of which boasts some variation on it. That's impressive, and helpful from an accessibility standpoint--not everyone can comfortably use a keyboard and mouse. Speaking personally, though, I'm finding that I use it very differently on each device. As a result, I suspect that voice control is going to end up like video calling--a marker of "futureness" that we're glad to have in theory, but rarely leverage in practice.

I tried using Windows voice recognition when I had a particularly bad case of tendonitis last year. It's surprisingly good at what it's trying to do, which is to provide voice control for a traditional desktop operating system. It recognizes speech well, has a decent set of text-correction commands, and offers two helpful navigation shortcuts: Show Numbers, which overlays each clickable object with a numerical ID for fast access, and Mouse Grid, which lets you interact with arbitrary targets using a system right out of Blade Runner.

That said, I couldn't stick with it, and I haven't really activated it since. The problem was not so much the recognition quality, which was excellent, but the underlying UI: understandably, Windows is not designed to be driven by voice. No matter how good the recognition, every time it made a mistake or asked me to repeat myself, my hands itched to grab the keyboard and mouse.

The system also (and this is very frustrating, given the extensive accessibility features built into Windows) has a hard time with applications built on non-standard GUI frameworks, like Firefox or Zune--in fact, just running Firefox seems to throw a big monkey wrench into the whole thing, which is impractical if you depend on the browser as much as I do. I'm happy that Windows ships with speech recognition, especially for people with limited dexterity, but I'll probably never have the patience to use it even semi-exclusively.

On the other side of the spectrum is Android, where voice recognition is much more limited--you can dictate text, or use a few specific keywords (map of, navigate to, send text, call), but there's no attempt to voice-enable the entire OS. The recognition is also done remotely, on Google's servers, so it takes a little longer to work and requires a data connection. That said, I find myself using the phone's voice commands all the time--much more than I thought I would when the feature was first announced for Android 2.2. Part of the difference, I think, is that input on a touchscreen feels nowhere near as speedy as a physical keyboard--there's a lot of cognitive overhead to it that I don't have when I'm touch-typing--and the expectations of accuracy are much lower. Voice commands also fit my smartphone usage pattern: answer a quick query, then go away.
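
A developer's aside: tapping that same remote recognizer takes little more than a single intent, which is probably part of why dictation keeps turning up in third-party apps. Below is a minimal sketch of the stock RecognizerIntent route--the activity name and request code are mine, and it obviously only runs inside a real Android project--but it's the whole API in miniature:

    import java.util.ArrayList;

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.speech.RecognizerIntent;
    import android.widget.Toast;

    public class DictationDemo extends Activity {
        private static final int VOICE_REQUEST = 1; // arbitrary request code

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Launch the stock recognition dialog. The audio is shipped off
            // to Google's servers, which is why this needs a data connection.
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
            startActivityForResult(intent, VOICE_REQUEST);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK) {
                // The recognizer returns a ranked list of guesses, best first.
                ArrayList<String> guesses =
                        data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                if (guesses != null && !guesses.isEmpty()) {
                    Toast.makeText(this, guesses.get(0), Toast.LENGTH_LONG).show();
                }
            }
            super.onActivityResult(requestCode, resultCode, data);
        }
    }

Because the service hands back a ranked list of guesses rather than a single answer, apps can offer alternatives when the top match is wrong--which fits those lowered expectations of accuracy nicely.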

Almost exactly between these two is the Kinect. It's got on-device voice recognition that no doubt is based on the Windows speech codebase, so the accuracy's usually high, and like Android it mainly uses voice to augment a limited UI scheme, so the commands tend to be more reliable. When voice is available, it's pretty great--arguably better than the gesture control system, which is prone to misfires (I can't use it to listen to music while folding laundry because, like the Heart of Gold sub-etha radio, it interprets inadvertent movements as "next track" swipes). Unfortunately, Kinect voice commands are only available in a few places (commands for Netflix, for example, are still notably absent), and a voice system that you can't use everywhere is a system that doesn't get used. No doubt future updates will address this, but right now the experience is kind of disappointing.

Despite its obvious flaws, the idea of total voice control has a certain pull. Part of it, probably, is the fact that we're creatures of communication by nature: it seems natural to use our built-in language toolkit with machines, instead of having to learn abstractions like keyboards and mice, or even touch. There may be a touch of the Frankenstein to it as well--being able to converse with a computer would feel like A.I., even if it were a lot more limited. But the more I actually use voice recognition systems, the more I think this is a case of not knowing what we really want. Language is ambiguous by nature, and computers are already scary and unpredictable for a lot of people. Simple commands for a direct result are helpful. Beyond that, it's a novelty, and one that quickly wears out its welcome.

November 23, 2010

Filed under: tech»activism

The Console Model Is a Regressive Tax on Creativity

This weekend I dropped in on the second day of the 2010 DC PubCamp, an "unconference" aimed at public media professionals. I went for the mobile app session, where I got to meet another NPR Android developer and listen to an extremely belligerent nerd needlessly browbeat a bunch of hapless, confused program managers. But I stuck around after lunch for a session on video gaming for marginalized communities, hosted by Latoya Peterson of Racialicious. You can see the slides and Latoya's extensive source list here.

The presentation got sidetracked a bit early on during a discussion of internet cafes and gender in Asia, but for me the most interesting part was when Latoya began talking about the Glitch Game Testers, a project run by Betsy DiSalvo at the Georgia Institute of Technology. DiSalvo's program aims to figure out why there are so few African Americans, and specifically African American men, in the tech industry, and to encourage those kids to engage more with technology.

The researchers found several differences between the play patterns of DiSalvo's black students and those of their white peers: the minority gamers began playing younger, tended to play more with their families and less online, viewed gaming as a competition, and were less likely to use mods or hacks. These differences in play (which, as Latoya noted, are not simply racial or cultural, but also class-based) result in part from the constraints of gaming on a console. After all, a console is one shared family resource hooked up to another (the television), meaning that kids can't sit and mess with it on their own for hours. Consoles don't easily run homebrew code, so they don't encourage experimentation with programming.

Granted, these can be true of a PC as well. I didn't have my own computer until high school, and my parents didn't really want me coding on the family Gateway. But I didn't have to leave the computer when someone else wanted to watch television, and I was always aware (for better or worse, in those DOS/Win 3.1 days of boot disks and EMS/XMS memory) that the computer was a hackable, user-modifiable device. Clearly, that was a big advantage for me later on in life. In contrast, console gamers generally don't learn to think of software as mutable--as something they themselves could experiment with and eventually make a career at.

It's hopelessly reductionist, of course, to say that consoles cause the digital divide, or that they're even a major causal factor compared to problems of poverty, lack of role models, and education. But I think it's hard to argue that the console model--locked-down, walled-garden systems running single-purpose code--doesn't contribute to the problem. And it's worrisome that the big new computing paradigm (mobile) seems determined to follow the console path.

Set aside questions of distribution and sideloading just for the sake of argument, and consider only the means of development. As far as I'm aware, no handheld since the original DragonBall-powered PalmOS devices has allowed users to write a first-class application (i.e., one given equal placement in the shell, with full or nearly-full OS access) on the device itself. At the very least, you need to have another device--a real, open computer--to compile for the target machine, which may be burden enough for many people. In some cases, you may also need to pay a yearly fee and submit a lot of financial paperwork to the manufacturer in order to get a digitally-signed certificate.

I think it's safe to say that this is not an arrangement that's favorable to marginalized communities. It wouldn't have been favorable to me as a kid, and I come from a relatively advantaged background. In terms of both opportunity cost and real-world cost, the modern smartphone platform is not a place where poor would-be developers can start developing their skills. As smartphones become more and more the way people interact with computers and the Internet, a trend like this would largely restrict self-taught tech skills among the white and the wealthy.

The one wild card against this is the web. We're reaching the point where all platforms ship with a decent, hackable runtime in the form of an HTML4/5-compatible browser. That's a promising entry point: I don't personally think it's as accessible as native code, and there are still barriers to entry like hosting costs, but JS/HTML is a valid "first platform" these days. Or it could be, with one addition: the all-important "view source" option. Without that, the browser's just another read-only delivery channel.

I think it's self-evident why we should care about having more minorities and marginalized groups working in technology. It's definitely important to have them in the newsroom, to ensure that we're not missing stories from a lack of diversity in viewpoint. And while it's true that a huge portion of the solution will come from non-technological angles, we shouldn't ignore the ways that the technology itself is written in ways that reinforce discrimination. Closed code and operating systems that are actively hostile to hacking are a problem for all of us, but they hit marginalized communities the hardest, by eliminating avenues for bootstrapping and self-education. The console model is a regressive tax on user opportunity. Let's cut it out.

August 17, 2010

Filed under: tech»web

The Web's Not Dead, It's RESTing

Oh, man. Where to start with Chris Anderson and Michael Wolff's dreadful "The Web Is Dead"? With the hilarious self-congratulatory tone, which treats a misguided 1997 article on push technology (by the equally-clueless Kevin Kelly) as some kind of hidden triumph? With the gimmicky, print-mimicking two-column layout? How about the eye-burning, white-on-red text treatment? Or should we begin with the obvious carnival-barker pitch: the fact that Anderson, who just launched a Wired iPad application that mimics his print publication, and who (according to the NY Times and former employees) has a bit of an ongoing feud with Wired.com, really wants you to stop thinking of the browser as a destination?

Yes, Anderson has an agenda. That doesn't make him automatically wrong. But it's going to take a lot more than this weaksauce article to make him right. As I noted in my long, exhausting look at Anderson's Free, his MO is to make a bold, headline-grabbing statement, then backpedal from it almost immediately. He does not abandon that strategy here, as this section from the end of the piece shows:

...what is actually emerging is not quite the bleak future of the Internet that Zittrain envisioned. It is only the future of the commercial content side of the digital economy. Ecommerce continues to thrive on the Web, and no company is going to shut its Web site as an information resource. More important, the great virtue of today's Web is that so much of it is noncommercial. The wide-open Web of peer production, the so-called generative Web where everyone is free to create what they want, continues to thrive, driven by the nonmonetary incentives of expression, attention, reputation, and the like. But the notion of the Web as the ultimate marketplace for digital delivery is now in doubt.
Right: so the web's not actually dead. It's just that you can't directly make money off of it, except for all the people who do. Pause for a second, if you will, to enjoy the irony: the man who wrote an entire book about how the web's economies of "attention, reputation, and the like" would pay for an entire real-world economy of free products is now bemoaning the lack of a direct payment option for web content.

Wolff's half of the article (it's the part in the glaring red column), meanwhile, is a protracted slap-fight with a straw man: it turns out that the web didn't change everything, and people will use it to sell traditional media in new ways, like streaming music and movies! Wolff never names anyone who actually claimed the web would have these "transformative effects," never explains why streaming doesn't count as fairly transformative in and of itself, and never says what the other transformative effects were supposed to be--probably because the hyperbole he's trying to counter was encouraged in no small part by (where else?) Wired magazine. It's a silly argument, and I don't see any reason to spend much time on it.

But let's take a moment to address Anderson's main point, such as it is: that the open web is being absorbed into a collection of "apps" and APIs which are, apparently, not open. This being Chris Anderson, he's rolled a lot of extraneous material into this argument (quality of service, voice over IP, an incredibly misleading graph of bandwidth usage, railroad monopolies), but they're padding at best (and disingenuous at worst: why, for example, are "e-mail" and VPNs grouped with closed, proprietary networks?). At the heart of his argument, however, is an artificial distinction between "the Web" and "the Internet."

At the application layer, the open Internet has always been a fiction. It was only because we confused the Web with the Net that we didn't see it. The rise of machine-to-machine communications--iPhone apps talking to Twitter APIs--is all about control. Every API comes with terms of service, and Twitter, Amazon.com, Google, or any other company can control the use as they will. We are choosing a new form of QoS: custom applications that just work, thanks to cached content and local code. Every time you pick an iPhone app instead of a Web site, you are voting with your finger: A better experience is worth paying for, either in cash or in implicit acceptance of a non-Web standard.
"We" confused the two? Who's this "we," Kemosabe? Anderson seems to think that the web never had Terms of Service, when they've been around on sites like Yahoo and Flickr for ages. He seems to think that the only APIs in existence are the commercial ones from Twitter or Amazon. And, strangest of all, he seems to be ignoring the foundation on which those APIs are built--the HTTP/JSON standards that came from (and continue to exist because of) the web browser. There's a reason, after all, that Twitter clients are not only built on the desktop, but through web portals like Seesmic and Brizzly--because they all speak the language of the web. The resurgence of native applications is not the death of the web app: it's part of a re-balancing process, as we learn what works in a browser, and what doesn't.

Ultimately, Anderson doesn't present a clear picture of what he thinks the "web" is, or why it's different from the Internet. It's not user content, because he admits that blogging and Facebook are doing just fine. He presents little evidence that browser apps are dying, or that the HTTP-based APIs used by mobile apps are somehow incompatible with them. He ignores the fact that many of those mobile apps are actually based around standard, open web services. And he seems to have utterly dismissed the real revolution in mobile operating systems like iPhone and Android: the inclusion of a serious, desktop-class browser. Oh, right--the browser, that program that launches when you click on a link from your Twitter application, or from your Facebook feed, or via Google Reader. How can the web be dead when it's interconnected with everything?

You can watch Anderson try to dodge around this in his debate with Tim O'Reilly and John Battelle. "It's all Internet," O'Reilly rightly says. "Advertising business models have always only been a small part of the picture, and have always gotten way too much attention." Generously, O'Reilly doesn't take the obvious jab: that one of the loudest voices pitching advertising as an industry savior has been Chris Anderson himself. Apparently, it didn't work out so well.

Is the web really a separate layer from the way we use the Internet? Is it really dead? No, far from it: we have more power than ever to collect and leverage the resources that the web makes available to us, whether in a browser, on a server, or via a native client. The most interesting development of "Web 2.0" has been to duplicate at the machine level what people did at the social level with static sites, webrings, and blogs: learn to interoperate, interlink, and synthesize from each other. That's how you end up with modern web services that can combine Google Maps, Twitter, and personal data into useful mashups like Ushahidi, Seesmic, and any number of one-off journalism projects. No, the web's not dead. Sadly, we can't say the same about Chris Anderson's writing career.

June 8, 2010

Filed under: tech»mobile

Page Up

Last week, at the Gov 2.0 conference in Washington, D.C., I sat through a session on mobile application design by an aspiring middleware provider. Like most "cross-platform" mobile SDKs these days, it consisted of a thin native wrapper around an on-device web page, scripted in some language (Ruby, in this case), with hooks into some of the native services. As usual with this kind of approach, performance was awful and the look-and-feel was crude at best, but those can be fixed. What struck me about the presentation, as always, was the simple question: if I already have to code my application in Ruby/HTML/JavaScript, with all their attendant headaches, why don't I just write a web service? Why bother with a "native" application, except for the buzzword compliance?
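
For the record, here's roughly what that kind of wrapper boils down to on Android: a full-screen WebView pointed at bundled HTML, plus a "bridge" object exposing native services to the page's JavaScript. This is a sketch, not any particular vendor's middleware--the class and method names are invented, and the vibration hook assumes the VIBRATE permission in the manifest--but it's a fair picture of the architecture:

    import android.app.Activity;
    import android.content.Context;
    import android.os.Bundle;
    import android.os.Vibrator;
    import android.webkit.WebView;

    public class MiddlewareShell extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            WebView view = new WebView(this);
            view.getSettings().setJavaScriptEnabled(true);
            // Expose a native "hook" to the page as window.device.
            view.addJavascriptInterface(new DeviceBridge(), "device");
            // The entire "app" is a web page shipped inside the package.
            view.loadUrl("file:///android_asset/index.html");
            setContentView(view);
        }

        // Page scripts call window.device.vibrate(ms) to reach native hardware.
        public class DeviceBridge {
            public void vibrate(long ms) {
                Vibrator v = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
                v.vibrate(ms);
            }
        }
    }

Every scrap of actual interface still has to be written in HTML and JavaScript; the Java is just scaffolding. Hence the question.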

This is not just snark, but an honest query. Frankly, the fervor around "apps" is wearing me out--in no small part because it's been the new Product X panacea for journalists for a while now, and I'm tired of hearing about it. More importantly, it drives me crazy, as someone who works hard to present journalism in the most appropriate format (whatever that may be), that we've taken the rich array of documents and media available to us and reduced it to "there's an app for that." This is not the way you build a solid, future-proof media system, people.

For one thing, it's a giant kludge that misses the point of general-purpose computing in the first place, which is that we can separate code from its data. Imagine if you were sent text wrapped in individual .exe files (or their platform equivalent). You'd think the author was insane--why on earth didn't they send it as a standard document that you could open in your favorite editor/reader? And yet that's exactly what the "app" fad has companies doing. Sure, this was originally due to sandboxing restrictions on some mobile platforms, but that's no excuse for solving the problem the wrong way in the first place--the Web didn't vanish overnight.

Worse, people have the nerve to applaud this proliferation of single-purpose app clutter! Wired predictably oversells a "digital magazine" that's essentially a collection of loosely-exported JPG files, and Boing Boing talks about 'a dazzling, living book' for something that's a glorified periodic table with some pretty movies added. It's a ridiculous level of hyperbole for something that sets interactive content presentation back by a good decade, both in terms of how we consume it and the time required to create it. Indeed, it's a good way to spend a fortune every few years rewriting your presentation framework from scratch when a new hardware iteration rolls around.

The content app is the spiritual child of Encarta. Plenty of people have noticed that creating native, proprietary applications to present basic hypertext is a lot like the bad old days of multimedia CD-ROMs. Remember those? My family got a copy of Encarta with our 486-era Gateway, and like most people I spent fifteen minutes listening to sound clips and watching some grainy film clips, and then never touched it again. Cue these new publication apps: to my eye, they have the same dull sheen of presentation--one that's rigid, hard to update, and doesn't interoperate with anything else--and possibly the same usage pattern. I'm not a real Web 2.0 partisan, and I generally dislike HTML/CSS, but you have to admit that it got one thing right: a flexible, extensible document format for combining text with images, audio, and video on a range of platforms (not to mention for a diverse range of users). And the connectivity of a browser also means that it has the potential to surprise: where does that link go? What's new with this story? You can, given time, run out of encyclopedia, but you never run out of Internet.

That's perhaps the part that grated most about the middleware presentation at Gov 2.0. A substantial chunk of it was devoted to a synchronization framework, allowing developers to update their application from the server. Seriously? I have to write a web page and then update it manually? Thing is, if I write an actual web application, I can update it for everyone automatically. I can even cache information locally, using HTML5, for times when there's no connectivity. Building "native" applications from HTML is making life more complicated than it needs to be, by using the worst possible tools for UI and then taking away the platform's one advantage.

I'm not arguing that there's no place for native applications--far from it. There are lots of reasons to write something in native code: access to platform-specific APIs, speed, or certain UI paradigms, maybe. But it all comes back to choosing appropriate technology and appropriate tools. For a great many content providers, and particularly many news organizations, the right tool is HTML/CSS: it's cheaper, easier, and widely supported. It's easily translated into AJAX, sent in response to thin client requests, or parsed into other formats when a new platform emerges in the market. Most importantly, it leaves you at the mercy of no-one but yourself. No, it doesn't get you a clever advertising tagline or a spot at a device manufacturer keynote, and you won't feel that keen neo-hipster glow at industry events. But as a sustainable, future-proof business approach? Ditch the apps. Go back to the browser, where your content truly belongs.

May 5, 2010

Filed under: tech»mobile

As A Courtesy

Dear Valued Customer,

We hope you are enjoying your Smartphone! We appreciate and value your business and want to be sure you are aware of a change we've made to your account to ensure you have the best possible experience with unlimited data usage in the United States.

Smartphones are made for data consumption--surfing the web, social networking, email and more. That's why we require a Smartphone data plan in conjunction with our Smartphones. This ensures that customers with data intensive devices are not unpleasantly surprised with high data pay-per-use charges--just one low, predictable, flat rate for unlimited use each month.

For whatever reason, our records indicate your Smartphone does not have the correct data plan. As a courtesy, we've added the minimum Smartphone data plan for you.

Thank you for being an AT&T customer. We look forward to continuing to provide you with a great Smartphone experience.

Sincerely,
AT&T

Dear AT&T,

Thank you for your charming explanation of "Smartphones" and their associated data usage (I don't think the capital S is AP style, though--mind if I drop it?). Despite your carefully-worded letter, I must admit to some confusion: after all, use of my current smartphone has not resulted in any substantial data charges (that would be odd, considering I was on an "unlimited" data plan). Nor has the change from a Nokia phone to a touchscreen Android device resulted in a noticeable increase in data use--your own web site consistently placed my bandwidth consumption at around 100MB/month.

Which is why it surprised me to see that you had "upgraded" me from said "Unlimited" plan to a new "Smartphone" plan, which does not seem to offer any actual advantages to me over the old plan, unless you count the ability to pay you an additional $15 per month (perhaps you do). As a courtesy, I have moved myself to another carrier. I hope you are enjoying the carefree sensation of having one fewer customer!

Can we speak frankly, AT&T? I've been meaning to do this for a while anyway. After you cooperated in the warrantless wiretapping of American citizens ("As a courtesy, we are secretly recording your phone calls, traitor...") it was difficult to justify doing business with you. But the entire organization of the American wireless industry, even after number porting legislation, is powerfully aligned with keeping customers right where they are, both technologically and contractually.

Consider: in this country, we have two incompatible radio standards (CDMA and GSM) split between four major carriers, each using a largely incompatible portion of the radio spectrum. Even on the GSM carriers, where the technology allows people to separate their number from a specific phone without your "help," the frequency differences mean they'll lose 3G service if they switch. The result is that moving carriers, for most people, also means buying a completely new phone for no good reason. Why, it's almost as though you all have conspired to limit our choices on purpose! ("As a courtesy, we have created an elaborate and wasteful system of hidden surcharges for switching service...")

And your industry's business models--well, I don't think you're even pretending those are customer-friendly, do you? Charging customers with unlocked phones the same premium as people with subsidized hardware? Long contracts and costly early termination fees? Text-messaging plans? This business with your capital-S-Smartphone plans is simply the latest effort from a wireless industry fighting desperately to be more than just a data-pipe provider, just like the ISPs. It's AOL all over again, AT&T, and it's inevitable. I can see why you're trying to squeeze your customers while you can, but it doesn't mean I have to be a part of it.

I mean, I'm not endorsing anyone, but there is at least one carrier who's starting to get it. They're offering month-to-month plans with no contract, and discounts for people who bring their own phones (or, more accurately, they're not charging for unsubsidized hardware). They're GSM, so subscribers can buy phones from anywhere--you know, like the rest of the world. And hey, they sold me an unlimited data plan (with unlimited text messages included, no less!) for the same price I was paying you before you "corrected" my data plan. It's still not perfect--it's the cell industry, after all, and frankly I'd socialize the lot of you in a heartbeat--but it's a damn sight closer to sanity.

In any case, I don't want to sound bitter. Thanks for letting me know about the change you've made to my ex-account. Good luck with that.

Sincerely,
Thomas

April 15, 2010

Filed under: tech»mobile

Android Essentials

My love for Locale aside, what else is good on Android? Inquiring minds want to know.

  • K-9: I am, apparently, a total weirdo in that I don't use Gmail. I have an account, of course, because it comes with the whole Android developer thing, but I pretty much only use it when Google requires it. Unfortunately, the POP/IMAP client that ships with devices--even up to and including version 2.1--is awful. It forgets which messages I've read, doesn't easily allow batch operations, and randomly decides not to connect. K-9 started as a fork of that client, but it's reached the point where it's the only reasonable choice for handling non-Gmail mail on Android. There are still some things I'd like to see added (text selection, recovery from trash), but it also has touches of brilliance (check out the swipe-to-multi-select list mechanic).
  • Touiteur: There are lots of Twitter clients on Android, and for the most part it's a matter of personal preference which you use. Seesmic is great, but it keeps asking me which account I want to use even though I've only got the one. Lately I've been using Touiteur instead, for a few reasons: I dig the pull-down windowshade for updates, the UI for jumping out to links is better, and it's super-speedy. Also, the name cracks me up. Like all Android clients, of course, Touiteur gracefully handles checking for messages and mentions in the background via the notification bar, and it makes itself available to the "share" menu for URL-shortening right from the browser.
  • Ringdroid: One of the main goals for Android is that it's not tied down to a computer. Everything's either in the cloud, or it's meant to be self-sufficient. Sometimes this can be frustrating (I'd love to have Outlook sync included), but for the most part it's liberating. Take Ringdroid, for example: it's a basic WAV/MP3 editor that you can use to edit and set system sounds directly on the device. I don't use it often, but if I decide to nerd out on the bus and set my notification tone to a Star Trek communicator clip I found online, Ringdroid makes it possible (and yet no less shameful).
  • Text Edit: Every platform needs a Notepad.exe equivalent. Unfortunately, most of the note-taking programs for Android keep their data siloed in a private database, which defeats the point of having that SD card filesystem available. Text Edit actually opens and saves files, which makes it tremendously helpful for someone like me who'd actually like to get some work done in a standard format every now and then.
  • Smart Keyboard Pro: You can replace almost anything on an Android phone if you don't like the default, and I don't much care for the built-in soft keyboard. Specifically, I hate the way it keeps all the good punctuation hidden away from me. Smart Keyboard offers good foreign language support, swipe gestures (one of the better bits from Windows Mobile), multiple skins, and Blackberry-like autotext shortcuts. But most importantly, it puts all the punctuation on the keyboard as a long-press of individual letters, so I don't have to hunt for question marks or semi-colons anymore.
  • Replica Island: There aren't a lot of great games on Android, but Replica Island (which started as a tech demo for the platform) is a well-done little platformer, and it's free.

The interesting thing about making a list like this, for me, was realizing how little use most of the native software on the device actually sees. 95% of my time on a smartphone is spent in three places: e-mail, Twitter, and the browser. That's not to say that I don't use other applications--that I don't find them phenomenally helpful, or that I wouldn't miss them if they were gone--only that, as a matter of routine, those three are what matter most. Everything else is gravy.

(Well, almost everything. When people ask me about whether they should get a smartphone, the first thing I tell them is that Maps will change. Their. Lives. Because it absolutely does. I spend relatively little time in Maps, but it's probably the most valuable application on the phone.)
