Patents are important. They create an incentive for inventors to release their inventions into the public domain in return for a temporary monopoly on those inventions. Everybody loves patents. Software patents, on the other hand, must die.
Lately there has been a lot of news about patents, as almost every mobile company in existence is suing every other company for infringement. That many happy lawyers should have been a warning sign that something fishy was going on. And this being the Internet, everyone online wants to leap to the defense of their favorite. But ultimately, there are no winners in this game, because software patents have done nothing but harm innovation. They're counter-productive, anti-competitive, and insulting.
The most well-written defense of software patents that I've seen comes from Nilay Patel, a former lawyer; it focuses mainly on why patents in general are beneficial to society. I don't have any issues with that. But let's examine his claims about software patents in particular, as a way of getting at why they're broken. He writes:
Patents publicly disclose some of the most advanced work ever done by some of the most creative and resourceful people in history, and it'll all be free for the taking in several years. Stop offering patent protection and there's no more required disclosure -- all this stuff stays locked up as trade secrets as long as it offers a competitive advantage, after which point it may well be forgotten.
What might surprise you is that the USPTO has historically resisted efforts to patent software, only to have the courts chart a different course. The conflict itself is a simple one: software is ultimately just the automated expression of various algorithms and math, and you can't patent math. To many, it then seems like a forgone conclusion that software patents shouldn't exist -- preventing other people from using math is solidly outside the boundaries of the patent system. The problem isn't software patents -- the problem is that software patents don't actually exist.
But look a little closer and it's easy to see that the boundaries between "just math" and "patentable invention" are pretty fuzzy. Every invention is "just math" when it comes right down to it -- traditional mechanical inventions are really just the physical embodiments of specific algorithms.
That last bit, the part about how everything is "just math" (Patel uses the example of a special beer bottle shape), is where this argument runs into trouble (well, that and the point where he admits that he's neither a patent lawyer nor a developer, meaning that he brings almost no actual expertise to a highly-technical table). Because Patel is not a developer, the difference between pages of source code and a physical mechanism is not immediately apparent to him. But they are, in fact, very different: computer programs are, at heart, a series of instructions that the device follows. They're more like a recipe than a machine. Can you imagine patenting a recipe--claiming to have "invented" a type of bread, or a flavor of pie, and insisting that anyone else making an apple pie must pay your licensing fees? It's ridiculous.
To add insult to injury, what most programmers will tell you is that the majority of software is not "some of the most advanced work ever done," but a series of obvious steps connected together. Take Patel's own example, drawn from Apple's multitouch patent.
You may think that looks really complicated--Patel clearly does--but it's really not. That chunk of text, once you pull it apart and turn the math into actual working code, reduces to three steps:
Now, Patel is careful to say that this is not the complete patent. But I've looked over the complete patent text, and frankly there's little I see there that goes beyond this level of complexity. It describes the kind of common-sense steps that anyone would take if they were implementing a multitouch UI (as well as a number of steps that, as far as I can tell, no one--including the patent-holder--has actually implemented, such as distinguishing between a palm or elbow and a stylus). There's nothing novel or non-obvious about it--it's just written in so much legalese that it looks extremely complicated.
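To give a sense of what "common-sense steps" means in practice, here's a toy sketch of my own--not code from the patent, and the thresholds and field names are invented--showing roughly the level of insight involved in telling a fingertip from a palm by contact size:

```python
# A hypothetical sketch (mine, not Apple's): classify touch contacts
# by their reported area. All thresholds are invented for illustration.

FINGER_MAX_AREA = 4.0   # arbitrary units
PALM_MIN_AREA = 12.0

def classify_contact(area):
    """Guess what kind of object made a touch contact, by size alone."""
    if area <= FINGER_MAX_AREA:
        return "finger"
    if area >= PALM_MIN_AREA:
        return "palm"
    return "thumb"  # in-between sizes

def filter_touches(contacts):
    """Keep intentional (finger/thumb) contacts, reject palm contacts."""
    return [c for c in contacts if classify_contact(c["area"]) != "palm"]

touches = [{"id": 1, "area": 2.5}, {"id": 2, "area": 15.0}, {"id": 3, "area": 6.0}]
print([c["id"] for c in filter_touches(touches)])  # → [1, 3]
```

That's the whole trick: measure, compare to a threshold, keep or discard. Wrap it in forty pages of legalese and it becomes a "patentable invention."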
Such obfuscation overturns the entire argument for patents in the first place. After all, if I were trying to code something, there's no way I'd turn to a software patent like this to figure out how to implement it--it leaves out too many steps, overcomplicates those that it does describe, adds a bunch of extraneous notes that I have to discard, and includes no source code at all. It'd be easier for me to figure it out myself, or learn from an open-source implementation.
Moreover, if I worked for a big company, like Apple or Microsoft, I'd be forbidden from reading patents in the first place, because the penalties for "knowing" infringement are much higher than "unknowing" infringement (a particularly ironic term, given that someone can "unknowingly" infringe by inventing the process independently). Meanwhile, as Lukas Mathis points out, we don't actually need the patent for an actually innovative piece of programming (Google's original PageRank) because it was published openly, as an academic paper.
I'm not saying, by the way, that multitouch hardware can't--or shouldn't--be patented. By all means, the inventor of the capacitive screen--a non-obvious, physical work of engineering--should be rewarded for their creativity. But that's not what Patel is defending, and it's not what the patent in question actually covers.
So software patents don't cover brilliant work, and they're not encouraging innovation. In fact, they discourage it: thanks to patent trolls who sue small companies over broad software patents, indie developers may not be able to afford the cost of business at all. If this seems difficult to reconcile with the arguments for the existence of software patents, that's because it undermines their entire case. Software patents are just broken: that's all there is to it.
What bothers me about this is not so much the ongoing lawsuit frenzy--I couldn't care less when large companies sue each other as a proxy for their market competition--but that there's this idea floating around that some software patent lawsuits are worthwhile, based on who's doing the suing. Everyone agrees that trolls like Lodsys (a shell company suing small developers over a purported patent on "in-app purchases") are a drain on the system. But if Apple, or Microsoft, or Nokia files a patent suit, their fans charge to its defense against "copycats," as if their patents are the only really innovative ones in existence. Even this week, pro-Apple pundit John Gruber cheered Google's action against Lodsys while simultaneously insisting that Android and Linux should be sued out of existence for patent theft.
But there are no justified software patent lawsuits, because there are no justified software patents. You don't get to say that patent trolls are despicable, while defending your favorite company's legal action as a valid claim--it's the same faulty protection racket at work. Software patents are a plague on all our houses, and tribalism is no excuse for preserving them.
After the LulzSec hacking rampage, I finally found the motivation to do something I've been putting off for a long time: I switched to more secure passwords using password management software, so that I'm not using the same five passwords everywhere anymore. Surprisingly, it was a lot less painful than I thought it would be.
Similar to what Brinstar did, I'm using KeePass to store my passwords--I don't want to pay for a service, and I don't really like using closed-source tools for this kind of thing. But since I'm not feeling incredibly confident about Dropbox for secure materials right now (less because they've admitted being able to open your files to the government, more because they left the whole system wide open for four hours the other day), I'm not using them to store my database.
Instead, I'm taking advantage of the fact that Android phones act like USB hard drives when they're plugged into the computer. The 1.X branch of the KeePass desktop client is a portable executable, so it can run from the phone's memory card and use the same database as KeePassDroid. If I need a password from a real computer, I can just plug in my phone. I do keep a backup of the encrypted database uploaded to Google Docs storage, but that's behind two-factor authentication, so I think it's reasonably safe.
It's shallow of me, but for a long time I held off on this move because the screenshots on the KeePassDroid site are incredibly ugly. Fortunately, those are out of date. It's still not quite as attractive as alternatives like Pocket or Tiny Password, but with the group font size turned down it can pass for "functional." And I like that it's not dependent on a third-party cloud provider like those are (Pocket has a client, and it's even in cross-platform Java, but it doesn't expose its database for USB access). I don't know if the author will take my patches, but I've submitted a few changes to the KeePassDroid layouts that make it look a little bit less "open source."
So what's the point? A password manager does almost nothing to keep my local data safe, or to protect me if someone steals my laptop with its cached passwords in Firefox. On the other hand, a common weakness in recent hacking incidents has been the use of shared (often weak) passwords across sites, so that if one falls the others go as well. Now my passwords are stronger, but more importantly, they're different from site to site. If someone acquires my Facebook login info, for example, that no longer gives them credentials to get into anything else.
It all comes down to the fact that I can secure my own data, but once it goes out on the web, I'm at the mercy of random (probably untrained) server administrators. That does not fill me with confidence, and it should probably make you a little uneasy as well. If so, my advice based on this experience would be to go ahead and make the switch to some kind of password-management system. Like keeping good backups, it's not nearly as hard as it sounds, and it'll be time well spent when the script kiddies strike again.
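Incidentally, the "strong, and different for every site" property doesn't require anything exotic. Here's a minimal sketch using Python's standard `secrets` module (the site names are made up, and any real password manager does this for you):

```python
import secrets
import string

# Character set for generated passwords; adjust per site requirements.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=20):
    """Generate a random password from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent password per site, so one breach can't cascade.
sites = ["example-bank.com", "example-social.com", "example-mail.com"]
passwords = {site: generate_password() for site in sites}

for site, pw in passwords.items():
    print(site, pw)
```

The point isn't the code--it's that the generated strings share nothing with each other, which is exactly the property that shared passwords lack.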
Summer is here, bringing with it 100° weather and a new series of CQ data projects--which, in turn, means working with Excel again. Here is a list of all the things I hate about Excel:
Why have I rediscovered my enthusiasm for Excel? It's kind of funny, actually. In the past couple of years, fueled by a series of bizarre experiments in Visual Basic scripting, I've often solved spreadsheet dilemmas using brute-force automation. But now that I'm working more often with a graphics reporter who uses the program on OS X, where it no longer supports scripting, I'm learning how to approach tables using the built-in cell functions (the way I probably should have done all along). The resulting journey is a series of elegant surprises as we dig deeper into Excel's capabilities.
I mean, take the consolidate operation and the LOOKUP function. If I had a dime for every time I'd written a search-and-sum macro for someone that could have been avoided by using these two features, I'd have... I don't know, three or four bucks, at least. Consolidate and LOOKUP are a one-two combo for reducing messy, unmatched datasets into organized rows, the kind of information triage that we need all too often. I've been using Excel for years, and it's only now that I've discovered these incredibly useful features (they're much more prominent in the UI of the post-Ribbon versions, but the office is still using copies of Excel 2000). It's tremendously exciting to realize that we can perform this kind of analysis on the fly, without having to load up a full-fledged database, and that we're only scratching the surface of what's possible.
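For non-Excel readers, the "search-and-sum" chore those two features handle has the same shape as this little Python sketch (the data is invented): collapse duplicate keys into totals, then pull a single value out by key.

```python
from collections import defaultdict

# Invented data: messy rows with repeated keys, like a raw export.
rows = [
    ("VA", 120), ("MD", 80), ("VA", 45), ("DC", 60), ("MD", 10),
]

# "Consolidate": collapse duplicate keys into sums.
totals = defaultdict(int)
for state, amount in rows:
    totals[state] += amount

# "LOOKUP": retrieve a single consolidated value by key.
print(totals["VA"])  # → 165
```

In a spreadsheet, the same triage happens without writing a loop at all, which is exactly why the macro versions I used to write were overkill.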
I find that I don't miss the challenge of coding in Excel, because formula construction scratches the same problem-solving itch. Besides, spreadsheets are also programs, of a sort. They may not be Turing-complete, but they mix data and code in much the same way as a binary program, they have variables and "pointers" (cell references), and they offer basic input and output options. Every cell formula is like its own little command line. Assembling them into new configurations offers the same creative thrill as a programming task--in smaller doses, maybe, but in a steady drip of productivity.
But honestly, efficiency and flexibility are only part of my affection for Excel. I think on some level I just really like the idea of a spreadsheet. As a spatially-oriented thinker, I find the idea of laying out data in a geometric arrangement instantly intuitive--which, for all that I've grown to like SQL, is not something I can say for relational database queries. "We're going to take some values from over there," says Excel, "and then turn them into new values here." A fully-functioning spreadsheet, then, is not just a series of rows and columns. It's a kind of mathematical geography, a landscape through which information flows and collects.
By extension, whenever I start up Excel and open up a new sheet, the empty grid is a field of undiscovered potential. Every cell is a question waiting to be asked and answered. I get paid to dive in and start filling them up with information, see where they'll take me, and turn the results into stories about the world around us. How could anyone not find that thrilling? And how could you not love a tool that makes it possible?
It's been almost two years now since I picked up an Android phone for the first time, during which time it has gone from a generally unloved, nerdy thing to the soon-to-be dominant smartphone platform. This is a remarkable and sudden development--when people start fretting about the state of Android as an OS (fragmentation, competing app stores, etc.), they tend to forget that it is still rapidly mutating and absorbing the most successful parts of a pretty woolly ecosystem. To have kept a high level of stability and compatibility, while adding features and going through major versions so quickly, is no small feat.
Even back in v1.0, there were obvious clever touches in Android--the notification bar, for instance, or the permission system. And now that I'm more used to them, the architectural decisions in the OS seem like "of course" kind of ideas. But when it first came out, a lot of the basic patterns Google used to build Android appeared genuinely bizarre to me. It has taken a few years to prove just how foresighted (or possibly just lucky) they actually were.
Take, for example, the back button. That's a weird concept at the OS level--sure, your browser has one, as does the post-XP Explorer, but it's only used inside each program on the desktop, not to move between them. No previous mobile platform, from PalmOS to Windows Mobile to the iPhone, used a back button as part of the dominant navigation paradigm. It seemed like a case of Google, being a web company, wanting everything to resemble the web for no good reason.
And yet it turns out that being able to navigate "back" is a really good match for mobile, and it probably is important enough to make it a top-level concept. Android takes the UNIX idea of small utilities chained together, and applies it to small-screen interaction. So it's easy to link from your Twitter feed to a web page to a map to your e-mail, and then jump partway back up the chain to continue from there (this is not a crazy usage pattern even before notifications get involved--imagine discovering a new restaurant from a friend, and then sending a lunch invitation before returning to Twitter). Without the back button, you'd have to go all the way back to the homescreen and the application list, losing track of where you had been in the process.
The process of composing this kind of "attention chain" is made possible by another one of Android's most underrated features: Intents. These are just ways of calling between one application and another, but with the advantage that the caller doesn't have to know what the callee is--Android applications register to handle certain MIME types or URIs on installation, and then they instantly become available to handle those actions. Far from being sandboxed, it's possible to pass all kinds of data around between different applications--or individual parts of an application. In a lot of ways, they resemble HTTP requests as much as anything else.
So, for example, if you take a picture and want to share it with your friends, pressing the "share" button in the Camera application will bring up a list of all installed programs that can share photos, even if they didn't exist when Camera was first written. Even better, Intents provide an extensible mechanism allowing applications to borrow functionality from other programs--if they want to get an image via the camera, instead of duplicating the capture code, they can toss out the corresponding Intent, and any camera application can respond, including user replacements for the stock Camera. This is smart enough that other platforms have adopted something similar--Windows Phone 7 will soon gain URIs for deep linking between applications, and iPhone has the clumsy, unofficial x-callback-url protocol--but Android still does this better than any other platform I've seen.
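The mechanism itself is easy to mock up. This toy Python registry--my own sketch of the dispatch idea, not Android's actual API--captures the essential move: handlers register for an action and a MIME type at "install" time, and a sender dispatches without ever naming who will answer.

```python
# Toy sketch of Intent-style dispatch (not Android's real API).
registry = {}

def register_handler(action, mime_type, handler):
    """An 'app' declares it can handle an (action, type) pair at install time."""
    registry.setdefault((action, mime_type), []).append(handler)

def send_intent(action, mime_type, data):
    """The caller broadcasts; every matching handler gets a chance to respond.
    The caller never needs to know which apps are installed."""
    return [handler(data) for handler in registry.get((action, mime_type), [])]

# Two unrelated "apps" register for photo sharing:
register_handler("SHARE", "image/jpeg", lambda d: f"email: sent {d}")
register_handler("SHARE", "image/jpeg", lambda d: f"twitter: posted {d}")

# Camera fires a share Intent without knowing who will handle it:
print(send_intent("SHARE", "image/jpeg", "IMG_001.jpg"))
```

The loose coupling is the whole point: install a third "app" and the Camera's share menu grows, with no change to the Camera's code.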
Finally, perhaps the choice that seemed oddest to me when Google announced Android was the Dalvik virtual machine. VMs are, after all, slow. Why saddle a mobile CPU with the extra burden of interpreting bytecode instead of using native applications? And indeed, the initial versions of Android were relatively sluggish. But two things changed: chips got much faster, and Google added just-in-time compilation in Android 2.2, turning the interpreted code into native binaries at runtime. Meanwhile, because Dalvik provides a platform independent of the hardware, Android has been able to spread to all kinds of devices on different processor architectures, from assorted ARM designs to x86, and third-party developers never need to recompile.
(Speaking of VMs, Android's promise--and eventual delivery--of Flash on mobile has been mocked roundly. But when I wanted to show a friend footage of Juste Debout the other week, I'd have been out of luck without it. If I want to test my CQ interactives from home, it's incredibly handy. And of course, there are the ever-present restaurant websites. 99% of the time, I have Flash turned off--but when I need it, it's there, and it works surprisingly well. Anecdotal, I know, but there it is. I'd rather have the option than be completely helpless.)
Why are these unique features of Android's design interesting? Simple: they're the result of lessons successfully being adopted from web interaction models, not the other way around. That's a real shift from the conventional wisdom, which has been (and certainly I've always thought) that the kind of user interface and application design found on even the best web applications would never be as clean or intuitive as their native counterparts. For many things, that may still be true. But clearly there are some ideas that the web got right, even if entirely by chance: a stack-based navigation model, hardware-independent program representation, and a simple method of communicating between stateless "pages" of functionality. It figures that if anyone would recognize these lessons, Google would. Over the next few years, it'll be interesting to see if these and other web-inspired technologies make their way to mainstream operating systems as well.
Obligatory Scotty reference aside, voice recognition has come a long way, and it's becoming more common: just in my apartment, there's a Windows 7 laptop, my Android phone, and the Kinect, each of which boasts some variation on it. That's impressive, and helpful from an accessibility standpoint--not everyone can comfortably use a keyboard and mouse. Speaking personally, though, I'm finding that I use it very differently on each device. As a result, I suspect that voice control is going to end up like video calling--a marker of "futureness" that we're glad to have in theory, but rarely leverage in practice.
I tried using Windows voice recognition when I had a particularly bad case of tendonitis last year. It's surprisingly good for what it's trying to do, which is to provide a voice-control system to a traditional desktop operating system. It recognizes well, has a decent set of text-correction commands, and two helpful navigation shortcuts: Show Numbers, which overlays each clickable object with a numerical ID for fast access, and Mouse Grid, which lets you interact with arbitrary targets using a system right out of Blade Runner.
That said, I couldn't stick with it, and I haven't really activated it since. The problem was not so much the voice recognition quality, which was excellent, but rather the underlying UI. Windows is not designed to be used by voice commands (understandably). No matter how good the recognition, every time it made a mistake or asked me to repeat myself, my hands itched to grab the keyboard and mouse.
The system also (and this is very frustrating, given the extensive accessibility features built into Windows) has a hard time with applications built around non-standard GUI frameworks, like Firefox or Zune--in fact, just running Firefox seems to throw a big monkey wrench into the whole thing, which is impractical if you depend on it as much as I do. I'm happy that Windows ships with speech recognition, especially for people with limited dexterity, but I'll probably never have the patience to use it even semi-exclusively.
On the other side of the spectrum is Android, where voice recognition is much more limited--you can dictate text, or use a few specific keywords (map of, navigate to, send text, call), but there's no attempt to voice-enable the entire OS. The recognition is also done remotely, on Google's servers, so it takes a little longer to work and requires a data connection. That said, I find myself using the phone's voice commands all the time--much more than I thought I would when the feature was first announced for Android 2.2. Part of the difference, I think, is that input on a touchscreen feels nowhere near as speedy as a physical keyboard--there's a lot of cognitive overhead to it that I don't have when I'm touch-typing--and the expectations of accuracy are much lower. Voice commands also fit my smartphone usage pattern: answer a quick query, then go away.
Almost exactly between these two is the Kinect. It's got on-device voice recognition that no doubt is based on the Windows speech codebase, so the accuracy's usually high, and like Android it mainly uses voice to augment a limited UI scheme, so the commands tend to be more reliable. When voice is available, it's pretty great--arguably better than the gesture control system, which is prone to misfires (I can't use it to listen to music while folding laundry because, like the Heart of Gold sub-etha radio, it interprets inadvertent movements as "next track" swipes). Unfortunately, Kinect voice commands are only available in a few places (commands for Netflix, for example, are still notably absent), and a voice system that you can't use everywhere is a system that doesn't get used. No doubt future updates will address this, but right now the experience is kind of disappointing.
Despite its obvious flaws, the idea of total voice control has a certain pull. Part of it, probably, is the fact that we're creatures of communication by nature: it seems natural to use our built-in language toolkit with machines instead of having to learn abstractions like keyboards and mice, or even touch. There may be a touch of the Frankenstein to it as well--being able to converse with a computer would feel like A.I., even if it were a lot more limited. But the more I actually use voice recognition systems, the more I think this is a case of not knowing what we really want. Language is ambiguous by its nature, and computers are already scary and unpredictable for a lot of people. Simple commands for a direct result are helpful. Beyond that, it's a novelty, and one that quickly wears out its welcome.
This weekend I dropped in on the second day of the 2010 DC PubCamp, an "unconference" aimed at public media professionals. I went for the mobile app session, where I got to meet another NPR Android developer and listen to an extremely belligerent nerd needlessly browbeat a bunch of hapless, confused program managers. But I stuck around after lunch for a session on video gaming for marginalized communities, hosted by Latoya Peterson of Racialicious. You can see the slides and Latoya's extensive source list here.
The presentation got sidetracked a bit early on during a discussion of internet cafes and gender in Asia, but for me the most interesting part was when Latoya began talking about the Glitch Game Testers, a project run by Betsy DiSalvo at the Georgia Institute of Technology. DiSalvo's program aims to figure out why there are so few African Americans, and specifically African American men, in the tech industry, and to encourage those kids to engage more with technology.
The researchers found several differences between the play patterns of the program's black students and those of their white peers: the minority gamers began playing younger, tended to play more with their families and less online, viewed gaming as a competition, and were less likely to use mods or hacks. These differences in play (which, as Latoya noted, are not simply racial or cultural, but also class-based) result in part from the constraints of gaming on a console. After all, a console is one shared family resource hooked up to another (the television), meaning that kids can't sit and mess with it on their own for hours. Consoles don't easily run homebrew code, so they don't encourage experimentation with programming.
Granted, these can be true of a PC as well. I didn't have my own computer until high school, and my parents didn't really want me coding on the family Gateway. But I didn't have to leave the computer when someone else wanted to watch television, and I was always aware (for better or worse, in those DOS/Win 3.1 days of boot disks and EMS/XMS memory) that the computer was a hackable, user-modifiable device. Clearly, that was a big advantage for me later on in life. In contrast, console gamers generally don't learn to think of software as mutable--as something they themselves could experiment with and eventually make a career at.
It's hopelessly reductionist, of course, to say that consoles cause the digital divide, or that they're even a major causal factor compared to problems of poverty, lack of role models, and education. But I think it's hard to argue that the console model--locked-down, walled-garden systems running single-purpose code--doesn't contribute to the problem. And it's worrisome that the big new computing paradigm (mobile) seems determined to follow the console path.
Set aside questions of distribution and sideloading just for the sake of argument, and consider only the means of development. As far as I'm aware, no handheld since the original DragonBall-powered PalmOS has allowed users to write a first-class application (i.e., one given equal placement in the shell and full or nearly-full OS access) on the device itself. At the very least, you need to have another device--a real, open computer--to compile for the target machine, which may be burden enough for many people. In some cases, you may also need to pay a yearly fee and submit a lot of financial paperwork to the manufacturer in order to get a digitally-signed certificate.
I think it's safe to say that this is not an arrangement that's favorable to marginalized communities. It wouldn't have been favorable to me as a kid, and I come from a relatively advantaged background. In terms of both opportunity cost and real-world cost, the modern smartphone platform is not a place where poor would-be developers can start developing their skills. As smartphones become more and more the way people interact with computers and the Internet, a trend like this would largely restrict self-taught tech skills among the white and the wealthy.
The one wild-card against this is the web. We're reaching the point where all platforms are shipping with a decent, hackable runtime in the form of an HTML4/5-compatible browser. That's a pretty decent entry point: I don't personally think it's as accessible as native code, and there are still barriers to entry like hosting costs, but JS/HTML is a valid "first platform" these days. Or it could be, with one addition: the all-important "view source" option. Without that, the browser's just another read-only delivery channel.
I think it's self-evident why we should care about having more minorities and marginalized groups working in technology. It's definitely important to have them in the newsroom, to ensure that we're not missing stories from a lack of diversity in viewpoint. And while it's true that a huge portion of the solution will come from non-technological angles, we shouldn't ignore the ways that the technology itself is written in ways that reinforce discrimination. Closed code and operating systems that are actively hostile to hacking are a problem for all of us, but they hit marginalized communities the hardest, by eliminating avenues for bootstrapping and self-education. The console model is a regressive tax on user opportunity. Let's cut it out.
Oh, man. Where to start with Chris Anderson and Michael Wolff's dreadful "The Web Is Dead"? With the hilarious self-congratulatory tone, which treats a misguided 1997 article on push technology (by the equally-clueless Kevin Kelly) as some kind of hidden triumph? With the gimmicky, print-mimicking two-column layout? How about the eye-burning, white-on-red text treatment? Or should we begin with the obvious carnival-barker pitch: the fact that Anderson, who just launched a Wired iPad application that mimics his print publication, and who (according to the NY Times and former employees) has a bit of an ongoing feud with Wired.com, really wants you to stop thinking of the browser as a destination.
Yes, Anderson has an agenda. That doesn't make him automatically wrong. But it's going to take a lot more than this weaksauce article to make him right. As I noted in my long, exhausting look at Anderson's Free, his MO is to make a bold, headline-grabbing statement, then backpedal from it almost immediately. He does not abandon that strategy here, as this section from the end of the piece shows:
...what is actually emerging is not quite the bleak future of the Internet that Zittrain envisioned. It is only the future of the commercial content side of the digital economy. Ecommerce continues to thrive on the Web, and no company is going to shut its Web site as an information resource. More important, the great virtue of today's Web is that so much of it is noncommercial. The wide-open Web of peer production, the so-called generative Web where everyone is free to create what they want, continues to thrive, driven by the nonmonetary incentives of expression, attention, reputation, and the like. But the notion of the Web as the ultimate marketplace for digital delivery is now in doubt.

Right: so the web's not actually dead. It's just that you can't directly make money off of it, except for all the people who do. Pause for a second, if you will, to enjoy the irony: the man who wrote an entire book about how the web's economies of "attention, reputation, and the like" would pay for an entire real-world economy of free products is now bemoaning the lack of a direct payment option for web content.
Wolff's half of the article (it's the part in the glaring red column), meanwhile, is a protracted slap-fight with a straw man: it turns out that the web didn't change everything, and people will use it to sell traditional media in new ways, like streaming music and movies! Wolff doesn't mention anyone who actually claimed that the web would have "transformative effects," nor does he explain why streaming isn't in and of itself fairly transformative, or what those other transformative effects would be--probably because the hyperbole he's trying to counter was encouraged in no small part by (where else?) Wired magazine. It's a silly argument, and I don't see any reason to spend much time on it.
But let's take a moment to address Anderson's main point, such as it is: that the open web is being absorbed into a collection of "apps" and APIs which are, apparently, not open. This being Chris Anderson, he's rolled a lot of extraneous material into this argument (quality of service, voice over IP, an incredibly misleading graph of bandwidth usage, railroad monopolies), but they're padding at best (and disingenuous at worst: why, for example, are "e-mail" and VPNs grouped with closed, proprietary networks?). At the heart of his argument, however, is an artificial distinction between "the Web" and "the Internet."
At the application layer, the open Internet has always been a fiction. It was only because we confused the Web with the Net that we didn't see it. The rise of machine-to-machine communications - iPhone apps talking to Twitter APIs - is all about control. Every API comes with terms of service, and Twitter, Amazon.com, Google, or any other company can control the use as they will. We are choosing a new form of QoS: custom applications that just work, thanks to cached content and local code. Every time you pick an iPhone app instead of a Web site, you are voting with your finger: A better experience is worth paying for, either in cash or in implicit acceptance of a non-Web standard.

"We" confused the two? Who's this "we," Kemosabe? Anderson seems to think that the web never had Terms of Service, when they've been around on sites like Yahoo and Flickr for ages. He seems to think that the only APIs in existence are the commercial ones from Twitter or Amazon. And, strangest of all, he seems to be ignoring the foundation on which those APIs are built--the HTTP/JSON standards that came from (and continue to exist because of) the web browser. There's a reason, after all, that Twitter clients are not only built on the desktop, but through web portals like Seesmic and Brizzly--because they all speak the language of the web. The resurgence of native applications is not the death of the web app: it's part of a re-balancing process, as we learn what works in a browser, and what doesn't.
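The shared plumbing here is worth making concrete: a desktop Twitter client, a web portal like Seesmic, and a page running in the browser all consume the same HTTP responses and parse the same JSON. A minimal sketch, using a made-up timeline payload (the field names are illustrative, not any real API's):

```javascript
// A hypothetical JSON payload of the kind an HTTP API might return.
// (Illustrative only -- not Twitter's actual response format.)
const payload = JSON.stringify([
  { user: "example", text: "Hello from the open web" },
  { user: "another", text: "Same bytes, any client" }
]);

// Whether the consumer is a browser script or a native app,
// the parsing step is identical: this is the "language of the web."
const timeline = JSON.parse(payload);
const lines = timeline.map(entry => entry.user + ": " + entry.text);
console.log(lines.join("\n"));
```

The transport (HTTP) and the format (JSON) come straight out of the browser's world; a native client just swaps in a different UI on top of the same feed.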
Ultimately, Anderson doesn't present a clear picture of what he thinks the "web" is, or why it's different from the Internet. It's not user content, because he admits that blogging and Facebook are doing just fine. He presents little evidence that browser apps are dying, or that the HTTP-based APIs used by mobile apps are somehow incompatible with them. He ignores the fact that many of those mobile apps are actually based around standard, open web services. And he seems to have utterly dismissed the real revolution in mobile operating systems like iPhone and Android: the inclusion of a serious, desktop-class browser. Oh, right--the browser, that program that launches when you click on a link from your Twitter application, or from your Facebook feed, or via Google Reader. How can the web be dead when it's interconnected with everything?
You can watch Anderson try to dodge around this in his debate with Tim O'Reilly and John Battelle. "It's all Internet," O'Reilly rightly says. "Advertising business models have always only been a small part of the picture, and have always gotten way too much attention." Generously, O'Reilly doesn't take the obvious jab: that one of the loudest voices pitching advertising as an industry savior has been Chris Anderson himself. Apparently, it didn't work out so well.
Is the web really a separate layer from the way we use the Internet? Is it really dead? No, far from it: we have more power than ever to collect and leverage the resources that the web makes available to us, whether in a browser, on a server, or via a native client. The most interesting development of "Web 2.0" has been to duplicate at the machine level what people did at the social level with static sites, webrings, and blogs: learn to interoperate, interlink, and synthesize from each other. That's how you end up with modern web services that can combine Google Maps, Twitter, and personal data into useful mashups like Ushahidi, Seesmic, and any number of one-off journalism projects. No, the web's not dead. Sadly, we can't say the same about Chris Anderson's writing career.
This is not just snark, but an honest query. Frankly, the fervor around "apps" is wearing me out--in no small part because it's been journalists' new Product X panacea for a while now, and I'm tired of hearing about it. More importantly, it drives me crazy, as someone who works hard to present journalism in the most appropriate format (whatever that may be), that we've taken the rich array of documents and media available to us and reduced it to "there's an app for that." This is not the way you build a solid, future-proof media system, people.
For one thing, it's a giant kludge that misses the point of general-purpose computing, which is that we can separate code from its data. Imagine if you were sent text wrapped in individual .exe files (or their platform equivalent). You'd think the author was insane--why on earth didn't they send it as a standard document that you could open in your favorite editor/reader? And yet that's exactly what the "app" fad has companies doing. Sure, this was originally due to sandboxing restrictions on some mobile platforms, but that's no excuse for solving the problem the wrong way in the first place--the web didn't vanish overnight.
Worse, people have the nerve to applaud this proliferation of single-purpose app clutter! Wired predictably oversells a "digital magazine" that's essentially a collection of loosely-exported JPG files, and Boing Boing talks about "a dazzling, living book" for something that's a glorified periodic table with some pretty movies added. It's a ridiculous level of hyperbole for something that sets interactive content presentation back by a good decade, both in terms of how we consume it and the time required to create it. Indeed, it's a good way to spend a fortune every few years rewriting your presentation framework from scratch when a new hardware iteration rolls around.
The content app is the spiritual child of Encarta. Plenty of people have noticed that creating native, proprietary applications to present basic hypertext is a lot like the bad old days of multimedia CD-ROMs. Remember those? My family got a copy of Encarta with our 486-era Gateway, and like most people I spent fifteen minutes listening to sound clips and watching some grainy film clips, and then never touched it again. Cue these new publication apps: to my eye, they have the same dull sheen of presentation--one that's rigid, hard to update, and doesn't interoperate with anything else--and possibly the same usage pattern. I'm not a real Web 2.0 partisan, and I generally dislike HTML/CSS, but you have to admit that it got one thing right: a flexible, extensible document format for combining text with images, audio, and video on a range of platforms (not to mention for a diverse range of users). And the connectivity of a browser also means that it has the potential to surprise: where does that link go? What's new with this story? You can, given time, run out of encyclopedia, but you never run out of Internet.
That's perhaps the part that grated most about the middleware presentation at Gov 2.0. A substantial chunk of it was devoted to a synchronization framework, allowing developers to update their application from the server. Seriously? I have to write a web page and then update it manually? Thing is, if I write an actual web application, I can update it for everyone automatically. I can even cache information locally, using HTML5, for times when there's no connectivity. Building "native" applications from HTML is making life more complicated than it needs to be, by using the worst possible tools for UI and then taking away the platform's one advantage.
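For reference, the HTML5 offline mechanism in question is the Application Cache: a small manifest tells the browser which resources to keep locally, and updates propagate automatically when the manifest changes on the server. A minimal sketch (the file names are hypothetical):

```
CACHE MANIFEST
# v1 -- bump this comment to make browsers re-download the cached files

CACHE:
index.html
style.css
app.js

NETWORK:
*
```

A page opts in with `<html manifest="offline.appcache">`; after that it loads from the local cache even without connectivity, which is exactly the capability the middleware framework was reinventing by hand.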
I'm not arguing that there's no place for native applications--far from it. There are lots of reasons to write something in native code: access to platform-specific APIs, speed, or certain UI paradigms, maybe. But it all comes back to choosing appropriate technology and appropriate tools. For a great many content providers, and particularly many news organizations, the right tool is HTML/CSS: it's cheaper, easier, and widely supported. It's easily delivered via AJAX, sent in response to thin-client requests, or parsed into other formats when a new platform emerges in the market. Most importantly, it leaves you at the mercy of no one but yourself. No, it doesn't get you a clever advertising tagline or a spot at a device manufacturer's keynote, and you won't feel that keen neo-hipster glow at industry events. But as a sustainable, future-proof business approach? Ditch the apps. Go back to the browser, where your content truly belongs.
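That last point--HTML as a source format you can re-target--is easy to demonstrate. A toy sketch (a real pipeline would use a proper HTML parser, not a regex):

```javascript
// Naive tag-stripping, for illustration only: converts a stored
// HTML fragment into plain text for a hypothetical thin client.
const article =
  "<h1>Headline</h1><p>Body text with a <a href='/more'>link</a>.</p>";

function toPlainText(html) {
  return html
    .replace(/<[^>]+>/g, " ")  // drop tags
    .replace(/\s+/g, " ")      // collapse runs of whitespace
    .trim();
}

console.log(toPlainText(article));
```

The same source could just as easily be transformed into a feed, a mobile layout, or whatever the next platform wants--which is the portability argument in a nutshell.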
Dear Valued Customer,
We hope you are enjoying your Smartphone! We appreciate and value your business and want to be sure you are aware of a change we've made to your account to ensure you have the best possible experience with unlimited data usage in the United States.
Smartphones are made for data consumption: surfing the web, social networking, email and more. That's why we require a Smartphone data plan in conjunction with our Smartphones. This ensures that customers with data-intensive devices are not unpleasantly surprised with high data pay-per-use charges: just one low, predictable, flat rate for unlimited use each month.
For whatever reason, our records indicate your Smartphone does not have the correct data plan. As a courtesy, we've added the minimum Smartphone data plan for you.
Thank you for being an AT&T customer. We look forward to continuing to provide you with a great Smartphone experience.
Thank you for your charming explanation of "Smartphones" and their associated data usage (I don't think the capital S is AP style, though--mind if I drop it?). Despite your carefully worded letter, I must admit to some confusion: after all, use of my current smartphone has not resulted in any substantial data charges (that would be odd, considering I was on an "unlimited" data plan). Nor has the change from a Nokia phone to a touchscreen Android device resulted in a noticeable increase in data use--your own web site consistently placed my bandwidth consumption at around 100MB/month.
Which is why it surprised me to see that you had "upgraded" me from said "Unlimited" plan to a new "Smartphone" plan, which does not seem to offer any actual advantages to me over the old plan, unless you count the ability to pay you an additional $15 per month (perhaps you do). As a courtesy, I have moved myself to another carrier. I hope you are enjoying the carefree sensation of having one fewer customer!
Can we speak frankly, AT&T? I've been meaning to do this for a while anyway. After your complicity in the warrantless wiretapping of American citizens ("As a courtesy, we are secretly recording your phone calls, traitor...") it was difficult to justify doing business with you. But the organization of the American wireless industry, even after number-porting legislation, is powerfully aligned with keeping customers right where they are, both technologically and contractually.
Consider: in this country, we have two incompatible radio standards (CDMA and GSM) split between four major carriers, each using a largely incompatible portion of the radio spectrum. Even on the GSM carriers, where the technology allows people to separate their number from a specific phone without your "help," the frequency differences mean they'll lose 3G service if they switch. The result is that moving carriers, for most people, also means buying a completely new phone for no good reason. Why, it's almost as though you all have conspired to limit our choices on purpose! ("As a courtesy, we have created an elaborate and wasteful system of hidden surcharges for switching service...")
And your industry's business models--well, I don't think you're even pretending those are customer-friendly, do you? Charging customers with unlocked phones the same premium as people with subsidized hardware? Long contracts and costly early termination fees? Text-messaging plans? This business with your capital-S-Smartphone plans is simply the latest effort from a wireless industry fighting desperately to be more than just a data-pipe provider, just like the ISPs. It's AOL all over again, AT&T, and it's inevitable. I can see why you're trying to squeeze your customers while you can, but it doesn't mean I have to be a part of it.
I mean, I'm not endorsing anyone, but there is at least one carrier who's starting to get it. They're offering month-to-month plans with no contract, and discounts for people who bring their own phones (or, more accurately, they're not charging for unsubsidized hardware). They're GSM, so subscribers can buy phones from anywhere--you know, like the rest of the world. And hey, they sold me an unlimited data plan (with unlimited text messages included, no less!) for the same price I was paying you before you "corrected" my data plan. It's still not perfect--it's the cell industry, after all, and frankly I'd socialize the lot of you in a heartbeat--but it's a damn sight closer to sanity.
In any case, I don't want to sound bitter. Thanks for letting me know about the change you've made to my ex-account. Good luck with that.
My love for Locale aside, what else is good on Android? Inquiring minds want to know.
The interesting thing about making a list like this, for me, was realizing how little use most of the native software on the device actually sees. 95% of my time on a smartphone is spent in three places: e-mail, Twitter, and the browser. That's not to say that I don't use other applications--that I don't find them phenomenally helpful, or that I wouldn't miss them if they were gone--only that, as a matter of routine, those three are what matter most. Everything else is gravy.
(Well, almost everything. When people ask me about whether they should get a smartphone, the first thing I tell them is that Maps will change. Their. Lives. Because it absolutely does. I spend relatively little time in Maps, but it's probably the most valuable application on the phone.)