May 17, 2013

Filed under: tech»web

Why the Web Wins

Last year, Google spent most of its I/O conference keynote talking about hardware: Android, Glass, and tablets. This year, someone seems to have reminded Google that they're a web company, since most of the new announcements were running in a browser, and in many cases (like the photo editing and WebGL maps) pushing the envelope for what's possible. As much as I like Android, I'm really happy to see the web getting some love.

There's been a drumbeat for several years now, particularly as smartphones got more powerful, to move away from web apps, and Google's focus on Android lent credence to that perspective. A conventional wisdom has emerged: web apps were a misstep, but we're past that now, and it'll be all native from here on out. I couldn't disagree more, and Google's clearly staking its claim as well.

The reason the web wins (such that anything will) is not, ultimately, because of its elegance or its purity (it's not big on either) but because of its ubiquity. The browser is the worst cross-platform API except for all the other ones, and (more importantly) it offers persistence. I can turn on any computer with an Internet connection and have near-instant access to files and applications without installing anything or worrying about compatibility. Every computer is my computer on the web.

For context, there was a time in my high school years when Java was on fire. As a cross-platform language with a network-savvy runtime, it was going to revive thin clients: I remember talking to people about the idea that I could log into any computer and load my desktop (with all my software) over the Internet connection. There wouldn't be any point to having your own dedicated hardware in a world like that, because you'd just grab whatever was handy and use it as a host. It was going to be like living in a William Gibson novel.

Java ended up being too heavy and too slow to make that actually happen. Instead, this weird combination of JavaScript, HTML, and CSS took over, like weeds springing up and somehow forming a fully-furnished apartment block. The surprise was that the ad-hoc web platform turned out to be competitive with Java on the front-end. Even though it's meant to be a document viewer, the browser is pretty good at building UI, and it's getting a lot better. I've been creating some web apps lately without worrying about backwards compatibility, and it's been remarkably pleasant, both as a developer and a user.

I don't believe that native programs will ever entirely go away. But I do think we'll see web applications spreading their tentacles over time, because if something is possible in the browser--if it's a decent user experience, plus it has the web's advantages of instant, no-install launch and sharing across devices--there's not much point in keeping it native. It's better to have your e-mail on any device. It's better for me to do presentations from a browser, instead of carrying a PowerPoint file around. It's better to keep my RSS reader in the cloud, instead of tying its state to individual machines. As browsers improve, this will be true of more and more applications, just as it was true of the Java applets that web technology replaced.

Google and I disagree about where those applications should be hosted, of course. Google thinks it should run them (which for many people is perfectly okay), and I want to run them myself. But that's a difference of degree, not principle. We both think the basic foundation--an open, hackable, portable web--is an important priority.

I like to look at it in terms of "design fiction"--the dramatic endpoint that proponents of each approach are aiming to achieve. With native apps, devices themselves are valuable, because native code is heavy: it takes time to install, it stores data locally, and it's probably locked to a given OS or architecture. Web apps don't give us the same immediate power, but their ultimate goal is a world where your local hardware doesn't matter--walk up to any web-capable surface, and your applications are there. Software in the web-centric viewpoint follows you, not your stuff. There are lots of reasons why I'm bullish on the web, but that particular vision is, for me, the most compelling one.

February 8, 2013

Filed under: tech»web

LESS Is More

Last night I gave a presentation for Seattle Central's Byte Club (and other interested students) on using LESS to write better, easier-to-maintain stylesheets. The lecture was recorded in a Google Hangout, which means that you can watch it yourself, if you're interested in LESS or if you've ever wondered what it's like to be trapped in a classroom with me for an hour:

The audio is a little wonky, it's a little hard to see sometimes, and I don't know why the one guy in the classroom with me insisted on keeping his webcam on the entire time (if I'd thought about it, I would have had him turn the camera on me, instead). But all in all, I think it turned out pretty well.

February 6, 2013

Filed under: tech»web

Unsavory

Every year at the Super Bowl, for many years now, it's traditional for GoDaddy to remind everyone that they're a horrible company run by a creepy, elephant-hunting misogynist. This year was no different. The good news is that I was working on the Soul Society website on Sunday, so I didn't actually see any of their ads. The bad news is that Soul Society is hosted on GoDaddy (cue ironic record scratch).

The thing about GoDaddy is that they are fractally gross: everything about them gets more distasteful the more you dig into it. There is no part of their operation that does not make you want to take a shower after interacting with them--neither the advertising, nor the sales experience, nor the admin panels, and certainly not the actual hosting.

It should be enough that the company was run, for years, by a horrible, horrible person who kills elephants for sport, supports torturing Guantanamo Bay detainees, and is a relentless self-promoter. You need look no further than its incredibly sexist advertising, which manages to be both repulsive and badly produced. The fact that they originally came out in favor of SOPA just rounds out the list of offensive behavior.

But if, despite all those reasons, you go to sign up for an account (as many people, including many of my students, end up doing), chances are that you'll end up overpaying due to an intentionally confusing sales process. The upsell doesn't stop at the first purchase, either. Every time I interact with the site, I'm forced to wade through a morass of confusing ads and sale links masquerading as admin panels. Everything on GoDaddy leads to a shopping cart.

GoDaddy also parcels up its crappy service into smaller pieces, so they can force you to pay more for stuff that you should get for free. As an example, I have an urbanartistry.org e-mail address for when we need a webmaster link on the site. For a while, it was a separate mailbox, which meant that I never checked it. Then I missed a bunch of e-mails from other UA directors, and decided to redirect the e-mail address to my personal account. On most mail providers, this is a free service. On GoDaddy, you can set up a forward, but an actual alias costs an additional fee (for all the disk space it... doesn't use?). Which means, technically, that my mail is piling up on their servers, and at some point they'll probably figure out some new reason to screw it up.

And let's not pretend the hosting you get after all this hassle is any good. The server is a slow, underpowered shared account somewhere, which means you don't get your own database (have fun sharing a remote MySQL instance with a bunch of other people, suckers!), and you can't run any decent versioning or deployment software. The Apache instance is badly configured (rewrite rules are overridden by their obnoxious 404, among other things). Bandwidth is limited--I have never seen slower transfers than on GoDaddy, and my SFTP connection often drops when updating the site. It's a lot of fun debugging a WordPress theme (already not the speediest of pages) when your updates get stuck in a background window.

I don't write a lot of posts like this, because I've got better things to do with my time these days than complain about poor service somewhere. There are a lot of repulsive companies out there, and while I believe in shame-and-blame actions, there are only so many hours in the day. I'm trying to have a positive outlook. But it is rare that you find something so awful you can't think of a single redeeming quality, and GoDaddy is that company. If you're in the market for any kind of web service, and you haven't already been convinced to go elsewhere, let me add my voice to the chorus. Lifehacker's post on moving away from the company is also a great reference for people who are already customers. I'm probably stuck with them, because Urban Artistry has more important things to worry about than their hosting, but you don't have to be.

November 1, 2012

Filed under: tech»web

Node Win

As I've been teaching Advanced Web Development at SCCC this quarter, my role is often to be the person dropping in with little hints of workflow technique that the students will find helpful (if not essential) when they get out into real development positions. "You could use LESS to make your CSS simpler," I say, with the zeal of an infomercial pitchman. Or: "it will be a lot easier for your team to collaborate if you're working off the same Git repo."

I'm teaching at a community college, so most of my students are not wealthy, and they're not using expensive computers to do their work. I see a lot of cheap, flimsy-looking laptops. Almost everyone's on Windows, because that's what cheap computers run when you buy them from Best Buy. My suggestion that a Linux VM would be a handy thing to have is usually met with puzzled disbelief.

This makes my students different from the sleek, high-profile web developers doing a lot of open-source work. The difference is both cultural (they're being taught PHP and ASP.net, which are deeply unsexy) and technological. If you've been to a meetup or a conference lately, you've probably noticed that everyone's sporting almost exactly the same setup: as far as the wider front-end web community is concerned, if you're not carrying a newish MacBook or a Thinkpad (running Ubuntu, no doubt), you might as well not exist.

You can see some of this in Rebecca Murphey's otherwise excellent post, A Baseline for Front End Developers, which lists a ton of great resources and then sadly notes:

If you're on Windows, I don't begin to know how to help you, aside from suggesting Cygwin. Right or wrong, participating in the open-source front-end developer community is materially more difficult on a Windows machine. On the bright side, MacBook Airs are cheap, powerful, and ridiculously portable, and there's always Ubuntu or another *nix.

Murphey isn't trying to be mean (I think it's remarkable that she even thought about Windows when assembling her list--a lot of people wouldn't), but for my students a MacBook Air probably isn't cheap, no matter what its price-to-performance ratio might be. It could be twice, or even three times, the cost of their current laptop (assuming they have one--I have some students who don't even have computers, believe it or not). And while it's not actually that hard to set up many of the basic workflow tools on Windows (MinGW is a lifesaver), or to set up a Linux VM, it's clearly not considered important by a lot of open source coders--Murphey doesn't even know how to start!

This is why I'm thrilled about Node.js, which added a Windows version about a year ago. Increasingly, the kinds of tools that make web development qualitatively more pleasant--LESS, RequireJS, Grunt, Yeoman, Mocha, etc.--are written in pure JavaScript using Node. If you bring that to Windows, you also bring a huge amount of tooling to people you weren't able to reach before. Now those people are not only better developers, but potential contributors (which, in open source, is basically the difference between a live project and a dead one). Between Node.js and GitHub's user-friendly Git client for the platform, it's a lot easier for students with lower incomes to keep up with the state of the art.
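
As a concrete example, the LESS compiler itself is just an npm module, so the same few lines run identically on a cheap Windows laptop, a MacBook, or a Linux box. A minimal sketch, assuming a reasonably recent version of the "less" package and a styles.less file on disk:

    // compile LESS to CSS with the npm "less" package
    var fs = require("fs");
    var less = require("less");

    var source = fs.readFileSync("styles.less", "utf8");
    less.render(source, function(err, output) {
      if (err) throw err;
      // recent versions of less pass the callback an object with
      // the compiled stylesheet on its .css property
      fs.writeFileSync("styles.css", output.css);
    });

No Cygwin, no VM, no MacBook required.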

I'm not wild about the stereotype that "front-end" means a Mac and a funny haircut, personally. It bothers me that, as a web developer, I'm "supposed" to be using one platform or another--isn't the best thing about rich internet applications the fact that we don't have to take sides? Isn't a diverse web community stronger? I think we have a responsibility to increase access to technology and to the Internet, not focus our efforts solely on a privileged few.

We should be worried when any monoculture (technological or otherwise) takes over an industry, and exclusive tools or customs can serve as warning signs. So even though I don't love Node's API, I love that it's a web language being used to build web tools. It means that JavaScript is our bedrock, as Alex Russell once noted. That's what we build on. If it means that being a well-prepared front-end developer is A) more cross-platform and B) more consistent from top to bottom, it means my students aren't left out, no matter what their background. And that makes me increasingly happy.

September 25, 2012

Filed under: tech»web

DOM If You Don't

I've noticed, as browsers have gotten better (and the pace of improvement has picked up) that there's an increasingly vocal group of front-end developers crusading against libraries like jQuery, in favor of raw JavaScript coding. Granted, most of these are the fanatical comp.lang.javascript types who have been wearing tinfoil anti-jQuery hats for years. But the argument is intriguing: do we really need to include 30KB of script on every page as browsers implement dev-friendly features like querySelectorAll? Could we get away with writing "pure" JavaScript, especially on mobile where every kilobyte counts?

It's a good question. But I suspect that the answer, for most developers, will continue to be "no, use jQuery or Dojo." There are a lot of good reasons for this--including the fact that they deliver quite a bit more than just DOM navigation these days--but the primary reason, as usual, is simple: no matter how they claim to have changed, browser developers still hate you.

Let's take a simple example. I'd like to find all the file inputs in a document and add a listener to them for HTML5 uploads. In jQuery, of course, this is a beautifully short one-liner, thanks to the way it operates over collections:

    $('input[type=file]').on('change', onChange);

Doing this in "naked" JavaScript is markedly more verbose, to the point where I'm forced to break it into several lines for readability (and it's still kind of a slog):

    var inputs = document.querySelectorAll('input[type=file]');
    inputs.forEach(function(element) {
      element.addEventListener('change', onChange);
    });

Except, of course, it doesn't actually work: like all of the document methods, querySelectorAll doesn't return an array with JavaScript methods like slice, map, or forEach. Instead, it returns a NodeList object, which is array-like (meaning it's numerically indexed and has a length property). Want to do anything other than a length check or an iterative loop over that list? Better break out your prototypes to convert it to a real JavaScript array:

    var inputs = document.querySelectorAll('input[type=file]');
    inputs = Array.prototype.slice.call(inputs);
    inputs.forEach(function(element) {
      element.addEventListener('change', onChange);
    });

Oh, yeah. That's elegant. Imagine writing all this boilerplate every time you want to do anything with multiple elements from the page (then imagine trying to train inexperienced teammates on what Array.prototype.slice is doing). No doubt you'd end up writing yourself a helper function to abstract all this away, followed by similar functions for bulk-editing CSS styles or performing animations. Congratulations, you've just reinvented jQuery!
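
For illustration, here's roughly where that road starts--a minimal sketch of the inevitable helper (the $$ name is my own placeholder, not part of any standard):

    // query the document, convert the NodeList into a real array,
    // and hand back something with forEach/map/filter
    var $$ = function(selector, context) {
      context = context || document;
      var list = context.querySelectorAll(selector);
      return Array.prototype.slice.call(list);
    };

    // and suddenly the one-liner is back:
    $$('input[type=file]').forEach(function(element) {
      element.addEventListener('change', onChange);
    });

Add a css() helper here and an animate() there, and the reinvention is well underway.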

All this could be fixable if browsers returned native JavaScript objects in response to JavaScript calls. That would be the logical, sensible thing to do. They don't, because those calls were originally specified as "live" queries (a feature that no developer has ever actually wanted, and which exists to implement the now-obsolete DOM-0 document collections), and so the list they return is a thin wrapper over a native host object. Even though querySelector and querySelectorAll are not live, and even though we knew by the time they were implemented that this was an issue, they're still wrappers around host objects with the same impedance mismatch.

Malice or incompetence? Who knows? It looks to me like the people developing browser standards are too close to their rendering engines, and not close enough to real-world JavaScript development. I think it's useful for illustration purposes to compare the output of running a DOM query in Firebug vs. the built-in Firefox Web Console. The former gives you a readable, selector-based line of elements. The latter gives you an incomprehensible line of gibberish or a flat "[Object HTMLDivElement]", which when clicked will produce a clunky tree menu of its contents. Calling this useless is an insult to dead-end interfaces everywhere--and yet that's what Mozilla seems to think is sufficient to compete with Firebug and the Chrome dev tools.

At any given time, there are at least three standards committees fighting over how to ruin JavaScript: there's ECMA TC-39 (syntax), W3C (DOM standards), and WHATWG (HTML5). There are people working on making JavaScript more like Python or CoffeeScript, and people working on more "semantic" tags than you can shake a stick at. But the real problems with JavaScript--the reasons that everyone includes that 30KB of jQuery.js--do not have anything to do with braces or function keywords or <aside> tags. The problem is that the DOM is a horrible API, from elements to events, filled with boilerplate and spring-loaded bear traps. The only people willing to take a run at a DOM 2.0 require you to use a whole new language (which might almost be worth it).

So in the meantime, for the vast majority of front-end developers (myself included), jQuery and other libraries have become that replacement DOM API. Even if they didn't provide a host of other useful utility functions (jQuery's Deferred and Callback objects being my new favorites), it's still just too frustrating to write against the raw browser interface. And library authors recognize this: you can see it in jQuery's planned removal of IE 6, 7, and 8 support in version 2.0. With the worst of the cross-compatibility issues clearing up, libraries can concentrate on writing APIs to make browsers more pleasant to develop in. After all, somebody has to do it.

December 9, 2011

Filed under: tech»web

Trapped in WebGL

As a web developer, it's easy to get the feeling that the browser makers are out to get you (the standards groups definitely are). The latest round of that sinking feeling comes from WebGL which is, as far as I can tell, completely insane. It's a product of the same kind of thinking that said "let's literally just make SQLite the web database standard," except that for some reason Mozilla is going along with it this time.

I started messing with WebGL because I'm a graphics programmer who never really learned OpenGL, and that always bothered me a little. And who knows? While I love Flash, the idea of hardware-accelerated 3D for data visualization was incredibly tempting. But WebGL is a disappointment on multiple levels: it's completely alien to JavaScript development, it's too unsafe to be implemented across browsers, and it's completely out of place as a browser API.

A Square Peg in a JavaScript-shaped Hole

OpenGL was designed as an API for C a couple of decades ago, and despite constant development since then, it still feels like it. Drawing even a simple shape in OpenGL ES 2.0 (the basis for WebGL) requires you to run some inscrutable setup functions on the GL context using bit flags, assemble a shader program from vertex and fragment shaders written in a completely different language (we'll get to this later), then pass in an undifferentiated stream of vertex coordinates as a flat, 1D array of floating point numbers. If you want other information associated with those vertices, like color, you get to pass in another, entirely separate, flat array.

Does that sound like sane, object-oriented JavaScript? Not even close. Yet there's basically no abstraction from the C-based API when you write WebGL in JavaScript, which makes it an incredibly disorienting experience, because the two languages have fundamentally different design goals. Writing WebGL requires you to use the new ArrayBuffer types to pack your data into buffers for the GL context, and acquire "pointers" to your shader variables, then use getter and setter functions on those pointers to actually update values. It's confusing, and not much fun. Why can't we just pass in objects that represent the vertexes, with x, y, z, vertex color, and other properties? Why can't the GL context hand us an object with properties matching the varying, uniform, and attribute variables for the shaders? Would it kill the browser, in other words, to pick up even the slightest amount of slack?
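
To make this concrete, here's roughly what the minimal "draw one triangle" setup looks like. It's a condensed sketch with all error checking omitted, and it assumes a canvas element plus two shader source strings--vertexSource (declaring a "position" attribute) and fragmentSource--are already defined:

    // get the GL context (older browsers used "experimental-webgl")
    var gl = canvas.getContext("webgl");

    // compile and link a shader program, written in a separate language
    function compile(type, source) {
      var shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      return shader;
    }
    var program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
    gl.linkProgram(program);
    gl.useProgram(program);

    // pack the triangle's coordinates into a flat, 1D array of floats
    var vertices = new Float32Array([
      0, 1, 0,
      -1, -1, 0,
      1, -1, 0
    ]);
    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

    // acquire a "pointer" to the shader's position attribute and
    // describe the buffer's memory layout to it
    var position = gl.getAttribLocation(program, "position");
    gl.enableVertexAttribArray(position);
    gl.vertexAttribPointer(position, 3, gl.FLOAT, false, 0, 0);

    // finally, draw the single triangle
    gl.drawArrays(gl.TRIANGLES, 0, 3);

All of that, and the color information would still need its own separate buffer and attribute pointer.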

Now, I know that API design is hard and probably nobody at Mozilla or Google had the time to code an abstraction layer, not to mention that they're all probably old SGI hackers who can write GL code in their sleep. But it cracks me up that normally browser vendors go out of their way to reinvent the wheel (WebSockets, anyone?), and yet in this case they just threw up their hands and plunked OpenGL into the browser without any attempt at impedance matching. It's especially galling when there are examples of 3D APIs that are intended for use by object-oriented languages: the elephant in the room is Direct3D, which has a slightly more sane vertex data format that would have been a much better match for an object-oriented scripting language. Oh, but that would mean admitting that Microsoft had a good idea. Which brings us to our second problem with WebGL.

Unsafe at Any Speed

Microsoft has come right out and said that they won't add WebGL to IE for security reasons. And although they've caught a lot of flak for this, the fact is that they're probably right. WebGL is based on the programmable pipeline version of OpenGL, meaning that web developers write and deliver code that is compiled and run directly on the graphics card to handle scene transformation and rendering. That's pretty low-level access to memory, being granted to arbitrary people on the Internet, with security only as strong as your video driver (a huge attack surface that has never been hardened against hostile input). And you thought Flash was a security risk?

The irony, as I see it, is that the problem of hostile WebGL shaders only exists in the first place because of my first point: namely, that the browsers implementing WebGL basically just bolted direct OpenGL bindings onto canvas, including forcing developers to write, compile, and link shader programs. A 3D API designed to abstract OpenGL away from JavaScript developers could have emulated the old fixed-function pipeline through a set of built-in shaders, which would have been both more secure and dramatically easier for JavaScript coders to tackle.

Distraction via Abstraction

Most programming is a question of abstraction. I try to train my team members to think about their coding at a certain level, as if they were writing an API for themselves: write small functions to encapsulate some piece of functionality, test them, then wrap them in slightly larger functions, test those, and repeat until you have a complete application. Programming languages themselves operate at a certain level of abstraction: JavaScript hides a number of operating details from the developer, such as the actual arrangement of memory or threads. That's part of what makes it great, because managing those things is often a huge pain that's irrelevant to making your web application work.

Ultimately, the problem of WebGL is one of proper abstraction level. I tend to agree with JavaScript developer Nicholas Zakas that a good browser API is mid-level: neither so low that the developer has to understand the actual implementation, nor so high that it makes strong assumptions about usage patterns. I would argue that WebGL is too low--that it requires developers to effectively learn a C API in a language that's unsuited for writing C-style code, and that the result is a security and support quagmire.

In fact, I suspect that the reason WebGL seems so alien to me, even though I've written C and Java code that matched its style in the past, is that it's actually a lower-level API than the language hosting it. At the very minimum, a browser API should be aligned with the abstraction level of the browser scripting language. In my opinion, that means (at the very least) providing a fixed-function pipeline, using arrays of native JavaScript objects to represent vertex lists and their associated data, and providing methods for loading textures and data from standard image resources.
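
Purely as a thought experiment, the kind of API I have in mind might look something like this. To be clear, this is entirely hypothetical--no browser provides a "3d" context or anything like it:

    // hypothetical: a mid-level, fixed-function-style 3D context
    var scene = canvas.getContext("3d");

    // vertex lists as arrays of plain JavaScript objects, instead of
    // flat Float32Arrays and hand-managed attribute pointers
    var triangle = scene.createMesh([
      { x: 0, y: 1, z: 0, color: "red" },
      { x: -1, y: -1, z: 0, color: "green" },
      { x: 1, y: -1, z: 0, color: "blue" }
    ]);

    // textures loaded from standard image resources, with no shader
    // compilation in sight
    scene.loadTexture("texture.png", function(texture) {
      triangle.texture = texture;
      scene.draw();
    });

Nothing about the underlying hardware prevents an API like that; it's purely a question of where the browser chooses to put the abstraction.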

In Practice (or, this is why I'm still a Flash developer)

Let me give an example of why I find this entire situation frustrating--and why, in many ways, it's a microcosm of my feelings around developing for so-called "HTML5." Urban Artistry is working on updating our website, and one of the artistic directors suggested adding a spinning globe, with countries where we've done international classes or battles marked somehow. Without thinking too much I said sure, I could do that.

In Flash, this is a pretty straightforward assignment. Take a vector object, which the framework supports natively, color parts of it, then either project that onto the screen using a variation of raycasting, or actually load it as a texture for one of the Flash 3D engines, like PaperVision. All the pieces are right there for you, and the final size of the SWF file is probably about 100K, tops. But having a mobile-friendly site is a new and exciting idea for UA, so I thought it might be nice to see if it could be done without Flash.

In HTML, I discovered, fewer batteries come included. Loading a vector object is easy in browsers that support SVG--but for Android and IE, you need to emulate it with a JavaScript library. Then you need another library to emulate canvas in IE. Now if you want to use WebGL or 3D projection without tearing your hair out, you'd better load Three.js as well. And then you have to figure out how to get your recolored vector image over into texture data without tripping over the browser's incredibly paranoid security measures (hint: you probably can't). To sum up: now you've loaded about half a meg of JavaScript (plus vector files), all of which you have to debug in a series of different browsers, and which may not actually work anyway.
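
For anyone who hasn't tripped that particular wire: once you draw a cross-origin image onto a canvas, the canvas is "tainted," and reading pixels back out of it--which is exactly what you need to do to turn the recolored map into texture data--throws a security error. A minimal demonstration, with a placeholder URL standing in for the real image:

    // draw a cross-origin image to a canvas, then try to read it back
    var image = new Image();
    image.onload = function() {
      var canvas = document.createElement("canvas");
      var context = canvas.getContext("2d");
      context.drawImage(image, 0, 0);
      try {
        // throws a SecurityError: the canvas is now tainted, so its
        // pixels can't be read back out
        var data = canvas.toDataURL();
      } catch (e) {
        console.log("blocked by the browser: " + e.name);
      }
    };
    image.src = "http://example.com/recolored-map.png"; // placeholder

WebGL's texImage2D enforces the same rule on cross-origin images, which is why "you probably can't."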

When faced with a situation like this, where a solution in much-hated Flash is orders of magnitude smaller and easier to code, it's hard to overstate just how much of a setback HTML development can be--and I say that as someone who has grown to like HTML development much more than he ever thought possible. The impression I consistently get is that neither the standards groups nor the browser vendors have actually studied the problems that developers like me commonly rely on plugins to solve. As a result, their solutions tend to be either underwhelming (canvas, the File APIs, new semantic elements) or wildly overcomplicated (WebGL, WebSQL, Web Sockets, pretty much anything with a Web in front of it).

And that's fine; I'll still work there. HTML applications are like democracy: they're the worst platform possible, except for most of the others. But every time I hear someone tell me that technologies like WebGL make plugins obsolete, my eyes roll so hard I can see my optic nerve. The replacements I'm being sold aren't anywhere near up to the tasks I need to perform: they're harder to use, offer me less functionality and lower compatibility, and require hefty downloads to work properly. Paranoid? Not if they're really out to get me, and the evidence looks pretty convincing.

August 17, 2010

Filed under: tech»web

The Web's Not Dead, It's RESTing

Oh, man. Where to start with Chris Anderson and Michael Wolff's dreadful "The Web Is Dead"? With the hilarious self-congratulatory tone, which treats a misguided 1997 article on push technology (by the equally clueless Kevin Kelly) as some kind of hidden triumph? With the gimmicky, print-mimicking two-column layout? How about the eye-burning, white-on-red text treatment? Or should we begin with the obvious carnival-barker pitch: the fact that Anderson--who just launched a Wired iPad application that mimics his print publication, and who (according to the NY Times and former employees) has a bit of an ongoing feud with Wired.com--really wants you to stop thinking of the browser as a destination?

Yes, Anderson has an agenda. That doesn't make him automatically wrong. But it's going to take a lot more than this weaksauce article to make him right. As I noted in my long, exhausting look at Anderson's Free, his MO is to make a bold, headline-grabbing statement, then backpedal from it almost immediately. He does not abandon that strategy here, as this section from the end of the piece shows:

...what is actually emerging is not quite the bleak future of the Internet that Zittrain envisioned. It is only the future of the commercial content side of the digital economy. Ecommerce continues to thrive on the Web, and no company is going to shut its Web site as an information resource. More important, the great virtue of today's Web is that so much of it is noncommercial. The wide-open Web of peer production, the so-called generative Web where everyone is free to create what they want, continues to thrive, driven by the nonmonetary incentives of expression, attention, reputation, and the like. But the notion of the Web as the ultimate marketplace for digital delivery is now in doubt.
Right: so the web's not actually dead. It's just that you can't directly make money off of it, except for all the people who do. Pause for a second, if you will, to enjoy the irony: the man who wrote an entire book about how the web's economies of "attention, reputation, and the like" would pay for an entire real-world economy of free products is now bemoaning the lack of a direct payment option for web content.

Wolff's half of the article (it's the part in the glaring red column), meanwhile, is a protracted slap-fight with a straw man: it turns out that the web didn't change everything, and people will use it to sell traditional media in new ways, like streaming music and movies! Wolff doesn't mention anyone who actually claimed that the web would have these "transformative effects," or explain why streaming isn't in and of itself fairly transformative, or say what those other transformative effects would be--probably because the hyperbole he's trying to counter was encouraged in no small part by (where else?) Wired magazine. It's a silly argument, and I don't see any reason to spend much time on it.

But let's take a moment to address Anderson's main point, such as it is: that the open web is being absorbed into a collection of "apps" and APIs which are, apparently, not open. This being Chris Anderson, he's rolled a lot of extraneous material into this argument (quality of service, voice over IP, an incredibly misleading graph of bandwidth usage, railroad monopolies), but they're padding at best (and disingenuous at worst: why, for example, are "e-mail" and VPNs grouped with closed, proprietary networks?). At the heart of his argument, however, is an artificial distinction between "the Web" and "the Internet."

At the application layer, the open Internet has always been a fiction. It was only because we confused the Web with the Net that we didn't see it. The rise of machine-to-machine communications - iPhone apps talking to Twitter APIs - is all about control. Every API comes with terms of service, and Twitter, Amazon.com, Google, or any other company can control the use as they will. We are choosing a new form of QoS: custom applications that just work, thanks to cached content and local code. Every time you pick an iPhone app instead of a Web site, you are voting with your finger: A better experience is worth paying for, either in cash or in implicit acceptance of a non-Web standard.
"We" confused the two? Who's this "we," Kemosabe? Anderson seems to think that the web never had Terms of Service, when they've been around on sites like Yahoo and Flickr for ages. He seems to think that the only APIs in existence are the commercial ones from Twitter or Amazon. And, strangest of all, he seems to be ignoring the foundation on which those APIs are built--the HTTP/JSON standards that came from (and continue to exist because of) the web browser. There's a reason, after all, that Twitter clients are not only built on the desktop, but through web portals like Seesmic and Brizzly--because they all speak the language of the web. The resurgence of native applications is not the death of the web app: it's part of a re-balancing process, as we learn what works in a browser, and what doesn't.

Ultimately, Anderson doesn't present a clear picture of what he thinks the "web" is, or why it's different from the Internet. It's not user content, because he admits that blogging and Facebook are doing just fine. He presents little evidence that browser apps are dying, or that the HTTP-based APIs used by mobile apps are somehow incompatible with them. He ignores the fact that many of those mobile apps are actually based around standard, open web services. And he seems to have utterly dismissed the real revolution in mobile operating systems like iPhone and Android: the inclusion of a serious, desktop-class browser. Oh, right--the browser, that program that launches when you click on a link from your Twitter application, or from your Facebook feed, or via Google Reader. How can the web be dead when it's interconnected with everything?

You can watch Anderson try to dodge around this in his debate with Tim O'Reilly and John Battelle. "It's all Internet," O'Reilly rightly says. "Advertising business models have always only been a small part of the picture, and have always gotten way too much attention." Generously, O'Reilly doesn't take the obvious jab: that one of the loudest voices pitching advertising as an industry savior has been Chris Anderson himself. Apparently, it didn't work out so well.

Is the web really a separate layer from the way we use the Internet? Is it really dead? No, far from it: we have more power than ever to collect and leverage the resources that the web makes available to us, whether in a browser, on a server, or via a native client. The most interesting development of "Web 2.0" has been to duplicate at the machine level what people did at the social level with static sites, webrings, and blogs: learn to interoperate, interlink, and synthesize from each other. That's how you end up with modern web services that can combine Google Maps, Twitter, and personal data into useful mashups like Ushahidi, Seesmic, and any number of one-off journalism projects. No, the web's not dead. Sadly, we can't say the same about Chris Anderson's writing career.
