I used a Nexus One as my smartphone for almost three years, pretty much since it was released in 2010. That's a pretty good testimonial. The N1 wasn't perfect--it was arguably underpowered even at release, and held back from upgrades by its pokey video chip and small memory--but it was good enough. When Google announced the Nexus 4, it was the first time I really felt like it was worth upgrading, and I've been using one for the last couple of months.
One big change, pardon the pun, is just the size of the thing: although it's thinner, the N4 is almost half an inch wider and taller than my old phone (the screen is a full diagonal inch larger). The N1 had a pleasant density; between the extra size and the glass backing, the N4 feels less secure in your hand, and at first it doesn't seem like you're getting much for the bulk. Then I went back to the N1 for something, and the virtual keyboard keys looked as small as kitten teeth. I'm (tentatively) a fan now. Battery life is also better than the N1's, although I had to turn wifi on to stop Locale from keeping the phone awake constantly.
I think it's a shame they ditched the trackball after the Nexus One, too. Every time I need to move the cursor just a little bit, pick a small link on a non-mobile web page, or play a game that uses software buttons, I really miss that trackball. Reviewers made fun of it, but it was regularly useful (and with Cyanogen, it doubled as a second power button).
The more significant shift, honestly, was probably going from Android 2.3 to 4.2. For the most part, it's better where Android was already good: notifications are richer, switching tasks is more convenient, and most of the built-in applications are less awful (the POP e-mail client is still a disaster). Being able to run Chrome is usually very nice. Maps in particular really benefits from a more powerful GPU. Running old Android apps can be a little clunky, but I mostly notice that in K-9 Mail (which was not a UX home run to begin with). The only software feature I genuinely miss is real USB mass storage--you can still get to internal storage, but it mounts as a multimedia device instead of a disk, which means that you can't reliably run computer applications from the phone drive.
There is always a lot of hullabaloo online around Android upgrades, since many phones don't get them. But my experience has been that most of it doesn't really matter. Most of my usage falls into a few simple categories--e-mail, Twitter, and the browser, mostly--none of which were held back by Android 2.3.
Compared to its competitors, Android has always been designed to be standalone. It doesn't rely on a desktop program like iTunes to synchronize files, and it doesn't really live in a strong ecosystem the way that Windows Phone does--you don't actually need a Google Account to use one. It's the only mainstream mobile platform where installing applications from a third party is both allowed and relatively easy, and where files and data can move easily between applications in a workflow. Between the bigger phone sizes (or tablets) and support for keyboards and mice, there's the possibility that you could do real work on a Nexus 4, for certain definitions of "real work." I think it would still drive me crazy to use it full-time. But it's gradually becoming a viable platform (and one that leaves ChromeOS in kind of an awkward place).
So sure, the Nexus 4 is a great smartphone. For the asking price ($300) it's a real value. But where things get interesting is that Android phones that aren't quite as high-powered or premium-branded (but still run the same applications and OS, and are still easily as powerful as laptops from only a few years ago) are available for a lot less money. This was always the theory behind Nokia's smartphones: cheap but powerful devices that could be "computers" for the developing world. Unfortunately, Symbian was never going to be hackable by people in those countries, and then Nokia started to fall apart. In the meantime, Android has a real shot at doing what S60 wanted to do, and with a pretty good (and still evolving) open toolkit for its users. I still think that's a goal worth targeting.
This is not just snark, but an honest query. Frankly, the fervor around "apps" is wearing me out--in no small part because it's been the new Product X panacea for journalists for a while now, and I'm tired of hearing about it. More importantly, it drives me crazy, as someone who works hard to present journalism in the most appropriate format (whatever that may be), that we've taken the rich array of documents and media available to us and reduced it to "there's an app for that." This is not the way you build a solid, future-proof media system, people.
For one thing, it's a giant kludge that misses the point of general-purpose computing in the first place, which is that we can separate code from its data. Imagine if you were sent text wrapped in individual .exe files (or their platform equivalent). You'd think the author was insane--why on earth didn't they send it as a standard document that you could open in your favorite editor/reader? And yet that's exactly what the "app" fad has companies doing. Sure, this was originally due to sandboxing restrictions on some mobile platforms, but that's no excuse for solving the problem the wrong way in the first place--the Web didn't vanish overnight.
Worse, people have the nerve to applaud this proliferation of single-purpose app clutter! Wired predictably oversells a "digital magazine" that's essentially a collection of loosely-exported JPG files, and Boing Boing talks about "a dazzling, living book" for something that's a glorified periodic table with some pretty movies added. It's a ridiculous level of hyperbole for something that sets interactive content presentation back by a good decade, both in terms of how we consume it and the time required to create it. Indeed, it's a good way to spend a fortune every few years rewriting your presentation framework from scratch when a new hardware iteration rolls around.
The content app is the spiritual child of Encarta. Plenty of people have noticed that creating native, proprietary applications to present basic hypertext is a lot like the bad old days of multimedia CD-ROMs. Remember those? My family got a copy of Encarta with our 486-era Gateway, and like most people I spent fifteen minutes listening to sound clips and watching grainy film clips, and then never touched it again. Cue these new publication apps: to my eye, they have the same dull sheen of presentation--one that's rigid, hard to update, and doesn't interoperate with anything else--and possibly the same usage pattern. I'm not a real Web 2.0 partisan, and I generally dislike HTML/CSS, but you have to admit that it got one thing right: a flexible, extensible document format for combining text with images, audio, and video on a range of platforms (not to mention for a diverse range of users). And the connectivity of a browser also means that it has the potential to surprise: where does that link go? What's new with this story? You can, given time, run out of encyclopedia, but you never run out of Internet.
That's perhaps the part that grated most about the middleware presentation at Gov 2.0. A substantial chunk of it was devoted to a synchronization framework, allowing developers to update their application from the server. Seriously? I have to write a web page and then update it manually? Thing is, if I write an actual web application, I can update it for everyone automatically. I can even cache information locally, using HTML5, for times when there's no connectivity. Building "native" applications from HTML is making life more complicated than it needs to be, by using the worst possible tools for UI and then taking away the platform's one advantage.
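For context, the HTML5 offline support I'm describing amounts to a single cache manifest file served alongside the page. A sketch (the file names are placeholders, but the CACHE MANIFEST/NETWORK sections are the standard format):

```
CACHE MANIFEST
# v3 -- change this comment to make browsers re-fetch everything below

CACHE:
index.html
style.css
app.js
logo.png

NETWORK:
# anything not listed above still requires a connection
*
```

The page opts in with a manifest attribute on its html tag, and from then on the browser serves those files locally whenever the network is unavailable--no synchronization framework required.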
I'm not arguing that there's no place for native applications--far from it. There are lots of reasons to write something in native code: access to platform-specific APIs, speed, or certain UI paradigms, maybe. But it all comes back to choosing appropriate technology and appropriate tools. For a great many content providers, and particularly many news organizations, the right tool is HTML/CSS: it's cheaper, easier, and widely supported. It's easily translated into AJAX, sent in response to thin-client requests, or parsed into other formats when a new platform emerges in the market. Most importantly, it leaves you at the mercy of no one but yourself. No, it doesn't get you a clever advertising tagline or a spot at a device manufacturer's keynote, and you won't feel that keen neo-hipster glow at industry events. But as a sustainable, future-proof business approach? Ditch the apps. Go back to the browser, where your content truly belongs.
Dear Valued Customer,
We hope you are enjoying your Smartphone! We appreciate and value your business and want to be sure you are aware of a change we've made to your account to ensure you have the best possible experience with unlimited data usage in the United States.
Smartphones are made for data consumption--surfing the web, social networking, email and more. That's why we require a Smartphone data plan in conjunction with our Smartphones. This ensures that customers with data-intensive devices are not unpleasantly surprised with high data pay-per-use charges--just one low, predictable, flat rate for unlimited use each month.
For whatever reason, our records indicate your Smartphone does not have the correct data plan. As a courtesy, we've added the minimum Smartphone data plan for you.
Thank you for being an AT&T customer. We look forward to continuing to provide you with a great Smartphone experience.
Thank you for your charming explanation of "Smartphones" and their associated data usage (I don't think the capital S is AP style, though--mind if I drop it?). Despite your carefully-worded letter, I must admit to some confusion: after all, use of my current smartphone has not resulted in any substantial data charges (that would be odd, considering I was on an "unlimited" data plan). Nor has the change from a Nokia phone to a touchscreen Android device resulted in a noticeable increase in data use--your own web site consistently placed my bandwidth consumption at around 100MB/month.
Which is why it surprised me to see that you had "upgraded" me from said "Unlimited" plan to a new "Smartphone" plan, which does not seem to offer any actual advantages to me over the old plan, unless you count the ability to pay you an additional $15 per month (perhaps you do). As a courtesy, I have moved myself to another carrier. I hope you are enjoying the carefree sensation of having one fewer customer!
Can we speak frankly, AT&T? I've been meaning to do this for a while anyway. After you cooperated in the warrantless wiretapping of American citizens ("As a courtesy, we are secretly recording your phone calls, traitor...") it was difficult to justify doing business with you. But the organization of the American wireless industry, even after number-porting legislation, is powerfully aligned with keeping customers right where they are, both technologically and contractually.
Consider: in this country, we have two incompatible radio standards (CDMA and GSM) split between four major carriers, each using a largely incompatible portion of the radio spectrum. Even on the GSM carriers, where the technology allows people to separate their number from a specific phone without your "help," the frequency differences mean they'll lose 3G service if they switch. The result is that moving carriers, for most people, also means buying a completely new phone for no good reason. Why, it's almost as though you all have conspired to limit our choices on purpose! ("As a courtesy, we have created an elaborate and wasteful system of hidden surcharges for switching service...")
And your industry's business models--well, I don't think you're even pretending those are customer-friendly, do you? Charging customers with unlocked phones the same premium as people with subsidized hardware? Long contracts and costly early termination fees? Text-messaging plans? This business with your capital-S-Smartphone plans is simply the latest effort from a wireless industry fighting desperately to be more than just a data-pipe provider, just like the ISPs. It's AOL all over again, AT&T, and it's inevitable. I can see why you're trying to squeeze your customers while you can, but it doesn't mean I have to be a part of it.
I mean, I'm not endorsing anyone, but there is at least one carrier who's starting to get it. They're offering month-to-month plans with no contract, and discounts for people who bring their own phones (or, more accurately, they're not charging for unsubsidized hardware). They're GSM, so subscribers can buy phones from anywhere--you know, like the rest of the world. And hey, they sold me an unlimited data plan (with unlimited text messages included, no less!) for the same price I was paying you before you "corrected" my data plan. It's still not perfect--it's the cell industry, after all, and frankly I'd socialize the lot of you in a heartbeat--but it's a damn sight closer to sanity.
In any case, I don't want to sound bitter. Thanks for letting me know about the change you've made to my ex-account. Good luck with that.
My love for Locale aside, what else is good on Android? Inquiring minds want to know.
The interesting thing about making a list like this, for me, was realizing how little use most of the native software on the device actually sees. 95% of my time on a smartphone is spent in three places: e-mail, Twitter, and the browser. That's not to say that I don't use other applications--that I don't find them phenomenally helpful, or that I wouldn't miss them if they were gone--only that, as a matter of routine, those three are what matter most. Everything else is gravy.
(Well, almost everything. When people ask me about whether they should get a smartphone, the first thing I tell them is that Maps will change. Their. Lives. Because it absolutely does. I spend relatively little time in Maps, but it's probably the most valuable application on the phone.)
The iPad will not save journalism. Beyond that, I can't really bring myself to care about it. It's tempting to be worried about the trend it represents--the triumph of walled-garden content consumption instead of creative computing--but I already sound enough like a cackling lunatic on a regular basis. And at this point, nothing I say or do is going to make much of a difference anyway, right?
But in no small part I'd also rather not get upset because I feel like it would put me in the same class of lunatic as Gizmodo's Joel Johnson, who responds to critics of Apple's device with what appears to be a complete mental breakdown:
The old guard has The Fear. They see the iPad and the excitement it has engendered and realize that they've made themselves inessential--or at least invisible. ... It all just kills me. It literally makes me sick to my stomach.

Uh, yeah, okay there, Sparky. When Cory Doctorow's consumer choices make you physically unwell, it's probably time to step back from the brand identification and stop taking pulls from the crazy juice--although I suspect Johnson is "literally" ill because he's a terrible writer, and not because he's actually nauseated.
Here's a tip for keeping your sanity, kids: maybe the whole argument is a sideshow. Maybe the real trend here is something different. Maybe what we're looking at isn't the Tinkerer's Sunset, but the computing equivalent of the Casio VL-1.
The VL-1, for those who aren't avid readers of electronic music blogs, is an incredibly crappy little keyboard from the 80s. It's two and a half octaves of kitsch, essentially: a cheap, mass-produced synthesizer with a built-in speaker and an LCD that doubled as a calculator. The sound engine was based on the Walsh function, which is a fancy name for what is essentially a square-wave generator. The VL-1 was a toy, although it did find some modest success a few decades later as a sound effect for people who wanted a specific kind of annoying beep.
Casio's little white box didn't just represent the sound of trendy underground electronica bands. It's also a symbol of the time when synthesis--thanks to the wonders of transistors and Turing machines--took two paths. On the one side, the expensive and finicky studio synthesizers with their complicated FM math operators, oscillators, MIDI ports, and modular bays. On the other, throwaway consumer gadgets that are technically synthesizers, but don't offer the kind of patching or playability that a serious instrument does. It's the difference, in other words, between consumer- and professional-grade equipment.
This is largely how I've started to think about the whole walled-garden computing model. It's the VL-1 of information technology. When you look at it that way, why bother to get upset? Its success won't mean the death of open machines, any more than the Casiotone synths killed off high-end synth hardware. Just as with the VL-1, some creative work will be done on closed systems (see: the famed New Yorker covers) because an artist likes the quirkiness or the feel of it, but serious creatives in any field are still going to need--and can probably buy, for less money--a full-fledged computer.
Likewise, predictions that such simple devices are the "future of computing" are self-evidently ridiculous--like saying that the VL-1 ushered in a new musical future via Hallmark's tinny audio greeting cards. Indeed, if you look at the music production landscape today, the state of traditional synthesizers is ridiculously strong, even though they share many of the same "pitfalls" as a desktop OS: complicated user interfaces, intimidating technical specifications, and hackable hardware, to name just a few. The development of pre-programmed, "appliance" synthesis only added to the prestige of modular synths. I think it's not just possible, but probable that the same will be true for the open computing model that's most common today.
Maybe this is just a way of stroking my own ego ("I'm kind of a big deal. I have many leather-bound books and my apartment smells of low frequency oscillators and shell scripting."), but the more I think about it the more it rings true. In the end, standard open systems tend to win out, both because they're desired by pros (or those who aspire to professionalism) and because closed systems are more expensive to design and maintain over time (after all, we wouldn't have netbooks if not for commodity parts). So let's not hysterically overreact in either direction, but especially let's not make claims for a utopian paradigm shift. There's room for both open and closed devices right now, and if the fate of the post-VL-1 synth is any indication, this is nowhere near the death knell of the computer as we know it. Far from it, in fact.
As an example of what Android's doing right, it's hard to top Locale.
When I first started using the ADP1, Locale was one of the programs I tried and uninstalled, thinking that it was nice but overkill for my needs. As time went by, my alternatives for settings automation succumbed to either developer neglect or ridiculous feature creep (nonsense like task killers or banner ads), and had to be removed. So when Two Forty Four AM, Locale's developer team, recently released a for-pay 1.0 version, I gave it another shot, and was pleasantly surprised. During the past year, they've refined it into a polished, sharply-focused utility that's well worth the $10 asking price.
Reviews of Android phones often fault the platform for missing some single application that the reviewer has decided they can't live without--a specific Twitter client, for example, which says a lot about the priorities of tech bloggers compared to normal people. In my opinion, though, Locale really does provide the sort of functionality that ought to be a deal-breaker for other platforms, and it's a must-have for Android users. After all, isn't this sort of automation kind of the point of a "smart" phone?
When I was doing research for the Audiofile articles, one of the surprising discoveries I made was the degree of overlap between audio sampling and other kinds of scientific sensing. In retrospect, it's obvious that the theory behind accurately measuring an audio signal thousands of times a second would be much the same as taking measurements from a temperature, voltage, or orientation sensor. But it takes a moment's thought to make the transition from audio-as-sound to audio-as-voltage (or audio-as-binary-stream), which is the paradigm shift that makes it possible. It's not just a microphone or a speaker--it's an I/O port.
Audio inputs and outputs are everywhere, but they've fallen out of style for digital interconnection, which is why it's so cool when someone uses them in unconventional ways. One of my favorite examples is the re-release of Bangai-O for DS, which lets users trade custom levels through audio files (they sound a bit like picking up the phone on an old modem). That's really clever--particularly since it takes nerve to ignore the console's built-in wireless connection (a good thing, given the way Nintendo has reliably squandered it). Sharing MP3 files over the Internet is cheaper, easier, and longer-lived than any centralized, developer-provided solution could have been. It even degrades gracefully (you could send creations by mail via cassette tape).
Another great use of audio hacking showed up on Make recently, with a point-of-sale system that plugs into a smartphone's microphone jack. You see something like this and you think "well, of course!" It reminds me of the old IR blasters that were available for PalmOS back in the day--plug a stubby plastic rectangle into the headphone jack, load up some .wav files, and suddenly you've got a universal remote control. Both inventions play on the realization that almost every device has a high-quality analog I/O port with a sampling API and a standard pin configuration, as long as you stop thinking of the headphone jack as "for music only."
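The underlying trick--treating the audio jack as a slow modem--is easy to sketch. Here's a purely illustrative two-tone (FSK) encoder and decoder in Python; the frequencies happen to be Bell 202-style, but real schemes like Bangai-O's add framing, checksums, and tolerance for analog noise that this toy ignores:

```python
import math

RATE = 8000          # samples per second
BAUD = 300           # bits per second (slow, but robust)
F0, F1 = 1200, 2200  # one tone frequency per bit value
SPB = RATE // BAUD   # samples per bit

def encode(data: bytes) -> list[float]:
    """Render each bit of `data` (LSB first) as a burst of one of two tones."""
    samples = []
    for byte in data:
        for i in range(8):
            freq = F1 if (byte >> i) & 1 else F0
            for n in range(SPB):
                samples.append(math.sin(2 * math.pi * freq * n / RATE))
    return samples

def decode(samples: list[float]) -> bytes:
    """Correlate each bit-sized chunk against both tones; the louder one wins."""
    bits = []
    for b in range(len(samples) // SPB):
        chunk = samples[b * SPB:(b + 1) * SPB]
        def power(freq):
            # squared magnitude of the correlation with a reference tone
            re = sum(s * math.cos(2 * math.pi * freq * n / RATE)
                     for n, s in enumerate(chunk))
            im = sum(s * math.sin(2 * math.pi * freq * n / RATE)
                     for n, s in enumerate(chunk))
            return re * re + im * im
        bits.append(1 if power(F1) > power(F0) else 0)
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for j in range(8):
            byte |= bits[i + j] << j   # reassemble LSB first
        out.append(byte)
    return bytes(out)
```

A round trip--decode(encode(b"hi"))--recovers the original bytes; pushing the samples through an actual speaker and microphone is where the hard part starts.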
Tricks like these are not only fun for digital audio nerds like me, they're also a reminder that audio is a big part of our software heritage. After all, the original system hackers were audio people: phone phreaks who exploited flaws in network signaling to get free long-distance calls. It's been a long time since those days, and since you had to physically place a handset onto an acoustic coupler to go online (that was before even my time, actually), but audio remains a powerful (and evocative) tool for storing, transmitting, and even hiding information. I can't wait to see what people will invent (or revive) next.
Of late it seems to have become fashionable among tech writers to complain about the "fragmentation" of Android hardware, and to blame this for perceived weaknesses in the platform, or to explain away their preference for something else. With several screen sizes, input methods, and UI customizations available, these writers say, Android developers face the insurmountable obstacle of rewriting their code for each new device running the OS.
Take, for example, Crunchgear's John Biggs, who's entirely typical of the tech pundit conventional wisdom. In a post on the size of Apple's App Store, he claims:
In Android's case you have multiple "branches" of the OS for multiple devices. HTC and Motorola have their own UI tweaks and these branches for programmers to recompile for multiple devices. This, obviously, is a big issue for mom and pop shops run by a few developers and even worse for the 14-year-olds out there building apps in their basements.

Yeah, all that recompiling would be a real hassle--if it were true. It's not, any more than I have to get a special version of every Windows program to work with my specific model of Thinkpad. In fact, the "fragmentation" claim is a myth, and one that's frankly idiotic if you think about it at all. To understand why, let's take a detour into programming metaphor.
As much as anything else, programming is based on metaphors of nested abstractions. Take your browser, for example. The programmers of Firefox don't have to write code that manually changes the electrical current in your network cable or wireless card. Instead, they can take advantage of the layers in the TCP/IP stack and write at a higher level of abstraction--using concepts like packets and ports. Even better, if you're writing a Firefox plugin, you can build on their work at a higher level of abstraction: HTTP page requests. These high-level program calls get translated down through the layers until, eventually, some code somewhere takes care of the actual electrical signaling. But for the programmer, all that is abstracted away. Abstraction is the miracle that makes increasingly complex software development possible.
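To make the layering concrete, here's a toy stack in Python--invented names, nothing like a real TCP/IP implementation--where each layer only ever calls the one beneath it:

```python
# A toy protocol stack: each layer only talks to the layer below it.
# Purely illustrative; real stacks are vastly more involved.

def physical_send(wire: list, byte_stream: bytes) -> None:
    """'Physical' layer: the only code that touches the wire itself."""
    wire.extend(byte_stream)

def transport_send(wire: list, payload: bytes) -> None:
    """Transport layer: frames the payload with a 2-byte length header."""
    frame = len(payload).to_bytes(2, "big") + payload
    physical_send(wire, frame)

def app_send(wire: list, message: str) -> None:
    """Application layer: thinks in messages, not bytes or signals."""
    transport_send(wire, message.encode("utf-8"))

def app_receive(wire: list) -> str:
    """Walk back down the layers: strip the header, decode the text."""
    length = int.from_bytes(bytes(wire[:2]), "big")
    return bytes(wire[2:2 + length]).decode("utf-8")

wire = []  # stand-in for the network cable
app_send(wire, "GET /index.html")
assert app_receive(wire) == "GET /index.html"
```

The application code never touches the "wire"; swap the physical layer for a socket or a serial port and nothing above it has to change. That's the bargain abstraction offers.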
Programming for Android is no different. In fact, abstraction is a major feature of the platform. Almost all Android software is written in Java (which was originally pitched to programmers as a highly-abstracted, cross-platform language) and compiled for a virtual machine instead of directly to ARM code. That's why you can actually run Android applications on Ubuntu without recompilation. Technically, they're hardware-independent by design (the downside being that execution speed--until Google adds just-in-time compilation to the Dalvik VM--is relatively sluggish).
Even when it comes to interacting with hardware like the screen or input devices, Android provides libraries and abstractions to handle this kind of thing. One platform has a trackball while another has a d-pad? To an Android program, they both look like identical directional commands. Different screen sizes? There's no need to recompile: programs just ask for a resolution and adjust to fit, or use the platform's excellent XML-based layout kit and let the OS scale UI elements to fit ("wrap_content" and "fill_parent" attributes, combined with a RelativeLayout, can do impressive things). Again, this isn't a new idea. Hardware abstraction layers and flexible layout tools are a given on every modern OS--you didn't think developers wrote windowing and display code all over again for every video card on the market, did you? Of course not.
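For the curious, this is roughly what the XML layout approach looks like in practice. A sketch only--the view names and text are invented--but the attributes are the standard Android ones of this era:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- One layout, any screen: the OS resolves the sizes at runtime. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <!-- Pins to the bottom edge at its natural size, whatever the resolution. -->
    <Button
        android:id="@+id/send"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="Send" />

    <!-- Stretches to fill whatever space is left above the button. -->
    <TextView
        android:id="@+id/body"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_above="@id/send" />
</RelativeLayout>
```

On a QVGA phone and a WVGA one alike, the same file produces a sensible screen: no per-device builds, no recompilation.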
I'm no big-time developer with anything on the scale of Twidroid or ShopSavvy, but I've had no reports of Underground having problems running on different Android devices. And I'm not aware of any developer who's complaining about having to compile a completely new build in order to run on new Android devices. They may be fixing isolated problems (cameras in particular can be tricky), or adding code to take full advantage of features like the Motorola Droid's high-res screen. But for 99% of applications, and 99% of developers, it doesn't even enter the picture. Given the design of Android's abstraction layers, the fragmentation myth makes no sense at all. There's no evidence for it. It's literally something that the pundit community invented out of thin air.
The irony of this myth, of course, is in the rhetorical strategy behind its deployment. When Biggs and his fellow tech pundits invoke the "fragmentation" of Android, it's usually in contrast to development for the iPhone, which supposedly has only one hardware configuration to target. Nothing could be further from the truth: after all, the iPod Touch (which runs the same software as the iPhones) has no camera or GPS (the original iPhone also lacked GPS). The iPhone 3GS exposes several hardware features missing from the other models, including video recording, a digital compass, MMS, and better 3D hardware. And among the entire range of products connected to Apple's application store, there are at least three variations in CPU speed and architecture.
The iPhone is hardly unified--indeed, it's split by most of the same distinctions bemoaned on Android, and they're solved by developers in exactly the same way: a robust hardware abstraction layer and careful, defensive programming. To claim fragmentation in either case would be absurd. But for the tech punditry to make that cognitive leap, they'd have to know what they were talking about. If the current state of affairs is any indication, there's little chance of that happening.
Update: In a hilarious coda to this, one of Wired's front-page articles this morning is "Android's Rapid Growth Has Some Developers Worried." "Some" developers, apparently, means three people from two companies, one of which explains in comments that he was misquoted and that "I honestly haven't had a lot of trouble with the fragmentation issue on Android, apart from some differences in the way Android 2.0 handles font sizes vs. earlier versions."
Tech writers: hacks, plain and simple.
Last week, Google sent a cease-and-desist letter to CyanogenMod, one of the rebuilt ROM images available for Android phones. Cyanogen was designed to be faster and more stable than the prebuilt images, and incorporated features like SD card partitions and new process schedulers to make that happen. Several of the project's innovations have even made it back into the main Android code. But Cyanogen also distributed closed-source applications like Google Maps in the ROM, so Google has effectively shut it down until the authors can work out a solution.
It's depressing, but Google is legally in the right, and their actions may even have side-benefits for the community (like an uptick in interest for alternative applications). Be that as it may, it's also illustrative of a real advantage that's keeping me on Android for now: the juvenile, terrifying, but ultimately beneficial presence of the XDA Developers board where Cyanogen got its start.
XDA got its start as a modding board for older Windows Mobile phones. The way the ecosystem is set up, it's up to hardware manufacturers to determine whether or not older devices get an upgrade when Microsoft releases a new version of Windows Mobile. Sometimes they do; most of the time they don't, mainly because the upgrade would require a ton of testing for not a lot of profit. XDA sprang up to fill that gap--they dump the ROMs of newer devices, wedge the new software into "cooked" images, and figure out how to get them onto older phones. HTC, the primary manufacturer of Windows Mobile devices (and of the original XDA), takes a benevolent view of all this.
Contrast that with, say, Nokia. I think Nokia is awesome, and their phones are fantastic. If their devices were "future-proofed" in the same way, I'd still be using a Nokia now. But the Finnish company is not nearly so tolerant of the homebrew underground--they'd rather sell you a new phone. So when they come out with a new version of S60 that does something important, like upgrading the web browser core, they don't make it available on older phones. And because Nokia keeps much stricter control of the firmware update process, the community can't do it for them. The result (ironic for a company that's proud of its green image) is that consumers who want new features have to buy all-new phones.
So while it is unfortunate that Android homebrew is suffering a legal setback, it's also a reminder of how much these developers contribute to the ecosystem--perhaps not in a front-facing, consumer-accessible sort of way, but as a push for the platform to remain open, hackable, and sustainable. Most consumers probably don't notice that kind of thing, or even know that it exists, but I think it's ultimately a win for them--if only because it keeps developers, who are huge nerds, excited about the platform. It's certainly held my attention.
The default Android homescreen is a generally pleasant piece of software. It's easy to rearrange, hosts widgets for quick access to information, and has a fun parallax effect when you pan between "desktops." Add a handpainted Legend of Zelda wallpaper, and you've got yourself a classy launcher.
That said, I have two minor issues with it. First, if it's killed by the system (say, because the browser is hogging all the RAM), it can take a few seconds to restart itself. Second, it clears the activity stack when you load it, so the Back button becomes useless for multitasking. The activity stack/Back button combo is one of the best things about Android: together with the pull-down notification bar and the powerful Intent system, it lets users address incoming events without losing their place. Take a typical activity path: you're reading a web page when a text message arrives, you pull down the notification bar to reply, and when you're done, Back returns you straight to the page where you left off. Press Home at any point, though, and that whole trail is discarded.
A better approach would be to augment the Home button as a kind of "quick launch" key, one that doesn't destroy the activity stack. That's what my first Android program, Underground, does. It's not a full Home screen replacement, but a color-coded list of your favorite applications, triggered by the Home button, for fast, effortless multitasking. Think of it as a subway system for your phone.
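The mechanism that makes this possible is Android's intent-filter system. As a sketch (this is an illustrative manifest entry with a made-up activity name, not Underground's actual source), any activity that declares the HOME category becomes a candidate handler for the Home key, which is why Android offers you a chooser:

```xml
<!-- Hypothetical AndroidManifest.xml fragment: any activity declaring
     the HOME category shows up in the chooser when Home is pressed. -->
<activity android:name=".LauncherActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.HOME" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```

The stock Home screen registers exactly the same filter, so the two apps compete on equal footing rather than one hooking or replacing the other.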
I'll be putting it in the Android Market in the next day or so, but in the meantime you can grab the installation package here (don't forget to enable "Unknown sources" in the "Applications" section of your device settings). It's small: less than 50KB when installed. I'm also making the source available, in case anyone feels like tweaking it.
After installation, there won't be an icon in the application drawer. Instead, press the Home key. Android should ask what program you want to use to respond to the event, Home or Underground. Choose Underground to start using it. I also recommend clicking the checkbox to use Underground by default--don't worry, you can still get to the old Home screen, and you can always reset this in the "Manage Applications" section of your device settings.
When you start it for the first time, Underground has three items onscreen: a sample link to the web browser, "add new," and "home." You can choose "home," or press the hardware Home key again, to return to the default Home screen. To put your own favorite applications on the list, choose "add new" and pick one from the list (be patient: it takes a few seconds to find them all the first time). You can also long-press on a list item to change its tag color or to remove it from the list. Most importantly, just tap on an application's name to switch to that activity--Underground will quietly remove itself from the stack, so that the Back button will return you directly to your previous task.
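The "quietly remove itself" trick is simpler than it sounds. Here's a hypothetical sketch of the handoff (method and class names are mine, not Underground's actual code): launch the chosen application, then immediately finish() so the launcher activity drops off the stack and Back skips straight to whatever you were doing before.

```java
import android.app.Activity;
import android.content.Intent;

public class QuickLaunchActivity extends Activity {
    // Illustrative helper: start the selected app, then remove this
    // activity from the stack so Back returns to the previous task.
    private void launchAndStepAside(Intent appLaunchIntent) {
        appLaunchIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        startActivity(appLaunchIntent);
        finish(); // take ourselves out of the Back button's path
    }
}
```

Because the launcher never lingers on the stack, it also never accumulates state the system would have to save and restore, which keeps the whole thing memory-friendly.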
That's all there is! Simple, fast, and effective multitasking in a memory-conscious package, without removing what was already great about Android's Home. I'm not much of a designer, so I've left it pretty simple: bold colors, nice big fonts, and liberal use of the platform's long-press interaction model. But that doesn't mean it couldn't stand some improvement: feel free to leave feedback here in the comments.
Notes on Programming for Android
It's extraordinarily impressive how easy this project was. Granted, it's a Java API, and after a couple of years in dynamic languages it feels like a real regression to go back to strict static typing, no first-class functions, and a million subclasses. And of course, it's not like this was an incredibly ambitious project. But for all that, Google has made working on Android about as simple as it could be. I've particularly grown fond of the XML-based layout tool and the package resource system, which makes it easy to build little chunks of UI (like tagged list cells) and pass them in to the view builders. If only S60 had been this easy!
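To illustrate what I mean about the layout tool and resource system (with hypothetical resource and field names, not Underground's actual ones): a tagged list cell can live in its own XML layout file and be inflated on demand inside an adapter, so the UI chunk and the code that fills it in stay cleanly separated.

```java
// Hypothetical adapter method: inflate a reusable list-cell layout
// (res/layout/list_cell.xml) and recolor its tag swatch per item.
@Override
public View getView(int position, View convertView, ViewGroup parent) {
    View cell = convertView;
    if (cell == null) {
        cell = LayoutInflater.from(getContext())
                .inflate(R.layout.list_cell, parent, false);
    }
    AppEntry entry = getItem(position);
    ((TextView) cell.findViewById(R.id.app_name)).setText(entry.name);
    cell.findViewById(R.id.tag_swatch).setBackgroundColor(entry.tagColor);
    return cell;
}
```

Recycling convertView instead of inflating a fresh cell every time is the standard courtesy to the platform--list scrolling stays smooth even on modest hardware.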
And of course, it says a great deal about the openness of the platform they've built that I can literally take over from the preinstalled program manager (and that I was able to refer to its source code when I got stuck on a few tricky bits). All in all, it was a great experience, and I'm looking forward to working on a few more Android projects when I get the chance. Suggestions?