Over the last six months, I've consistently given three pieces of advice to my students at SCCC: get comfortable with the Firebug debugger, contribute to open source, and learn to use Linux. The first is because all developers should learn how to debug properly. The second, because open source is a great way to start building a resume. And the last, in part, because Linux is what powers a large chunk of the web--not to mention a dizzying array of consumer devices. Someone who knows how to use an SSH shell and a few basic commands (including one of the stock editors, like vi or emacs) is never without tools--it may not be comfortable, but they can work anywhere, like MacGyver. It all comes back to MacGyver, eventually.
At Big Fish, in an effort to take this to its logical extreme, I've been working in Linux full-time (previously, I've used it either on the server or through a short-lived VM). It's been an interesting trip, especially after years of using mostly Windows desktops (with a smattering of Mac here and there). Using Linux exclusively for development plays to its strengths, which helps: no fighting with Wine to run games or audio software. Overall, I like it--but I'll also admit to being a little underwhelmed.
To get the bad parts out of the way: there are something like seven different install locations for programs, apparently chosen at random; making changes in the graphical configuration still involves arcane shell tricks, all of which will be undone in hilariously awful ways when you upgrade the OS; and Canonical seems intent on removing all the things that made Ubuntu familiar, like "menus" and "settings." I ended up switching to the XFCE window manager, which still makes me angry because A) I don't want to know anything about window managers, and B) it's still impossible to theme a Linux installation so that everything looks reasonably good. Want to consistently change the color of the window decorations for all of your programs? Good luck with that. XFCE is usable, and that's about all you can honestly say for it.
The best part of Linux by far is having a native web stack right out of the box, combined with a decent package manager for anything extra you might need. Installing scripting languages has always been a little bit of a hassle on Windows: even if the base package is easily installed, invariably you run into some essential library that's not made for the platform. Because these languages are first-class citizens on Linux, and because they're just an apt-get away, it opens up a whole new world of utility scripts and web tools.
I love combining a local server with a set of rich command-line utilities. Finally, I can easily use tools like the RequireJS optimizer, or throw together scripts to solve problems in my source, without having to switch between contexts. I can use all of my standard visual tools, like Eclipse or Sublime Text, without going through a download/upload cycle or figuring out how to fool them into working over SFTP. Native source control is another big deal: I've never bothered installing git on Windows, but on Linux it's almost too easy.
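To give a sense of the kind of throwaway script I mean, here's a trivial sketch--the directory layout and the "stray debugging calls" pattern are invented for illustration, not taken from any real project:

```shell
# Hypothetical example: find source files that still contain stray
# debugging calls before a release. We fabricate a couple of files
# just so the search has something to chew on.
mkdir -p src
printf 'console.log("debug");\n' > src/app.js
printf 'var x = 1;\n' > src/lib.js

# -r recurses, -l prints only the names of matching files.
grep -rl 'console\.log' src
# prints: src/app.js
```

Nothing fancy--but on Windows, even this level of ad-hoc tooling used to mean installing Cygwin or writing a batch file first.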
So there is one axis along which the development experience is markedly superior. It's not that Linux is better built (it has its fair share of potholes) so much as it's where people currently go to build neat tools, and then if we're lucky they bring them over to Windows. Microsoft is trying to fix this (see: their efforts to make NodeJS a first-class Windows platform), but it'll probably always be an uphill battle. The open-source developer culture just isn't there.
On the other hand, I was surprised by the cases where web development is actually worse on Linux than on Windows. I haven't found a visual FTP client that's anywhere near as good as WinSCP. The file managers are definitely clumsier than Explorer. Application launching, of all things, can be byzantine--there's no option to create a shortcut to a program; you have to manually assemble a .desktop file instead, and then XFCE will invariably position its window someplace utterly unhelpful. Don't even get me started on the configuration mess: say what you like about the registry, at least it's centralized.
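For the record, the .desktop dance looks something like this--a minimal sketch using Sublime Text as the example, where the Exec path is an assumption you'd adjust for your own install:

```shell
# Manually assembling a launcher, since there's no "create shortcut" option.
# The Exec path below is an assumption -- point it at wherever you
# actually unpacked the program.
APPDIR="${XDG_DATA_HOME:-$HOME/.local/share}/applications"
mkdir -p "$APPDIR"
cat > "$APPDIR/sublime-text.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Sublime Text
Exec=/opt/sublime_text/sublime_text %F
Terminal=false
Categories=Development;TextEditor;
EOF
```

Compare that to right-click, "Create shortcut."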
None of these things are dealbreakers, the same way that it's not a dealbreaker for me to need GOW for a decent Windows command line. But if I was ever considering dual-booting or switching to Linux as a work environment at home, instead of just keeping a headless VM around for when I need Ruby, I've given that idea up now. When all is said and done, I spend much of my time in either Eclipse or Firefox anyway, and they're the same no matter where you run them. I still believe strongly that developers should learn a little Linux--it's everywhere these days!--but you can be perfectly productive without living there full time. Ultimately, it's not how you build something, but what you build that matters.
If you do decide to give it a chance, here are a few tips that have made my life easier:
It's been almost two years now since I picked up an Android phone for the first time, during which time it has gone from a generally unloved, nerdy thing to the soon-to-be dominant smartphone platform. This is a remarkable and sudden development--when people start fretting about the state of Android as an OS (fragmentation, competing app stores, etc.), they tend to forget that it is still rapidly mutating and absorbing the most successful parts of a pretty woolly ecosystem. To have kept a high level of stability and compatibility, while adding features and going through major versions so quickly, is no small feat.
Even back in v1.0, there were obvious clever touches in Android--the notification bar, for instance, or the permission system. And now that I'm more used to them, the architectural decisions in the OS seem like "of course" kind of ideas. But when it first came out, a lot of the basic patterns Google used to build Android appeared genuinely bizarre to me. It has taken a few years to prove just how foresighted (or possibly just lucky) they actually were.
Take, for example, the back button. That's a weird concept at the OS level--sure, your browser has one, as does the post-XP Explorer, but it's only used inside each program on the desktop, not to move between them. No previous mobile platform, from PalmOS to Windows Mobile to the iPhone, used a back button as part of the dominant navigation paradigm. It seemed like a case of Google, being a web company, wanting everything to resemble the web for no good reason.
And yet it turns out that being able to navigate "back" is a really good match for mobile, and it probably is important enough to make it a top-level concept. Android takes the UNIX idea of small utilities chained together, and applies it to small-screen interaction. So it's easy to link from your Twitter feed to a web page to a map to your e-mail, and then jump partway back up the chain to continue from there (this is not a crazy usage pattern even before notifications get involved--imagine discovering a new restaurant from a friend, and then sending a lunch invitation before returning to Twitter). Without the back button, you'd have to go all the way back to the homescreen and the application list, losing track of where you had been in the process.
The process of composing this kind of "attention chain" is made possible by another one of Android's most underrated features: Intents. These are just ways of calling between one application and another, but with the advantage that the caller doesn't have to know what the callee is--Android applications register to handle certain MIME types or URIs on installation, and then they instantly become available to handle those actions. Far from being sandboxed, it's possible to pass all kinds of data around between different applications--or individual parts of an application. In a lot of ways, they resemble HTTP requests as much as anything else.
So, for example, if you take a picture and want to share it with your friends, pressing the "share" button in the Camera application will bring up a list of all installed programs that can share photos, even if they didn't exist when Camera was first written. Even better, Intents provide an extensible mechanism allowing applications to borrow functionality from other programs--if they want to get an image from the camera, instead of duplicating the capture code, they can toss out the corresponding Intent, and any camera application can respond, including user replacements for the stock Camera. This is smart enough that other platforms have adopted something similar--Windows Phone 7 will soon gain URIs for deep linking between applications, and the iPhone has the clumsy, unofficial x-callback-url protocol--but Android still does this better than any other platform I've seen.
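For the curious, the registration half of this is just a few lines of manifest configuration--a sketch of what a hypothetical sharing app would declare so that it shows up in that "share" list (the activity name here is made up):

```xml
<!-- In the app's AndroidManifest.xml: declare that this (hypothetical)
     activity can handle the standard "send an image" action. -->
<activity android:name=".ShareActivity">
  <intent-filter>
    <action android:name="android.intent.action.SEND" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="image/*" />
  </intent-filter>
</activity>
```

Once installed, the app appears in every other program's share menu, no coordination required--which is exactly the HTTP-like looseness I mean.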
Finally, perhaps the choice that seemed oddest to me when Google announced Android was the Dalvik virtual machine. VMs are, after all, slow. Why saddle a mobile CPU with the extra burden of interpreting bytecode instead of using native applications? And indeed, the initial versions of Android were relatively sluggish. But two things changed: chips got much faster, and Google added just-in-time compilation in Android 2.2, turning the interpreted code into native binaries at runtime. Meanwhile, because Dalvik provides a platform independent from hardware, Android has been able to spread to all kinds of devices on different processor architectures, from ARM variants to Tegra to x86, and third-party developers never need to recompile.
(Speaking of VMs, Android's promise--and eventual delivery--of Flash on mobile has been mocked roundly. But when I wanted to show a friend footage of Juste Debout the other week, I'd have been out of luck without it. If I want to test my CQ interactives from home, it's incredibly handy. And of course, there are the ever-present restaurant websites. 99% of the time, I have Flash turned off--but when I need it, it's there, and it works surprisingly well. Anecdotal, I know, but there it is. I'd rather have the option than be completely helpless.)
Why are these unique features of Android's design interesting? Simple: they're the result of lessons successfully being adopted from web interaction models, not the other way around. That's a real shift from the conventional wisdom, which has been (and certainly I've always thought) that the kind of user interface and application design found on even the best web applications would never be as clean or intuitive as their native counterparts. For many things, that may still be true. But clearly there are some ideas that the web got right, even if entirely by chance: a stack-based navigation model, hardware-independent program representation, and a simple method of communicating between stateless "pages" of functionality. It figures that if anyone would recognize these lessons, Google would. Over the next few years, it'll be interesting to see if these and other web-inspired technologies make their way to mainstream operating systems as well.
I don't get it.
So Google is introducing a Linux operating system with a new windowing layer based on its Webkit browser, Chrome. It's the thin client reinvented for the HTTP age. I guess from Google's perspective this makes sense: the more they can do to convince people that the Web is a valid application platform, the lower the resistance to their browser-based services, particularly in the lucrative enterprise market. If nothing else, it works as a proof of concept.
That much I get. What I don't understand is: who is this for on the user end? It's not a substitute for a full-sized OS stack--the browser is still, even with HTML 5, incapable of supplanting many basic applications, like listening to MP3s or doing serious document editing. It's certainly not for people who really leverage their CPU power, like gamers or media producers. Perhaps some day it will be--but I remember this idea from when it was the JavaPC and the Audrey, and somehow it has never had quite the mainstream appeal that its developers assumed it would. Ultimately, thin clients seem to leave even the most undemanding users wanting, for good reason.
Is it for netbooks? Most people seem to be following that line. But then, netbook manufacturers have been offering stripped-down Linux installations on these machines since their introduction, usually with a decent browser (Firefox) included. Certainly both Canonical and Microsoft see netbooks as target platforms--there are Netbook Remix editions of Ubuntu, and a major selling point of Windows 7 is its ability to run on low-end hardware. Either way, if I install ChromeOS, I just get a browser. If I install Linux or Windows, I get a browser that's equally good (if not identical), plus all the native applications my poor eyes can stand. Why would anyone, much less a hypothetical Windows user, switch to the former instead of the latter?
Does anyone think Google's going to do a better job than Ubuntu has done in the Linux space anyway? Or that Canonical will sit by and let it happen? I'm not a real Linux fan, and even I will admit that they've developed it into a usable desktop alternative. Ubuntu is a competitor, no doubt, and I simply don't see what Google can offer that they don't already have--or won't develop in response. And make no mistake: this is as much a threat to traditional Linux as it is to Windows, or Google would have simply worked with one of the existing distributions.
I lack most of the obsessive-compulsive tendencies that seem to be common to the tech community, luckily. I don't have any of the weird superstitions chronicled in Rands' Quirkbook post. I think my habits are relatively restrained and normal, and you should ignore any claims Belle makes to the contrary. As such, the order of windows on the taskbar, and the fact that those windows can't be manually reordered for whatever reason, doesn't particularly bother me. I seem to remember once reading a good reason why taskbar buttons can't be reordered, perhaps by Raymond Chen, but I can't find it now.
In any case, if it bothers you, Taskbar Shuffle inoffensively enables drag-and-drop reordering for taskbar items. There: now you've got five minutes of your day back from closing/reopening programs to create the correct order, you psychopath.
I downloaded four or five Linux CDs and/or VM images this weekend, since I needed to use RealPlayer without installing it on my Thinkpad. Turns out that Real's software is also terrible on *nix. I know, I was surprised too.
Every time that I download a Linux installer for my old laptop, it's a herculean task to get it working right. Some of this may be due to the age of the laptop that gets used for these kinds of disposable projects--it's almost ten years old now, and I'm sure it has issues. But it's not like I've ever had any problems getting it to read a Windows install CD during those ten years, despite reinstalling the OS three or four times for various reasons. Advocates sometimes ask what Linux would have to do to impress me--"create a reliable boot disk" does not seem to be too much to ask.
What always cracks me up about this process is the information that the community does provide for reliability--the MD5 checksum, which validates the download file itself. Now, perhaps I simply live in a sheltered world of unmangled connections, but I honestly cannot remember the last time that the file I downloaded was corrupted or different from what I expected to get--this was, I was under the impression, the entire point of the HTTP protocol. Sure enough, when I bothered to check, the disks that didn't work were burned from an ISO with a valid checksum--and moreover, they eventually did work, sometimes with the same CD that had failed not an hour before.
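The ritual itself, for anyone who hasn't had the pleasure--sketched here with a tiny stand-in file, since I'm obviously not going to checksum a real ISO in a blog post:

```shell
# Verifying a download against its published MD5 sum.
# "ubuntu.iso" here is a stand-in file, not a real disc image;
# normally MD5SUMS comes from the download mirror, not from you.
echo "not actually an ISO" > ubuntu.iso
md5sum ubuntu.iso > MD5SUMS
md5sum -c MD5SUMS
# prints: ubuntu.iso: OK
```

Which tells you, with great precision, that the file you downloaded is the file they offered--and nothing whatsoever about whether the burned disc will boot.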
Look, don't get me wrong--Ubuntu's come a long way even just since the last time I tried it, and clearly these things are working for someone. But the MD5 thing is still pretty funny. It's like someone shipping you a package and then insisting that what's inside will work because they've got a receipt from UPS.
Me, I've given up on the whole live CD/spare laptop thing, and I'm going to go with virtual machines instead. It's a lot tidier, a lot less frustrating, and with machines as powerful as they are nowadays, actually faster than trying to load even the most frugal modern OS onto my spare laptop.
Forget the right mouse button: where the ^@#$# is all the keyboard navigation?