
May 9, 2012

Filed under: tech»os

Apropos of Nothing

Over the last six months, I've consistently given three pieces of advice to my students at SCCC: get comfortable with the Firebug debugger, contribute to open source, and learn to use Linux. The first is because all developers should learn how to debug properly. The second, because open source is a great way to start building a resume. And the last, in part, because Linux is what powers a large chunk of the web--not to mention a dizzying array of consumer devices. Someone who knows how to use an SSH session and a few basic commands (including one of the stock editors, like vi or emacs) is never without tools--it may not be comfortable, but they can work anywhere, like MacGyver. It all comes back to MacGyver, eventually.

At Big Fish, in an effort to take this to its logical extreme, I've been working in Linux full-time (previously, I've used it either on the server or through a short-lived VM). It's been an interesting trip, especially after years of using mostly Windows desktops (with a smattering of Mac here and there). Using Linux exclusively for development plays to its strengths, which helps: no fighting with Wine to run games or audio software. Overall, I like it--but I'll also admit to being a little underwhelmed.

To get the bad parts out of the way: there are something like seven different install locations for programs, apparently chosen at random; making changes in the graphical configuration still involves arcane shell tricks, all of which will be undone in hilariously awful ways when you upgrade the OS; and Canonical seems intent on removing all the things that made Ubuntu familiar, like "menus" and "settings." I ended up switching to the XFCE window manager, which still makes me angry because A) I don't want to know anything about window managers, and B) it's still impossible to theme a Linux installation so that everything looks reasonably good. Want to consistently change the color of the window decorations for all of your programs? Good luck with that. XFCE is usable, and that's about all you can honestly say for it.

The best part of Linux by far is having a native web stack right out of the box, combined with a decent package manager for anything extra you might need. Installing scripting languages has always been a little bit of a hassle on Windows: even if the base package is easily installed, invariably you run into some essential library that's not made for the platform. Because these languages are first-class citizens on Linux, and because they're just an apt-get away, it opens up a whole new world of utility scripts and web tools.
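For instance, on a stock Ubuntu box, getting a usable Ruby or Node setup is a couple of commands (exact package names vary from release to release, so treat these as a rough sketch rather than gospel):

    # Ruby and its package manager
    sudo apt-get install ruby rubygems
    # Node.js and npm (recent releases; older ones need a PPA or a source build)
    sudo apt-get install nodejs npm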

I love combining a local server with a set of rich command-line utilities. Finally, I can easily use tools like the RequireJS optimizer, or throw together scripts to solve problems in my source, without having to switch contexts. I can use all of my standard visual tools, like Eclipse or Sublime Text, without going through a download/upload cycle or figuring out how to fool them into working over SFTP. Native source control is another big deal: I've never bothered installing git on Windows, but on Linux it's almost too easy.
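To give a flavor of it--the build profile name here is made up--optimizing a RequireJS project and putting it under version control both happen from the same prompt:

    # run the RequireJS optimizer against a (hypothetical) build profile
    node r.js -o app.build.js
    # start tracking the whole project in git
    git init
    git add .
    git commit -m "initial import"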

So there is one axis along which the development experience is markedly superior. It's not that Linux is better built (it has its fair share of potholes) so much as it's where people currently go to build neat tools, and then if we're lucky they bring them over to Windows. Microsoft is trying to fix this (see: their efforts to make Node.js a first-class Windows platform), but it'll probably always be an uphill battle. The open-source developer culture just isn't there.

On the other hand, I was surprised by the cases where web development is actually worse on Linux than on Windows. I haven't found a visual FTP client anywhere near as good as WinSCP. The file managers are definitely clumsier than Explorer. Application launching, of all things, can be byzantine--there's no option to create a shortcut to a program; you have to manually assemble a .desktop file instead, and then XFCE will invariably position its window someplace utterly unhelpful. Don't even get me started on the configuration mess: say what you like about the registry, at least it's centralized.
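For the record, a launcher is just a small text file dropped into ~/.local/share/applications (or onto the desktop). Something like the following is the general shape, with the paths depending entirely on where you unpacked the program:

    [Desktop Entry]
    Type=Application
    Name=Sublime Text
    Exec=/opt/sublime_text/sublime_text %F
    Icon=/opt/sublime_text/Icon/128x128/sublime_text.png
    Categories=Development;TextEditor;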

None of these things are dealbreakers, the same way that it's not a dealbreaker for me to need GOW for a decent Windows command line. But where I was once considering dual-booting or switching to Linux as a work environment, instead of just keeping a headless VM around for when I need Ruby, I've given that up now. When all is said and done, I spend much of my time in either Eclipse or Firefox anyway, and they're the same no matter where you run them. I still believe strongly that developers should learn a little Linux--it's everywhere these days!--but you can be perfectly productive without living there full time. Ultimately, it's not how you build something, but what you build that matters.

If you do decide to give it a chance, here are a few tips that have made my life easier:

  • Install a distribution to a virtual machine using VirtualBox instead of trying to use a live CD or USB stick. It's much easier to find your way around if you can still run a web browser alongside your new OS.
  • The title of this post is a reference to the apropos command, which will search through all the commands installed on your system and give you a matching list. This is incredibly helpful when you're sitting at a prompt with no idea what to type. Combined with man for reading the manual pages, you can often muddle through pretty well (there's a short example after this list).
  • By default, the config editor for a lot of systems is vi. Lots of people like vi, I think it's insane, and I guarantee that you will have no idea what to do the first time it opens up unexpectedly. Eventually you'll want to know the basics, but for right now you're much better off setting nano as the default editor. Type "export EDITOR=nano" to set that up for the current session; the snippet below shows how to make it stick.
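To make those last two tips concrete (the command names are just examples):

    # find installed commands related to a topic, then read the manual for one
    apropos compress
    man gzip
    # use nano instead of vi for the current session...
    export EDITOR=nano
    # ...and for every future session (assuming your shell is bash)
    echo 'export EDITOR=nano' >> ~/.bashrc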

April 20, 2011

Filed under: tech»os

Paradigm GET

It's been almost two years now since I picked up an Android phone for the first time, during which time it has gone from a generally unloved, nerdy thing to the soon-to-be dominant smartphone platform. This is a remarkable and sudden development--when people start fretting about the state of Android as an OS (fragmentation, competing app stores, etc.), they tend to forget that it is still rapidly mutating and absorbing the most successful parts of a pretty woolly ecosystem. To have kept a high level of stability and compatibility, while adding features and going through major versions so quickly, is no small feat.

Even back in v1.0, there were obvious clever touches in Android--the notification bar, for instance, or the permission system. And now that I'm more used to them, the architectural decisions in the OS seem like "of course" kind of ideas. But when it first came out, a lot of the basic patterns Google used to build Android appeared genuinely bizarre to me. It has taken a few years to prove just how foresighted (or possibly just lucky) they actually were.

Take, for example, the back button. That's a weird concept at the OS level--sure, your browser has one, as does the post-XP Explorer, but it's only used inside each program on the desktop, not to move between them. No previous mobile platform, from PalmOS to Windows Mobile to the iPhone, used a back button as part of the dominant navigation paradigm. It seemed like a case of Google, being a web company, wanting everything to resemble the web for no good reason.

And yet it turns out that being able to navigate "back" is a really good match for mobile, and it probably is important enough to make it a top-level concept. Android takes the UNIX idea of small utilities chained together, and applies it to small-screen interaction. So it's easy to link from your Twitter feed to a web page to a map to your e-mail, and then jump partway back up the chain to continue from there (this is not a crazy usage pattern even before notifications get involved--imagine discovering a new restaurant from a friend, and then sending a lunch invitation before returning to Twitter). Without the back button, you'd have to go all the way back to the homescreen and the application list, losing track of where you had been in the process.

The process of composing this kind of "attention chain" is made possible by another one of Android's most underrated features: Intents. These are just ways for one application to call another, but with the advantage that the caller doesn't have to know what the callee is--Android applications register to handle certain MIME types or URIs on installation, and then they instantly become available to handle those actions. Far from being sandboxed off from each other, applications can pass all kinds of data back and forth--or hand it between individual parts of a single application. In a lot of ways, Intents resemble HTTP requests as much as anything else.

So, for example, if you take a picture and want to share it with your friends, pressing the "share" button in the Camera application will bring up a list of all installed programs that can share photos, even if they didn't exist when Camera was first written. Even better, Intents provide an extensible mechanism allowing applications to borrow functionality from other programs--if they want to get an image via the camera, instead of duplicating the capture code, they can toss out the corresponding Intent, and any camera application can respond, including user replacements for the stock Camera. This is smart enough that other platforms have adopted something similar--Windows Phone 7 will soon gain URIs for deep linking between applications, and the iPhone has the clumsy, unofficial x-callback-url protocol--but Android still does this better than any other platform I've seen.
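You can get a feel for how resolution works without writing any application code: using the SDK's adb tool, firing an implicit Intent from a shell is one line, and whatever installed program has registered for that action and data gets to answer (the coordinates and URL below are just placeholders):

    # ask the system to VIEW a geo: URI; any installed maps app can respond
    adb shell am start -a android.intent.action.VIEW -d "geo:47.60,-122.33"
    # same mechanism for a web URL--the browser, or anything else registered for http
    adb shell am start -a android.intent.action.VIEW -d "http://example.com/"

Inside an application it's the same idea, just spelled startActivity(new Intent(...)).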

Finally, perhaps the choice that seemed oddest to me when Google announced Android was the Dalvik virtual machine. VMs are, after all, slow. Why saddle a mobile CPU with the extra burden of interpreting bytecode instead of using native applications? And indeed, the initial versions of Android were relatively sluggish. But two things changed: chips got much faster, and Google added just-in-time compilation in Android 2.2, turning interpreted code into native code at runtime. Meanwhile, because Dalvik provides a hardware-independent platform, Android has been able to spread to all kinds of devices on different processor architectures, from ARM variants to Tegra to x86, and third-party developers never need to recompile.

(Speaking of VMs, Android's promise--and eventual delivery--of Flash on mobile has been mocked roundly. But when I wanted to show a friend footage of Juste Debout the other week, I'd have been out of luck without it. If I want to test my CQ interactives from home, it's incredibly handy. And of course, there are the ever-present restaurant websites. 99% of the time, I have Flash turned off--but when I need it, it's there, and it works surprisingly well. Anecdotal, I know, but there it is. I'd rather have the option than be completely helpless.)

Why are these unique features of Android's design interesting? Simple: they're the result of lessons successfully being adopted from web interaction models, not the other way around. That's a real shift from the conventional wisdom, which has been (and certainly I've always thought) that the kind of user interface and application design found on even the best web applications would never be as clean or intuitive as their native counterparts. For many things, that may still be true. But clearly there are some ideas that the web got right, even if entirely by chance: a stack-based navigation model, hardware-independent program representation, and a simple method of communicating between stateless "pages" of functionality. It figures that if anyone would recognize these lessons, Google would. Over the next few years, it'll be interesting to see if these and other web-inspired technologies make their way to mainstream operating systems as well.
