
February 6, 2013

Filed under: tech»web

Unsavory

Every year at the Super Bowl, for many years now, it's traditional for GoDaddy to remind everyone that they're a horrible company run by a creepy, elephant-hunting misogynist. This year was no different. The good news is that I was working on the Soul Society website on Sunday, so I didn't actually see any of their ads. The bad news is that Soul Society is hosted on GoDaddy (cue ironic record scratch).

The thing about GoDaddy is that they are fractally gross: everything about them gets more distasteful the more you dig into it. There is no part of their operation that does not make you want to take a shower after interacting with them--neither the advertising, nor the sales experience, nor the admin panels, and certainly not the actual hosting.

It should be enough that the company was run, for years, by a horrible, horrible person who kills elephants for sport, supports torturing Guantanamo Bay detainees, and is a relentless self-promoter. You should look no further than its incredibly sexist advertising, which manages to be both repulsive and badly produced. The fact that they originally came out in favor of SOPA just rounds out the list of offensive behavior.

But if, despite all those reasons, you go to sign up for an account (as many people, including many of my students, end up doing), chances are that you'll end up overpaying due to an intentionally confusing sales process. The upsell doesn't stop at the first purchase, either. Every time I interact with the site, I'm forced to wade through a morass of confusing ads and sale links masquerading as admin panels. Everything on GoDaddy leads to a shopping cart.

GoDaddy also parcels up its crappy service into smaller pieces, so they can force you to pay more for stuff that you should get for free. As an example, I have an urbanartistry.org e-mail address for when we need a webmaster link on the site. For a while, it was a separate mailbox, which meant that I never checked it. Then I missed a bunch of e-mails from other UA directors, and decided to redirect the e-mail address to my personal account. On most mail providers, this is a free service. On GoDaddy, you can set up a forward, but an actual alias costs an additional fee (for all the disk space it... doesn't use?). Which means, technically, that my mail is piling up on their servers, and at some point they'll probably figure out some new reason to screw it up.

And let's not pretend the hosting you get after all this hassle is any good. The server is a slow, underpowered shared account somewhere, which means you don't get your own database (have fun sharing a remote MySQL instance with a bunch of other people, suckers!), and you can't run any decent versioning or deployment software. The Apache instance is badly configured (rewrite rules are overridden by their obnoxious 404, among other things). Bandwidth is limited--I have never seen slower transfers than on GoDaddy, and my SFTP connection often drops when updating the site. It's a lot of fun debugging a WordPress theme (already not the speediest of pages) when your updates get stuck in a background window.

I don't write a lot of posts like this, because I've got better things to do with my time these days than complain about poor service somewhere. There are a lot of repulsive companies out there, and while I believe in shame-and-blame actions, there are only so many hours in the day. I'm trying to have a positive outlook. But it is rare that you find something so awful you can't think of a single redeeming quality, and GoDaddy is that company. If you're in the market for any kind of web service, and you haven't already been convinced to go elsewhere, let me add my voice to the chorus. Lifehacker's post on moving away from the company is also a great reference for people who are already customers. I'm probably stuck with them, because Urban Artistry has more important things to worry about than their hosting, but you don't have to be.

January 16, 2013

Filed under: tech»education

Git Busy

There's an ongoing discussion I'm having with the other instructors at Seattle Central Community College on how to improve our web development program. I come from a slightly different perspective than many of them, having worked in large organizations my whole career (most of the other instructors are freelancers). So there are places where we agree the program can do better (updating its HTML/CSS standards, reordering the PHP classes) and places where we differ (I'm strongly in favor of merging the design and development tracks). But there's one change that I increasingly think would make the biggest difference to any budding web developer: learning source control.

Source control management (SCM) is important in and of itself. I use it all the time for projects that I'm personally working on, so that I can track my own changes and work across a number of different machines. It's not impossible to be hired without SCM skills, but it is a mark against a potential employee, because no decent development team works without it. And using some kind of version control system is essential to participating in the modern open-source community, which is the number one way I advise students to get concrete experience for their resumes.

But more importantly, tools like Git and Subversion are gateways to the wider developer ecosystem. If you can clone a repo in Git, then you now have a tool for deployments, and you stop developing everything over FTP (local servers become much more convenient). Commits can be checked for validity before they go into the repo, so now you may be looking at automated code parsers and linting tools. All of these are probably going to lead you to other trendy utilities, like preprocessors and live reload. There's a whole world of people working on the web developer experience, creating workflows that didn't exist as recently as two or three years ago, and source control serves as a good introduction to it.
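That commit check doesn't have to be anything fancy, either. Here's a minimal sketch of a pre-commit hook written in Node (the "npm run lint" script name is an assumption--substitute whatever linter the project actually uses), saved as .git/hooks/pre-commit and made executable:

    #!/usr/bin/env node
    // run the project's lint step before every commit
    var exec = require("child_process").exec;

    exec("npm run lint", function(err, stdout, stderr) {
      if (err) {
        console.error(stdout + stderr);
        console.error("Lint failed -- commit aborted.");
        // a nonzero exit code makes Git reject the commit
        process.exit(1);
      }
    });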

The objection I often hear is that instructors don't have time to keep up with everything across the entire web development field. Whether or not that's a valid complaint (and I feel strongly that it isn't), it's just not that hard to get started with version control these days. On Windows, Git's installer includes a small Bash shell with a repo-aware prompt. GitHub's client for Windows does not handle the advanced features, but it covers the 90% use case very well with a friendly UI. The Tortoise family of Windows shell extensions covers Git, SVN, and Mercurial. Learning to create a repo and commit files has never been easier--everything after that is gravy.

Last quarter, I recommended that teams coordinate their WordPress themes via GitHub, and gave a quick lecture on how to use it. The few teams that took me up on it considered it a good experience, and I had several others tell me they wish they'd used it (instead of manually versioning their work over Dropbox). This quarter, I'm accepting homework via repo instead of portal sites, if students want--it'll make grading much easier, if nothing else. But these are stopgap, rearguard measures.

What SCCC needs, I think, is a class just on tooling, covering source control, preprocessing, and scripting. A class like this would serve as a stealth introduction to real-world developer workflows, from start (IDEs and test-driven development) to finish (deployment and build scripts). And it would be taught early enough (preferably concurrent with or right after HTML/CSS) that any of the programming courses could take it as a given. New developers would see a real boost in their value, and I honestly think that many experienced developers might also find it beneficial. Plus it'd be great training for me--I'm always looking for a good reason to dig deeper into my tools. Now I just have to convince the administration to give it a shot.

January 10, 2013

Filed under: tech»mobile

Four

I used a Nexus One as my smartphone for almost three years, pretty much since it was released in 2010. That's a pretty good testimonial. The N1 wasn't perfect--it was arguably underpowered even at release, and held back for upgrades by the pokey video chip and small memory--but it was good enough. When Google announced the Nexus 4, it was the first time I really felt like it was worth upgrading, and I've been using one for the last couple of months.

One big change, pardon the pun, is just the size of the thing: although it's thinner, the N4 is almost half an inch wider and taller than my old phone (the screen is a full diagonal inch larger). The N1 had a pleasant density; between the size and the glass backing, the N4 feels less secure in your hand, and at first it doesn't seem like you're getting much for the extra size. Then I went back to use the N1 for something, and the virtual keyboard keys looked as small as kitten teeth. I'm (tentatively) a fan now. Battery life is also better than the N1, although I had to turn wifi on to stop Locale from keeping the phone awake constantly.

I think it's a shame they ditched the trackball after the Nexus One, too. Every time I need to move the cursor just a little bit, pick a small link on a non-mobile web page, or play a game that uses software buttons, I really miss that trackball. Reviewers made fun of it, but it was regularly useful (and with Cyanogen, it doubled as a second power button).

The more significant shift, honestly, was probably going from Android 2.3 to 4.2. For the most part, it's better where Android was already good: notifications are richer, switching tasks is more convenient, and most of the built-in applications are less awful (the POP e-mail client is still a disaster). Being able to run Chrome is usually very nice. Maps in particular really benefits from a more powerful GPU. Running old Android apps can be a little clunky, but I mostly notice that in K-9 Mail (which was not a UX home run to begin with). The only software feature that I do really miss is real USB mass storage--you can still get to internal storage, but it mounts as a multimedia device instead of a disk, which means that you can't reliably run computer applications from the phone drive.

There is always a lot of hullaballoo online around Android upgrades, since many phones don't get them. But my experience has been that most of it doesn't really matter. Most of my usage falls into a few simple categories, none of which were held back by Android 2.3:

  • Reading social networks
  • Browsing web sites
  • Checking e-mail
  • Retrieving passwords or two-factor auth keys
  • Occasionally editing a file with DroidEdit
I'm notoriously frugal with the apps I install, but even so I think the upgrade problem for end-users is overrated. 4.0 is nice, and I'm happy to have it, but putting a quad-core chip behind 2.3 would have done most of what I need. Honestly, a quality browser means even the built-in apps are often redundant. Google Maps in Chrome for Android is a surprisingly pleasant experience. Instagram finally discovered the web last year. Google Calendar has been better on the web than on Android for most of its existence. You couldn't pay me enough to install the Facebook app anymore.

Compared to its competitors, Android has always been designed to be standalone. It doesn't rely on a desktop program like iTunes to synchronize files, and it doesn't really live in a strong ecosystem the way that Windows Phone does--you don't actually need a Google Account to use one. It's the only mainstream mobile platform where installing applications from third-party sources is both allowed and relatively easy, and where files and data can transfer easily between applications in a workflow. Between the bigger phone size (or tablets) and support for keyboards/mice, there's the possibility that you could do real work on a Nexus 4, for certain definitions of "real work." I think it would still drive me crazy to use it full-time. But it's gradually becoming a viable platform (and one that leaves ChromeOS in kind of an awkward place).

So sure, the Nexus 4 is a great smartphone. For the asking price ($300) it's a real value. But where things get interesting is that Android phones that aren't quite as high-powered or premium-branded (but still run the same applications and OS, and are still easily as powerful as laptops from only a few years ago) are available for a lot less money. This was always the theory behind Nokia's smartphones: cheap but powerful devices that could be "computers" for the developing world. Unfortunately, Symbian was never going to be hackable by people in those countries, and then Nokia started to fall apart. In the meantime, Android has a real shot at doing what S60 wanted to do, and with a pretty good (and still evolving) open toolkit for its users. I still think that's a goal worth targeting.

November 1, 2012

Filed under: tech»web

Node Win

As I've been teaching Advanced Web Development at SCCC this quarter, my role is often to be the person dropping in with little hints of workflow technique that the students will find helpful (if not essential) when they get out into real development positions. "You could use LESS to make your CSS simpler," I say, with the zeal of an infomercial pitchman. Or: "it will be a lot easier for your team to collaborate if you're working off the same Git repo."

I'm teaching at a community college, so most of my students are not wealthy, and they're not using expensive computers to do their work. I see a lot of cheap, flimsy-looking laptops. Almost everyone's on Windows, because that's what cheap computers run when you buy them from Best Buy. My suggestion that a Linux VM would be a handy thing to have is usually met with puzzled disbelief.

This makes my students different from the sleek, high-profile web developers doing a lot of open-source work. The difference is both cultural (they're being taught PHP and ASP.NET, which are deeply unsexy) and technological. If you've been to a meetup or a conference lately, you've probably noticed that everyone's sporting almost exactly the same setup: as far as the wider front-end web community is concerned, if you're not carrying a newish MacBook or a Thinkpad (running Ubuntu, no doubt), you might as well not exist.

You can see some of this in Rebecca Murphey's otherwise excellent post, A Baseline for Front End Developers, which lists a ton of great resources and then sadly notes:

If you're on Windows, I don't begin to know how to help you, aside from suggesting Cygwin. Right or wrong, participating in the open-source front-end developer community is materially more difficult on a Windows machine. On the bright side, MacBook Airs are cheap, powerful, and ridiculously portable, and there's always Ubuntu or another *nix.

Murphey isn't trying to be mean (I think it's remarkable that she even thought about Windows when assembling her list--a lot of people wouldn't), but for my students a MacBook Air probably isn't cheap, no matter what its price-to-performance ratio might be. It could be twice, or even three times, the cost of their current laptop (assuming they have one--I have some students who don't even have computers, believe it or not). And while it's not actually that hard to set up many of the basic workflow tools on Windows (MinGW is a lifesaver), or to set up a Linux VM, it's clearly not considered important by a lot of open source coders--Murphey doesn't even know how to start!

This is why I'm thrilled about Node.js, which added a Windows version about a year ago. Increasingly, the kinds of tools that make web development qualitatively more pleasant--LESS, RequireJS, Grunt, Yeoman, Mocha, etc.--are written in pure JavaScript using Node. If you bring that to Windows, you also bring a huge amount of tooling to people you weren't able to reach before. Now those people are not only better developers, but they're potential contributors (which, in open source, is basically the difference between a live project and a dead one). Between Node.js, and GitHub creating a user-friendly Git client for the platform, it's a lot easier for students with lower incomes to keep up with the state of the art.
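The nice thing is that these tools are just JavaScript files run by Node, so they work the same way on a cheap Windows laptop as on a MacBook. As a rough sketch (the file names are placeholders, not from a real project), even a homemade build step is only a few lines:

    // concat.js: glue a list of source files into a single bundle
    var fs = require("fs");

    var sources = ["src/util.js", "src/app.js"];
    var bundle = sources.map(function(file) {
      return fs.readFileSync(file, "utf8");
    }).join("\n;\n");

    fs.writeFileSync("bundle.js", bundle);
    console.log("Wrote bundle.js (" + bundle.length + " bytes)");

Run it with "node concat.js" on any OS where Node is installed, and you've got the start of a build process with no platform-specific tooling at all.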

I'm not wild about the stereotype that "front-end" means a Mac and a funny haircut, personally. It bothers me that, as a web developer, I'm "supposed" to be using one platform or another--isn't the best thing about rich internet applications the fact that we don't have to take sides? Isn't a diverse web community stronger? I think we have a responsibility to increase access to technology and to the Internet, not focus our efforts solely on a privileged few.

We should be worried when any monoculture (technological or otherwise) takes over an industry, and exclusive tools or customs can serve as warning signs. So even though I don't love Node's API, I love that it's a web language being used to build web tools. It means that JavaScript is our bedrock, as Alex Russell once noted. That's what we build on. If it means that being a well-prepared front-end developer is A) more cross-platform and B) more consistent from top to bottom, it means my students aren't left out, no matter what their background. And that makes me increasingly happy.

October 14, 2012

Filed under: tech»coding

Repo Man

Although I've had a GitHub account for a while, I didn't really use it much until last week, when I taught my Seattle Central students how to use version control for project sharing. That lesson was the first chance I'd had to play with GitHub's Windows client (although I develop most of my web code on Linux, I do a lot of JavaScript work on my Thinkpad in Windows). It seemed like a good time to clean up my account and create a few new repositories for projects I'm working on, some of which other people might find interesting.

Code

Code is my personal JavaScript utility belt--I use it for throwing together quick projects, when I don't want to hunt down a real library for any given task. So it provides a grab-bag of functionality: Futures/Promises, an extremely limited template system, shims for Function.bind and Base64 encoding, basic object/array utilities, and a couple of useful mixins. I started building this at CQ and took it with me because it was just too handy to lose. While I doubt anyone else will be using this instead of something like Underscore, I wanted to put it in a repo so that I can keep a history when I start removing unused portions or experimenting.
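None of it is rocket science. The Function.bind shim, for example, is just the usual pattern for patching older browsers--here's a rough sketch of that kind of shim (not the actual code from the repo, and it skips the constructor edge cases a full polyfill would handle):

    // patch Function.prototype.bind where the browser doesn't provide it
    if (!Function.prototype.bind) {
      Function.prototype.bind = function(context) {
        var fn = this;
        var bound = Array.prototype.slice.call(arguments, 1);
        return function() {
          var args = bound.concat(Array.prototype.slice.call(arguments));
          return fn.apply(context, args);
        };
      };
    }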

Grue

Grue is undoubtedly a cooler project: it's a small library for quickly building text games like Zork. I originally wanted to port Inform7 to JavaScript, but found myself stymied by A) Inform7's bizarre, English-like syntax, and B) its elaborate rule matching system. The fact that there's no source to study for the Inform7 compiler (not to mention that it's actually recompiling to an older version of Inform, then compiling from there to z-machine bytecode) puts this way beyond my "hobby project" threshold.

Instead, I tried to think about how to bring the best parts of Inform7, like its declarative syntax and simple object hierarchy, to JavaScript. Instead of connecting items directly to each other, Grue provides a "Bag" collection type that can be queried by object property--whether something is portable, or flammable, or contains a certain keyword--using a CSS3-like syntax. Objects also come with built-in getter/setter functions that can be "proxied" to temporarily override a value, or change it based on world conditions.
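To give a rough sense of the idea--this is a self-contained toy illustration, not actual Grue code, and the real method names and selector syntax may differ--a property-queryable Bag might work something like this:

    // toy sketch: a collection queryable by property, in the spirit of Grue's Bag
    function Bag() {
      this.items = [];
    }
    Bag.prototype.add = function(item) {
      this.items.push(item);
      return this;
    };
    // query("portable") matches items where the property is truthy;
    // query("name=lantern") matches items where the property equals the value
    Bag.prototype.query = function(selector) {
      var parts = selector.split("=");
      var key = parts[0], value = parts[1];
      return this.items.filter(function(item) {
        return value === undefined ? !!item[key] : String(item[key]) === value;
      });
    };

    var room = new Bag();
    room.add({ name: "lantern", portable: true })
        .add({ name: "mailbox", portable: false });
    room.query("portable");      // [ the lantern ]
    room.query("name=mailbox");  // [ the mailbox ]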

It takes about 30 lines of Grue code to write the opening scene of Zork I, which is not quite as concise as Inform7, but it's pretty close. I figure Grue is about halfway done--I still need to add more vocabulary, regional rulesets, and some additional types (Regions, Doors, Devices, etc)--but it's close enough to start dogfooding it. Feel free to pull the repo and open "index.html" to see what I've gotten so far.

KeepassDroid

My fork of KeepassDroid exists entirely to scratch a particular itch: I like the Android port of Keepass, but I find its UI to be functional, at best. The fonts are often too small, and forms end up underneath the virtual keyboard more often than not. So I've changed the view styles, and some of the layout XML, just for my own use (there's a .zip with the compiled application package, in case anyone else is interested). The main project doesn't seem interested in my changes, which is fine by me, but it does mean that every now and then I have to merge in changes from trunk if I want mine up to date. Increasingly, I don't bother.

Urban Artistry

And then there's one project I've been working on that's not located on GitHub, but went live this weekend. Ever since I started maintaining the web presence for Urban Artistry, it's been a mess of PHP files accreted since they first went online. There was an abortive attempt to move to WordPress in 2010, but it never got anywhere, and it would have used the same theme that someone once described as "a bit like a dark nightclub."

When UA went fully non-profit in the state of Maryland, and asked me to be on the board, one of my goals was to turn the site into something that would be a bit more appealing to the typical grant donor. The new site is intended to do exactly that: my design takes its cues from the UA logo with a lightweight, modern feel. The site is also responsive across three sizes--phone, tablet/netbook, and desktop--and since it's built on WordPress, it's easy for other members of the company to log in and make changes if they need to do so. I'm pretty happy with how things turned out, but the design was the easy part: content is much harder, and that's what we're tackling next.

October 3, 2012

Filed under: tech»coding

Teachable Moments

When you're on top of the world, it's the perfect time to start kicking the little people who lifted you up. At least, that's the only conclusion I can draw from Bret Victor's newest post on teaching code. After he did his presentation on "Inventing on Principle" a while back, the tech community went nuts for Victor's (admittedly impressive) visualization work and approach to live programming. This admiration culminated in Khan Academy's computer science curriculum, which integrates a live Processing environment very similar to Victor's demos. In response, he's written a long post bashing the crap out of it.

Instead, he has a plan to redesign programming itself in a new, education-oriented direction. I'm generally a fan of Victor's presentation work (his touch-based animation UX is phenomenal), but I find that his ideas for teaching tend to be impractical when they're examined closely, and I suspect that's the case here. I don't think it's a coincidence that Victor doesn't seem to spend a lot of time asking if anyone else has solved these problems. A little research should have turned up that someone already wrote the language he's proposing: Scratch.

Scratch isn't terribly pretty--it's designed for kids, and it shows--but it provides almost everything Victor claims he wants. Variables are provided in context, with an instant visual environment that lets users examine routines at any time. The syntax is all drag-and-drop, with clear indications of what is nested where, and there's a stepping debug mode that visually walks through the code and provides output for any variables in use. And as much as Victor wants to push the comparison to "pushing paint," Scratch's sprite-based palette is probably as good as that'll get for programming. That no mainstream programming languages have followed its lead doesn't necessarily indicate anything, but should at least give Victor pause.

In his essay, however, Scratch is nowhere to be found. Victor draws on four other programming paradigms to critique Processing: Logo, Smalltalk, Hypercard, and Rocky's Boots. To say that these references are dated is, perhaps, the least of their sins (although it does feel like Victor's research consisted of "stuff I remember from when I was a kid"). The problem is that they feel like four random things he likes, instead of coherent options for how to structure a learning program. They couldn't possibly be farther from each other, which suggests that these lessons are not easy to integrate. Moreover, using Logo as a contrast to Processing is ironic, since the latter's drawing instructions are strikingly similar (I typically use the Logo turtle to introduce people to canvas graphics). And in Smalltalk's case, the syntax he's applauding is deceptively complicated for beginners (even I find the message rules a little confusing).

Meanwhile, where are the examples that aren't twenty years old? The field hasn't stood still since the Apple IIGS, but you wouldn't know it from Victor's essay. Scratch is the most well-known educational programming environment, but there's no shortage of others, from the game-oriented (Kodu, Game Maker) to actual games (The Incredible Machine, SpaceChem). Where's the mention of the vibrant mod community (UnrealScript is many a coder's first language, and I've had several students whose introduction to coding was writing Lua scripts for World of Warcraft)? Like his Braid-inspired live coding demonstration, Victor's essay gives the impression that he's proposing some incredible innovation only by ignoring entire industries working on and around these problems. It's unclear whether he thinks they're not worth examining, or if he just can't be bothered to use Google.

There's also a question of whether these essays solve problems for anyone but Bret Victor. His obsession with visual programming and feedback is all well and good, but it ignores the large class of non-visual problems and learning styles that exist. As a result, it's nearly all untested, as far as I can tell, whereas its polar opposite (Zed Shaw's Learn Code the Hard Way) has a huge stream of actual users offering feedback and experience.

Let me clarify, in case it seems like I'm simply blaming Victor for failing to completely reinvent computing in his spare time. These essays repeatedly return to visualization as the method of feedback: visualization of time, visualization of data, and code that itself performs visualization. Unfortunately, there's an entire field of programming where a graphical representation is either impossible or misleading (how much of web programming is just pushing strings around, after all?).

Frankly, in actual programming, it's counterproductive to try to examine every value by stepping through the code: if I reach that point, I've already failed all other approaches. My goal when teaching is explicitly not for students to try to predict every value, but to think of programming as designing a process that will be fed values. In this, it's similar to this Quora answer on what it's like to be an advanced mathematician:

Your intuitive thinking about a problem is productive and usefully structured, wasting little time on being aimlessly puzzled. For example, when answering a question about a high-dimensional space (e.g., whether a certain kind of rotation of a five-dimensional object has a "fixed point" which does not move during the rotation), you do not spend much time straining to visualize those things that do not have obvious analogues in two and three dimensions. (Violating this principle is a huge source of frustration for beginning maths students who don't know that they shouldn't be straining to visualize things for which they don't seem to have the visualizing machinery.) Instead... When trying to understand a new thing, you automatically focus on very simple examples that are easy to think about, and then you leverage intuition about the examples into more impressive insights.

"Show the data" is a fine mantra when it comes to news graphics, but it's not really helpful when coding. Beginning coders should be learning to think in terms of data and code structure, not trying to out-calculate the computer. Working with exact, line-by-line values is a distraction at best--and yet it's the primary focus of Victor's proposed learning language, precisely because he's so graphically-focused. The idea that the goals of a visualization (to communicate data clearly) and the goals of a visualization programmer (to transform that data into graphics via abstraction) are diametrically opposed does not seem to have occurred to him. This is kind of shocking to me: as a data journalist, my goal is to use the computer to reduce the number of individual values I have to see at any time. A programming language that swamps me in detail is exactly what I don't want.

I'm glad that people are pushing the state of tech education forward. But changing the way people learn is not something you can do in isolation. It requires practical research, hands-on experience, and experimentation. It saddens me that Victor, who has some genuinely good feedback for Khan Academy in this essay, insists on framing it in grandiose proclamations instead of practical, full-fledged experiments. I don't know that I could honestly say which would be worse: if his ideas were imitated or if they were ignored. Victor, it seems, can't decide either.

September 25, 2012

Filed under: tech»web

DOM If You Don't

I've noticed, as browsers have gotten better (and the pace of improvement has picked up) that there's an increasingly vocal group of front-end developers crusading against libraries like jQuery, in favor of raw JavaScript coding. Granted, most of these are the fanatical comp.lang.javascript types who have been wearing tinfoil anti-jQuery hats for years. But the argument is intriguing: do we really need to include 30KB of script on every page as browsers implement dev-friendly features like querySelectorAll? Could we get away with writing "pure" JavaScript, especially on mobile where every kilobyte counts?

It's a good question. But I suspect that the answer, for most developers, will continue to be "no, use jQuery or Dojo." There are a lot of good reasons for this--including the fact that they deliver quite a bit more than just DOM navigation these days--but the primary reason, as usual, is simple: no matter how they claim to have changed, browser developers still hate you.

Let's take a simple example. I'd like to find all the file inputs in a document and add a listener to them for HTML5 uploads. In jQuery, of course, this is a beautifully short one-liner, thanks to the way it operates over collections:

    $('input[type=file]').on('change', onChange);

Doing this in "naked" JavaScript is markedly more verbose, to the point where I'm forced to break it into several lines for readability (and it's still kind of a slog).

    var inputs = document.querySelectorAll('input[type=file]');
    inputs.forEach(function(element) {
      element.addEventListener('change', onChange);
    });

Except, of course, it doesn't actually work: like all of the document methods, querySelectorAll doesn't return an array with JavaScript methods like slice, map, or forEach. Instead, it returns a NodeList object, which is array-like (meaning it's numerically indexed and has a length property). Want to do anything other than a length check or an iterative loop over that list? Better break out your prototypes to convert it to a real JavaScript array:

    var inputs = document.querySelectorAll('input[type=file]');
    inputs = Array.prototype.slice.call(inputs);
    inputs.forEach(function(element) {
      element.addEventListener('change', onChange);
    });

Oh, yeah. That's elegant. Imagine writing all this boilerplate every time you want to do anything with multiple elements from the page (then imagine trying to train inexperienced teammates on what Array.prototype.slice is doing). No doubt you'd end up writing yourself a helper function to abstract all this away, followed by similar functions for bulk-editing CSS styles or performing animations. Congratulations, you've just reinvented jQuery!
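For illustration, that first helper doesn't have to be much--a minimal sketch (the $$ name is just a placeholder) might look like:

    // query, convert the NodeList into a real array, and return it
    // ready for forEach/map/filter
    function $$(selector, context) {
      var list = (context || document).querySelectorAll(selector);
      return Array.prototype.slice.call(list);
    }

    // usage: same shape as the example above
    $$('input[type=file]').forEach(function(element) {
      element.addEventListener('change', onChange);
    });

And once you start accumulating functions like that, the case for just loading a library gets stronger, not weaker.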

All this could be fixable if browsers returned native JavaScript objects in response to JavaScript calls. That would be the logical, sensible thing to do. They don't, because those calls were originally specified as "live" queries (a feature that no developer has ever actually wanted, and exists to implement the now-obsolete DOM-0 document collections), and so the list they return is a thin wrapper over a native host object. Even though querySelector and querySelectorAll are not live, and even though we knew by the time of their implementation that this was an issue, they're still wrappers around host objects with the same impedance mismatch.

Malice or incompetence? Who knows? It looks to me like the people developing browser standards are too close to their rendering engines, and not close enough to real-world JavaScript development. I think it's useful for illustration purposes to compare the output of running a DOM query in Firebug vs. the built-in Firefox Web Console. The former gives you a readable, selector-based line of elements. The latter gives you an incomprehensible line of gibberish or a flat "[Object HTMLDivElement]", which when clicked will produce a clunky tree menu of its contents. Calling this useless is an insult to dead-end interfaces everywhere--and yet that's what Mozilla seems to think is sufficient to compete with Firebug and the Chrome dev tools.

At any given time, there are at least three standards committees fighting over how to ruin JavaScript: there's ECMA TC-39 (syntax), W3C (DOM standards), and WHATWG (HTML5). There are people working on making JavaScript more like Python or CoffeeScript, and people working on more "semantic" tags than you can shake a stick at. But the real problems with JavaScript--the reasons that everyone includes that 30KB of jQuery.js--do not have anything to do with braces or function keywords or <aside> tags. The problem is that the DOM is a horrible API, from elements to events, filled with boilerplate and spring-loaded bear traps. The only people willing to take a run at a DOM 2.0 require you to use a whole new language (which might almost be worth it).

So in the meantime, for the vast majority of front-end developers (myself included), jQuery and other libraries have become that replacement DOM API. Even if they didn't provide a host of other useful utility functions (jQuery's Deferred and Callback objects being my new favorites), it's still just too frustrating to write against the raw browser interface. And library authors recognize this: you can see it in jQuery's planned removal of IE 6, 7, and 8 support in version 2.0. With the worst of the cross-compatibility issues clearing up, libraries can concentrate on writing APIs to make browsers more pleasant to develop in. After all, somebody has to do it.

August 15, 2012

Filed under: tech»education

VM Where?

My third quarter of teaching Intro to Programming and Intro to JavaScript at SCCC ends today. Over the last eight months, I've learned a bit about what gives new programmers trouble, and how to teach around those problems. I definitely wouldn't suggest JavaScript as a first language unless the students already know HTML and CSS very well--otherwise, they're learning three languages at once, and that's more than a little overwhelming.

Outside of class I've also started to daydream about ways to teach the basics of programming, not just for web development, but in general. I think a simpler (but still dynamic) language is probably best for beginners--even though I think its whitespace model is insane, Python (or its cheery little cousin Ruby) would probably be a good option. It's got a good library, not a lot of punctuation or syntax, and it would get people to indent their code (which, for some reason that I can't understand, is like pulling teeth no matter how many times you show people that it makes the code more readable). More importantly, it's straightforward imperative code--none of the crazy functional malarkey that emerges, marmot-like, whenever JavaScript starts doing any kind of user interaction or timing. And people actually use Python, so it's not like you're teaching them Lisp or something.

But let's pretend we weren't bound by the idea of "teach skills that are directly marketable"--i.e., let's pretend I'm not working for a community college (note: there's nothing wrong with teaching marketable skills, it's just a thought exercise). What's a good way to introduce people to the basic problems of programming, like looping and conditionals and syntax?

What about assembly?

Let's go ahead and get some reasons out of the way as to why you shouldn't teach programming with assembly. First, it offers no abstractions or standard libraries, so students won't be learning any immediately-useful skills for outside the classroom. It's architecture-specific, meaning they probably can't even take it from one computer to another. And of course, assembly is not friendly. It doesn't have nice, easy-to-type keywords like "print" and "echo" and "document.getElementById" (okay, so that's not all bad).

What you gain, given the right choice of architecture, is simplicity. Assembly does not give you syntax for loops, or for functions. It doesn't give you structures. You get some memory, a few registers to serve as variables, and some very basic control structures. It's like BASIC, but without all the user-friendliness. Students who have trouble keeping track of what their loops are doing, or how functions work, might respond better to the simple Turing tape-like flow of assembly.

But note that huge caveat: given the right choice of architecture. Ideally, you want a very short set of instructions, so students don't have to learn very much. That probably rules out x86. 6502 has a reasonably small set of opcodes, but they're organized in that hilarious table depending on what goes where and what addressing scheme you're in, which is kind of crazy. 68000 looks like a bigger version of 6502 to me. Chip-8 might work, but I don't really like the sprite system. I'd rather just have text output. What we need is an artificial VM designed for teaching--something that's minimal but fun to work with.
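Even a toy version gets the point across. As a sketch (in JavaScript, with a made-up instruction set--nothing here corresponds to a real architecture), a minimal teaching register machine is only a couple dozen lines:

    // a tiny VM: two registers, a program counter, and four opcodes
    function run(program) {
      var reg = { a: 0, b: 0 };
      var pc = 0;
      while (pc < program.length) {
        var op = program[pc];
        switch (op[0]) {
          case "set":   reg[op[1]] = op[2]; break;
          case "add":   reg[op[1]] += op[2]; break;
          case "print": console.log(reg[op[1]]); break;
          case "jlt":   // jump to instruction op[3] while reg[op[1]] < op[2]
            if (reg[op[1]] < op[2]) { pc = op[3]; continue; }
            break;
        }
        pc++;
      }
    }

    // count from 1 to 5 -- the whole "loop" is just a conditional jump
    run([
      ["set", "a", 0],     // 0
      ["add", "a", 1],     // 1
      ["print", "a"],      // 2
      ["jlt", "a", 5, 1]   // 3: jump back to instruction 1 while a < 5
    ]);

Students can see every part of the loop laid bare: the counter, the increment, and the jump that holds it all together.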

That's why I'm really interested in 0x10c, the space exploration game that's in development by Notch, the creator of Minecraft. The game will include an emulated CPU to run ship functions and other software, and it's programmed in a very simple, friendly form of assembly. There are a ton of people already writing tools for it, including virtual screen access--and the game's not even out yet. It's going to be a really great tool for teaching some people how computers work at a low level (in a simplified model, of course).

I'm not the first person to wonder if assembler could make a decent teaching tool. And I'm not convinced it actually would--it's certainly not going to do anything but confuse and frustrate my web development students. But I do think programming instruction requires a language that, against some conventional wisdom, isn't too high level. JavaScript's functions are great, but they're too abstract to translate well for most students--this and the DOM form a serious conceptual barrier to entry. On the other hand, students need to feel effective to keep their attention, meaning that it shouldn't take 50 lines of opcodes to print "hello, world." That flaw may make assembly itself ineffective, but thinking about what it does teach effectively may give us a better frame of reference for teaching with other toolkits.

August 8, 2012

Filed under: tech»green

Broken

The Thinkpad Hardware Maintenance Manual is kind of amazing. It is specifically laid out to walk someone through the process of disassembling a laptop, all the way down to the pads on the motherboard, and including (so helpful!) the types of screws used at each step of the way. You can fix pretty much anything on a Thinkpad with the HMM and the right spare parts. I know this now because I have been through the entire thing forward and backward.

About two months ago, I fried my motherboard (literally) by replacing the cooling fan with one meant for a laptop with integrated graphics, instead of the Nvidia chip that was actually installed. At 105°C, it wasn't long before the chip melted itself down into a puddle of silicon and coltan. I was able to eke out a few more days of use by turning off anything that could possibly require acceleration--Flash, videos of any kind, Aero Glass--but soon enough it just gave up on me completely.

What's frustrating about this is that you can't simply replace the shiny patch of metal that used to be a GPU on a laptop. Even though Lenovo refers to the Nvidia option as "discrete" graphics, it's actually soldered to the motherboard (perhaps they mean it won't gossip about you, which is probably true, because in my experience Nvidia's mobile chips don't live long enough to learn anything embarrassing anyway). So your only repair option is to replace the entire system board: about $600 with parts and labor, assuming you take it to some place where they know what they're doing, which is not a guarantee. Not that I'm apparently any better, but at least I only destroy my own property.

Still, given the comprehensive hardware maintenance manual, a used motherboard from eBay, and a spare pill case to hold the screws, fixing the laptop turned out to be no big deal--I even converted it to the Intel graphics chip, which is less likely to overheat since it's got all the horsepower of a Nintendo Super FX chip. The experience has left me impressed with Lenovo's engineering efforts. The construction is solid without "cheating"--it's honestly pretty easy to open it up, and there's no glue or clips that only the manufacturer can remove/replace. As an inveterate tinkerer, I believe everything should be built this way.

But in that, at least, Lenovo and I are in the minority. Increasingly (led by Apple), laptops and other computing devices are sealed boxes with few or no user-serviceable parts. Want to upgrade your RAM or hard drive? Too bad, it's soldered in place. Break the screen? Better send it to the factory (or not, I guess). If a video card burns out on its own (as the Nvidia 140M chips did, repeatedly, even before I started tinkering with the cooling system) or another hardware problem occurs, they don't want you to fix it. They want you to buy another, most likely throwing the old one away. Even the Thinkpad, when it's outpaced by more demanding software, can only be upgraded so far. That means it also has a limited lifespan as a donation to a school or needful friend.

So in addition to fixing my old laptop, I'm moving away from a dependence on tightly-coupled, all-in-one computing, and back to modular desktop units. I bought a Maingear F131 PC (the most basic model), which I can upgrade or repair piece by piece (I already started by adding a wireless PCI card--Maingear doesn't sell those directly). It's better for the environment, better for my wallet, and it represents a vote against wasteful manufacturing. My goal is to make it the last whole computer (but not the last hardware) I'll buy for a long, long time.

Should you do the same? I think so. Manufacturers are moving to the integrated device model because it's cheaper for them and consumers don't seem to care. Change the latter, and we can change the direction of the industry. And I think you should care: in addition to the problem of e-waste, user-replaceable components help to support an entire cottage industry of small repair shops, consultants, and boutique builders--the mom-and-pop businesses of the tech world. Moving to one-piece technology will kill those, just as it killed television, vacuum, and radio repair shops.

You can argue, I'm sure, that many people do not want to learn how to install or repair their own hardware. That's true, and they shouldn't have to. But I also deeply believe that user-serviceable and easy-to-use are not mutually exclusive qualities, nor do they require users to actually learn everything there is to know about a computer. We don't expect everyone to know how to rebuild a car engine, but I think most people would also agree that a thriving industry for aftermarket repair parts and service is a good thing--who hasn't at least had a new radio installed? Who wants to be forced into a dealership every time a filter needs to be changed?

It is tempting to see the trend toward disposable devices as one with the trend toward walled gardens in software--a way to convert users from one-time payers to a constant revenue stream. I don't actually think most companies set out to be that sinister. But it's obvious that they also won't do the right thing unless we hold their feet to the fire. The time has come to put our money where our mouths are, and insist that the environment is not a suitable sacrifice for a few extra millimeters shaved off a product spec sheet.

May 9, 2012

Filed under: tech»os

Apropos of Nothing

Over the last six months, I've consistently given three pieces of advice to my students at SCCC: get comfortable with the Firebug debugger, contribute to open source, and learn to use Linux. The first is because all developers should learn how to debug properly. The second, because open source is a great way to start building a resume. And the last, in part, because Linux is what powers a large chunk of the web--not to mention a dizzying array of consumer devices. Someone who knows how to use an SSH shell and a few basic commands (including one of the stock editors, like vi or emacs) is never without tools--it may not be comfortable, but they can work anywhere, like MacGyver. It all comes back to MacGyver, eventually.

At Big Fish, in an effort to take this to its logical extreme, I've been working in Linux full-time (previously, I've used it either on the server or through a short-lived VM). It's been an interesting trip, especially after years of using mostly Windows desktops (with a smattering of Mac here and there). Using Linux exclusively for development plays to its strengths, which helps: no fighting with Wine to run games or audio software. Overall, I like it--but I'll also admit to being a little underwhelmed.

To get the bad parts out of the way: there are something like seven different install locations for programs, apparently chosen at random; making changes in the graphical configuration still involves arcane shell tricks, all of which will be undone in hilariously awful ways when you upgrade the OS; and Canonical seems intent on removing all the things that made Ubuntu familiar, like "menus" and "settings." I ended up switching to the XFCE window manager, which still makes me angry because A) I don't want to know anything about window managers, and B) it's still impossible to theme a Linux installation so that everything looks reasonably good. Want to consistently change the color of the window decorations for all of your programs? Good luck with that. XFCE is usable, and that's about all you can honestly say for it.

The best part of Linux by far is having a native web stack right out of the box, combined with a decent package manager for anything extra you might need. Installing scripting languages has always been a little bit of a hassle on Windows: even if the base package is easily installed, invariably you run into some essential library that's not made for the platform. Because these languages are first-class citizens on Linux, and because they're just an apt-get away, it opens up a whole new world of utility scripts and web tools.

I love combining a local server with a set of rich command-line utilities. Finally, I can easily use tools like the RequireJS optimizer, or throw together scripts to solve problems in my source, without having to switch between contexts. I can use all of my standard visual tools, like Eclipse or Sublime Text, without going through a download/upload cycle or figuring out how to fool them into working over SFTP. Native source control is another big deal: I've never bothered installing git on Windows, but on Linux it's almost too easy.
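The RequireJS optimizer is a good example: it's just a Node script, and its build configuration is itself a JavaScript file. A typical setup looks something like the sketch below (the paths are placeholders for whatever your project actually uses), run with "node r.js -o build.js":

    // build.js -- tell r.js where the modules live, which module is the
    // entry point, and where to write the optimized bundle
    ({
        baseUrl: "js",
        name: "main",
        out: "js/main-built.js"
    })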

So there is one axis along which the development experience is markedly superior. It's not that Linux is better built (it has its fair share of potholes) so much as it's where people currently go to build neat tools, and then if we're lucky they bring them over to Windows. Microsoft is trying to fix this (see: their efforts to make NodeJS a first-class Windows platform), but it'll probably always be an uphill battle. The open-source developer culture just isn't there.

On the other hand, I was surprised by the cases where web development is actually worse on Linux compared to Windows. I can't find a visual FTP client that's anywhere near as good as WinSCP. The file managers are definitely clumsier than Explorer. Application launching, of all things, can be byzantine--there's no option to create a shortcut to a program; you have to manually assemble a .desktop file instead, and then XFCE will invariably position its window someplace utterly unhelpful. Don't even get me started on the configuration mess: say what you like about the registry, at least it's centralized.

None of these things are dealbreakers, the same way that it's not a dealbreaker for me to need GOW for a decent Windows command line. But if I was considering trying to dual-boot or switch to Linux as a work environment, instead of just keeping a headless VM around for when I need Ruby, I've given that up now. When all is said and done, I spend much of my time in either Eclipse or Firefox anyway, and they're the same no matter where you run them. I still believe strongly that developers should learn a little Linux--it's everywhere these days!--but you can be perfectly productive without living there full time. Ultimately, it's not how you build something, but what you build that matters.

If you do decide to give it a chance, here are a few tips that have made my life easier:

  • Install a distribution to a virtual machine using VirtualBox instead of trying to use a live CD or USB stick. It's much easier to find your way along if you can still run a web browser alongside your new OS.
  • The title of this post is a reference to the apropos command, which will search through all the commands installed on your system and give you a matching list. This is incredibly helpful when you're sitting at a prompt with no idea what to type. Combined with man for reading the manual pages, you can often muddle through pretty well.
  • By default, the config editor for a lot of systems is vi. Lots of people like vi, I think it's insane, and I guarantee that you will have no idea what to do the first time it opens up unexpectedly. Eventually you'll want to know the basics, but for right now you're much better off setting nano as the default editor. Type "export EDITOR=nano" to set that up.
