this space intentionally left blank

September 12, 2013

Filed under: tech»web

Beta Caret-ene

At this point, Caret has been in the Chrome Web Store for about a week. I think that's long enough to say that the store is a pretty miserable experience for developers.

When I first uploaded it last week, Caret had these terrible promo tiles that I threw together, mostly involving a big pile of carrots (ba dum bum). At some point, I made some slightly less terrible promo tiles, stripping them down to just bold colors and typography. Something set off the store's automated review process, and my new images got stuck in review for four days--during which time Caret sat at the very bottom of the store page, where nobody saw it.

On Tuesday, I uploaded the first version of Caret that includes the go-to/command palette. That release is kind of a big deal--the palette is one of the things that people really love about Sublime, and I definitely wanted it in my editor. For some reason, this has triggered another automatic review, this one applied to the entire application. I can unpublish Caret, but I can't edit anything — or upload new versions — until someone checks off a box and approves my changes. No information has been provided on why it was flagged, or what I can do to prevent these delays in the future.

Even at the best of times, the store takes roughly 30 minutes to publish a new version. I'm used to pushing out changes continuously on the web, so slow updates drive me crazy. Between this and the approval hijinks, it feels like I'm developing for iOS, but without the sense of baseless moral superiority. What makes it really frustrating is the fact that the Play store for Android has none of these problems, so I know that they can be solved. There's just no indication that the Chrome team cares.

I was planning on publishing a separate, Google-free version of the app anyway, so I worked out how to deploy a standalone .crx file. The installation experience for these isn't great--the file has to be dragged onto the Chrome extensions list, and can't just be installed from a link--and it introduces another fun twist: despite years of promises that it would be possible, there's no way to download the private key that Google uses for the Chrome store version, meaning that the two installations are treated as completely different applications.

Fair enough: I'll just make the standalone version the "edge" release with a different icon, and let the web store lag behind a little bit. As a last twist of the knife, generating a .crx package in a way that A) won't include my entire Git history, and B) works reliably across platforms, turns out to be a nightmare. Granted, this is partly due to issues with Grunt, but Chrome's not helping matters with its wacky packaging system.
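One workaround is to shell out to Chrome itself, which can pack an unpacked directory into a .crx from the command line. Here's a rough sketch of that as a Grunt task--the paths, task name, and binary name are illustrative, not lifted from Caret's actual build file (and the fact that the Chrome binary is named differently on every platform is exactly part of the problem):

    var exec = require("child_process").exec;

    module.exports = function(grunt) {
      grunt.registerTask("crx", "Package the app as a .crx file", function() {
        var done = this.async();
        // --pack-extension packs a directory into a .crx;
        // --pack-extension-key reuses an existing private key, so the
        // extension ID stays stable from release to release.
        exec("chrome --pack-extension=./build --pack-extension-key=./caret.pem",
          function(err) {
            if (err) grunt.fail.warn(err);
            done();
          });
      });
    };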

All drama aside, everything's now set up in a way that, if not efficient, is at least not actively harmful. The new Caret home page is here, including a link to the preview channel file (currently 3 releases ahead of the store). As soon as Google decides I'm not a menace to society, I'll make it the default website for the store entry as well.

The problems with Google's web store bug me, not so much because they're annoying in and of themselves, but because they feel like they miss the point of what packaged web apps should be. Caret is fast to install, secure, and easy to update, just like a regular web app. Developing Caret, likewise, is exactly as easy and simple as writing a web app (easier, actually: I abuse flexbox like crazy for layout, because I know my users have a modern browser). Introducing this opaque, delay-ridden publication step between development and installation just seems perverse. It won't stop people from using the store (if nothing else, external installation is too much of a pain not to go through the official channel), but it's certainly going to keep me from enjoying it.

September 5, 2013

Filed under: tech»web

Caret

As I mentioned in my Chromebook notes, one of the weak points of using Chrome OS as a developer is the total lack of a good graphical editor. You can install Crouton, which lets you run Vim from the command line or even run a full graphical stack. But there aren't very many good pure text editors that run within Chrome OS proper--most of the ones that do exist are tied to hosted services like Cloud9 or Nitrous. If you just want to write local files without a lot of hassle, you're out of luck.

I don't particularly want to waste what little RAM the Chromebook has running a whole desktop environment just for a notepad, and I'm increasingly convinced that Vim is a practical joke perpetrated by sadists. So I built the Chrome OS editor I wanted to have as a packaged app (just in time!), and posted it up in the store this weekend. It's 100% open source, of course, and contributions are welcome.

Caret is a shell around the Ace code editor, which also powers the editor for Cloud9. I'm extremely impressed with Ace: it's a slick package that provides a lot of must-have features, like syntax highlighting, multiple cursors, and search/replace, while still maintaining typing responsiveness. On top of that base, Caret adds support for tabbed editing, local file support, cloud settings storage, and Sublime-compatible keystrokes.

In fact, Sublime has served as a major inspiration during the development of Caret. In part, this is just because it's the standard for web developers that must be met, but also because it got a lot of things right in very under-appreciated ways. For example, instead of having a settings dialog that adds development complexity, all of Sublime's settings are stored in JSON files and edited through the same window as any other text files — the average Sublime user probably finds this as natural as a graphical interface (if not more so). Caret uses the same concept for its settings, although it saves the files to Chrome's sync service, so all your computers can share your preferences automatically.
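A minimal sketch of what settings-as-JSON plus sync storage looks like in a packaged app (the names here are illustrative, not Caret's actual internals):

    var DEFAULTS = { theme: "monokai", fontSize: 12, softTabs: true };

    // chrome.storage.sync replicates data to every machine where the user
    // is signed into Chrome, so preferences follow them automatically.
    function saveSettings(settings) {
      chrome.storage.sync.set({ settings: JSON.stringify(settings, null, 2) });
    }

    function loadSettings(callback) {
      chrome.storage.sync.get("settings", function(data) {
        callback(data.settings ? JSON.parse(data.settings) : DEFAULTS);
      });
    }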

The current release of Caret, 0.0.10, is usable enough that I think you could do serious editing with it — I've certainly done professional work with less effective tools, including the initial development on Caret itself — but I'm on a roll adding features and expect to have a lot of improvements made by the end of next week. My first priorities are getting the keybindings into full working condition and adding a command palette, but from that point on it's mostly just polish, bugfixes, and investigating how to get plugin support past Chrome's content security policy. Once I'm at 1.0, I'll also be posting a standalone CRX package that you can use to install Caret without needing a Google account (it'll even auto-update for you).

Working with Chrome's new packaged app support has been rough at times: there are still a lot of missing capabilities, and calling the documentation "patchy" is an insult to quilts everywhere. But I am impressed with what packaged apps can do, not least the ease of installation: if you have Chrome, you can now pretty much instantly have a professional-grade text editor available, no matter your operating system of choice. This has always been a strong point for web apps anyway, but Chrome apps combine that with the kinds of features that have typically been reserved for native programs: local file access, real network sockets, and hardware device access. There's a lot of potential there.
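As an example of those native-like capabilities, here's roughly what local file access looks like from a packaged app--a sketch using the chrome.fileSystem API that packaged apps get and ordinary pages don't (it requires the "fileSystem" permission in the manifest):

    // Ask the user to pick a file; the app can then read it directly
    // (and, with the right permission, write it back).
    chrome.fileSystem.chooseEntry({ type: "openFile" }, function(entry) {
      entry.file(function(f) {
        var reader = new FileReader();
        reader.onload = function() {
          console.log("Loaded " + entry.name + ": " +
            reader.result.length + " characters");
        };
        reader.readAsText(f);
      });
    });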

If you'd like to help, even something as simple as giving Caret a chance and commenting with your impressions would be great. Filing bugs would be even better. Even if you're not a programmer, having a solid document editor may be something you'd find handy, and together we can make that happen.

August 1, 2013

Filed under: tech»web

Learning WebGL

Once Microsoft announced that IE 11 would offer WebGL, that was pretty much the last straw: Apple may drag their feet on enabling it in Safari, but everyone else seems to have decided that it's secure enough and capable enough for production. I still think it's a little nutty, but I don't really have an excuse to avoid it anymore. So when an EaselJS-based visualization at work started having performance issues, I wrote a WebGL shim as a learning project.

I stand by my earlier impressions of the WebGL API--it's clumsy and ill-suited to JavaScript--but the performance is undeniably there. Rendering through EaselGL is often orders of magnitude faster than vanilla Easel, particularly when it comes to mouseover responsiveness. It's worth struggling through the learning process if you find that canvas is becoming a bottleneck for your application. I think a more interesting question is why so many WebGL tutorials are awful. And they are awful:

  • They hide the WebGL boilerplate behind library code--say, a function call that loads shaders or does matrix math--and force readers to dig through the source to figure out what mat4mult() or loadShader() does.
  • Even when they leave the code all in one place, they treat the often-confusing GL code as boilerplate, and don't explain it. Why do I need to call bindBuffer()? What do the six (!) parameters of vertexAttribPointer() actually control? What's the deal with all the constants, like gl.STATIC_DRAW? (See the annotated sketch after this list.)
  • Since WebGL is not a 3D API, they require a lot of 3D math. Most people teach this badly, assuming they don't double down on the mistakes above and just hide it behind calls to library functions. As a result, it's easy to get lost, and hard to reach the "putting shapes on the screen" feedback loop that keeps students engaged.
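By way of contrast, here's the kind of annotation I wish those tutorials had included. This sketch assumes a canvas with id "c" and an already-compiled, linked shader program with an attribute named "a_position" (shader setup is its own annotated topic, and in some 2013 browsers the context is still named "experimental-webgl"):

    var gl = document.getElementById("c").getContext("webgl");
    gl.useProgram(program);

    // GL is a state machine: bindBuffer() makes this buffer the current
    // ARRAY_BUFFER, and later calls implicitly act on whatever is bound.
    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);

    // Upload the vertex data. STATIC_DRAW is just a usage hint: "written
    // once, drawn many times," so the driver can optimize where it lives.
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
      0, 0,
      0, 0.5,
      0.7, 0
    ]), gl.STATIC_DRAW);

    // Wire the bound buffer to the shader attribute. The six parameters
    // of vertexAttribPointer, one by one:
    var position = gl.getAttribLocation(program, "a_position");
    gl.enableVertexAttribArray(position);
    gl.vertexAttribPointer(
      position, // 1: which attribute to feed
      2,        // 2: components per vertex (just x and y here)
      gl.FLOAT, // 3: the type of each component in the buffer
      false,    // 4: don't normalize values into a 0-1 range
      0,        // 5: stride between vertices--zero means "tightly packed"
      0         // 6: byte offset into the buffer where the data starts
    );

    // Three vertices, one triangle, no matrix math required.
    gl.drawArrays(gl.TRIANGLES, 0, 3);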

Of course, these are not uncommon mistakes in programming tutorials. In fact, they're extremely common--I just haven't had to learn anything from scratch in a while, and had forgotten how confusing the process could be. Anyone writing for beginners would do well to keep these errors in mind.

There are a few walkthroughs that I found more helpful. As I've mentioned, Gregg Tavares's series on WebGL was eye-opening, and Brandon Jones provided the only worthwhile explanation of attribute array setup that I found. Between those two, and countless Google searches, I managed to cobble together a basic understanding of how the GL state machine actually works.

As a way of distilling out that knowledge, I've assembled the WebGL demo script that I would have wanted when I started out. It uses no external code--everything's right there on the same page. It explains each parameter that's used, and what each function call does. And it's only concerned with drawing a basic 2D shape--no matrix math is involved. It's stored in a GitHub Gist, so feel free to file pull requests against anything you find confusing. Also, feel free to look through EaselGL: it's a bit more advanced and I need to add more comments, but as a 2D API I think it's quite a bit easier to understand than the typical game library, particularly for ex-ActionScript developers like myself.

July 18, 2013

Filed under: tech»web

Over the Top

By the time you read this, I'll have been running Weir as my full-time RSS reader for two and a half weeks, starting on July 1. It's going well! Having just added OPML export, so that I can switch if it stops being worth the trouble, I've had a chance to sit back and consider some lessons learned from the project.

  1. Eight megabytes does not seem like a lot of data these days, but it adds up. Before I started culling feeds (and before I added Gzip support to the request service), Weir was pulling down roughly eight megs of data with each fetch, at ten minute intervals. That's 48MB per hour, a small amount that adds up to over a gigabyte per day. By default, I get 24 gigs per month of transfer allowance on my server. Something had to go.

    It's interesting to note, by the way, that this is something that RSS services like Newsblur or Feedly don't worry so much about, because the cost of each feed is spread across all subscribers. I didn't cost Google Reader as much traffic as Weir requires on its own.

  2. So I started unsubscribing. The majority of my original subscription list in Google Reader came from Paul Irish's front-end feed collection, and while I had already unsubscribed from the crazy people, it turns out most of the other blogs in that collection were dead. Even with 304 support added in, Weir was downloading a ton of RSS, only to discard much of it as being past the configured expiration date. I don't think this means blogging is a thing of the past, personally, but it's clearly down from its heyday in favor of social services, particularly (in the technical community) Google+.
  3. That said, using Feedburner seems to be a clear indication that you weren't that interested in blogging anyway, because it makes up a disproportionate amount of the abandoned or simply broken RSS feeds on my list. I suspect this is because it signals a lack of ownership. If you care about your feed, you maintain it yourself.
  4. Even with feeds that work, sometimes connections fail, or things break, just because it's the wild and crazy web out there. Taking that into consideration from the start, and tracking the last result for every feed, was one of the smarter things I did. I should probably be tracking more, but I'm too lazy to do real logging.
  5. Feeds are messy, and sanitization is hard. People inject all kinds of styles into their RSS. They include height and width attributes that don't play well with mobile. They put things into tables. They load scripts that I don't want to run, and images that I'd like to defer until their containing post is activated. Right now, I'm using document.implementation.createHTMLDocument() to make a functional (but "dead") DOM, then running a sanitization task over that, but figuring out that process--and making it watertight--has not been easy. (A sketch of the approach follows this list.)
  6. In fact, working with RSS--ostensibly a "machine-readable" format--tends to drive home just how porous the web can be, and how amazing it is that it works at all. Take the RSS date formats discovered by the developer of another reader web app, for example. I'm relatively isolated from the actual parsing, but there's code in Weir to work around buggy HTTP responses, missing feed information, and weird characters. Postel's Law has a lot to answer for, in my opinion.
  7. "Worse-is-better" really works for me as a personal development philosophy. My priority has been to get things running, no matter how badly--hacks get added to the .plan file and addressed later on, when I have time to figure out a graceful solution. This has kept my momentum high, and ensured that I don't get bogged down with architecture that I might not even need.
  8. The value of open-source software for this project can't be overstated. There's no way I could have built Weir on my own. In addition to Node, of course, I'm using a number of open-source modules for parsing feeds, handling two-factor auth, and storing posts in the database. Because of open source, I can patch those various libraries together, add my own code on top, and have a newsreader application that does everything I need. Weir doesn't stand on the shoulders of giants--it stands on the shoulders of countless other people, each giving a little bit back to the wider community.
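Since #5 is the gnarliest item on that list, here's a sketch of the dead-DOM sanitization approach it describes. The function name and the exact attribute list are illustrative--Weir's real code handles more cases:

    function sanitize(html) {
      // An inert document: it has no browsing context, so scripts never
      // execute and images are never fetched while we work on it.
      var doc = document.implementation.createHTMLDocument("");
      doc.body.innerHTML = html;
      // Throw out script and style elements entirely.
      var bad = doc.body.querySelectorAll("script, style");
      for (var i = 0; i < bad.length; i++) {
        bad[i].parentNode.removeChild(bad[i]);
      }
      var all = doc.body.querySelectorAll("*");
      for (var j = 0; j < all.length; j++) {
        var el = all[j];
        // Strip attributes that wreck mobile layouts or run code.
        el.removeAttribute("height");
        el.removeAttribute("width");
        el.removeAttribute("style");
        // Defer images: stash the source until the post is activated.
        if (el.tagName == "IMG" && el.hasAttribute("src")) {
          el.setAttribute("data-src", el.getAttribute("src"));
          el.removeAttribute("src");
        }
      }
      return doc.body.innerHTML;
    }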

July 9, 2013

Filed under: tech»web

Chromebook

I bought a Chromebook (the Samsung ARM model) a couple of weeks ago. It became increasingly obvious that the battery situation on my beloved Thinkpad was going from bad to worse, and trustworthy replacements are hard to find--especially on a budget. I still have lots of uses for the Thinkpad (it may end up serving as a media center if the XBox dies again), but it can't really be portable the way I need it to be when my classes start back up again.

I don't particularly want to get into the question of whether the Chromebook is a good solution for other people. I'm not other people. I can't tell you whether they'll like it. I think it covers a great deal (if not all) of the average person's computer usage, most of which is spent in a browser, but I don't have evidence to back that up, and I'm not going to treat my case as representative. What I can say is how it's working for me so far, specifically as a writer and a web programmer with a heavy emphasis on Linux tools. And the answer is that, for the most part, it's working very well.

My top priority was battery life and portability. I'm on a bus for two hours a day, and one of my goals this year has been to turn that into productive time by working on my textbook, lesson plans, or other projects, preferably with some juice left over for when I get off the bus and walk into my classroom at night. The Chromebook definitely has that covered. I'm not sure the battery meter is 100% accurate, but I tend to run out of energy before it does, and the ultrabook size is easy to carry or slip into a small Timbuk2 bag. The build quality seems solid, although I'm a bit uneasy with the idea of cheap, "disposable" laptops like this.

Second priority was a decent browser experience, since (like most people) I spend most of my time these days in a browser. Depending on the page, the Chromebook can be a little slow sometimes, but it handles most things the way you'd expect Chrome to do. It's easy to forget that it's basically a smartphone chip hooked up to a big screen. WebGL performance is surprisingly good: I loaded up the new Google Maps beta, and had no problems panning around a 3D textured version of downtown Seattle. Flash is built-in, so I'm not missing that (the new XBox Music site, like a lot of its competitors, still uses Flash for streaming audio). Tethering works flawlessly.

But my third priority (and still a must-have factor for me) was the ability to develop and write on the Chromebook itself. Being able to log into a server from the Chrome OS SSH client is fine, but a lot of the time I still don't have a network connection. If I can't work locally using the tools I'm used to, it's useless to me.

There's a thing called Crouton that installs a full, semi-sandboxed Linux distribution alongside Chrome OS. The two operating systems share a kernel, but have separate sets of binaries and processes. The result is a complete Ubuntu server stack that I can dip into whenever I need to work offline, including Git, NodeJS, PostgreSQL, and all the other command-line utilities I've gotten used to having. Crouton's totally supported, by the way: you need to be in developer mode, but that's just a keystroke away.

You can even set Crouton to run the graphical interface for the second OS, toggling between them, but considering how much I hate the Linux GUI situation, I haven't bothered. Chrome OS works nicely to manage my terminal and browser windows--the Aura interface that they've added lately does a decent impersonation of Windows 7, including an improved version of Aero Snap. There are some quirks--the dedicated "switch windows" button doesn't seem to quite work consistently--but it's already the best Linux window manager I've used.

The weirdest thing as a developer is the lack of full-powered editors running within Chrome itself. Cloud9 doesn't run on ARM yet, and Brackets isn't available as a packaged app. I'm personally fine using a terminal-based editor--I wrote most of Weir using Nano, and I'm getting more comfortable with vim--but it surprised me that none of the web-based editors have made a serious effort to run on a web-based platform.

The second-weirdest thing is the way Chrome OS distinguishes between "bookmarks" and "applications," considering that (for the most part) they're the same thing. There is a legitimate set of "packaged apps" that get more privileged API access, but most of the products in the Chrome "web store" are just links to web sites, so why can't I add bookmarks (such as the aforementioned XBox Music site, which I prefer to run in its own, chromeless window) to the Chrome OS launcher? I've been using this method to build single-serving Chrome Apps for the few sites where I want this ability, but it really ought to be built-in, and (considering that all you need is a JSON manifest and a .png file) I have a hard time understanding why it's not.
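For the record, one of those single-serving apps really is just a manifest and an icon. A hypothetical manifest.json for wrapping the XBox Music site might look something like this, using the hosted-app format Chrome supported at the time (the URL and icon filename are placeholders):

    {
      "name": "XBox Music",
      "version": "1.0",
      "manifest_version": 2,
      "app": {
        "urls": ["http://music.xbox.com/"],
        "launch": { "web_url": "http://music.xbox.com/" }
      },
      "icons": { "128": "icon-128.png" }
    }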

Oddities aside, though, the Chromebook is a great little machine for my needs so far. If I edited photos/audio/video on the go, or wanted a portable gaming laptop, I'd probably feel differently. On the other side of the power spectrum, if I didn't need a keyboard, I'm sure an Android tablet would cover a lot of my needs. My work, however, is almost entirely centered on text-editing in a web-friendly (preferably Linux or Windows) environment, and Chrome OS handles that gracefully and without complaint. It's surprisingly close to being useful even without Crouton. I'm excited to see whether (between Chrome OS and Firefox OS) the web platform can become legitimately self-sufficient in the future.

June 12, 2013

Filed under: tech»web

Outward Vectors

I'm happy to say that Weir is now in a beta-ready state. You'll need a server capable of running NodeJS and PostgreSQL (for now), and you'll need an OPML file to populate the feed list (Google Takeout will accommodate you nicely with a subscriptions.xml if you're fleeing Reader). But if you pull from the repo and then follow the instructions in the readme file, everything should be in a good-enough state to fetch, read, and mark stories as read. Feedback would be awesome.

The front end for Weir is written using AngularJS, because it's supposed to be great for rapid development and I'm all about failing fast on this project. Indeed, getting the client-side application up and running has gone very quickly, but Angular itself takes some adjustment, especially if you're used to other JavaScript frameworks.

I'm not convinced that this is a bad thing. Predictions are a mug's game, but I suspect that future libraries are going to look a lot more like Angular than its competitors. Before I can explain why, we have to first look at the way client-side JavaScript has been traditionally organized, and then see how Angular works differently.

JavaScript MVC libraries, from Backbone to Ember, find themselves confronted with a language that's very different from the languages where Model-View-Controller philosophies evolved:

  • JavaScript has no privacy, and (until recently) no getters and setters. Between the two, it's hard to know if a given object has changed since the last redraw.
  • The DOM is not designed to be strongly linked with JavaScript data structures.
  • Multi-level inheritance of values is fine, but inheritance of behavior is a mess.
Despite these quirks, libraries are still designed as if JavaScript were similar to Smalltalk. They work around the differences by using manual getter and setter functions on Model classes, registering for DOM events inside View classes, and re-rendering with templates when one or the other changes.

This works--and is certainly a million times better than writing jQuery spaghetti code--but it's not what you'd call "clean." For example, here's some code written in an imaginary (but typical) library, just to update a simple list view:

    var Song = new Vertebrae.Model.extend({
      title: { value: "" },
      listens: { value: 0 },
      file: { value: "" },
      starred: { value: false }
    });

    var SongView = new Vertebrae.View.extend({
      render: function() {
        var model = this.get("model");
        var el = this.get("element");
        el.find(".can-template").html(
          templates.song(model.toJSON()));
        var rev = model.get("review");
        el.find(".cannot").val(rev);
      }
    });

That is a lot of boilerplate just to display a song (and it doesn't even include the templates, or loading the actual data). Heavy object classes are necessary so that the framework can be notified of changes--hence all the extend and get calls, as well as the awkward way of defining default values. In places, we can at least use templates, but we're still having to place them manually into the DOM. It's like a terrible parody of Java's worst bits glued onto jQuery.

In contrast, Angular uses regular JavaScript objects, written with normal JavaScript syntax, for its models. There are no getter or setter functions, unless you really want them: change an object, and if it is attached to the $scope variable, it will be scanned for changes automatically. And while you're not discouraged from using inheritance, you're not really encouraged to do so, either. Angular uses prototypal inheritance to manage values under the hood, but its developer-facing APIs tend to bear more resemblance to AMD or CommonJS modules. It feels like JavaScript, in other words.
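Here's the same song model from the earlier example, rewritten as it might look in Angular--no classes, no getters, just a plain object hung off $scope (the controller and property names are mine, for illustration):

    function SongController($scope) {
      // A plain object: no Model class, no extend(), no setter functions.
      $scope.song = { title: "", listens: 0, file: "", starred: false };

      $scope.star = function() {
        // Mutate the object directly; Angular's next digest cycle notices
        // the change and updates any bindings (e.g. {{song.starred}}).
        $scope.song.starred = true;
      };
    }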

On the other hand, Angular is all about augmenting HTML: although templates are available to ease re-use, an Angular page actually gets marked up using custom tags and attributes, then compiled and linked into components that respond instantly to the application's backing data. This is very forward-thinking--in fact, it's not dissimilar from the Extensible Web Manifesto, and I can dig that--but it definitely comes across as "magic" the first time that you use it. After years of logic-less template engines being popular, Angular stakes out a very different position.

Normally, I'm not a fan of magic in programming: it's hard to debug what you don't understand. In this case, the novelty of Angular's approach--and its undeniable effectiveness--overcame my skepticism, to the point where it's really grown on me. Using Angular makes me much more aware of the boilerplate that's required by the traditional MVC frameworks I use in my day job. Simple tasks require less code, and I don't feel like I'm fighting my way through thick layers of abstraction.

If there's a place where Angular still feels awkward, it's anything to do with the DOM. Angular will let you get access to elements of your page, but only reluctantly--it would really prefer that you only alter your model data and let the DOM react. Most of the time, this is fine: the less page manipulation I have to do, the happier I am. But there are some times when it is inevitable, such as when I'd like to perform deferred image loading, and those are definitely the ugliest parts of Weir's client code so far.

But here's the rub: if the web ecosystem teaches us anything, it's that you can always make a simple framework faster and more powerful, but people won't use an API that's clumsy and tiresome (see also: jQuery vs. pretty much everything else). Yes, DOM manipulation isn't great in Angular--they'll have to write some new directives, to cover the edge cases and holes. Yes, the object polling that Angular does is kind of scary, but browsers will add features like Object.observe() to make it faster overnight. Meanwhile, nothing's going to make those heavy Model and View classes any more fun to use.
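For reference, the observe API in question looks something like this. As of mid-2013 it's a proposal available behind a flag in Chrome, so the details may shift:

    // song is a plain object, just like Angular's models.
    var song = { title: "Example", listens: 0 };

    // Register for change notifications. The callback fires asynchronously
    // with a batch of change records--the same information Angular's digest
    // loop currently has to discover by re-scanning objects.
    Object.observe(song, function(changes) {
      changes.forEach(function(change) {
        console.log(change.name, "is now", song[change.name]);
      });
    });

    song.listens++; // eventually logs: listens is now 1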

A lot of time has been (and still is being) spent in the JavaScript community trying to make the language work like something more familiar. That's how you end up with CoffeeScript, or YUI, or all these MVC frameworks. Those projects have a place, and there are certainly times when I want something familiar, but it's also good to see tools (like Angular, Node, or D3) that are built around JavaScript's weirdness. There hasn't been an oddball language with a profile this high in a long time, so let's shake things up while we've got the chance.

May 30, 2013

Filed under: tech»coding

Project Seymour

A month from now, Google will shut down Reader, leaving RSS addicts in the lurch. I suspect this will be both more and less disruptive than anticipated: expect replacement services to go through another set of growing pains, but RSS isn't exactly a high lock-in situation, and most people will find a new status quo fairly quickly.

I am not eager to move from one hosted service to another (once burned, twice shy), nor do I want to go back to native applications that can't share progress, so as soon as the shutdown was announced I started working on a self-hosted RSS reader. I applied the same techniques I'd used for Big Fish Unlimited: an easy-to-configure router, a series of views talking to the database only through model classes, and heavy use of closures for dependency management and callbacks. I built a wrapper around PHP's dismal cURL library. It was a nice piece of architecture.

It also bogged down very, very quickly. My goal was a single-page application with straightforward database queries, but I was building the foundation for a sprawling, multi-page site. Any time I started to dip in and add functionality, I found myself frustrated by how much plumbing I needed in order to do it "the right way." I was also annoyed by the difficulty of safely requesting a large number of feeds in parallel in PHP. The language just isn't built for that kind of task, even with the adaptations and improvements that have been pasted on.

This week I decided to start over, this time using Node.js and adopting a strict "worse is better" philosophy. When I use Reader, 99% of my time is spent in "All Items," pressing the spacebar (or, on mobile, clicking "Mark Items as Read") to advance the stream. So I made that functionality my primary concern, and wrote only as much as I needed (both in terms of code size and elegance) to make it happen. In two days, I've gotten farther than I had with the PHP version, and I'm much happier with the underlying platform as well--Node is unsurprisingly well suited to firing off tens and hundreds of concurrent requests.
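A sketch of what that concurrency looks like with nothing but Node's standard http module--Weir's real fetch code does more, but the shape is the same, and the URLs here are placeholders:

    var http = require("http");

    var feeds = [
      "http://example.com/one.xml",
      "http://example.com/two.xml"
    ];
    var pending = feeds.length;

    feeds.forEach(function(url) {
      // Each get() returns immediately; all the requests are in flight
      // at once, and the callbacks fire as responses come back.
      http.get(url, function(res) {
        var body = "";
        res.setEncoding("utf8");
        res.on("data", function(chunk) { body += chunk; });
        res.on("end", function() {
          console.log(url, "->", body.length, "characters");
          if (--pending == 0) console.log("all feeds fetched");
        });
      }).on("error", function(err) {
        console.error(url, "failed:", err.message);
        if (--pending == 0) console.log("all feeds fetched");
      });
    });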

I've just posted the work-in-progress code for the application, which I'm calling Weir (just barely winning out over "Audrey II"), to a public GitHub repo. It is currently ugly, badly documented, and patchy in places. The Angular code I'm using for the front end is obviously written by someone with very little experience using the library. There's lots of room for improvement. On the other hand, my momentum is very good. By next week, I expect Weir will be good enough for me to dogfood it full time, and at that point improvements will come naturally whenever I need to smooth out the rough edges.

I like this way of working--"worse is better"--quite a bit. It's not always pretty, but it seems effective so far. It also fits in well with my general coding style, which is (perhaps unsurprisingly) on the left-ish side of Steve Yegge's developer politics. I like elegance and architecture as much as the next person, but when it all comes down to it, there's no point in elegant code that never gets used.

Writing my own Reader alternative is also proving educational. The conventional wisdom is that RSS readers benefit greatly from running at scale: operations like feed retrieval can be performed once for all subscribers, spreading the costs out. The flip side is that you're at the mercy of the server bot for when you get updates. High-frequency feeds, such as politics or news, get batched up instead of coming in as they're posted. I'm also able to get a lot more feedback on which feeds are dead, which came as a surprise: Reader just swallowed the errors whole. All in all, I doubt the experience will be any worse.

Currently, Weir isn't much good for public consumption. I've made a sanitized copy of my config file in the repo, but there's no setup script for the database, and no import step for getting your subscriptions loaded up. I hope to have that ready soon, and the code is licensed under the GPL, so pull requests and feature suggestions are welcomed as it becomes usable for other people.

May 22, 2013

Filed under: tech»education

Equal Opportunity

Last Friday, I gave a short presentation for a workshop run by the SCCC Byte Club called "Technical Interview Mastery for Women." Despite the name, it was attended by both men and women. Most of my advice was non-gender specific, anyway: I wanted to encourage people to interview productively by taking into account the perspective from the other side of the table, and seeing the process more as a dialog instead of a confrontation.

Still, during the question and answer period, several people asked about being women in the interview process. Given that my co-presenter has many years more experience being a woman, I deferred to her whenever possible, but I did chime in when the conversation turned to interaction styles. One participant said she was ignored if she wasn't assertive enough, but was then considered unpleasant if she stuck up for herself--what could she do about this?

It's one thing, I said, to suggest ways that women should adapt their communications for a male-dominated workplace--that kind of pragmatic code-switching may well do the trick. But I think it's unfair to put all the burden on women to adapt to men. There needs to be a way to remind men that it's their responsibility to act reasonably.

The problem is that it's often difficult to have that conversation without falling afoul of the same double-standard that says women in the workplace shouldn't be too loud. Complaining about sexism tends to raise hackles--meaning that the offending statement not only goes uncorrected, but dialog gets shut down. I don't know that I have any good solutions to that, but I suggested finding ways to phrase the issue akin to Jay Smooth's presentations on How To Tell People They Sound Racist. I like to think that most people aren't trying to be sexist, they're just not very self-aware. This may be a faulty assumption.

There are still people who argue that the tech industry isn't sexist--that women just aren't as inherently good at coding (this is often hidden behind comments that it's a "meritocracy"--in which, conveniently, women somehow just haven't had merit). From my point of view, I don't see any way that could be correct. My best JavaScript students are split 50/50 between men and women (so are the worst students). I trained equal numbers of men and women on the multimedia team at CQ (and probably would have given the effectiveness prize to the women in a pinch). Moreover, I've never seen any evidence that the skills I use in day-to-day work--spatial reasoning, some basic math, navigating abstraction--are gender-exclusive (or, indeed, required for all programming: the job of a web programmer is markedly different from a systems coder or security investigator, and yet those also suffer from serious inequality issues).

My talk at the workshop was specifically about interviewing, but obviously this is an issue that goes beyond hiring. Something is happening between the classroom and the workplace that causes this disparity. We have a word for this--sexism--regardless of the specific mechanics. And I would love to have more discussions of those specifics, but it's like climate change: every time there's a decent conversation in a public forum about solutions, it gets derailed by people who insist loudly that they don't think there's a problem in the first place.

That said, assuming that people just don't realize when they've done something wrong, there are doubtless ways to address the topic without defensiveness. If the description "sexist" derails, I'm personally happy to use other terms, like "unprofessional" or "rude"--I'm just embarrassed that I (and others) need to resort to euphemism. We need to change the culture around this discussion--to make it clear that we (both men and women) take this seriously, including respectful responses to criticism. We can do better, and I'd like to be able to tell future workshops that we're trying.

May 16, 2013

Filed under: tech»web

Why the Web Wins

Last year, Google spent most of its I/O conference keynote talking about hardware: Android, Glass, and tablets. This year, someone seems to have reminded Google that they're a web company, since most of the new announcements were all running in a browser, and in many cases (like the photo editing and WebGL maps) pushing the envelope for what's possible. As much as I like Android, I'm really happy to see the web getting some love.

There's been a drumbeat for several years now, particularly as smartphones got more powerful, to move away from web apps, and Google's focus on Android lent credence to that perspective. A conventional wisdom has emerged: web apps were a misstep, but we're past that now, and it'll be all native from here on out. I couldn't disagree more, and Google's clearly staking its claim as well.

The reason the web wins (such that anything will) is not, ultimately, because of its elegance or its purity (it's not big on either) but because of its ubiquity. The browser is the worst cross-platform API except for all the other ones, and (more importantly) it offers persistence. I can turn on any computer with an Internet connection and have near-instant access to files and applications without installing anything or worrying about compatibility. Every computer is my computer on the web.

For context, there was a time in my high school years when Java was on fire. As a cross-platform language with a network-savvy runtime, it was going to revive thin clients: I remember talking to people about the idea that I could log into any computer and load my desktop (with all my software) over the Internet connection. There wouldn't be any point to having your own dedicated hardware in a world like that, because you'd just grab whatever was handy and use it as a host. It was going to be like living in a William Gibson novel.

Java ended up being too heavy and too slow to make that actually happen. Instead, this weird combination of JavaScript, HTML, and CSS took over, like weeds springing up and somehow forming a fully-furnished apartment block. The surprise was that the ad-hoc web platform turned out to be competitive with Java on the front-end. Even though it's meant to be a document viewer, the browser is pretty good at building UI, and it's getting a lot better. I've been creating some web apps lately without worrying about backwards compatibility, and it's been remarkably pleasant, both as a developer and a user.

I don't believe that native programs will ever entirely go away. But I do think we see web applications spreading their tentacles over time, because if something is possible in the browser--if it's a decent user experience, plus it has the web's advantages of instant, no-install launch and sharing across devices--there's not much point in keeping it native. It's better to have your e-mail on any device. It's better for me to do presentations from a browser, instead of carrying a Powerpoint file around. It's better to keep my RSS reader in the cloud, instead of tying its state to individual machines. As browsers improve, this will be true of more and more applications, just as it was true of the Java applets that web technology replaced.

Google and I disagree about where those applications should be hosted, of course. Google thinks it should run them (which for many people is perfectly okay), and I want to run them myself. But that's a difference of degree, not principle. We both think the basic foundation--an open, hackable, portable web--is an important priority.

I like to look at it in terms of "design fiction"--the dramatic endpoint that proponents of each approach are aiming to achieve. With native apps, devices themselves are valuable, because native code is heavy: it takes time to install, it stores data locally, and it's probably locked to a given OS or architecture. Web apps don't give us the same immediate power, but their ultimate goal is a world where your local hardware doesn't matter--walk up to any web-capable surface, and your applications are there. Software in the web-centric viewpoint follows you, not your stuff. There are lots of reasons why I'm bullish on the web, but that particular vision is, for me, the most compelling one.

March 20, 2013

Filed under: tech»education

ASTIGBY

Working on my textbook continues to be a great opportunity to write interesting little snippets of interactive JavaScript. Today I'd like to draw your attention to a couple of new modules for doing annotated source walkthroughs that I'm calling Timelapse. They're located in the repo under js/meta/TimeLapse and js/meta/TLPlayer. There's also a demo history file located here.

There are lots of tools for doing diffs between two source files, but I'm not aware of any source control system (save Perforce, which we use at ArenaNet) that does a timeline view of all revisions since a file was first checked in, and none that stores the entire revision history in a single, web-friendly format. This is a shame, because my goal for several parts of the textbook is to be able to "replay" the process of writing a script, to show how it develops from a few lines of simple code into larger and more functional units like functions and prototypes. It's possible that someone else has done something like this, but a cursory Google search couldn't turn it up, so I made my own.

The syntax for the files that Timelapse uses is designed to be similar to a standard diff file, but not to collide with JavaScript, for easy parsing. It's a line-by-line comparison format with two main types of line tags (a short sample follows the list):

  • @x,y@ source line: In this case, the tagged line exists from revision x to revision y. Both x and y are optional--x defaults to the first revision, and leaving out y will mark the line as included through the end of the history.
  • @@c:x; comments @@: This tag marks a multiline comment for a single revision x. Everything between the semicolon and the closing @@ will be loaded but not shown with the rest of the source.
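To make that concrete, here's a tiny hypothetical history file in this format: three revisions of a function, where the return line is rewritten in revision 3 and a comment is attached to that revision.

    @1,@ function add(a, b) {
    @1,2@   return a + b;
    @3,@   return Number(a) + Number(b);
    @1,@ }
    @@c:3; Coerce the arguments, so string input doesn't concatenate. @@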

You don't have to write these files by hand, which is good, because they can get pretty nightmarish. Instead, I've written an authoring tool for putting in multiple revisions (or importing them, using the HTML5 file API), commenting them, and exporting them. Using Ace means the editor is friendly and includes source-highlighting, which is great. You also don't have to worry about writing an output parser: the TLPlayer module is not quite complete, but it's done enough to wire it up to a UI and let people flip through the file, with new lines highlighted in the output.

If you'd like to see a demo, I've started using it for the chapter on writing functions. My goal is to put at least one timelapse at the end of each chapter, so that readers can see the subject matter being used to build at least one real-world script. By doing these as revision histories, I'm hoping to avoid the common textbook "dump a huge source example into the chapter" syndrome. I know when I see that, my eyes glaze over--I don't see any reason it's any different for my students.

Although I don't have a license on the textbook files yet (they'll probably be MIT-licensed in the near future), you're welcome to use these two modules for your own projects, and feel free to submit patches (the serialization, in particular, could probably use some love from someone with a stronger parsing background). I'd love to see if this is useful for anyone else, and I'm hoping it will help make this textbook project much friendlier to new developers.
