
April 14, 2016

Filed under: tech»coding

Calculated Amalgamation

In a fit of nostalgia, I've been trying to get my hands on a TI-82 calculator for a few weeks now. TI BASIC was probably the first programming language in which I actually wrote significant amounts of code: although a few years later I'd start working in C for PalmOS and Windows CE, I have a lot of memories of trying to squeeze programs for speed and size during slow class periods. While I keep checking Goodwill for spares, there are plenty of TI calculator emulation apps, so I grabbed one and loaded up a TI-82 ROM to see what I've retained.

Actually, TI BASIC is really weird. Things I had forgotten:

  • You can type in all-caps text if you want, but most of the time you don't, because all of the programming keywords (If, Else, While, etc.) are actually single "character" glyphs that you insert from a menu.
  • In fact, pretty much the only things you type out manually are variable names, of which you get 26 (one for each letter). There are also six arrays (max length 99), five two-dimensional matrices (limited by memory), and a handful of state variables you can abuse if you really need more. Everything is global.
  • Variables aren't stored using =, which is reserved for equality tests, but with a left-to-right arrow operator: value → dest. I imagine this clears up a lot of ambiguity in the parser.
  • Of course, if you're processing data linearly, you can do a lot without explicit variables, because the result of any statement gets stored in Ans. So you can chain a lot of operations together as long as you just keep operating on the output of the previous line.
  • There's no debugger, but you can hit the On key to break at any time, and either quit or jump to the current line.
  • You can call other programs and they do return after calling, but there are no function definitions or return values other than Ans (remember, everything is global). There is GOTO, but it apparently causes memory leaks when used (thanks, Dijkstra!).

I'd romanticized it over time — the self-contained hardware, the variable-juggling, the 1-bit graphics on a 96x64 screen. Even today, I'm kind of bizarrely fascinated by this environment, which feels like the world's most cumbersome register VM. But loading up the emulator, it's obvious why I never actually finished any of my projects: TI BASIC is legitimately a terrible way to work.

In retrospect, it's obviously a scripting language for a plotting library, and not the game development environment I wanted it to be when I was trying to build Wolf3D clones. You're supposed to write simple macros in TI BASIC, not full-sized applications. But as a bored kid, it was a great playground, and the limitations of the platform (including its molasses-slow interpreter) made simple problems into brainteasers (it's almost literally the challenge behind TIS-100).

These days, the kids have it way better than I did. A micro:bit is cheaper and syncs with a phone or computer. A Raspberry Pi is a real computer of its own, as is the average smartphone. And a laptop or Chromebook with a browser is miles more productive than a TI-82 could ever be. On the other hand, they probably can't sneak any of those into their trig classes and get away with it. And maybe that's for the best — look how I turned out!

March 22, 2016

Filed under: tech»web

ES6 in anger

One of the (many) advantages of running Seattle Times interactives on an entirely different tech stack from the rest of the paper is that we can use new web features as quickly as we can train ourselves on them. And because each news app ships with an isolated set of dependencies, it's easy to experiment. We've been using a lot of new ES6 features as standard for more than a year now, and I think it's a good chance to talk about how to use them effectively.

The Good

Surprisingly (to me at least), the single most useful ES6 feature has been arrow functions. The key to using them well is to restrict them only to one-liners, which you'd think would limit their usefulness. Instead, it frees you up to write much more readable JavaScript, especially in array processing. As soon as it breaks to a second line (or seems like it might do so in the future), I switch to writing regular function statements.


//It's easy to filter and map:
var result = list.filter(d => d.id).map(d => d.value);

//Better querySelectorAll with the spread operator:
var $ = s => [...document.querySelectorAll(s)];

//Fast event logging:
map.on("click", e => console.log(e.latlng));

//Better styling with template strings:
var translate = (x, y) => `translate(${x}px, ${y}px);`;

Template strings are the second biggest win, especially as above, where they're combined with arrow functions to create text snippets. Having a multiline string in JS is very useful, and being able to insert arbitrary values makes building dynamic popups or CSS styles enormously simpler. I love writing template strings for quick chunks of templating, or embedding readable SQL in my Node apps.

Despite the name, template strings aren't real templates: they can't handle loops or conditional logic, and the interface for using "tagged" strings is cumbersome. If you're writing very long template strings (say, more than five lines), it's probably a sign that you need to switch to something like Handlebars or EJS. I have yet to see a "templating language" built on tagged strings that didn't seem like a wildly frustrating experience, and despite the industry's shift toward embedded DSLs like React's JSX, there is a benefit to keeping different types of code in different files (if only for widespread syntax highlighting).
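
For the curious, a "tag" is just a function that receives the literal's string chunks along with the interpolated values. Here's a toy sketch of the kind of thing people build on top of them: an HTML-escaping tag, where escapeHTML (and the url and title variables) are assumed to be defined elsewhere.

//a tag gets the raw string chunks and the interpolated values
var html = function(strings, ...values) {
  return strings.reduce(function(output, chunk, i) {
    //append each chunk, then the (escaped) value that follows it, if any
    return output + chunk + (i < values.length ? escapeHTML(values[i]) : "");
  }, "");
};

//usage looks like a normal template string, minus the parentheses
var link = html`<a href="${url}">${title}</a>`;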

The last features I've really embraced are destructuring and the new object literal shorthand. They're mostly valuable for cleanup, since all they do is cut down on repetition. But they're pleasant to use, especially when parsing text and interacting with CommonJS modules.



//Splitting dates is much nicer now:
var [year, month, day] = dateString.split(/\/|-/);

//Or getting substrings out of a regex match:
var re = /(\w{3})mlb_(\w{3})mlb_(\d+)/;
var [match, away, home, index] = gameString.match(re);

//Exporting from a module can be simpler:
var x = "a";
var y = "b";
module.exports = { x, y };

//And imports are cleaner:
var { x } = require("module");

The Bad

I've tried to like ES6 classes and modules, and it's possible that one day they're going to be really great, but right now they're not terribly friendly. Classes are just syntactic sugar around ES5 prototypes — although they look like Java-esque class statements, they're still going to act in surprising ways for developers who are used to traditional inheritance. And for JavaScript programmers who understand how the language actually works, class definitions boast a weird, comma-less syntax that's sort of like the new object literal syntax, but far enough off that it keeps tripping me up.
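
To make the "sugar" point concrete, here's a minimal sketch of the same object written both ways; the class version compiles down to essentially the prototype version:

//ES6 class syntax (note the lack of commas between methods)
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  distance() {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  }
}

//the ES5 equivalent, which is what actually happens under the hood
var PointES5 = function(x, y) {
  this.x = x;
  this.y = y;
};
PointES5.prototype.distance = function() {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};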

The turning point for the new class keyword will be when the related, un-polyfillable features make their way into browsers — I'm thinking mainly of the new Symbols that serve as feature flags and the ability to extend Array and other built-ins. Until that time, I don't really see the appeal, but on the other hand I've developed a general aversion to traditional object-oriented programming, so I'm probably not the best person to ask.

Modules also have some nice features from a technical standpoint, but there's just no reason to use them over CommonJS right now, especially since we're already compiling our applications during the build process (and you have to do that, because browser support is basically nil). The parts that are really interesting to me about the module system — namely, the configurable loader system — aren't even fully specified yet.

New discoveries

Most of what we use on the Times' interactive team is restricted to portions of ES6 that can be transpiled by Babel, so there are a lot of features (proxies, for example) that I don't have any experience using. In a Node environment, however, I've had a chance to use some of those features on the server. When I was writing our MLB scraper, I took the opportunity to try out generators for the first time.

Generators are borrowed liberally from Python, and they're basically constructors for custom iterable sequences. You can use them to make normal objects respond to language-level iteration (i.e., for ... of and the spread operator), but you can also define sequences that don't correspond to anything in particular. In my case, I created a generator for the calendar months that the scraper loads from the API, which (when hooked up to the command line flags) lets users restart an MLB download from a later time period:


//feed this a starting year and month
var monthGen = function*(year, month) {
  while (year < 2016) {
    yield { year, month };
    month++;
    if (month > 12) {
      month = 1;
      year++;
    }
  }
};

//generate every month from January 2008 through December 2015
var months = [...monthGen(2008, 1)];

That's a really nice code pattern for creating arbitrary lists, and it opens up a lot of doors for JavaScript developers. I've been reading and writing a bit more Python lately, and it's been amazing to see how much a simple pattern like this, applied language-wide, can really contribute to its ergonomics. Instead of the Stream object that's common in Node, Python often uses generators and iteration for common tasks, like reading a file line-by-line or processing a data pipeline. As a result, I suspect most new Python programmers need to survey a lot less intellectual surface area to get up and running, even while the guts underneath are probably less elegant for advanced users.

It surprised me that I was so impressed with generators, since I haven't particularly liked Python very much in the past. But in reading the Cookbook to prep for a UW class in Python, I've realized that the two languages are actually closer together than I'd thought, and getting closer. Python's class implementation is actually prototypical behind the scenes, and its use of duck typing for built-in language features (such as the with statement) bears a strong resemblance to the work being done on JavaScript Promises (a.k.a. "then-ables") and iterator protocols.

It's easy to be resistant to change, especially when it's at the level of a language (computer or otherwise). I've been critical of a lot of the decisions made in ES6 in the past, but these are positive additions on the whole. It's also exciting, as someone who has been working in JavaScript at a deep level, to find that it has new tricks, and to stretch my brain a little integrating them into my vocabulary. It's good for all of us to be newcomers every so often, so that we don't get too full of ourselves.

December 28, 2015

Filed under: tech»web

Let's not

Right now you can access my portfolio over a secure, encrypted connection, thanks to Let's Encrypt. Which is pretty cool! On the other hand, if nginx restarts this week, it'll probably crash on a bad config value, temporarily disabling all my public-facing websites. This has been emblematic of my HTTPS experience in general: a mix of triumphs and severe configuration mishaps.

A little background: in order to serve a website over a secure connection, you need a digital certificate to encrypt communication with the browser. You can generate these certificates yourself, but that's really only good for personal use. The self-signed cert has to be manually installed on each machine that accesses the server; otherwise the browser will throw up a big, ugly warning screen. The alternative is to buy a certificate from a "trusted authority," most of which are not particularly trustworthy or authoritative, but it'll get you a green lock icon in the URL bar. Purchased certs tend to be either expensive or a hassle or both.

After the Snowden leaks, there was a lot of interest in encrypting all web traffic, which meant bypassing the existing certificate authority protection racket run by Symantec et al. Mozilla and some other organizations got together and started Let's Encrypt, with the goal of making trusted certificates free and easy. I figure they're halfway there: I didn't pay anyone for the cert, at least.

There's an official client for the service, but it only works for Apache and it's kind of hefty. My server is set up in an unsupported (but still pretty standard) configuration: I run nginx as a reverse proxy in front of Apache (for PHP scripts) and Node (for various apps, including Weir), both of which I'd like to be secured. So I used acme-tiny instead, which basically just talks to the cert API and is small enough that I could read and understand the whole thing. I wrote a shell script to wrap it up and automate things. Automation is important, because unlike paid certificates, these are only good for 90 days, so you need a cron job set to run every month or so to renew them.

Setting all this up wasn't an easy process. The acme-tiny script is well-written, but it has bugs on the version of Python that comes with CentOS. Then I had to set up nginx to use the certificates manually. My webmail got locked into an infinite redirect once I moved my self-signed cert off of Apache and out to the proxy. And the restart crash? Turns out that Let's Encrypt is rate-limited on a per-domain basis, and I didn't back up the current certificate before I hit the rate limit, so my update script overwrote it with an empty version. Luckily, nginx caches certs and won't restart if it detects a bad config, so I'm safe as long as it can outlast the seven-day rate-limit window (it probably will: it's been up 333 days so far, after all).

Without literally years of server admin experience, I'm not sure I would have made it through these issues. And as I mentioned, my system is pretty standard — there's no load-balancer, no CDN, and I don't need to host third-party content. I also don't have any business that gets lost if anything is busted and the certificate expires in March. If I were, say, an IT department responsible for a high-traffic site, I'd be a lot more cautious about moving everything over to HTTPS, either through Let's Encrypt or a paid option.

Ultimately, the news industry and other sites are going to have to follow the lead of the Washington Post, even if it takes a while. Even apart from the security benefits it carries, browsers have locked new features (Service Worker, for example) behind HTTPS, and are moving old features behind it as well (geolocation is going to be the biggest disruption there). If you want to develop fast websites in the future (assuming that's something news product management cares about, which is... questionable), and especially if you want to create rich news applications, you're going to have to be encrypted.

In my case, I wanted to get a head start on developing with new browser features (a Service Worker would clean up a lot of Weir code), so it's worth the hassle. And we will continue to push these boundaries on the Seattle Times interactives team, since we've moved our S3 hosting to HTTPS (the rest of the site will follow eventually).

But I think there's a lot of tension between where we want to be, as a news industry, and where it's possible for us to be right now. Although I've seen people calling for incentives to change it (such as requiring HTTPS for news grants), the truth is that it just isn't that simple. News sites are often built in a baroque, overcomplicated set of layers — the Seattle Times, for example, currently sits behind a CDN, several instances of Varnish, some reverse proxies, and a load balancer, mostly due to a lot of historical baggage. Changing this to run securely is going to be a big process, even for a company of our size (maybe because of our size). I can't imagine the hassle for local papers that might have little or no IT support. It won't happen overnight, and Let's Encrypt hasn't done anything to change that yet.

In the meantime, I think it's worth stepping back and asking what we really want out of a digital news industry, because sometimes it's hard to maintain perspective from in the trenches. Is it important that readers be able to see our sites securely, free from worries that third parties are snooping or altering what they see? Sure, that's important. Is it in the top three things that Americans need from local news, above problems like "a sustainable revenue model" and "a CMS that doesn't actively fight against the newsroom?" Probably not. Given a choice between a cryptographically-secure media and a diverse, sustainably-funded media, I'm personally going to take the latter every time.

December 7, 2015

Filed under: tech»web

How We're Fast

Over the Thanksgiving holiday, when I wasn't busily digesting as much cornbread stuffing as I could eat, I spent some time running WebPageTest against various projects that the Seattle Times Interactive team has built. The news industry as a whole may not care about speed, but I do, and I want our pages as fast as possible — especially the ones that are embedded in the regular CMS via responsive frames.

After all the testing, I'm generally pleased by how our stuff stacks up, especially when compared against the rest of the site. We have some advantages, of course: our pages typically have fewer ads, and we can strip down the page for maximum efficiency. But it's also the result of a lot of hard work on our news app template, ensuring that every project comes with smart decisions built in. I genuinely think that all news pages could be this fast, so it's worth talking about how we've made it happen, especially for other news organizations that use a similar flat-file approach to their interactives.

Browserify with care

We use Browserify to package up our JavaScript, because we're not savages, and you need some sort of module system for JavaScript these days. Browserify builds all our scripts into a single file, which is important for high-latency connections (which means most cellular networks, even on 4G). We also make sure to load that bundle file with the async attribute at the bottom of the page, so that it won't block rendering.

All of that is pretty standard best practice, but we've also learned that Browserify can be dangerous if you're not careful. A lot of NPM modules are published with the unminified, debug version of the library as the default export from the module. Angular in particular is bad about this: running require("angular") on its own will load a file filled with comments and documentation, totalling more than a megabyte in size (even after gzip, it's still more than 200KB). That's huge!

As a result, one of our production checklist items is to make sure that we are loading the minified version of any external libraries. We also use the browser property in our package.json file to alias common libraries to their minified versions, so that when we require Angular, jQuery, or Leaflet, it automatically defaults to the smallest file.
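
For example, aliasing in package.json looks something like this (the paths here are placeholders, since they vary by library and version):

{
  "browser": {
    "angular": "./node_modules/angular/angular.min.js",
    "jquery": "./node_modules/jquery/dist/jquery.min.js"
  }
}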

Gzip on S3

Like a lot of newsroom developers, my team hosts files on Amazon S3, mostly because it's cheap and reliable. People like to think about S3 as though it's just a normal, hierarchical flat-file server, like Apache or nginx, but it's not. S3 is really a key-value store: you put in a path, and it spits back a prerecorded reply, including the headers.

If you think of S3 as a server, you'll expect it to do a bunch of things that it doesn't actually do. For example, it doesn't set a cache expiration date, and it doesn't know about content types. It also doesn't understand about Gzip compression, so it'll merrily serve your files in their uncompressed form, making them way bigger than they need to be, even if the browser requests the compressed version.

We get around this by running a compression stage on any text-based file during deployment, and setting the headers for the stored object to match. This does mean that theoretically, a browser that doesn't support Gzip will be unable to request that content, because S3 will always respond with compressed content no matter what Accept-Encoding header the browser sends. Luckily, every browser since IE4 supports it.
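
The deployment step boils down to something like this sketch, assuming the aws-sdk client and a hypothetical bucket (the real build also handles per-file content types and smarter cache headers):

//compress a text file and store it on S3 with matching headers
var fs = require("fs");
var zlib = require("zlib");
var AWS = require("aws-sdk");

var s3 = new AWS.S3();
var body = zlib.gzipSync(fs.readFileSync("build/index.html"));

s3.putObject({
  Bucket: "projects.example.com", //hypothetical bucket name
  Key: "index.html",
  Body: body,
  ContentType: "text/html",
  ContentEncoding: "gzip", //tells browsers the stored object is compressed
  CacheControl: "public, max-age=300"
}, function(err) {
  if (err) console.error(err);
});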

Reduce framework code

I love Angular. If you want to quickly generate a visualization with powerful tools for filtering and data binding, you can't do much better. I personally think it's an order of magnitude better than D3. But Angular can also be brutally slow: its change detection algorithm requires a lot of time and memory as a tradeoff for developer convenience.

On a recent project that looked at animal imports, we started with Angular as a way to test out the visualization, but soon noticed that it was taking three or four seconds just to parse and apply the data. On a desktop, that time is a drag. On mobile, it's likely to get the tab terminated, or convince readers that there's something wrong with it.

When the profiler says that you're spending that much time in JavaScript, there are two options. The first is to try to find ways to work around the framework, which can range from unpleasant to actively painful. The second is to just rewrite in vanilla JS. It sounds more difficult to do the rewrite, but if all you're doing is data-binding and events, you can usually replace it pretty easily with a little templating and some custom data attributes. The resulting code isn't as clean or simple, but in the case of the animal imports, it dropped our JS execution time to under 100ms. That's fast.
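
The vanilla replacement usually looks something like this sketch (the data array, selectors, and showDetail function are invented for illustration): a template string for the markup, plus data attributes and a single delegated listener instead of framework bindings.

//turn an array of records into markup with a template string
var render = rows => rows.map(row => `
  <tr data-id="${row.id}">
    <td>${row.species}</td>
    <td>${row.count.toLocaleString()}</td>
  </tr>`).join("");

var tbody = document.querySelector(".imports tbody");
tbody.innerHTML = render(data);

//one delegated listener instead of per-row bindings
tbody.addEventListener("click", function(e) {
  var row = e.target.closest("tr");
  if (row) showDetail(row.dataset.id);
});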

Even jQuery can be optional these days. Because we compile ES6 down with Babel, a lot of DOM code that would be ungainly can become elegant. Template strings and arrow functions alone have allowed us to cut out DOM libraries entirely, and as a result many of our interactives consist of no external libraries at all. If you haven't checked into the advantages of using Babel in your build process, it's well worth another look.

Reduce third-party code

The number one contributor to page load time is not written by journalists: it's the third-party ad code that runs on the page. There may be only so much you can do about this, since it pays the bills, and of course it may not even apply on embedded graphics. But on our standalone pages, I've taken a strong stance on implementing all code ourselves whenever possible. For example, although our commenting system usually requires multiple scripts loaded synchronously, I wrote a loader that runs through and adds them asynchronously, and only after a user clicks on the "view comments" banner. We can't avoid the hit, but we can delay it until well after the rest of the page has had a chance to render.
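
Stripped of error handling, that loader is roughly this shape (the script URLs and banner selector here are placeholders):

//inject the comment scripts in order, but only once the reader asks for them
var commentScripts = [
  "https://example.com/comments/vendor.js",
  "https://example.com/comments/embed.js"
];

var loadNext = function() {
  var src = commentScripts.shift();
  if (!src) return;
  var script = document.createElement("script");
  script.src = src;
  script.onload = loadNext; //preserves ordering without blocking the page
  document.body.appendChild(script);
};

var banner = document.querySelector(".view-comments");
banner.addEventListener("click", function onClick() {
  banner.removeEventListener("click", onClick);
  loadNext();
});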

Lazy-load everything

Once you've delayed scripts with the async attribute, trimmed the size of those scripts and compressed them, and deferred as much third-party code as you can, what's left over? In our case, this is where we start getting into the structure of the actual interactive, and how it loads itself. For most interactives, we embed data directly into the page, but beyond a certain size it becomes worthwhile to grab it via AJAX instead.

But there's another way to think about lazy-loading, and that's to consider what format you're actually using to populate the page. I'm as big a fan of progressive enhancement as anyone else, but in the case of my team, what we produce is interactive — there's literally no point if JavaScript is disabled. I've found that moving content into JSON and then templating it onto the page can reduce download times significantly, while the speed hit is negligible. Finding the balance between network speed and JavaScript execution time is a constant process for us.
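
In practice, that pattern is just a request plus a template string; a minimal version (with an invented URL and markup) looks like this:

//pull the payload after the page has rendered, then template it into place
var xhr = new XMLHttpRequest();
xhr.open("GET", "data/stories.json");
xhr.responseType = "json";
xhr.onload = function() {
  var list = document.querySelector(".story-list");
  list.innerHTML = xhr.response.map(story => `
    <li><a href="${story.url}">${story.headline}</a></li>`).join("");
};
xhr.send();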

When performance matters

Finally, a note of caution: as much fun as it is to squeeze every last millisecond out of the browser, I'm a little uncomfortable making it the alpha and omega of the job. Ultimately, our goal is to inform people — we'd like that to be fast, but a fast page with bad or misleading reporting is still a failure.

What I like about front-end speed is that it serves as a useful proxy for site quality. A site that's fast can't load too many ads. It can't serve too many tracking scripts. It has to put the reader first. It's easy, much of the time, to chip away at performance in the name of business metrics: loading an additional analytics script to get more information, or an obnoxious ad for a short-term revenue boost.

But if you put speed first, every decision has to start from the perspective of "what's good for the reader?" It's hard to measure the impact of good journalism, but we can have metrics for speed and other technical aspects of the presentation. We can spend more time on the former if we have strong, user-centric guidelines on the latter. If we want people to give us money over the long term, that seems like the only healthy strategy to me.

November 20, 2015

Filed under: tech»coding

Weir, year two

I realized the other day that Google Reader shut down in June of 2013, which means I've been using Weir as my RSS reader for more than two years now. It's my longest-running software project, and still one of the most complex things I've built in Node. And apart from occasional revisions, it's been up and running constantly, in mobile and desktop browsers, that entire time.

I don't log a lot of metrics from Weir, so there's a lot of stuff that I'm not tracking. But I can say that there are currently 113 subscriptions, with around 6,000 stories in the database. The server that hosts the app (as well as my various domains) downloads about 20GB of data each month, most of which is probably Weir (the rest is e-mail and server updates, and I'm frankly not that popular). It also hovers around 10% of available memory, which is pretty good for a garbage-collected language on a piddly little VM.

On the client side of things, the Angular code has definitely started to show its age. This was the project that I used to learn Angular, and since then I've learned a lot about the framework. Would I use it if I were writing Weir from scratch? I'm not sure. I still love the databinding aspect of Angular, but I suspect I could write a smaller, nimbler version of the UI in vanilla JavaScript pretty easily. At some point, I may give it a shot: the server API is clean enough that writing a new client should be relatively straightforward.

As an experiment in self-hosting a cloud service, Weir is a mixed success. But I have grown to love the way that something I wrote has become a fixture of my life. I clear out my stream on the bus in the morning. Throughout the day, Weir's purple tab icon lights up to let me know that new items are available. It feels like wearing clothes that I tailored for myself — using it feels a little nicer than it should, just from the pride in its construction.

November 4, 2015

Filed under: tech»education

Closing the textbook

Last week, when the administration sent out their quarterly "please someone cover these classes, we are very desperate" email, I put in my notice at Seattle Central College (how's that for irony?). I'll be finishing up this quarter teaching ITC 210, and then I'll need to find a new way to occupy 10-20 hours a week. For a start, I'm planning to volunteer for the local Girl Develop It organization as a TA. I'll be able to cook for Belle more often. And I'd like to be more active in managing the local Hacks/Hackers chapter that I took over earlier this year.

SCC does have some deep organizational problems, and I won't pretend they haven't influenced my decision to leave. But I don't regret the time I spent there: there's been little as rewarding as seeing people take the information I can give them and really run with it. Teaching has often pushed me to make sure that I knew every detail of a subject so that I wouldn't mislead students, and it's gotten me to explore new workflows and clarify my thinking on a lot of topics.

The most important thing I've learned isn't anything technical. Early on in my time as an instructor, I would often be surprised when students wouldn't know something basic, even though it might have been in the prerequisites (only later would I find out how porous those prereqs are at SCC). After a little while, I made a conscious decision that my reaction should be enthusiasm instead of surprise. Although I'm not a huge fan of XKCD, I was inspired by its "Ten Thousand" comic, the one about the lucky 10,000 people who get to hear a common fact for the first time on any given day.

Approaching ignorance not as a character flaw or personal failing, but as a chance to share something cool, was great for students. It provided a perspective from which basic questions can turn into an enthusiastic deep-dive into a topic — something even advanced students might find valuable. And it kept me engaged far longer than I think I could have managed in a curriculum where the opportunities to teach really high-level, interesting techniques just weren't there.

Although it seems a bit like pablum, and deeply out of character for a cynic like me, I actually believe that enthusiasm will continue to be useful, even when I'm not teaching regularly. After all, I work on the bleeding edge of an industry that's still struggling to figure out the Internet: the least I can do is be positive about it, for my sake if not for theirs.

September 3, 2015

Filed under: tech»web

Codes of Conduct

Rachel Nabors writes that you literally cannot pay her to attend a conference without a code of conduct:

When I promised not to go to conferences without Codes of Conduct, I wasn’t paying lip service to a trend, doing the popular thing to gain brownie points with my feminist besties. I meant every word. It is my greatest fear in life that something bad would happen to someone attending a conference I attracted them to.

It was weird the other day to realize that I'm actually becoming a mentor figure for some people: I have interns, I teach classes, I'm now the co-organizer of the local Hacks/Hackers chapter. I'm still basically nobody, but I will also make this pledge: if a conference does not have a code of conduct (or its equivalent), I won't go, as a speaker or an attendee. It's important to me that industry gatherings be places where people are comfortable and safe, no matter who they are. Everyone deserves that much consideration, and while a code of conduct is not a guarantee of safe behavior, it's a good start.

It's disappointing, but perhaps to be expected, that several high-profile white dudes have decided that the most important thing they can do this year is fight against codes of conduct. Many of them feel attacked and want to circle the wagons instead of listening to voices from the community. That's too bad for people who attend conferences — but part of the point of the pledge is that it should punish organizers who don't think creating a standard framework for conference behavior is a priority. If they can't get speakers or attendees to come, sooner or later they don't have an event.

Let's be clear: nobody is entitled to run a conference, and helping people with their careers once upon a time does not excuse you from being a good citizen. If the pressure causes people like Jared Spool and Mike Monteiro to change their mind, everyone benefits. If it doesn't, and they go the way of the dinosaurs, well... that's how evolution works. We'll survive without them.

But I think this is a great opportunity to re-examine one particular community role that I see a lot, both in tech and outside. You know the one: that guy (almost always a man) whose schtick is being the "honest teller of truth," which really means "rude, petty, and abusive in kind of a funny way about stuff we all agree on." A lot of times, we tolerate that behavior because honestly, it is satisfying to have someone saying what we're all thinking about clients who won't pay, or bad design, or ugly code. I have sometimes thought of myself as that person, but I'm trying not to be anymore.

The problem with the "angry truth-teller" is that it stops being funny when they suddenly turn those tools on their allies. When that happens, you realize that it was never actually funny: you just didn't care about being respectful, because you didn't think the target was worthy. Unfortunately, a lack of empathy is not the same as a sense of humor. There might be a thin line between "comedian" and "jerk," but unless you're Don Rickles I don't really see the point in intentionally crossing it.

In the end, life is too short to give money and attention to people who can't have a little empathy over something as silly as tech conference administration. We are not required to make them leaders in our community, nor are we required to keep them as leaders if we decide that the negatives of their input outweigh the positives. There are lots of technically-skilled people out there who can be right about an issue — or wrong on it, even — without being entitled, nasty jackasses about it. Let's boost them instead.

June 24, 2015

Filed under: tech»web

A good (virtual) walk spoiled

Earlier today I took the wraps off of the private repo for our Chambers Bay interactive flyover. You can find the source code here, and a post on our dev blog about it here. It was a really fun challenge, and a rare example of using WebGL in a news capacity (the NYT did the Dawn Wall, but that's the only one I can remember recently).

From a technical standpoint, this was my first three.js project, and the experience was largely positive. I think there's a strong case to be made that three.js is basically jQuery for WebGL: sure, you don't need it, but it only takes a couple of features to make it worthwhile. In this case, I didn't particularly feel like writing a model loader or a scene graph. There are still plenty of hooks to write the parts that I do enjoy, like the fragment shader for the landscape (check out that sweet dithering), or the UI for directing the camera. Sure, three.js is a relatively large library, but I'm loading 4MB of textures and another 4MB of gzipped landscape model, so what's a few hundred KB more of code?

WebGL itself runs surprisingly well these days, although failure modes do not seem to be its strong suit. For example, the browser may have WebGL support enabled, but then crash when it tries to render (or it may be lying about support, as with the remote VM sessions used in Times meeting rooms). That said, I was astonished to find that pretty much everything (mobile included) could run the landscape at a solid frame rate, despite the fact that it's a badly-optimized mesh with 150,000 triangles. Even iOS, which usually falls over and dies when WebGL pushes past its skimpy RAM limits, was able to run smoothly once I added a low-res texture for it to use.

This was an ambitious project using some pretty cutting-edge web technology, which makes it interesting in light of arguments that the web suffers from "featuritis". After all, when you're talking about feature overkill, WebGL is a barnstormer. But this story would have been tough to tell another way, and it would have never had the same reach siloed in an app store.

Or take Paul Ford's mind-boggling What is Code? in Businessweek this month: behind Bloomberg's Trapper Keeper design aesthetic, it's a powerful article that integrates animations, videos, and interactive demonstrations with the textual message. Ironically, I saw many of the same people that criticize web apps going wild for Ford's piece, a stance that I can only attribute to sophistry.

My team at the Seattle Times has gradually abandoned the term "news apps," since everyone who hears it assumes we're actually writing iPhone clients for the paper. As a term of art, it has always been clumsy. But it does strike at a crucial quality of what we do, which occupies a gray area between "text" and "program." And if it seems like I'm touchy about pundits who think we should abandon the web, this is in large part the reason why.

Arguments that browsers should just go back to being document viewers ignore the fact that HTML is not just a text format: it's a hypermedia format, and those have always blurred the already-fuzzy lines between data and code (see also: Excel, HyperCard, and IPython notebooks). It's true that the features of the web platform are often abused. Nobody likes slow navigation, ad popups, or user tracking scripts. But it's those same features that make new kinds of storytelling possible — my journalism is built on the same heavily-structured, "over-tooled" web platform that critics find so objectionable. I wouldn't give that up for all the native apps in the world.

April 3, 2015

Filed under: tech»education

What's advanced?

Next week, I'll start teaching ITC 298 at Seattle Central College. It's the first special topics class I've actually gotten to teach (a previous class on tooling couldn't get enough students), and it's on a topic near and dear to my heart: namely, intermediate to advanced JavaScript. But what does that actually mean? If web development were a medieval blacksmith shop, what would you need to know in order to move beyond "apprentice" and hit the road as a journeyman? What is it that separates a jQuery dabbler from a serious front-end coder?

The honest answer is "about five years of practice," but that's not the whole story (nor is it something I can take to a curriculum planning committee). I think there are two areas of growth that students need to be aware of, and that I'm planning on stressing for this quarter: tooling and functional programming.

Tooling

Rebecca Murphey recently wrote a revised baseline for front-end developers, just as I was putting together my syllabus. It's still a great guide to the state of the art, even if I don't agree with everything in it. But I would add that modern JavaScript applications tend to be built on three pillars that students need to understand:
  • Server code written in NodeJS,
  • Client code built on top of some kind of MVC, and
  • A build system (also in Node) that weaves the two together.

Learning their way around this trio is going to be a huge challenge for my students, most of whom still live in a world where individual files are edited and sent to the browser as-is (possibly with a PHP include or two). They haven't built applications with RESTful routes, or written client-side code in a module system. SCC hasn't typically stressed those techniques, which is a shame.

I'm happy to be the person who forces students into the deep end, but I do want to make sure they have a good, structured experience. Throwing everything at students is a quick way to make sure that they get overwhelmed and give up (not a hypothetical scenario: the previous ITC 298 class had exactly that problem, and ended poorly). To ease them in, we'll try building the following sequence of exercises in our directed lab sessions:

  1. simple Node script with callbacks
  2. scraper using async/events/streams
  3. basic site using Hapi.js
  4. authenticated site with session-handling
  5. Grunt scripts to build JavaScript with Browserify
  6. simple MVC with Backbone

The progression starts with Node, and then builds out gradually so that each step conceptually depends on a previous lesson. Along the way, students will learn a lot about how to structure an application across all three of these environments — which brings us to the second, and probably harder, focus of the class.

Functional programming

Undoubtedly, students need to have a better grasp of JavaScript fundamentals ("the good parts") if they're to be considered intermediate front-end devs. But how do we break down those fundamentals? We could concentrate on inheritance and object orientation, or think of everything as MVC. Maybe we could spend a bunch of time on modules, or deep-dive into how to write high-performance DOM code. Murphey recommends learning ES2015 features, like fat arrows and destructuring. All of these are important pieces of the JavaScript toolkit, but they're more like patterns available for use than core theoretical knowledge.

The heart of the language, however, remains the humble function. Where other languages have modules, private/static properties, blocks, async/await, classes, and list comprehensions, JavaScript just has first-class functions. Astonishingly, this has actually worked out pretty well, but it means you do really need to understand them in order to read and write code — particularly on Node, where callbacks are still the preferred method of handling concurrency.

Typically, functional coding has been one of the more difficult concepts to introduce in my basic class: we spend some time about halfway through the quarter implementing Array.forEach(), and we wire up a lot of event listeners. Map/reduce usually overwhelms most students, however, and call/apply gets a lot of blank stares. These are literally unavoidable in a well-written, modern JavaScript codebase: we have to find a way to approach them if students are to reach the next stage of their professional development.
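
For reference, that forEach exercise is roughly this shape (a function that takes a function), and it's still where a lot of students first stumble:

//rebuild Array.forEach by hand
var each = function(list, callback) {
  for (var i = 0; i < list.length; i++) {
    //hand each item, its index, and the whole list to the callback
    callback(list[i], i, list);
  }
};

each(["a", "b", "c"], function(item, index) {
  console.log(index, item);
});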

The hard part of writing for Node is that you must embrace some degree of functional programming: the continuation-passing style used in the core APIs makes it inescapable. But the great part of writing for Node (especially as the first section of the course) is that it's actually a fairly gentle ramp-up. Callback functions are not that far from event listeners, and the ubiquitous async library softens the difficulty of mapping an array functionally. Between the two, there's no shortage of practice, since there's literally no other way to write a Node program.
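
Here's the kind of thing I mean, sketched with invented file names: the err-first callbacks used by the core modules are exactly the shape that the async library expects.

//map over some files with the async module and Node-style callbacks
var fs = require("fs");
var async = require("async");

var files = ["a.json", "b.json", "c.json"];

async.map(files, function(file, done) {
  //fs.readFile uses the same (err, data) callback signature that async wants
  fs.readFile(file, "utf8", done);
}, function(err, contents) {
  if (err) return console.error(err);
  console.log("loaded", contents.length, "files");
});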

That's the other strategy behind the tooling sequence I've laid out. We'll start from Node, and then build toward increasingly complex functional constructs, like modules, constructors, and promises. By the time students have finished their final projects, they should be old hands at callbacks and closures, which will serve them well in almost any language.

The plan

Originally, I joked with people that we'd spend six weeks reading JavaScript: The Good Parts and then six weeks building a chat application. Two students would have enjoyed this, and the rest would have followed me home and murdered me in my sleep. But the two-part plan laid out in this post hopefully marries the practical and the theoretical in a way that will help students grow. They'll learn about the intricacies of JavaScript's functional quirks, but they'll do so by building real applications to solve real problems.

The specifics of this quarter are still a little bit in flux, and will likely remain so, since I think it's good to be flexible the first time teaching a class. But if you're interested in following along, feel free to check out the class repo, which contains the syllabus, supporting materials, and example code so far. Issues and pull requests are also welcome!

February 18, 2015

Filed under: tech»web

Speed Kills

It's an accepted truth on the web that fast pages are better for users — people stay on them longer, follow more links from them, and generally report being happier with them. I think a lot about performance on my projects, because I want readers to be thinking about the story, not distracted by slow load times.

Unfortunately, web performance has a bad rap, in part because it's a complicated topic. Making it work effectively and efficiently means learning a lot about how the browser runtime works, and optimizing for new techniques like GPU transforms. Like everything else in web development, there's also a lot of misinformation out there, and a lot of people who insist that everything was better back when we built everything without all the JavaScript and fancy-pants frameworks.

It's possible that I've been more aware of it, just because I've been working on a project that involves smoothly animating a chart using regular HTML instead of canvas, but it seems like it's been a bad month for that kind of thing. First Peter-Paul Koch wrote a diatribe about client-side templating, insisting that it's a needless performance hit. Then Flipboard wrote about discarding traditional elements entirely, instead rendering everything to a canvas tag in pursuit of 60 frames/second animations. Ironically, you'll notice that these are radically different approaches that both claim they create a better experience.

Instead of just sighing while the usual native app advocates use these posts to bash the web, and given that I am working on a page where high-performance mobile animations are a key part, I thought it'd be nice to talk about some experiments I've run with the approaches found in both. There are a lot of places where the web platform needs help competing on mobile, no doubt. But I'd prefer we talk about actual performance problems, and not get sidetracked into chasing down scattered criticisms without evidence.

Let's start with templating, which is serving as a stand-in for client-side JavaScript in general. PPK argues that templating (and by extension, single-page app design) is terrible for performance, but is that true? While I was working on my graph, I worried a little bit about startup time. Since I write JavaScript on both the server and the client, it was pretty easy to port my code from one to the other and check. I personally found the results conclusive, and a little surprising.

The client-side version of the page weighed in at 10KB and spent roughly 35ms in JavaScript during startup, rendering the page and prepping its data structures. That's actually not bad for something that's doing some fairly heavy positioning and styling, and it fits in the 14KB first TCP round-trip recommended by Google. In contrast, the server-side page, in which all the markup was pre-rendered and then progressively enhanced after page load, was 160KB and spent about 30ms in JavaScript. In other words, following PPK's advice to avoid client-side templating caused the page to be sixteen times larger, and still required two video frames to start up.

Now, this is a slightly special case: unlike typical server applications, my news apps are useless without JavaScript. They're not RESTful, they don't talk to a database, they involve a lot of moving parts. But even I was surprised by how little impact client-side templating actually had. Browsers these days are just ridiculously fast at assembling HTML. So while I don't recommend doing the entire page this way, or abandoning server-generated HTML entirely, it's pretty clear to me that it's not the slam-dunk case that holdouts for traditional server rendering claim it is.

At the other extreme is Flipboard's experiment with canvas rendering. Instead of putting everything in the document, like normal websites, they put a full-screen canvas image up and render everything — text, images, animations, etc. — manually to that buffer. You can try a demo out on your device here. On my Nexus 5, which is a reasonably new device running the latest version of Chrome, it's noticeably choppy. My experience with canvas is that Chrome's implementation is actually much faster than Safari, so I don't expect it to be smooth on iOS either (they've blacklisted tablets, so I can't be sure).

In order to get this "fluid" experience, here's what the Flipboard team threw away:

  • Accessibility: nothing on the page actually exists to a screen reader. Users that invert their screens or change the text size to make it easier to read are out of luck.
  • Copy and paste: since there's no document, there's nothing to select for these basic text operations.
  • Real links: you can't open pages in a new tab. You can't share them using the OS share panel. You can't do anything with the links, because they aren't really there.
  • GPU acceleration: by doing everything in canvas, they've ignored all the optimizations that browsers actually do to ensure a smooth, battery-efficient experience.
  • View source: inspecting the Flipboard page gives you no information at all, and the JavaScript is minified without source maps. The app is completely opaque to anyone who wants to learn from it.

The irony of doing all this work for a fluid experience that isn't actually fluid is that the kinds of animations they're doing — transform and opacity — are actually the exact properties that browsers can animate at 60FPS. Much has been written about using GPU compositing for smooth animations, but this article is a great start. If Flipboard had stuck with the DOM and used the GPU fully, they probably could have had fluid animations without leaving all those other features behind.

That's an easy thing to say, but is it true? Here's another experiment from my stacked bar chart: when a toggle is pressed, the chart shifts from being a measure of absolute numbers to relative proportions, with each bar smoothly animating up to 100%. I'm using an adaptation of Paul Lewis' FLIP technique, in which animations are set in JavaScript but run via CSS transitions. In my case, each of the 160+ blocks is measured, assigned a transform to "freeze" it in place as a new GPU layer, then transitioned to its final position with a second transform and "thawed" back into a regular, responsive element.
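
Condensed quite a bit (and with invented selectors, class names, and timings), that FLIP pass looks something like this:

//First: measure every block in its current layout
var chart = document.querySelector(".chart");
var blocks = [...chart.querySelectorAll(".block")];
var first = blocks.map(el => el.getBoundingClientRect());

//Last: switch the chart into its new state and measure again
chart.classList.add("proportional");

blocks.forEach(function(el, i) {
  var last = el.getBoundingClientRect();
  //Invert: freeze each block where it used to be, on its own layer
  el.style.transition = "none";
  el.style.transform = `translateY(${first[i].top - last.top}px) scaleY(${first[i].height / last.height})`;
});

//force a style flush so the frozen transforms take effect
document.body.offsetHeight;

//Play: transition each block from its frozen spot to its natural position
blocks.forEach(function(el) {
  el.style.transition = "transform 400ms ease-out";
  el.style.transform = "";
});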

Even though I'm animating many more elements than Flipboard is doing in their demo, the animation is perfectly smooth on my Nexus 5, and on the aging iPad we use for testing. By doing all the hard computational work up front, and then handing the pre-computed transitions over to the browser, I'm actually not JavaScript-bound at all: everything is done on the graphics chip, and in the C++ compositing layer. The result is a smooth 60FPS during the animation, all done via regular DOM elements. So much for "If you touch the DOM in any way during an animation you’ve already blown through your 16ms frame budget."

Again, I'm not claiming that my use case is a perfect analogue. I'm animating a graphic in response to a single button press, and they're attempting to create an "infinite scroll" (sort of — it's not really a scroll so much as an animated pager). But this idea that "the DOM is lava" and touching it will cause your readers' phones to instantly burst into flames of scorn seems patently ridiculous, especially when we look back at that list of everything that was sacrificed in the single-minded pursuit of speed.

Performance is important, and I care deeply and obsessively about it. As a gamer and a graphics nerd, I love tweaking out those last few frames per second, or adding flashy effects to a page. But it's not the most important thing. It's not more important than making your content available to the blind or visually impaired. It's not more important than providing standard UI actions like copy-and-paste or "open in new tab." And it's not more important than providing a fallback for older and less-powerful devices, the kind that are used by poor readers. Let's keep speed in perspective on the web, and not get so caught up in dogma that we abandon useful techniques like client-side templating and the DOM.
