Classless components http://www.milezero.org/index.php/tech/web/classless_components.html In early August, I delivered my talk on "custom elements in production" to the CascadiaFest crowd. We've been using these new web platform features at the Seattle Times for more than two years now, and I wanted to share the lessons we've learned, and encourage others to give them a shot. Apart from some awkward technical problems with the projector, I actually think the talk went pretty well: <p> <iframe width="560" height="315" src="https://www.youtube.com/embed/vpNKUYSeT7g" frameborder="0" allowfullscreen></iframe> <p> One of the big changes in the web component world, which I touched on briefly, is the transition from the V0 API that originally shipped in Chrome to the V1 spec currently being finalized. For the most part, the changeover is not a difficult one: some callbacks have been renamed, and there's a new function used to register the element definition. <p> There is, however, one aspect of the new spec that is deeply problematic. In V0, to avoid complicated questions around parser timing and integration, elements were only defined using a prototype object, with the constructor handled internally and inheritance specified in the options hash. V1 relies instead on an ES6 class definition, like so:
<pre><code>
class CustomElement extends HTMLElement {
  constructor() {
    super();
  }
}

customElements.define("custom-element", CustomElement);
</code></pre>
<p> When I wrote my presentation, I didn't think that this would be a huge problem. The conventional wisdom on classes in JavaScript is that they're just syntactic sugar for the existing prototype system &mdash; it should be possible to write a standard constructor function that's effectively identical, albeit more verbose. <p> The conventional wisdom, sadly, is wrong, as became clear once I started testing the V1 API currently available behind a flag in Chrome Canary. In fact, ES6 classes are not just a wrapper for prototypes: specifically, the <var>super()</var> call is not a straightforward translation to older inheritance models, especially when used to extend browser built-ins as it does here. No matter what workarounds I tried, Chrome's V1 custom elements implementation threw errors when passed an ES5 constructor with an otherwise valid prototype chain.
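<p> For reference, this is roughly the ES5 translation that the conventional wisdom suggests should be equivalent. It's a sketch of the pattern I was testing, not code from any particular polyfill:
<pre><code>
//the "just syntactic sugar" version of the class definition above
function CustomElement() {
  //the usual ES5 stand-in for super() is calling the parent constructor,
  //but HTMLElement can't be invoked as a plain function, so a V1 browser
  //throws as soon as an element is constructed or upgraded
  HTMLElement.call(this);
}
CustomElement.prototype = Object.create(HTMLElement.prototype);
CustomElement.prototype.constructor = CustomElement;

customElements.define("custom-element", CustomElement);
</code></pre>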
<p> In a perfect world, we would just use the new syntax. But at the Seattle Times, we target Internet Explorer 10 and up, which doesn't support the <var>class</var> keyword. That means that we need to be able to write (or transpile to) an ES5 constructor that will work in both environments. Since the specification is written only in terms of classes, I did what you're supposed to do and <a href="https://github.com/whatwg/html/issues/1704">filed a bug</a> against the spec, asking how to write a backwards-compatible element definition. <p> It shouldn't have surprised me, but the responses from the spec authors were wildly unhelpful. Apple's representative flounced off, insisting that it's not his job to teach people how to use new features. Google's rep closed the bug as irrelevant, stating that supporting older browsers isn't their problem. <p> Both of these statements are wrong, although only the second is wrong in an interesting way. Obviously, if you work on standards specifications, it <i>is</i> part of your job to educate developers. A spec isn't just for browsers to implement &mdash; if it were, it'd be written in a machine-readable language like WebIDL, or as a series of automated tests, not in stilted (but still recognizable) English. Indeed, the same Google representative who closed my issue <a href="https://github.com/whatwg/html/issues/1642">previously defended</a> the "tutorial-like" introductory sections elsewhere. Personally, I don't think a little consistency is too much to ask. <p> But it is the dismissal of older browsers, and the spec's responsibility to them, that I find more jarring. Obviously, a spec for a new feature needs to be free to break from the past. But a big part of the <a href="https://github.com/extensibleweb/manifesto">Extensible Web Manifesto</a>, which directly references web components and custom elements, is that the platform should be <i>explainable</i>, and driven by feedback from real web developers. Specifically, it states: <blockquote> Making new features easy to understand and polyfill introduces a virtuous cycle: <ul> <li> Developers can ramp up more quickly on new APIs, providing quicker feedback to the platform while the APIs are still the most malleable. <li> Mistakes in APIs can be corrected quickly by the developers who use them, and library authors who serve them, providing high-fidelity, critical feedback to browser vendors and platform designers. <li> Library authors can experiment with new APIs and create more cow-paths for the platform to pave. </ul> </blockquote> <p> In the case of the V1 custom elements spec, feedback from developers is being ignored &mdash; I'm not the only person who has complained publicly about the way that the class-based definitions are a pain to use in a mixed-browser environment. But more importantly, the spec is <i>actively hostile to polyfills</i> in a way that the original version was not. Authors currently working to shim the V1 API into browsers have faced three problems: <ol> <li> Calling <var>super()</var> invokes magic that's hard to reproduce in ES5, and needlessly so. <li> HTMLElement isn't a callable function in older environments, and has to be awkwardly monkey-patched. <li> Apple publicly opposes extending anything other than the generic HTMLElement, and has only allowed it into the spec so they can kill it later. </ol> <p> The end result is that you can write code that will work in old and new browsers, but it won't exactly look like real V1 code. It's not a true polyfill, more of a mini-framework that looks almost &mdash; but not exactly! &mdash; like the native API. <p> I find this frustrating in part for its inelegance, but more so because it fundamentally puts the lie to the principles of the extensible web. You can't claim that you're explaining the capabilities of the platform when your API is polyfill-hostile, since a polyfill is the mechanism by which we seek to explain and extend those capabilities. <p> More importantly, there is no surer way to slow adoption of a web feature than to artificially restrict its usage, and to refuse to educate developers on how to use it. The spec didn't have to be this way: they could detail ES5 semantics, and help people who are struggling, but they've chosen not to care. As someone who literally stood on a stage in front of hundreds of people and advocated for this feature, that's insulting. <p> Contrast the bullying attitude of the custom elements spec authors with the advocacy that's been done on behalf of Service Worker.
You couldn't swing a dead cat in 2016 without hitting a developer advocate talking up their benefits, creating detailed demos, offering advice to people trying them out, and talking about how they gracefully degrade in older browsers. As a result, chances are good that Service Worker will ship in multiple browsers, and see widespread adoption, by the end of next year. <p> Meanwhile, custom elements will probably languish in relative obscurity, as they've done for many years now. It's a shame, because I'd argue that the benefits of custom elements are strong enough to justify using them even via the old V0 polyfill. I still think they're a wonderful way to build and declare UI, and we'll keep using them at the Times. But whatever wider success they achieve will be despite the spec, not because of it. It's a disgrace to the idea of an extensible web. And the authors have only themselves to blame. Fri, 09 Sep 2016 08:56:16 -0700 http://www.milezero.org/index.php/tech/web/classless_components.html/tech/web RIP Chrome apps http://www.milezero.org/index.php/tech/web/rip_chrome_apps.html <b>Update:</b> <a href="http://blog.chromium.org/2016/08/from-chrome-apps-to-web.html">Well, that was prescient.</a> <p> At least once a day, I log into the Chrome Web Store dashboard to check on support requests and see how many users I've still got. Caret has held steady for the last year or so at about 150,000 active users, give or take ten thousand, and the support and feature requests have settled into a predictable rut: <ul> <li> People who can't run Caret because their version of Chrome is too old, and I've started using new ES6 features that aren't supported six browser versions back. <li> People who want split-screen support, and are out of luck barring a major rewrite. <li> People who don't like the built-in search/replace functionality, which makes sense, because it's honestly pretty terrible. <li> People who don't like the icons, and are just going to have to get over it. </ul> <p> In a few cases, however, users have more interesting questions about the fundamental capabilities of developer tooling, like file system monitoring or plugging into the OS in a deeper way. And there I have bad news, because as far as I can tell, Chrome apps are no longer actively developed by the Chromium team at all, and probably never will be again. <p> I don't think Chrome apps are going away immediately &mdash; they're still useful and used by a lot of third-party companies &mdash; but it's pretty clear from the dev side of things that Google's heart isn't in it anymore. New APIs have ceased to roll out, and apps don't get much play at conferences. The new party line is all about progressive web apps, with browser extensions for the few cases where you need more capabilities. <p> Now, progressive web apps are great, and anything that moves offline applications away from a single browser and out to the wider web is a good thing. But the fact remains that while a large number of Chrome apps can become PWAs with little fuss, Caret can't. Because it interacts with the filesystem so heavily, in a way that assumes a broader ecosystem of file-based tools (like Git or Node), there's actually no path forward for it using browser-only APIs. As such, it's an interesting litmus test for just how far web apps can actually reach &mdash; not, as some people have wrongly assumed, because there's an inherent performance penalty on the web, but because of fundamental limits in the security model of the browser.
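<p> To make that concrete: the capability Caret leans on hardest is the Chrome apps file system API, which can ask the user for durable access to an entire project directory and keep that access across sessions. Nothing in the browser-only toolbox does this. A simplified sketch, not Caret's actual code:
<pre><code>
//ask the user to pick a project folder
chrome.fileSystem.chooseEntry({type: "openDirectory"}, function(dir) {
  //retainEntry returns an ID that can be stored and used later
  //to restore access without prompting the user again
  var id = chrome.fileSystem.retainEntry(dir);
  chrome.storage.local.set({lastProject: id});
  //from here, the app can walk the directory tree like a local project
  dir.createReader().readEntries(function(entries) {
    entries.forEach(entry => console.log(entry.fullPath));
  });
});
</code></pre>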
<h4>Bounding boxes</h4> <p> What's considered "possible" for a web app in, say, 2020? It may be easier to talk about what <i>isn't</i> possible, which avoids the judgment call on what is "suitable." For example, it's a safe bet that the following capabilities won't ever be added to the web, even though they've been hotly debated in and out of standards committees for years: <ul> <li> Read/write file access (died when the W3C pulled the plug on the Directories part of the Filesystem API) <li> Non-HTTP sockets and networking (an endless number of reasons, but mostly "routers are awful") </ul> <p> There are also a bunch of APIs that are in experimental stages, but which I seriously doubt will see stable deployment in multiple browsers, such as: <ul> <li> Web Bluetooth (enormous security and usability issues) <li> Web USB (same as Bluetooth, but with added attacks from the physical connection) <li> Battery status (privacy concerns) <li> Web MIDI </ul> <p> It's tough to get worked up about a lot of the initiatives in the second list, which mostly read as a bad case of mobile envy. There are good reasons not to let a web page have drive-by access to hardware, and who's hooking up a MIDI keyboard to a browser anyway? The <a href="https://google.github.io/physical-web/">physical web</a> is a better answer to most of these problems. <p> When you look at both lists together, one thing is clear: Chrome apps have clearly been a testing ground for web features. Almost all the not-to-be-implemented web APIs have counterparts in Chrome apps. And in the end, the web did learn from it &mdash; mainly that even in a sandboxed, locked-down, centrally distributed environment, giving developers that much power with so little install friction could be really dangerous. Rogue extensions and apps are a serious problem for Chrome, as I can attest: about once a week, shady people e-mail me to ask if they can purchase Caret. They don't explicitly say that they're going to use it to distribute malware and takeover ads, but the subtext is pretty clear. <p> The great thing about the web is that it can run code without any installation step, but that's also the worst thing about it. Even as a huge fan of the platform, the idea that any of the uncountable pages I visit in any given week could access USB directly is pretty chilling, especially when combined with exploits for devices that are plugged in, like hacking a phone (a nice twist on the <a href="http://esec-lab.sogeti.com/posts/2011/07/16/analysis-of-the-jailbreakme-v3-font-exploit.html"> drive-by jailbreak of iOS 4</a>). Access to the file system opens up an even bigger can of worms. <p> Basically, all the things that we want as developers are probably too dangerous to hand out to the web. I wish that weren't true, but it is. <h4>Untrusted computing</h4> <p> Let's assume that all of the above is true, and the web can't safely expand for developer tools. You can still build powerful apps in a browser, they just have to be supported by a server. For example, you can use a service like Cloud 9 (<a href="https://c9.io/blog/great-news/">now an AWS subsidiary</a>) to work on a hosted VM. This is the revival of the thick-client model: offline capabilities in a pinch, but ultimately you're still going to need an internet connection to get work done. <p> In this vision, we are leaning more on the browser sandbox: creating a two-tier system with the web as a client runtime, and a native tier for more trust on the local machine. But is that true? Can the web be made safe? 
Is it safe now? The answer is, at best, "it depends." Every third-party embed or script exposes your users to risk &mdash; if you use an ad network, you don't have any real idea who could be reading their auth cookies or tracking their movements. The miracle of the web isn't that it is safe, it's that it manages to be useful despite how rampantly unsafe its defaults are. <p> So along with the shift back to thick clients has come a change in the browser vendors' attitude toward powerful API features. For example, you can no longer use geolocation or the camera/microphone in Chrome on pages that aren't served over HTTPS, with other browsers to follow. Safari already disallows third-party cookie access as a general rule. New APIs, like Service Worker, require HTTPS. And I don't think it's hard to imagine a world where an API also requires a strict Content Security Policy that bans third-party embeds altogether (another place where Chrome apps <a href="https://developer.chrome.com/apps/contentSecurityPolicy">led the way</a>). <p> The packaged app security model was that if you put these safeguards into place and verified the package contents, you could trust the code to access additional capabilities. But trusting the client was a mistake when people were writing Quakebots, and it stayed a mistake in the browser. In the new model, those controls are the minimum just to keep what you had. Anything extra that lives solely on the client is going to face a serious uphill battle. <h4>Mind the gap</h4> <p> The longer that I work on Caret, the less I'm upset by the idea that its days are numbered. Working on a moderately-successful open source project is exhausting: people have no problems making demands, sending in random changes, or asking the same questions over and over again. It's like having a second boss, but one that doesn't pay me or offer me any opportunities for advancement. It's good for exposure, but people die from exposure. <p> The one regret that I will have is the loss of Caret's educational value. Since its early days, there's been a small but steady stream of e-mail from teachers who are using it in classrooms, both because Chromebooks are huge in education and because Caret provides a pretty good editor with almost no fuss (you don't even have to be signed in). If you're a student, or poor, or a poor student, it's a pretty good starter option, with no real competition for its market niche. <p> There are alternatives, but they tend to be online-only (like Mozilla's <a href="https://thimble.mozilla.org/en-US/">Thimble</a>) or they're not Chromebook friendly (Atom) or they're completely unacceptable in a just world (Vim). And for that reason alone, I hope Chrome keeps packaged apps around, even if they refuse to spend any time improving the infrastructure. Google's not great at end-of-life maintenance, but there are a lot of people counting on this weird little ecosystem they've enabled. It would be a shame to let that die. Wed, 10 Aug 2016 09:02:42 -0700 http://www.milezero.org/index.php/tech/web/rip_chrome_apps.html/tech/web <slide-show> http://www.milezero.org/index.php/tech/web/slide_dash_show.html On Thursday, I'll be giving a talk at CascadiaFest on <a href="http://2016.cascadiafest.org/speakers/thomas-wilburn/">using custom elements in production</a>. It's kind of a sales pitch, to convince people that adopting web components is safe to do, despite the instability of the spec and the contentious politics between browsers. 
After all, we've been publishing with several components at the Times for almost two years now, with good results. <p> When I presented an early version of this at SeattleJS, I presented by scrolling through a single text file instead of slides, because I've always wanted to do that. But for Cascadia, I wanted to do something a little more special, so I built the presentation itself out of custom elements, with the goal that it would demonstrate how to write code that works with both versions of the spec. It's also meant to be a good example for someone who's just learning how web components function &mdash; I use pretty much every custom elements feature at one point or another in 300 lines of code. You can take a look at the source for it <a href="https://github.com/thomaswilburn/slide-dash-show/">here</a>. <p> There are several strategies that I ended up emphasizing while writing the <var>&lt;slide-show&gt;</var> elements, primarily the <a href="https://github.com/thomaswilburn/slide-dash-show/blob/gh-pages/slide-show.js#L115">heavy use of events to tame asynchronicity</a>. It turns out that between V0, V1, and the two major polyfills, elements and their attributes are resolved by the parser with entirely different timing. It's really important that child elements <a href="https://github.com/thomaswilburn/slide-dash-show/blob/gh-pages/slide-elements.js#L12">notify their parent</a> when they upgrade, and parents shouldn't assume that children are ready at startup. <p> One way to deal with asynchronous upgrades is just to put all your functionality in the parent element (our <var>&lt;leaflet-map&gt;</var> does this), but I wanted to make these slides easier to extend with new types (such as text, code, or image slides). In this case, the slide show looks for a <var>parsedContent</var> property on the current slide, and it's the child's job to <a href="https://github.com/thomaswilburn/slide-dash-show/blob/gh-pages/slide-elements.js#L31">populate and update that value</a>. An earlier version called a <var>parseContents()</var> method, but using properties as "duck-typing" makes it much easier to handle un-upgraded elements, and moving the responsibility to the child also greatly simplified the process of watching slide contents for changes. <p> A nice side effect of using live properties and events is that it "feels" a lot more like a built-in element. The modern DOM API is built on similar primitives, so writing <a href="https://github.com/thomaswilburn/slide-dash-show/blob/gh-pages/script.js">the glue code for the UI</a> ended up being very pleasant, and it's possible to interact using the dev tools in a natural way. I suspect that well-built component libraries in the future will be judged on how well they leverage a declarative interface to blend in with existing elements. <p> Ironically, between child elements and Shadow DOM, it's actually much harder to move between different polyfills than it is to write an element definition for both the new and old specifications. We've always written for Giammarchi's <var>registerElement</var> shim at the Times, and it was shocking for me to find out that Polymer's shim not only diverges from its counterpart, but also differs from Chrome's native implementation. Coding around these differences took a bit of effort, but it's probably work I should have done at the start, and the result is quite a bit nicer than some of the hacks I've done for the Times. I almost feel like I need to go back now and update them with what I've learned. 
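<p> To make that event-driven handshake a little more concrete, here's roughly its shape, condensed into V1 syntax for brevity. This is an illustrative sketch, not the actual <var>&lt;slide-show&gt;</var> source, and the element and event names are made up:
<pre><code>
//parent: don't assume children are ready at startup; wait for their events
class SlideShow extends HTMLElement {
  connectedCallback() {
    //a display surface for whichever slide is current
    this.viewer = document.createElement("div");
    this.appendChild(this.viewer);
    this.addEventListener("slide-ready", () => this.update());
  }
  update() {
    var current = this.querySelector("text-slide[data-active]");
    //duck-typing: any child that exposes parsedContent can be shown
    if (current && "parsedContent" in current) {
      this.viewer.textContent = current.parsedContent;
    }
  }
}

//child slide: expose parsed content as a live property and announce readiness
class TextSlide extends HTMLElement {
  connectedCallback() {
    this.parsedContent = this.textContent.trim();
    //let the containing slide show know this child is upgraded and usable
    this.dispatchEvent(new CustomEvent("slide-ready", { bubbles: true }));
  }
}

customElements.define("slide-show", SlideShow);
customElements.define("text-slide", TextSlide);
</code></pre>
<p> The real elements do more bookkeeping (slides change, content mutates), but the contract is the same: children own their parsing, the parent owns the display, and events bridge the timing gap.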
<p> Writing this presentation was a good way to make sure I was current on the new spec, and I'm actually pretty happy with the way things have turned out. When WebKit started prototyping their own API, I started to get a bit nervous, but the resulting changes are relatively minor: some property names have changed, the lifecycle is ordered a bit differently, and upgrade code is called in the constructor (to encourage using the class syntax) instead of from a <var>createdCallback()</var> method. Most of these are positive alterations, and while there are some losses going from V0 to V1 (no <var>is</var> attribute to subclass arbitrary elements), they're not dealbreakers. Overall, I'm more optimistic about the future of web components than I have been in quite a while, and I'm looking forward to telling people about it at Cascadia! Mon, 01 Aug 2016 14:59:59 -0700 http://www.milezero.org/index.php/tech/web/slide_dash_show.html/tech/web Emu Nation http://www.milezero.org/index.php/gaming/perspective/emu_nation.html It's hard to hear news of Nintendo creating a tiny, $60 NES package and not think of Frank Cifaldi's provocative <a href="http://www.gdcvault.com/play/1023470/-It-s-Just-Emulation">GDC talk on emulation</a>. Cifaldi, who works on game remastering and preservation (most recently on a <i>Mega Man</i> collection), covers a wide span of really interesting industry backstory, but his presentation is mostly infamous for the following quote: <blockquote> <p> The virtual console is nothing but emulations of Nintendo games. And in fact, if you were to download Super Mario Brothers on the Wii Virtual Console... <p> <i>[shows a screenshot of two identical hex filedumps]</i> <p> So on the left there is a ROM that I downloaded from a ROM site of Super Mario Brothers. It's the same file that's been there since... it's got a timestamp on it of 1996. On the right is Nintendo's Virtual Console version of Super Mario Brothers. I want you to pay particular attention to the hex values that I've highlighted here. <p> <i>[the highlighted sections are identical]</i> <p> That is what's called an iNES header. An iNES header is a header format developed by amateur software emulators in the 90's. What's that doing in a Nintendo product? I would posit that <b>Nintendo downloaded Super Mario Brothers from the internet and sold it back to you.</b> </blockquote> <p> As Cifaldi notes, while the industry has had a strong official anti-emulation stance for years, they've also turned emulation into a regular revenue stream for Nintendo in particular. In fact, Nintendo has used scaremongering about emulation to monopolize the market for any games that were published on its old consoles. In this case, the miniature NES coming to market in November is almost certainly running an emulator inside its little plastic casing. It's not so much that they're opposed to emulation as that they're opposed to emulation they can't milk for cash. <p> To fully understand how demented this has become, consider the case of <i>Yoshi's Island</i>, which is one of the greatest platformers of the 16-bit era. I am terrible at platformers but I love this game so much that I've bought it at least three times: once in the Gameboy Advance port, once on the Virtual Console, and once as an actual SNES cartridge back when Belle and I lived in Arlington. Nintendo made money on at least two of those copies.
But now that we've sold our Wii, if I want to play <i>Yoshi's Island</i> again, even though I have owned three legitimate copies of the game I would still have to give Nintendo more money. Or I could grab a ROM and an emulator, which seems infinitely more likely. <p> By contrast, I recently bought a copy of <i>Doom</i>, because I'd never played through the second two episodes. It ran me about $5 on Steam, and consists of the original WAD files, the game executable, and a preconfigured version of DOSBox that hosts it. I immediately went and installed <a href="http://chocolate-doom.org">Chocolate Doom</a> to run the game fullscreen with better sound support. If I want to play <i>Doom</i> on my phone, or on my Chromebook, or whatever, I won't have to buy it again. I'll just copy the WAD. And since I got it from Steam, I'll basically have a copy on any future computers, too. <p> (Episode 1 is definitely the best of the three, incidentally.) <p> Emulation is also at the core of the Internet Archive's groundbreaking work to preserve digital history. They've preserved thousands of games and pieces of software via browser ports of MAME, MESS, and DOSBox. That means I can load up a copy of <a href="https://archive.org/details/msdos_broderbund_print_shop">Broderbund Print Shop</a> and relive summer at my grandmother's house, if I want. But I can also pull up the <a href="https://archive.org/details/canoncat">Canon Cat</a>, a legendary and extremely rare experiment from one of the original Macintosh UI designers, and see what a radically different kind of computing might look like. There's literally no other way I would ever get to experience that, other than emulating it. <p> The funny thing about demonizing emulation is that we're increasingly entering an era of digital entertainment that may be unpreservable with or without it. Modern games are updated over the network, plugged into remote servers, and (on mobile and new consoles) distributed through secured, mostly-inaccessible package managers on operating systems with no tradition of backward compatibility. It may be impossible, 20 years from now, to play a contemporary iOS or Android game, similar to the way that Blizzard themselves <a href="http://www.gamasutra.com/view/news/274750/To_run_WoW_legacy_servers_Blizzard_must_reverseengineer_its_own_game.php">can't recreate a decade-old version of <i>World of Warcraft</i></a>. <p> By locking software up the way that Nintendo (and other game/device companies) have done, as a single-platform binary and not as a reusable data file, we're effectively removing them from history. Maybe in a lot of cases, that's fine &mdash; in his presentation, Cifaldi refers offhand to working on a mobile <i>Sharknado</i> tie-in that's no longer available, which is not exactly a loss for the ages. But at least some of it has to be worth preserving, in the same way even bad films can have lessons for directors and historians. The Canon Cat was not a great computer, but I can still learn from it. <p> I'm all for keeping Nintendo profitable. I like the idea that they're producing their own multi-cart NES reproduction, instead of leaving it to third-party pirates, if only because I expect their version will be slicker and better-engineered for the long haul. But the time has come to stop letting them simultaneously re-sell the same ROM to us in different formats, while insisting that emulation is solely the concern of pirates and thieves. 
Thu, 14 Jul 2016 20:04:29 -0700 http://www.milezero.org/index.php/gaming/perspective/emu_nation.html/gaming/perspective Under our skin http://www.milezero.org/index.php/culture/america/race_and_class/under_our_skin.html This week, we've launched a major project at the Times on the words people use when talking about race in America. <a href="http://projects.seattletimes.com/2016/under-our-skin/">Under our skin</a> was spearheaded by a small group of journalists after the paper came under fire for some bungled coverage. I think they did a great job &mdash; the subjects are well-chosen, the editing is top-notch, and we're trying to supplement it with guest essays and carefully-curated comments (as opposed to our usual all-or-nothing approach to moderation). I mostly watched from the sidelines on this one, as our resident expert on forcing Brightcove video to behave in a somewhat-acceptable manner, and it was really fascinating watching it take shape. Mon, 20 Jun 2016 13:56:25 -0700 http://www.milezero.org/index.php/culture/america/race_and_class/under_our_skin.html/culture/america/race_and_class Speaking schedule, 2016 http://www.milezero.org/index.php/random/personal/speaking_schedule_2016.html After NICAR, I wasn't really sure I ever wanted to go to any conferences ever again &mdash; the travel, the hassle, the expense... who needs it? But I am also apparently unable to moderate my extracurricular activities in any way, even after leaving a part-time teaching gig, so: I'm happy to announce that I'll be speaking at a couple of professional conferences this summer, albeit about very different topics. <p> First up, I'll be facilitating a session at SRCCON in Portland about <a href="http://srccon.org/sessions/#proposal-312019">designing humane news sites</a>. This is something I've been thinking about for a while now, mostly with regards to bots and "conversational UI" fads, but also as the debate around ads has gotten louder, and the ads themselves have gotten worse (<a href="https://rewire.news/article/2016/05/25/anti-choice-groups-deploy-smartphone-surveillance-target-abortion-minded-women-clinic-visits/">see also</a>). I'm hoping to talk about the ways that we can build both individual interactives and content management systems so that we can minimize the amount of accidental harm that we do to our readers, and retain their trust. <p> My second talk will be at <a href="http://2016.cascadiafest.org/speakers/">CascadiaFest</a> in beautiful Semiahmoo, WA. I'll be speaking on how we've been using custom elements in production at the Times, and encouraging people to build their own. The speaker list at Cascadia is completely bonkers: I'll be sharing a stage with people who I've been following for years, including Rebecca Murphey, Nolan Lawson, and Marcy Sutton. It's a real honor to be included, and I've been nervously rewriting my slides ever since I got in. <p> Of course, by the end of the summer, I may never want to speak publicly again &mdash; I may burn my laptop in a viking funeral and move to Montana, where I can join our <a href="http://www.seattletimes.com/seattle-news/editor-kathy-best-leaving-the-seattle-times/">departing editor</a> in some kind of backwoods hermit colony. But for right now, it feels a lot like the best parts of teaching (getting to show people cool stuff and inspire them to build more) without the worst parts (grading, the school administration). 
Thu, 26 May 2016 10:30:38 -0700 http://www.milezero.org/index.php/random/personal/speaking_schedule_2016.html/random/personal Behind the Times http://www.milezero.org/index.php/tech/web/behind_the_times.html The paper recently launched a new native app. I can't say I'm thrilled about that, but nobody made me CEO. Still, the technical approach it takes is "interesting:" its backing API converts articles into a linear stream of blocks, each of which is then hand-rendered in the app. That's the plan, at least: at this time, it doesn't support non-text inline content at all. As a result, a lot of our more creative digital content doesn't appear in the app, or is distorted when it does appear. <p> The justification given for this decision was speed, with the implicit statement being that a webview would be inherently too slow to use. But is that true? I can't resist a challenge, and it seemed like a great opportunity to test out some new web features I haven't used much, so I decided to try building a client. You can find the code <a href="https://github.com/thomaswilburn/seatimes-chrome">here</a>. It's currently structured as a Chrome app, but that's just to get around the CORS limit since our API doesn't have the Access-Control-Allow-Origin headers added. <p> The app uses a technique that's been popularized by Nolan Lawson's <a href="http://www.pocketjavascript.com/blog/2015/11/23/introducing-pokedex-org">Pokedex.org</a>, in which almost all of the time-consuming code runs in a Web Worker, and the main thread just handles capturing UI events and re-rendering. I started out with the worker process handling <a href="https://github.com/thomaswilburn/seatimes-chrome/blob/master/src/js/worker/seatimes.js">network and caching in IndexedDB</a> (the poor man's Service Worker), and then expanded it to do <a href="https://github.com/thomaswilburn/seatimes-chrome/blob/master/src/js/worker/sanitize.js">HTML sanitization as well</a>. There's probably other stuff I could move in, but honestly I think it's at a good balance now. <p> Putting all this stuff into a second script that runs independently frees up the browser to maintain a smooth frame rate in animations and UI response. It's not just the fact that I'm doing work elsewhere, but also that there's hardly any garbage collection on the main thread, which means no halting while the JavaScript VM cleans up. I thought building an app this way would be difficult, but it turns out to be mostly similar to writing any page that uses a lot of AJAX &mdash; <a href="https://github.com/thomaswilburn/seatimes-chrome/blob/master/src/js/worker/routes.js">structure the worker as a "server"</a> and the patterns are pretty much the same. <p> The other new technology that I learned for this project is <a href="http://mithril.js.org">Mithril</a>, a virtual DOM framework that my old coworkers at ArenaNet rave about. I'm not using much of its MVC architecture, but its view rendering code is great at gradually updating the page as the worker sends back new data: I can generate the initial article list using just the titles that come from one network endpoint, and then <a href="https://github.com/thomaswilburn/seatimes-chrome/blob/master/src/js/ui/sectionView.js#L63">add the thumbnails that I get from a second, lower-priority request</a>. Readers get a faster feed of stories, and I don't have to manually synchronize the DOM with the new data.
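<p> Put together, the main thread and the worker talk to each other like a little client and server. Here's a boiled-down sketch of that pattern; the route names, URL, and message shape are invented for illustration, and the real app's plumbing is more involved:
<pre><code>
//worker.js: owns the network, the cache, and sanitization
self.onmessage = function(e) {
  var msg = e.data;
  if (msg.route == "section") {
    fetch(msg.url)
      .then(response => response.json())
      .then(articles => {
        //echo the request ID so the UI can match replies to requests
        self.postMessage({ id: msg.id, body: articles });
      });
  }
};

//main thread: capture UI events, send "requests," re-render on "responses"
var worker = new Worker("worker.js");
var counter = 0;
var pending = {};

var request = (route, url) => new Promise(function(done) {
  var id = counter++;
  pending[id] = done;
  worker.postMessage({ id, route, url });
});

worker.onmessage = function(e) {
  pending[e.data.id](e.data.body);
  delete pending[e.data.id];
};

request("section", "/api/section/news").then(list => console.log(list.length));
</code></pre>
<p> Once requests look like this, moving extra work (caching, sanitization, parsing) to the worker side doesn't change the UI code at all, which is a big part of why the main thread stays so quiet.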
<p> The metrics from this version of the app are (unsurprisingly) pretty good! The biggest slowdown is the network, which would also be a problem in native code: loading the article list for a section requires one request to get the article IDs, and then one request for each article in that section (up to 21 in total). That takes a while &mdash; about a second, on average. On the other hand, it means we have every article cached by the time that the user can choose something to read, which cuts the time for requesting and loading an individual article to around 150ms on my Chromebook. <p> That's not to say that there aren't problems, although I think they're manageable. For one thing, the worker and app bundles are way too big right now (700KB and 200KB, respectively), in part because they're pulling in a bunch of big NPM modules to do their processing. These should be lazy-loaded for speed as much as possible: we don't need HTML parsing right away, for example, which would cut a good 500KB off of the worker's initial size. Every kilobyte of script is roughly 1ms of load time on a mobile device, so spreading that out will drastically speed up the app's startup time. <p> As an interesting side note, we could cut almost all that weight entirely if the <var>document.implementation</var> object were available in Web Workers. Weir, for example, does all its parsing and sanitization <a href="https://github.com/thomaswilburn/Weir/blob/master/public/js/Service.Sanitize.js#L30">in an inert document</a>. Unfortunately, the DOM isn't thread-safe, so nothing related to <var>document</var> is available outside the main process, and I suspect a serious sanitization pass would blow past our frame budget anyway. Oh well: <var>htmlparser2</var> and friends it is. <p> Ironically, the other big issue is mostly a result of packaging this up as a Chrome app. While that lets me talk to the CMS without having CORS support, it also comes with a fearsome content security policy. The app shell can't directly load images or fonts from the network, so we have to load article thumbnails through JavaScript manually instead. Within Chrome's <var>&lt;webview&gt;</var> tag, we have the opposite problem: the webview can't load anything from the app, and it has a weird protocol location when loaded from a data URL, so all relative links have to be rewritten. It's not insurmountable, but you have to be pretty comfortable with the way browsers work to figure it out, and the debugging can get a little hairy. <p> So there you have it: a web app that performs like native, but includes support for features like DocumentCloud embeds or interactive HTML graphs. At the very least, I think you could use this to advocate for a hybrid native/web client on your news site. But there's a strong argument to be made that this could be your <i>only</i> app: add a Service Worker and (in Chrome and Firefox) it could load instantly and work offline after the first visit. It would even get a home screen icon and push notification support. I think the possibilities for <a href="https://addyosmani.com/blog/getting-started-with-progressive-web-apps/">progressive web apps</a> in the news industry are really exciting, and building this client makes me think it's doable without a huge amount of extra work. Tue, 10 May 2016 15:24:10 -0700 http://www.milezero.org/index.php/tech/web/behind_the_times.html/tech/web Reporting with Python http://www.milezero.org/index.php/journalism/education/reporting_with_python.html This month, I'm teaching a class at the University of Washington on reporting with Python.
This seems like an odd match for me, since I hardly ever work with Python, but I wanted to do a class that was more journalism-focused (as opposed to the front-end development that I normally teach), and teaching first-time programmers how to do data analysis in Node just isn't realistic. If you're interested in following along, the repository with the class materials is located <a href="https://github.com/thomaswilburn/reporting-with-python">here</a>. <p> I'm not the Times' data reporter, so I don't get to do this kind of analysis often, but I always really enjoy it when I do. The danger when planning a class on a fun topic is that it's easy to over-stuff the curriculum in my eagerness to cover the techniques that I think are particularly interesting. To fight that impulse, I typically make a list of material I want to cover, then cut it in half, then think about cutting it in half again. As a result, there's a lot of stuff that didn't make it in &mdash; SQL and web scraping chief among them. <p> What's left, however, is a pretty solid base for reporters who are interested in starting to use code to generate and explore stories. Last week, we cleaned and searched 1,000 text files for a string, and this week we'll look at doing analysis on CSV files. In the final session, I'm planning on taking a deep dive into regular expressions: so much of reporting is based around interrogating text files, and the nice thing about an education in regex is that it will travel into almost any programming language (as well as being useful for many command line tools like grep or sed). <p> If I can get anything across in this class, I'm hoping to leave students with an understanding of just how big digital scale can be, and how important it is to have tools for handling it. I was talking one night with one of the Girl Develop It organizers, who works for a local analytics company. Whereas millions of rows of data is a pretty big deal for me, for her it's a couple of hours on a Saturday &mdash; she's working at a whole other order of magnitude. I wouldn't even know where to start. <p> Right now, most record requests and data dumps operate more at my scale. A list of <a href="http://www.seattletimes.com/seattle-news/environment/thousands-of-exotic-animals-are-shipped-through-seattle-each-year/">all animal imports/exports in the US for the last ten years</a> is about 7 million records, for example. That's approachable with Python, although you'd be better off learning some SQL for the heavy lifting, but it's past the point where Excel is useful, and it certainly couldn't be explored by hand. If you can't code, or you don't have access to someone who does, you can't write that story. <p> At some point, the leaks and government records that reporters pore over may grow to a larger kind of scale (leaks, certainly; government data will be aggregated as long as there are privacy concerns). When that happens, reporters will have to develop the kinds of skills that I don't have. We already see hints of this in the tremendous tooling and coordination required for investigating <a href="https://source.opennews.org/en-US/articles/people-and-tech-behind-panama-papers/">the Panama papers</a>. But in the meantime, I think it's tremendously important that students learn how to automate data at a basic level, and I'm really excited that this class will introduce them to it.
Fri, 29 Apr 2016 10:04:06 -0700 http://www.milezero.org/index.php/journalism/education/reporting_with_python.html/journalism/education Calculated Amalgamation http://www.milezero.org/index.php/tech/coding/calculated_amalgamation.html In a fit of nostalgia, I've been trying to get my hands on a TI-82 calculator for a few weeks now. TI BASIC was probably the first programming language in which I actually wrote significant amounts of code: although a few years later I'd start working in C for PalmOS and Windows CE, I have a lot of memories of trying to squeeze programs for speed and size during slow class periods. While I keep checking Goodwill for spares, there are plenty of TI calculator emulation apps, so I grabbed one and loaded up a TI-82 ROM to see what I've retained. <p> Actually, TI BASIC is <i>really</i> weird. Things I had forgotten: <ul> <li> You can type in all-caps text if you want, but most of the time you don't, because all of the programming keywords (<var>If</var>, <var>Else</var>, <var>While</var>, etc.) are actually single "character" glyphs that you insert from a menu. <li> In fact, pretty much the only things typed in manually are variable names, of which you get 26 (one for each letter). There are also six arrays (max length 99), five two-dimensional matrices (limited by memory), and a handful of state variables you can abuse if you really need more. Everything is global. <li> Variables aren't stored using <var>=</var>, which is reserved for testing, but with a left-to-right arrow operator: <var>value &rarr; dest</var>. I imagine this clears up a lot of ambiguity in the parser. <li> Of course, if you're processing data linearly, you can do a lot without explicit variables, because the result of any statement gets stored in <var>Ans</var>. So you can chain a lot of operations together as long as you just keep operating on the output of the previous line. <li> There's no debugger, but you can hit the On key to break at any time, and either quit or jump to the current line. <li> You can call other programs and they do return after calling, but there are no function definitions or return values other than <var>Ans</var> (remember, everything is global). There is GOTO, but it apparently causes memory leaks when used (thanks, Dijkstra!). </ul> <p> I'd romanticized it over time &mdash; the self-contained hardware, the variable-juggling, the 1-bit graphics on a 96x64 screen. Even today, I'm kind of bizarrely fascinated by this environment, which feels like the world's most cumbersome register VM. But loading up the emulator, it's obvious why I never actually finished any of my projects: TI BASIC is legitimately a terrible way to work. <p> In retrospect, it's obviously a scripting language for a plotting library, and not the game development environment I wanted it to be when I was trying to build Wolf3D clones. You're supposed to write simple macros in TI BASIC, not full-sized applications. But as a bored kid, it was a great playground, and the limitations of the platform (including its molasses-slow interpreter) made simple problems into brainteasers (it's almost literally the challenge behind <i>TIS-100</i>). <p> These days, the kids have it way better than I did. A micro:bit is cheaper and syncs with a phone or computer. A Raspberry Pi is a real computer of its own, as is the average smartphone. And a laptop or Chromebook with a browser is miles more productive than a TI-82 could ever be.
On the other hand, they probably can't sneak any of those into their trig classes and get away with it. And maybe that's for the best &mdash; look how I turned out! Thu, 14 Apr 2016 22:48:16 -0700 http://www.milezero.org/index.php/tech/coding/calculated_amalgamation.html/tech/coding ES6 in anger http://www.milezero.org/index.php/tech/web/es6_in_anger.html One of the (many) advantages of running Seattle Times interactives on an entirely different tech stack from the rest of the paper is that we can use new web features as quickly as we can train ourselves on them. And because each news app ships with an isolated set of dependencies, it's easy to experiment. We've been using a lot of new ES6 features as standard for more than a year now, and I think this is a good chance to talk about how to use them effectively. <p> <h4>The good</h4> <p> Surprisingly (to me at least), the single most useful ES6 feature has been arrow functions. The key to using them well is to restrict them only to one-liners, which you'd think would limit their usefulness. Instead, it frees you up to write much more readable JavaScript, especially in array processing. As soon as it breaks to a second line (or seems like it might do so in the future), I switch to writing regular function statements.
<pre><code>
//It's easy to filter and map:
var result = list.filter(d => d.id).map(d => d.value);

//Better querySelectorAll with the spread operator:
var $ = s => [...document.querySelectorAll(s)];

//Fast event logging:
map.on("click", e => console.log(e.latlng));

//Better styling with template strings:
var translate = (x, y) => `translate(${x}px, ${y}px);`;
</code></pre>
<p> Template strings are the second biggest win, especially as above, where they're combined with arrow functions to create text snippets. Having a multiline string in JS is very useful, and being able to insert arbitrary values makes building dynamic popups or CSS styles enormously simpler. I love writing template strings for quick chunks of templating, or embedding readable SQL in my Node apps. <p> Despite the name, template strings aren't real templates: they can't handle loops, they don't really do interpolation, and the interface for using "tagged" strings is cumbersome. If you're writing very long template strings (say, more than five lines), it's probably a sign that you need to switch to something like Handlebars or EJS. I have yet to see a "templating language" built on tagged strings that didn't seem like a wildly frustrating experience, and despite the industry's shift toward embedded DSLs like React's JSX, there is a benefit to keeping different types of code in different files (if only for widespread syntax highlighting). <p> The last features I've really embraced are destructuring and the new object literal shorthand. They're mostly valuable for cleanup, since all they do is cut down on repetition. But they're pleasant to use, especially when parsing text and interacting with CommonJS modules.
<pre><code>
//Splitting dates is much nicer now:
var [year, month, day] = dateString.split(/\/|-/);

//Or getting substrings out of a regex match:
var re = /(\w{3})mlb_(\w{3})mlb_(\d+)/;
var [match, away, home, index] = gameString.match(re);

//Exporting from a module can be simpler:
var x = "a";
var y = "b";
module.exports = { x, y };

//And imports are cleaner:
var { x } = require("module");
</code></pre>
<h4>The bad</h4> <p> I've tried to like ES6 classes and modules, and it's possible that one day they're going to be really great, but right now they're not terribly friendly.
Classes are just syntactic sugar around ES5 prototypes &mdash; although they look like Java-esque <var>class</var> statements, they're still going to act in surprising ways for developers who are used to traditional inheritance. And for JavaScript programmers who understand how the language actually works, class definitions boast a weird, comma-less syntax that's <i>sort of</i> like the new object literal syntax, but far enough off that it keeps tripping me up. <p> The turning point for the new <var>class</var> keyword will be when the related, un-polyfillable features make their way into browsers &mdash; I'm thinking mainly of the new Symbols that serve as feature flags and the ability to extend Array and other built-ins. Until that time, I don't really see the appeal, but on the other hand I've developed a general aversion to traditional object-oriented programming, so I'm probably not the best person to ask. <p> Modules also have some nice features from a technical standpoint, but there's just no reason to use them over CommonJS right now, especially since we're already compiling our applications during the build process (and you have to do that, because browser support is basically nil). The parts that are really interesting to me about the module system &mdash; namely, the configurable loader system &mdash; aren't even fully specified yet. <h4>New discoveries</h4> <p> Most of what we use on the Times' interactive team is restricted to portions of ES6 that can be transpiled by Babel, so there are a lot of features (proxies, for example) that I don't have any experience using. In a Node environment, however, I've had a chance to use some of those features on the server. When I was writing our <a href="https://github.com/seattletimes/mlb-scraper/">MLB scraper</a>, I took the opportunity to try out generators for the first time. <p> Generators are borrowed liberally from Python, and they're basically constructors for custom iterable sequences. You can use them to make normal objects respond to language-level iteration (i.e., <var>for ... of</var> and the spread operator), but you can also define sequences that don't correspond to anything in particular. In my case, I created a generator for the calendar months that the scraper loads from the API, which (when hooked up to the command line flags) lets users restart an MLB download from a later time period:
<pre><code>
//feed this a starting year and month
var monthGen = function*(year, month) {
  while (year < 2016) {
    yield { year, month };
    month++;
    if (month > 12) {
      month = 1;
      year++;
    }
  }
};

//generate a sequence from 2008 to 2016
var months = [...monthGen(2008, 1)];
</code></pre>
<p> That's a really nice code pattern for creating arbitrary lists, and it opens up a lot of doors for JavaScript developers. I've been reading and writing a bit more Python lately, and it's been amazing to see how much a simple pattern like this, applied language-wide, can really contribute to its ergonomics. Instead of the Stream object that's common in Node, Python often uses generators and iteration for common tasks, like reading a file line-by-line or processing a data pipeline. As a result, I suspect most new Python programmers need to survey a lot less intellectual surface area to get up and running, even while the guts underneath are probably less elegant for advanced users. <p> It surprised me that I was so impressed with generators, since I haven't particularly liked Python very much in the past.
But in reading the <a href="http://chimera.labs.oreilly.com/books/1230000000393/index.html">Cookbook</a> to prep for a UW class in Python, I've realized that the two languages are actually closer together than I'd thought, and getting closer. Python's <var>class</var> implementation is actually prototypical behind the scenes, and its use of duck typing for built-in language features (such as the <a href="https://docs.python.org/2/reference/datamodel.html#context-managers"><var>with</var> statement</a>) bears a strong resemblance to the work being done on JavaScript Promises (a.k.a. "then-ables") and iterator protocols. <p> It's easy to be resistant to change, especially when it's at the level of a language (computer or otherwise). I've been critical of a lot of the decisions made in ES6 in the past, but these are positive additions on the whole. It's also exciting, as someone who has been working in JavaScript at a deep level, to find that it has new tricks, and to stretch my brain a little integrating them into my vocabulary. It's good for all of us to be newcomers every so often, so that we don't get too full of ourselves. Tue, 22 Mar 2016 18:43:02 -0700 http://www.milezero.org/index.php/tech/web/es6_in_anger.html/tech/web