this space intentionally left blank

June 18, 2021

Filed under: tech»coding

Upstream

In 2013, Google decided to shut down Google Reader, one of a number of boneheaded decisions that the company undertook in pursuit of some bizarre competition with Facebook. At the time, I decided to try an experiment: I'd write my own RSS reader, try it for a few months, and if it didn't work out I'd switch to one of the corporate replacement options.

Eight years later, I still use Weir to keep up with various feeds, blogs, and news updates. It's a deeply personal piece of software — the project that made me fall in love with the idea of code tools that are crafted just for a single person, like making your own workbench or sewing your own clothes.

But I also haven't substantially upgraded or altered Weir in all that time, even as I've learned a lot about developing on the web. So this week, while I had the apartment to myself, I decided to experiment again and build a new client (while mostly leaving the server alone). After I get a chance to work out any remaining kinks, I'll move it over to become the new built-in UI for the application.

I love my curvy UI

The original client was written in Angular 1 as a learning project. It's fine! It's mostly fine. The main problem that Angular had — and which other front-end frameworks have inherited — is that it wanted you to do all your work at a level of abstraction from the DOM, and any problems that couldn't be cleanly moved into the state object would get messy. Browsers were also worse in those days: no intersection observers for handling scroll positioning, inconsistent event handling, no support for easy concurrency with async/await. So there's some awkward behavior in the original client that never felt like it would be easy to fix, because it required crossing that abstraction barrier.

Unsurprisingly, for the rewrite I organized the code via web components — extended from the same base class that I used for Radio, and coordinated over a central event bus similar to the command system in Caret. The only code that translated over mostly unchanged was the sanitization module, which loads each post body into an inert document and processes it to remove ads, custom styles, class names, and anything else that isn't plain HTML content.
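
Here's the gist of that kind of pass, with placeholder selectors standing in for the real filtering rules:

    // sketch of an inert-document sanitizer; the selectors and attribute
    // list here are placeholders, not the actual module's rules
    var sanitize = function(html) {
      // DOMParser gives us a detached document: scripts won't run in it
      var doc = new DOMParser().parseFromString(html, "text/html");
      // remove elements we never want in a post body
      for (var node of doc.querySelectorAll("script, style, iframe, ins")) {
        node.remove();
      }
      // strip presentational attributes from whatever remains
      for (var element of doc.body.querySelectorAll("*")) {
        element.removeAttribute("class");
        element.removeAttribute("style");
      }
      return doc.body.innerHTML;
    };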

What is surprising is that the two codebases are not notably different in size — in fact, CLOC gives roughly the same line counts between the two. Of course, that only includes code I wrote. The original Weir client also requires 80KB of Angular runtime code, which has to be downloaded, parsed, compiled, and run before any of my code shows up onscreen. I'm using those precious first-paint seconds to indulge in a build-free workflow — all JS is just loaded as raw ES modules, and components fetch their styles and markup from individual HTML files instead of using Less and Browserify. It all evens out, but if I decide I'm tired of paying a startup penalty, it's certainly easy enough to add Rollup to the process.
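
As a sketch (with invented file names and class details, not the actual Weir source), a component in the new client looks something like this:

    // a component that fetches its own markup and styles from a sibling HTML
    // file at startup; names and paths here are invented for illustration
    class StoryList extends HTMLElement {
      constructor() {
        super();
        this.attachShadow({ mode: "open" });
        this.load();
      }

      async load() {
        // the HTML file contains a <style> tag plus the component's markup
        var url = new URL("./story-list.html", import.meta.url);
        var response = await fetch(url);
        this.shadowRoot.innerHTML = await response.text();
      }
    }

    customElements.define("story-list", StoryList);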

Typically when I go framework-less, the thing I miss most is iteration in templates. It's still a little clumsy in the new client code, but between Element.replaceChildren(), shadow DOM slots, and elements that act as template partials, it's honestly much less of an issue these days. I could add a databinding function to diff and transition elements, as Radio does for its sorted podcast lists, but (other than the feed management table) there's almost no part of the UI here where view data persists between state transitions, so it's not really worth the effort.
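
In practice, "iteration" is mostly just rebuilding a list whenever the state changes, as in this rough sketch (not the actual client code):

    // throwaway list rendering with replaceChildren(); since nothing needs
    // to survive between renders, we can rebuild the item nodes every time
    var renderStories = function(listElement, stories) {
      var items = stories.map(function(story) {
        var item = document.createElement("li");
        item.textContent = story.title;
        item.dataset.id = story.id;
        return item;
      });
      // swap the old children for the new set in a single call
      listElement.replaceChildren(...items);
    };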

Scrolls like the Dead Sea

Instead of using a stack of full-window UI "scenes" for different tasks within the UI (such as settings, feed management, and reading stories), the new client is organized in three columns (admin, story listings, and reader). On desktop, they line up side-by-side across the window, and on mobile each one takes up the whole screen, similar to something like Tweetdeck or Mastodon. CSS scroll snap makes it easy to swipe between them horizontally or scroll vertically within the individual panels as their content requires. In practice this gives us a native-feeling, responsive UI pattern with no JavaScript, and it will feel even more natural once scroll-snap-stop is supported to prevent overscrolling past a column.

[Image: Weir on desktop, with the three columns in a row]

[Image: Weir on a phone, with swipeable columns (artist's rendering)]

Unfortunately, creating a mobile UI that scrolls in two directions like this means that viewport management is more difficult to handle programmatically. For example, when loading a story into the reader panel, we want to scroll smoothly over to that column from the story list, while immediately jumping within the reader content to the top of the story. In contrast, the story list should scroll smoothly both for its contents (when you use the keyboard shortcuts to select the next item in the list) and when it becomes the primary view on mobile (say, if you reach the end of all unread stories).

Ultimately, the solution was to split scrolling into separate code paths, depending on whether we want to move between columns, or within them. The code still uses scrollIntoView for panel transitions, and modules send a request over the global event bus if they want a different view to take over. The panels themselves are shell custom elements that offer individual control for scrolling content separately from the main viewport — the reader and story list dispatch DOM events up the tree to the ancestor panel when they need their column to scroll vertically to a certain element or offset, with or without an animated transition.
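
Sketched out with invented element and event names, the two paths look roughly like this:

    // panels apply scroll requests from their descendants to their own
    // internal scroller (the element and event names here are invented)
    for (var panel of document.querySelectorAll("ui-panel")) {
      panel.addEventListener("scroll-request", function(e) {
        e.currentTarget.scrollTo({
          top: e.detail.top,
          behavior: e.detail.smooth ? "smooth" : "auto"
        });
      });
    }

    // content inside a panel asks for a vertical scroll by dispatching up the tree...
    var reader = document.querySelector("story-reader");
    reader.dispatchEvent(new CustomEvent("scroll-request", {
      bubbles: true,
      composed: true, // lets the request escape a shadow root
      detail: { top: 0, smooth: false }
    }));

    // ...while switching columns is just a smooth scrollIntoView on the panel itself
    reader.closest("ui-panel").scrollIntoView({
      behavior: "smooth",
      inline: "start",
      block: "nearest"
    });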

Promises, promises

At the start of the process, I didn't intend to do anything to the server side of Weir. It had already been built to handle cross-domain requests, so I didn't need to change anything for local development, and while it has its quirks, I'm generally pretty happy with how it works. Then I hit a snag: the "mark all as read" API route returns a count of stories that were updated, but not the new unread/total story counts. It was just irritating enough that I decided to dig in and make one little change. Of course it snowballed from there.

Since it was as much a learning experience as it was a legit project, Weir doesn't use a typical Node library for setting up its API. I wrote my own request handler and router on top of the basic HTTP module. That part of the code has actually aged pretty well. However, to manage the async chains involved in making database calls and RSS fetches, I wrote a utility library called Manos (because you're putting your code in the hands of fate), and that stuff was a mess.

These days, the ideal way to handle async flow is with the await keyword, so you don't have to write code out of order or in a snarl of function wrappers. But using await requires functions to return promises instead of accepting callbacks, and all of my code was written before JavaScript promises were standardized. So to make it a little easier to insert a db.getStatus() call in a single handler, I ended up converting the whole application to a promise-based flow.
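
Most of that conversion is just wrapping the old callback signatures, along these lines (the database call here is a stand-in, not Weir's actual API):

    var { promisify } = require("util");

    // stand-in for an old callback-based API with the (err, result) convention
    var legacyGetStatus = function(callback) {
      setTimeout(() => callback(null, { unread: 12, total: 340 }), 10);
    };

    // Node's promisify handles the standard convention directly...
    var getStatus = promisify(legacyGetStatus);

    // ...or you can wrap by hand when the callback shape is unusual
    var getStatusManual = () => new Promise((resolve, reject) => {
      legacyGetStatus((err, counts) => err ? reject(err) : resolve(counts));
    });

    // either way, a route handler can now just read top to bottom
    var markRefresh = async function() {
      var counts = await getStatus();
      return counts;
    };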

Luckily, I went through a similar process a few years back with Caret, when async/await shipped in Chrome, so I largely knew what to expect. Surprisingly, the biggest change is not in the routes at all, but in the "Hound" component that periodically fetches feed items from various URLs: subscriptions have to be grouped into batches, then each batch is requested, sometimes decompressed from gzip, fed to a streaming parser, and finally saved to the database. As implemented with Manos, the code was at best out of order, and at worst involved a lot of "clever" functional tricks.

The new Hound flow has its issues — I think there's some leftover weirdness from the way old-school Node streams interact with each other that requires pausing the request as soon as it comes in — but it now reads top to bottom, and most of the complication comes from the problem domain and not the language. At some point, updating the request code to use something like fetch() will probably eliminate most of the remaining issues.
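
In outline (and only in outline, with invented helper names rather than the actual Hound code), the flow now reads like this:

    // helper names (getBatches, fetchFeed, parseStories, db.saveStories)
    // are invented for illustration
    var checkFeeds = async function(subscriptions) {
      // group subscriptions so we don't hit hundreds of URLs at once
      var batches = getBatches(subscriptions, 10);
      for (var batch of batches) {
        // request the whole batch concurrently, but wait for it to settle
        var results = await Promise.allSettled(batch.map(async function(feed) {
          var xml = await fetchFeed(feed.url); // handles gzip internally
          var stories = await parseStories(xml); // streaming parser
          await db.saveStories(feed, stories);
        }));
        // log failures and keep going instead of aborting the whole run
        for (var r of results) {
          if (r.status == "rejected") console.error(r.reason);
        }
      }
    };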

Second-system syndrome

There's a truism in development circles that a rewrite is often a debacle — people point to the rewrite of Netscape 4.0 that's blamed for tanking the company, or the Copland OS at Apple. My personal suspicion is that this is survivorship bias: Netscape itself was a from-scratch rewrite originally from the Mosaic browser, and while Copland was not a success, current Mac OS is built on the bones of NeXT, itself a from-scratch OS.

In any case, most people aren't building browsers or operating systems. For these kinds of small projects, I think there's value in taking another run at an idea, armed with knowledge about what worked or didn't work the first time around. In fact, that might be the best argument for these kinds of small projects (API clients, media players, browser extensions): they're a chance to stop, try something different, and measure our skills against our past selves. I learn a lot from these little rewrites, and I think it's safe to say that I am better at this than I was eight years ago.

I still wish they'd just bring back Reader, though.

February 15, 2021

Filed under: tech»web

Between Amber and Chaos

There isn't, in my opinion, a cooler name for a web standard than the Shadow DOM. The closest runner-up is probably the SubtleCrypto API, and after a decade of Bitcoin the appeal of anything with "crypto" in the name is pretty cloudy. So it's a low bar, but still: Shadow DOM. Pretty cool name.

Although I've been using web components for a long time, I've only been using Shadow DOM with them for a couple of years, and generally in pretty limited ways. For an upcoming project at NPR, I took the chance to really dig into how it's used in a mixed-content environment, one where custom elements are not just leaves of the HTML tree, but also wrap branches of extensive HTML content. The experience was pretty eye-opening, and surprisingly positive!

Walking the pattern

Let's start by talking about what it is. Like most of the tech under the web components "brand," Shadow DOM is meant to retroactively give developers tools that "explain" what the browser already does, and hook into the same extension points. The goal is to make it possible for regular people to rapidly build out new functionality, because there's no "magic" behind the scenes.

For example, consider a humble <select> tag.

Right off the bat, this tag has some special treatment that we can't immediately explain through regular HTML: it has a "thumb" (the arrow on the right) that doesn't appear in the DOM and can't be meaningfully styled, but is clearly a UI element that reacts to events. The options, defined as children of the tag, are still surfaced visibly, but not in the same way that children of a paragraph or a regular text element are. Instead, they're moved to a new location in the dropdown menu and shown conditionally (or, on mobile, through an entirely different UI context).

Using our previous HTML/JS toolkit, it's not possible to duplicate these behaviors, or similar behaviors from tags like <video> or <input type="range">. To explain the "magic" of these elements we need to add Shadow DOM. It gives developers an API to attach a hidden document fragment called a "shadow root" to any given element, which replaces the visible contents of the element. However, even though they're shown to us in the browser, the contents of that document fragment are hidden from normal JavaScript queries, and its CSS styles are isolated — from the inside, you have a blank slate to work from, and from the outside it's as though that shadow content is an intrinsic part of the tag itself, just like the select box's dropdown UI.

What about those select box options, which are written as child tags but appear in a very different way? For that, we add in a <slot> element: inside the shadow, this element will re-parent any children placed in the host element. For example, given a shadow-dom element with the following in its shadow root:

    <b> SHADOW START </b>
    <slot></slot>
    <b> SHADOW END </b>

We could write this in our page as:

    <shadow-dom>
      <i>HELLO WORLD</i>
    </shadow-dom>

The contents of the <i> element aren't shown directly. Instead, they're moved inside the slot element, meaning that the page output will read SHADOW START HELLO WORLD SHADOW END. But, and this is the cool part, that italic tag appears to scripts and dev tools as though it was just a regular child of the <shadow-dom> element — it can be styled as normal, you can query for it, attach event listeners, and edit it as normal. The bold tags, meanwhile, remain in the shadow: they're visible on the page, but they can't be accessed from scripts and their styles are completely isolated.
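
Defining that hypothetical <shadow-dom> element takes only a few lines:

    // one way to define the <shadow-dom> element from the example above
    class ShadowDemo extends HTMLElement {
      constructor() {
        super();
        // constructors can't touch their regular children yet, but they
        // are allowed to create and fill a shadow root
        var root = this.attachShadow({ mode: "open" });
        root.innerHTML = `
          <b> SHADOW START </b>
          <slot></slot>
          <b> SHADOW END </b>
        `;
      }
    }

    customElements.define("shadow-dom", ShadowDemo);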

This, then, is how Shadow DOM "explains" how a select box works. The box itself, including the current item and the thumb UI, live in the shadow. The options you write into the tag are reparented to a slot inside the drop-down area, to be shown when you click the element. We can use this API to create self-contained UI for an application or document without having to worry about new markup or styles polluting the page.

Enter the Logrus

Not everything is rosy, of course. One long-standing complication is that custom elements can't touch their own contents or attributes during construction, for reasons that are tedious and not worth going into here, but they can attach and modify their shadow root. So it's really tempting in custom elements to do everything in a shadow, because it radically simplifies your templating. Now you have null problems. In Radio, I built the entire UI this way, which worked great until I needed to inspect an element that's inside three nested shadow roots, or if I needed to query for the current active element.

Another misunderstanding has been people thinking shadow roots can replace something like Styled Components in terms of style isolation. But Shadow DOM is more like an iframe than anything else: explicitly inherited style properties (like font family) will travel through, but otherwise it's a pretty hard barrier. If you want to provide styling hooks for a component, you need to either provide preset options or document a set of CSS custom properties. More importantly, the mechanisms for injecting styles into a shadow root (typically by putting a <style> tag inside) don't play well with standard build tooling.

By contrast, actually populating Shadow DOM tends to be cumbersome without build tooling in place to help. A lot of tutorials recommend building it from an inert <template> tag, which used to be elegantly handled via HTML imports. Now that those are deprecated, you either have to place the Shadow DOM template in your page manually (no), lean into async component definition (awkward), embed the markup into your script as a big JS literal (ugly), or use a build plugin to pull strings in as needed (sigh). None of these are unworkable, or even that difficult, but none of them are nearly as nice as simply being able to define a component's styles, shadow markup, and behavior in a single, imported HTML file.

Major Arcana

My personal feeling is that the biggest barrier to effective Shadow DOM usage, in a lot of cases, is that many developers haven't learned about the browser as much as they've learned about React or another framework, and those frameworks have often diverged in philosophy from the DOM. If you're used to thinking of the page as a JSX function value, the idea of a secret, stateful document fragment that replaces the DOM you tried to render is probably pretty bizarre.

But as someone who writes a lot of minimalist code directly against browser APIs, I actually think Shadow DOM fits in well with my mental model of how elements work, and it has clarified a lot of my thinking on how to build effectively with custom elements — especially through slots and slotted elements.

I'm still learning and experimenting, but I feel comfortable saying that if you're building custom elements, the rule of thumb should be "use Shadow DOM, but not very much." The more you're able to expose HTML to the light DOM by surfacing it through slots, the easier it is to compose them and style content. For example, a custom element that creates a tabbed UI from its children is a great Shadow DOM use case: the tab list lives in the shadow and is generated implicitly by iterating over the slotted elements. Since the actual tab contents are placed back in the light DOM, they're still easy to style and inspect. To really go with the grain of the platform, the host component might show or hide those slotted blocks using the hidden attribute, instead of setting styles or adding classes.
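
A stripped-down sketch of that tab pattern (the element name and the data-title attribute are inventions for this example) might look like this:

    // sketch only: the element name and data-title attribute are invented
    class TabGroup extends HTMLElement {
      constructor() {
        super();
        var root = this.attachShadow({ mode: "open" });
        root.innerHTML = `<nav></nav><slot></slot>`;
        this.nav = root.querySelector("nav");
        // slotchange fires whenever the slotted children change
        root.querySelector("slot").addEventListener("slotchange", () => this.rebuild());
      }

      rebuild() {
        var panels = [...this.children];
        // the tab strip lives in the shadow, generated from the slotted panels
        this.nav.replaceChildren(...panels.map((panel, i) => {
          var tab = document.createElement("button");
          tab.textContent = panel.dataset.title || `Tab ${i + 1}`;
          tab.addEventListener("click", () => this.select(panel));
          return tab;
        }));
        if (!this.selected) this.select(panels[0]);
      }

      select(active) {
        this.selected = active;
        // show and hide the light DOM panels with the hidden attribute
        for (var panel of this.children) {
          panel.toggleAttribute("hidden", panel != active);
        }
      }
    }

    customElements.define("tab-group", TabGroup);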

The exception is for elements that should not have children (like input tags) or where children are used for configuration — think video tags or my old Leaflet map component. With these "leaf" components, Shadow DOM lets you treat inner HTML as a domain-specific language, while your visible content lives entirely in the shadow root. That's a great way to create customized behavior while still exposing it to designers or novice front-end developers who are very comfortable with markup but would balk at writing a lot of JS.

Ultimately, Shadow DOM feels like it really crystallizes the role of custom elements as a tool for implementing UI widgets, not as a competitor for Svelte. Indeed, by providing a mechanism for moving complex functionality into an opaque facade, it's probably the biggest gift to the "web pages are for documents, not apps" crowd in several years: if you want to build a big single page app, Shadow DOM doesn't really move the needle, but it's great for injecting discrete units of content into an article. As someone who crosses that app/document divide a lot, I'm really excited to see what I can do with it this year.

January 27, 2021

Filed under: journalism»political

2020 Revision

The problem with developing election data expertise is that they make you cover elections.

Even as elections go, this was a rough one, and I'm just now starting to feel like I've recovered. After getting through a grueling primary season, we ended up rewriting our entire general election rig — we've got a post on the NPR News Apps blog about it, which is a pretty good overview.

This wasn't my first election night. But it was one of the biggest development efforts and teams I've led in journalism, and obviously the stakes were high. In the end, I'm mostly satisfied with the work we did — there's always room for improvement. I think 2020 is a year I'll look back on in terms of the simple (our technical choices) and the complex (weighing our responsibility for the results).

Act, React

For the primaries, I wrote our results displays as a set of custom elements. That worked well: it was fast, expressive, and with a little work to smooth over the API (primarily adding support for automatic templating and attribute/property mirroring), it was a pretty pleasant framework. However, for the general election, I wanted to build out a single-page application that shared state across multiple views. I didn't have a ton of experience with React, and it seemed like a worthwhile experiment (technically, we used Preact, but it's the same thing in practice).

Turns out I didn't love the experience, but it did have advantages. HTML custom elements do not come with a templating solution built-in, and they don't have a good way to pass non-string data across element boundaries. I wrote a miniature "data to element mapping" function to make that process easier, but it was never as straightforward as JSX templates were. In general, React's render cycle is reasonably pleasant to use, and easy to train people on. It's clear that this is what the library was primarily designed to do.
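
For the curious, that kind of helper boils down to something like this hypothetical version (not the actual election code):

    // hypothetical sketch, not the actual election code: push a data object
    // into an element, using properties where they exist and attributes otherwise
    var updateElement = function(element, data) {
      for (var [key, value] of Object.entries(data)) {
        if (key in element) {
          // rich values (arrays, objects, dates) survive as properties
          element[key] = value;
        } else {
          // everything else gets stringified onto an attribute
          element.setAttribute(key, value);
        }
      }
    };

    // e.g. updateElement(resultsBoard, { races: raceData, headline: "Senate" });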

Unfortunately, it often feels like React doesn't really know how to organize the non-rendering parts of a web application, and when you're building live election results those parts matter. Unlike Vue or custom elements, which separate updates to display and data, changes to a React component's properties and state all get piped through the render cycle. This is a poor match for operations that don't easily map to an idempotent template value, like triggering a server fetch or handling focus for accessibility purposes.

As best I can tell, the React community's answer to that problem has been to double down on its fixation on stateless programming, switch to pure functions and "hooks" for rendering, and recommend the use of global stores like Redux to manage data for the application. Class-based components are clearly considered gauche, and JSX templating encourages only using syntax that can fit into a return value, instead of natural structures like loops and conditionals. At times, working with React feels like being trapped with that one brogrammer at a party who keeps telling you about the lambda calculus. We spent decades waiting for better JavaScript syntax features, and now React wants to pretend they don't exist? In this economy?

The best explanation I've seen of this philosophy is Rich Harris's talk "Metaphysics and JavaScript" (be sure to bring up the slide notes). I'm not sure Harris has the right solution either — the new syntax in his own Svelte framework gives me the willies — but when he talks about React in terms of an ideology, I think he's right. And I'm ideological as hell, but I'm also a practical person. If your framework keeps choosing an elegant abstraction over how the actual world works, it's not doing what I need it to do.

So, long story short, it works, and I think it's probably good enough to re-use for another four to eight years. But I went into the experience hoping to see if there were some hidden virtues of React that I just hadn't found, and largely I was disappointed.

Tough Calls

It was obvious to anyone paying the slightest bit of attention that Trump was not going to accept the results of this election if he didn't win, and that his supporters would absolutely be flooding the country with misinformation about it. He'd done it before: in 2016, among other lies, he handed out misleading maps to try to persuade visitors that he had actually won the popular vote and the majority of the country.

But five days before that report surfaced, as usual having utterly failed to read the room, the New York Times had published a largely identical map, effectively pre-validating Trump's choice. Much like Trump's 2020 strategy, this shouldn't have been hard to predict: a map of vote margin by precinct is considered wildly misleading by data visualization experts, and right-wing operatives have consistently overemphasized the mostly-empty land mass that makes up "real America" when they're working the refs. If newsrooms themselves don't understand (or care) how their visual reporting contributes to misinformation, how can we expect voters to differentiate between journalism and lies?

My worst nightmare, going into 2020, was that Trump or someone in his orbit would use our reporting to try to illegally retain power. So we planned for uncertainty. We had a big note at the top of the page to warn readers that mail-in voting could delay results. We didn't mark races as "leading" on any displays until they hit 50% of votes in, to reduce the well-known red-to-blue shift from vote tabulation. We built new displays to emphasize electoral weight over geographic size. And behind the scenes, I built additional overrides into the pipeline just to be safe: although we never used them, there was a whole facility for flagging races with arbitrary metadata that could have been used for legal challenges or voting irregularities.

(In fact, if I can offer one bit of advice to anyone who's new to election results, it's to be extremely pessimistic about the things you might need to override. Since this wasn't my first rodeo, I had control sheets already set up that would let us reset candidate metadata, ballot rosters, and race calls. Sure enough, when Maine's public radio stations wanted long-shot independent candidates included to highlight their impact on ranked-choice voting for Susan Collins' seat, despite not clearing our vote threshold for display, it was just a matter of updating a few cells. Same for when data irregularities muddled the district results coming from those states that proportionally allocate their electoral votes.)

I honestly wanted to go further with our precautions, but I'm satisfied with what we got approved in a fairly traditional newsroom. And in the end, we got lucky: it wasn't actually that close. Biden won by the same electoral margin that Trump had won in 2016, and across enough states by a high enough margin that it couldn't be flipped with a single lawsuit (even assuming Trump's lawyers were competent which... was not the case).

So the good news was that we didn't have to worry about NPR results being posted to Trump's Twitter feed. The bad news, as you know, is that the losers of the election spent the next two months undermining those results, lying about illegal voting, and eventually inciting a riot by white supremacists and QAnon conspiracy theorists at the Capitol during final certification of the electoral college. Kind of a mixed bag.

No, this election wasn't like any other. However, I hope it serves as a chance for all of us who do this work to think carefully about what our role is, and how we use the power we've been given. I have long argued that real-time election results are a civic disaster — they're stressful, misleading, and largely pointless — but that ship has sailed, and like a hypocrite I cash the checks for building them regardless. In the era of The Needle, news organizations aren't going to give up the guaranteed traffic boost no matter how unhealthy it is for the country. At the very least we can choose to design responsibly, and prioritize more than "faster results" and "stickier pages." We're not big on long-term lessons in this industry, but we have to start somewhere.

January 25, 2021

Filed under: gaming»software

Perfect n+1

Everybody says that Paradise Killer is tough to explain, but it's actually quite simple:

  • It's a detective game in the vein of Phoenix Wright, wherein
  • You play as Lady Love Dies, an immortal cultist and detective-slash-"investigation freak"
  • Who was exiled from the series of artificial utopias designed by her fellow immortal cultists
  • (in an aesthetic somewhere between "Zapp 'n Roger album" and "Windows 95 Plus Pack screensaver")
  • To revive their dead and entombed gods via ritual sacrifice at the island's peak
  • Except this time, someone killed the ringleaders before they completed the ceremony
  • And it's LLD's job to find out who and why, and then execute the perpetrators in the name of "justice."
Practically Death of a Salesman, really.

A couple months after finishing it, it dawned on me that one of the interesting things about the game, for as much lore as it packs in (and it is just stuffed with flavor text), is how little it's interested in redeeming its characters. After all, everyone involved (including the protagonist) has been kidnapping and killing ordinary people for millennia in religious rituals that they don't even really seem to like very much. They're monsters, but in the most self-serving, "it's a living" kind of way.

You can (and plenty of people have) read all of this as a metaphor for late-stage capitalism. Especially as we're re-opening for the third wave of COVID-19 because The Economy Must Grow, it's hard not to see the comparison to an upper class that joylessly and incompetently tosses people onto the pyre if it'll increment GDP by a percentage point or two. But I'm less interested in the political implications than I am in how this plays against modern redemption stories.

I suspect that game narrative is often a lot closer to classic television writing than it is to movies. Part of it is the length of the experience, but some of it also has to do with the ways interactive fiction has evolved, particularly in open-world games. Ashley Burch mentions this in an interview on Kotaku about her voice career, noting that once the player has control over the order and timing of activities, it puts limits on the amount of change that a character can believably accommodate — naive and hopeful voice lines in an early quest will seem wildly out of place if the protagonist has spent 40 hours becoming gradually more embittered, and vice versa. The result is that the writing has to have a central core that's largely unchangeable, much in the way that episodic TV used to reset at the end of every show.

For TV, arc-based narratives changed that. Once there are consequences across multiple episodes, and the longer you spend with a character (especially in genre fiction), the more tempting it is to find an excuse or rationale for their actions. Jaime Lannister isn't necessarily evil, he's led astray by his sister. Darth Vader is made a monster by the Emperor, and we'll see how it happened across six movies before George Lucas washes his hands of the whole thing. These kinds of heel-face turns are classic drama, they give actors and writers a lot to chew on, and they are often comforting to the audience, since they serve to reinforce our moral compass.

In contrast, Paradise Killer's cast isn't interested in rehashing their crimes, and so the game just... doesn't do it. It's not unaware — side characters, especially outsiders, will mention how inhuman the whole thing is — but it would be wildly out of character for someone like Lady Love Dies to have a change of heart and deliver a lengthy monologue about the terrible unfairness of it all. She's at the top of the hierarchy and she got there by committing awful, banal atrocities — the kind that aren't fun for an "investigation freak" to dig into. This game is not going to help reassure you that the evil-doers just need to come to their senses. In fact, it argues that they already have.

September 7, 2020

Filed under: gaming»software»sekiro

Live Die Repeat

In a burst of cherry blossoms, the blog is resurrected!

(It's a Sekiro joke. That's what the post is about.)

There are certain models that have become ubiquitous in AAA game design over the last five years, inspired by MMOs: randomized gear (both from enemy drops and loot boxes), player and enemy level progression, and crafting. In "live" games like Destiny, these gameplay loops are meant to keep the game sticky, making sure there's always a goal just out of reach. But they've become common in single-player games as well, a way to extend the length of campaigns that are increasingly expensive in an era of HD assets.

I generally find this off-putting, and I suspect many of the designers do as well. In Control, for example, grafting a loot system onto one of Remedy's quirky, cinematic stories feels extraneous, and the game seems well aware of that fact — the ubiquitous shelters containing exactly one (1) loot box could not possibly be more desultory and half-hearted compared to the exuberant weirdness of Dr. Casper Darling's musical stylings.

All of this provides context for my extremely mixed feelings about Sekiro: Shadows Die Twice. It's the first From Software "souls-like" title that I played to completion, and it'll probably be the last. I don't know that I would recommend it to anyone. But in its purity and its rejection of the "live game" grind, there is also something admirable that sticks with me.

The core mechanic in Sekiro is the relationship between two meters, vitality and posture, possessed by every combat entity in the game, including the player character (the Wolf). Vitality is basically a standard health bar. Posture is similar to a fighting game's guard meter: it increases as a character defends against attacks or when an attack is parried with perfect timing, and when it maxes out, the guard is broken. In the player's case, this leaves you open to attacks, but for enemies it means you can land an instant-kill "deathblow." Even still, vitality isn't useless since posture recovery speed is directly related to health.

Different enemies have different thresholds for their posture and vitality, and the interplay between these two meters is what makes Sekiro's combat interesting and aggressive. Since timed parries inflict extra posture damage (and reduce the posture damage you'd normally take from defending), many battles revolve around punishing enemy attacks to bring down vitality and slow posture recovery, then baiting counter-attack strings in order to break posture and land a deathblow. At its best, it feels like a great samurai battle: fast, dynamic, and deadly.

The problem, honestly, is that it's too deadly. To win a fight in Sekiro, you need to learn an enemy's attack patterns well enough that you can parry or dodge them, a task that's made more complicated in fights where a mid-string parry may change the pattern. But mistakes are incredibly punishing: it's not unusual for many bosses to kill the Wolf in two or three hits, and a surprising percentage of them are one-shots. Realistically, you're only going to learn one or two patterns per run before you get to stare at the loading screen and start over.

And that deadliness only really goes one way. Yes, the deathblow is an instant kill — but only after you slowly tick down the target's vitality and then engage in risky exchanges of posture damage. Compare this to Bushido Blade, an obvious inspiration but one where everyone (player and opponent alike) could die in an instant. In Sekiro, you might as well be fighting with a butter knife compared to everyone else. I wouldn't mind the perfectionism so much if it felt like I got more impact out of it.

From Software is known for games where growth comes from player skill, and not from a mechanic or in-game reward. They're also known for masochism and cheap shot tactics. Sekiro feels like the purest expression of both. The result is an experience that I respected more than I enjoyed. For all that it's thrilling when everything clicks, those moments are punctuation in long stretches of frustration — running the same route over and over, getting just a little bit farther before a lucky shot or a botched parry sends you back to the checkpoint.

This is, of course, part of the appeal for long-term Souls aficionados: your brain remembers the highs, and tends to ignore the long valleys of frustration between them. It's not for me. But I appreciate what it's trying to do, and the pressure it hopefully exerts on modern design. There is a middle ground between loot boxes and "get good," and maybe with HD development becoming increasingly unsustainable, the AAA industry will finally find it.

March 7, 2020

Filed under: tech»open_source

Call Me Al

With Super Tuesday wrapped up, I feel pretty confident in writing about Betty, the new ArchieML parser that powered NPR's new election liveblogs. Language parsers are a pretty fundamental computer science discipline, which of course means that I never formally learned about them. Betty isn't a very advanced parser, compared to something that can handle a real programming language, but it's still pretty neat — and you can't say it's not battle-tested, given the tens of thousands of concurrent readers who unknowingly consumed its output last week.

ArchieML is a markup language created at the New York Times a few years back. It's designed to be easy to learn, error-tolerant, and well-suited to simultaneous editing in Google Docs. I've used it for several bigger story projects, and like it well enough. There are some genuinely smart features in there, and on a slower development cycle, it's easy enough to hand-fix any document bugs that come up.

Unfortunately, in the context of the NPR liveblog system, which deploys updated content on a constant loop, the original ArchieML had some weaknesses that weren't immediately obvious. For example, its system for marking up multi-line strings — signalling them with an :end token — proved fragile in the face of reporters and editors who were typing as fast as they could into a shared document. ArchieML's key-value syntax is identical to common journalistic structures like Sanders: 1,000, which would accidentally turn what the reporter thought was an itemized list into unexpected new data fields and an empty post body. I was spending a lot of time writing document pre-processors using regular expressions to try to catch errors at the input level, instead of processing them at the data level, where it would make sense.

To fix these errors, I wanted to introduce a more explicit multi-line string syntax, as well as offer hooks for input validation and transformation (for example, a way to convert the default string values into native types during parsing). My original impulse was to patch the module offered by the Times to add these features, but it turned out to be more difficult than I'd thought:

  • The parser used many, many regular expressions to process individual lines, which made it more difficult to "read" what it was doing at any given time, or to add to the parsing in an extensible way.
  • It also used a lazy buffering strategy in a single pass, in which different values were added to the output only when the next key was encountered in the document, which made for short but extremely dense code.
  • Finally, it relied on a lot of global scope and nested conditionals in a way that seemed dangerous. In the worst case you'd end up with a condition like !isSkipping && arrayElement.exec(input) && stackScope && stackScope.array && (stackScope.arrayType !== 'complex' && stackScope.arrayType !== 'freeform') && stackScope.flags.indexOf('+') < 0, which I did not particularly want to untangle.

Okay, I thought, how hard can it be to write my own parser? I was a fool. Four days later, I emerged from a trance state with Betty, which manages to pass all the original tests in the repo as well as some of my own for my new syntax. I'm also much more confident in our ability to maintain and patch Betty over time (the ArchieML module on NPM hasn't been updated since 2016).

Betty (who Wikipedia tells me was the mechanic in the comics, appropriately enough) is about twice as large as the original parser was. That size comes from the additional structure in its design: instead of a single pass through the text, Betty builds the final output from three escalating passes.

  1. The tokenizer runs through the document character by character, and outputs a stream of tokens containing one or more characters of text bucketed into different types.
  2. The parser takes that stream of tokens, reassembles them into lines (as is required by the ArchieML spec), and then matches those against syntax patterns to emit a list of assembly instructions (such as setting a key, creating an array, or buffering text).
  3. Finally, the assembler runs the instructions through a final cleanup (consolidating and trimming values) and then uses them to build the output object.

Essentially, Betty trades concision for clarity: during debugging, it was handy to be able to look at the intermediate outputs of each stage to see where something went wrong. Each pipeline section is also much more readable, since it only needs to be concerned with one stage of the process, so it uses less global state and does less bookkeeping. The parser, for example, doesn't need to worry about the current object scope or array types, but can simply defer those to the assembler.
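
In skeleton form (just the shape, not Betty's actual source), the pipeline looks like this:

    // skeleton only: these function boundaries are illustrative,
    // not Betty's actual code
    var tokenize = function(text) {
      // characters in, typed tokens out, e.g. { type: "TEXT", value: "Sanders" }
      var tokens = [];
      /* ...scan the document character by character... */
      return tokens;
    };

    var parse = function(tokens) {
      // tokens are reassembled into lines and matched against syntax patterns,
      // producing instructions like { type: "value", key: "title", value: "..." }
      var instructions = [];
      /* ...match lines against the ArchieML syntax... */
      return instructions;
    };

    var assemble = function(instructions, hooks = {}) {
      // instructions are cleaned up and used to build the output object,
      // with hooks for key and value transformation along the way
      var output = {};
      /* ...apply instructions and hooks... */
      return output;
    };

    var betty = (text, hooks) => assemble(parse(tokenize(text)), hooks);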

But the real win is how much easier it is to add new syntax to ArchieML, in ways the original parser wasn't built to support. Our new multi-line type means that editors and reporters can write plain English in posts and not have to worry about colliding with the document syntax in unexpected ways. Switching to Betty cleaned up the liveblog code substantially, since we can also take advantage of the assembler's pipeline hooks: keys are automatically camel-cased (Google Docs likes to sentence-case keys if you're not careful), and values can be converted automatically to JavaScript Date objects or numbers at the earliest stage, rather than during output or templating.

If you'd told me a few years ago that I'd be writing something this complicated, I would have been extremely surprised. I don't have a formal CS background, nor did I ever want one. Parsing is often seen as black magic by self-taught developers. But as I have argued in the past, being able to write even simple parsers is an incredibly valuable skill for data journalism, where odd or proprietary data formats are not uncommon. I hope Betty will not just be a useful library for my work at NPR, but also a valuable teaching tool in the community.

December 31, 2019

Filed under: fiction»reviews

A Year in Books: 2019

Starting last January, I tracked every book I read in a spreadsheet, including author information, length, genre, and whether or not I'd read it before. Several of these are inexact measures: page count is of course largely meaningless when most of my books were on a Kindle, and assigning a single genre to a book is often reductionist. But I was curious how it would go.

All told, I read 138 books this year, totalling 51,432 pages. That seems like a lot, but you have to remember: I read a lot of crap — disposable science fiction and mystery novels make up a lot of my media diet. In fact, 54.4% of the titles on my list are some kind of speculative fiction, followed in frequency by non-fiction (19.6%), literary fiction (10.1%), and mystery (4.4%).

I could have done better when it comes to reading authors from different backgrounds. Although 72.5% of the books I read were by women, only 31.9% were by people of color. Combining the two is more dire: women of color made up 24.6% of the authors I read, slightly more than white men (20.3%) but behind white women (47.8%). Men of color were not well-represented in my reading, at 7.3%. And looking at the specific backgrounds, there's relatively few black authors of either gender in the sheet.

One hundred books is enough for them to blur together a bit, especially when — as I said — most of them are pretty pulpy. Many of my favorites have been widely lauded, like Celeste Ng's Little Fires Everywhere or Jia Tolentino's Trick Mirror, but there are a few that I haven't seen recognized elsewhere.

Laurie J. Marks' Elemental Logic series has been in progress for most of two decades (starting with Fire Logic in 2002, proceeding through Earth Logic and Water Logic, and wrapping up this year with Air Logic). The long gestation might explain why these phenomenal books have flown under the radar. Where most fantasy yarns peak in a big fight that solves everything, the Logic books are preoccupied with the fallout of wartime and occupation, the trauma it leaves, and the slow and difficult process of recovery. They argue that there's no easy solution, just a lot of painstaking work. Even so, they're full of life, and not as grim as I make them sound. I can't believe these aren't better known.

W.E.B. Du Bois's Data Portraits was my greatest source of professional inspiration this year: comprised of visualizations that he assembled for the 1900 Paris Exposition, it's a rich collection of data storytelling from a time before the field had much in the way of guidance or conventions. Du Bois hoped to show the fullness of black lives in America, as well as the oppression they faced. I love the unorthodox choices he made when displaying outliers or broad data ranges, which you don't see very much in a time when we've largely automated visualization.

Esme Wang's The Collected Schizophrenias made me more uncomfortable than almost anything I read in 2019 — it's not as simple as humanizing or excusing mental illness, or even explaining it. Wang writes frankly about the horrors of being institutionalized, but also the horrors of needing to be for her own safety, and the safety of the people around her. This is not a book with neat answers for anyone.

How to be an Antiracist is the other book that I think about regularly: part memoir, part history, and part theory, Ibram X. Kendi wrote a book that thinks deeply about racism and what it means to actively fight it — to be "antiracist," not just "not racist." Given the failures of American newsrooms to deal responsibly with coverage of race and class, in large part because they don't understand the "antiracist" vs. "not racist" distinction and err toward the latter, I'd recommend it to any reporter or editor.

Finally, Arkady Martine's A Memory Called Empire was one of the last titles I read this year, but I don't think my high opinion is just recency bias at work: it turns out that "diplomatic murder mysteries set in byzantine empires" is exactly my jam. Memory reminded me a lot of Ann Leckie's Ancillary books and the way they re-examined space opera from the point of view of the bureaucracy. In this case, it's the story of a new ambassador whose predecessor died from intrigue-related complications. There's a sequel on the way, apparently, which I'm very much anticipating.

Now that I've measured a year of books, I think next year I'll put the spreadsheet away — or choose a different subject, like cinema. I don't think this changed my habits substantially, but I could feel the temptation to read more (or read differently) in order to add more rows to the list. In 2020, I'm going to be spending a lot of time in spreadsheets for work, so I'd rather keep my virtual bookshelf separate.

November 20, 2019

Filed under: tech»open_source

Repackaged apps

Earlier this week, a member of the Google developer relations team ported Caret to the web. He's actually the second person from Chrome to do this — a member of the browser team created a separate port last month. The reasons for this are simple: Caret is a complete application with a relatively small API surface, most of which revolves around file I/O. Chrome has recently rolled out trial support for the Native Filesystem API, which lets web apps open and edit local files. So it's an ideal test case.

I want to be clear, Google's not doing anything wrong here. Caret is licensed under the GPL, which means pretty much anyone can take it and do whatever they want, as long as they give me credit for the code I wrote and distribute the source, both of which are happening here. They haven't been rude about it (Ben, the earlier developer, very kindly reached out to me first), and even if they were, I couldn't stop it. I intentionally made that decision early on with Caret, because I believe giving the code away for something as fundamental as a text editor is the right thing to do.

That said, my feelings about these ports are extremely mixed.

On the one hand, after a half-decade of semi-active development, Caret has found a nice audience among students and amateur hackers. If it's possible to expand that audience — to use Google's market power to give more students, and more amateurs, the tools to realize their own goals — that's an exciting possibility.

But let's be clear: the reason why a port is necessary is because Google has been slowly removing support for Chrome Apps like Caret from their browser, in favor of active development on progressive web apps. After building on their platform and watching them strip support away from my users on Windows and OS X, with the clear intention of eventually removing it from Chrome OS after its Android support is advanced enough, I'm not particularly thrilled about the idea of using it to push PR for new APIs in Chrome (no other browsers have announced support for Native Filesystem).

People have ported Caret before. But it feels very different when it's a random person who wants to add a particular feature, versus a giant tech corporation with a tremendous amount of power and influence. If Google wants to become the new "owner" of Caret, they're perfectly capable of it. And there's nothing I can do to stop them. Whether they're going to do this or not (I'm pretty sure they won't) doesn't stop my heart from skipping a beat when I think about it. The power gradient here is unsettling.

Recently, a group of journalism students at Northwestern University here in Illinois came under fire for apologizing for, and partially retracting, their coverage of protests against former Attorney General Jeff Sessions. The critics included the usual suspects, like Bari Weiss, the NYT columnist who regularly publishes columns in the biggest paper in the world about how she's being silenced by critics, but also a number of legitimate journalists concerned about self-censorship. But the editorial itself is quite clear on why they took this step, including one telling paragraph:

We also wanted to explain our choice to remove the name of a protester initially quoted in our article on the protest. Any information The Daily provides about the protest can be used against the participating students — while some universities grant amnesty to student protesters, Northwestern does not. We did not want to play a role in any disciplinary action that could be taken by the University. Some students have also faced threats for being sources in articles published by other outlets. When the source in our article requested their name be removed, we chose to respect the student’s concerns for their privacy and safety. As a campus newspaper covering a student body that can be very easily and directly hurt by the University, we must operate differently than a professional publication in these circumstances.

You may disagree with the idea that journalists should take down or adjust coverage of public events and persons, but it is legitimately more complicated than just "liberal snowflakes bowing to public pressure." No-one is debating that the reporters can take pictures of public protests, or publish the names of those involved. But should they? Likewise, when a newsroom's community is upset about coverage, editors can ignore the outcry, or respond with scorn. It shouldn't be surprising that certain audiences turn away or become distrustful of a paper that does so.

The relationships in this situation, as with various ports of Caret, are complicated by power. In both cases, what would be permissible or normal in one context is changed by the power differential of the parties involved, whether that's students to the paper to the university, or me to Google, or data journalists to the people in their FOIA requests, or tech workers to their employers' government contracts.

Most newsrooms don't think very much about power, in my experience, or they think of it as something they're supposed to check, not something they possess. But we need to take responsibility for our own power. It's possible that the students at the Daily Northwestern overreacted — if you protest in public, you should probably expect that pictures are going to be taken — but they're at least engaging with the question of what to do with the power they wield (directly and, in the case of the university's discipline system, indirectly). Using power in ways that have a real chance of harming your readers, just on principle and the idea that "that's what journalists do," is tautocracy at work.

As much as anything, I think this is one of the key generational shifts taking place in both software and journalism. My own sympathies tend toward a vision of both that prioritizes harm reduction over abstractions like "free speech" or "intellectual property," but I don't have any pat answers. Similarly, I've become acclimated to the idea of a web-based Caret port that's out of my hands, because I think the benefits to users outweigh the frustration I feel personally. I can't do anything about it now. But I will definitely learn from this experience, and it will change how I plan future projects.

November 1, 2019

Filed under: movies»commentary»horror

Wake me up when Shocktober ends

When I was a kid in Lexington, Kentucky, I remember that grocery stores would have a little video rental section at the front of the store, just a few shelves stocked with VHS tapes. I used to be fascinated by the horror movies: when my parents were checking out, I would often walk over and look at the box art, which had its own special, lurid appeal. It was the golden age of plasticky, rubbery practical effects. I could have stared at the cover for Ghoulies for hours, wondering what the movie inside was like.

This year, for the first time, I decided to celebrate Shocktober: watching a horror movie for every day in the month before Halloween. In particular, I tried to watch a lot of the movies my 7-year-old self would have wanted to see. It turns out that these were not generally very good! My full list is below, with the standouts in bold.

  1. Children of the Corn
  2. Nightmare on Elm Street (2010)
  3. Green Room
  4. We Have Always Lived In The Castle
  5. Ma
  6. The Conjuring
  7. Pumpkinhead
  8. Halloween 2
  9. Hellraiser
  10. Black Christmas
  11. Insidious
  12. Doom: Annihilation
  13. Candyman
  14. Little Evil
  15. Cam
  16. Chopping Mall
  17. House (1986)
  18. Creep (2014)
  19. The Perfection
  20. They Wait
  21. My Bloody Valentine (1981)
  22. Ginger Snaps
  23. The Gate
  24. Prophecy
  25. Halloween 3
  26. In the Tall Grass
  27. Head Count
  28. 1922
  29. Emelie
  30. Train to Busan
  31. The Babysitter
  32. The Ring

One thing that becomes obvious very quickly is how inconsistent the horror genre is: not only is it extremely prone to fashion, but also to drought. The mid-to-late 80s had a lot of real stinkers — either "comedy" horror like House, nonsense slashers like My Bloody Valentine, or just mistakes (Children of the Corn, which is amateurish on almost every level). I suspect this parallels a lot of the CG goofball period of the late 2000s (Darkness Falls, Hollow Man, They).

On the other hand, there are some real classics in there. Black Christmas predates Halloween by four years, and not only probably inspired it but is also a much better movie: more interesting characters, better sense of place, and a wild Pelham 123-style investigation. Candyman and Hellraiser are both fascinating, complicated movies packed with indelible imagery. And Halloween 3 manages to feel like a companion piece to They Live, trading all connection to the mainline series for a bizarre riff on media paranoia.

Somewhere in the middle is Chopping Mall, a movie that's somehow so terrible, so perfectly 1986, that it becomes compulsively watchable. Its effects are bad, the characters are thinly drawn and largely there for gratuitous nudity, and its marketing materials wildly overpromise what it will deliver. It's perfect, I love it, and I name it the official movie of Shocktober 2019.

May 21, 2019

Filed under: tech»web

Radios Hack

The past few months, I've mostly been writing in public for NPR's News Apps team blog, with posts on the new Dailygraphics rig (and setting it up on Windows), the Mueller report redactions, and building a scrolling audio story. However, in my personal time, I decided to listen to some podcasts. So naturally, I built a web-based listener app, just for me.

I had a few goals for Radio as I was building it. The first was my own personal use case, in which I wanted to track and listen to a few podcasts, but not actually install a dedicated player on my phone. A web app makes perfect sense for this kind of ephemeral use case, especially since I'm not ever really offline anymore. I also wanted to try building something entirely using Web Components instead of a UI framework, and to use modern features like import — in part because I wanted to see if I could recommend it as a standard workflow for younger developers, and for internal newsroom tools.

Was it a success? Well, I've been using it to listen to Waypoint and Says Who for the last couple of months, so I'd say on that metric it proved itself. And I certainly learned a lot. There are definitely parts of this experience that I can whole-heartedly recommend — importing JavaScript modules instead of using a bundler is an amazing experience, and is the right kind of tradeoff for the health of the open web. It's slower (equivalent to dynamic AMD imports) but fast enough for most applications, and it lets many projects opt entirely out of beginner-unfriendly tooling.
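
The whole "build process" is just module syntax served as-is; these paths are invented, but the general shape is this:

    // loaded directly by the browser via <script type="module" src="app.js">,
    // with no bundler in between (the paths here are invented)
    import ElementBase from "./lib/element-base.js";
    import "./components/podcast-list.js";
    import { getFeed } from "./lib/feeds.js";

    // dynamic import() also works, for code that's only needed on demand
    var openSettings = async function() {
      var { showSettings } = await import("./components/settings.js");
      showSettings();
    };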

Not everything was as smooth. I'm on record for years as a huge fan of Web Components, particularly custom elements. And for an application of this size, it was a reasonably good experience. I wrote a base class that automated some of the rough edges, like templating and synchronizing element attributes and properties. But for anything bigger or more complex, there are some cases where the platform feels lacking — or even sometimes actively hostile.
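
That base class isn't anything exotic; the general shape, with invented names and details, is something like this:

    // sketch of the general shape, not Radio's actual base class;
    // the static fields and method names are invented
    class ElementBase extends HTMLElement {
      static template = "";   // subclasses provide their markup
      static mirrored = [];   // attributes to mirror onto properties

      static get observedAttributes() {
        return this.mirrored;
      }

      constructor() {
        super();
        this.attachShadow({ mode: "open" });
        this.shadowRoot.innerHTML = this.constructor.template;
      }

      attributeChangedCallback(name, was, value) {
        // keep the matching property in sync and let subclasses react
        this[name] = value;
        if (this.update) this.update(name, value);
      }
    }

    // a subclass only declares its markup and mirrored attributes
    class PlayButton extends ElementBase {
      static template = `<button><slot></slot></button>`;
      static mirrored = ["src"];
    }
    customElements.define("play-button", PlayButton);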

For example: in the modern, V1 spec for custom elements, they're not allowed to manipulate their own contents until they've been placed in the page. If you want to skip the extra bookkeeping that would require, you are allowed to create a shadow root in the constructor and put whatever HTML you want inside. It feels very much like this is the workflow you're supposed to use. But shadow DOM is harder to inspect at the moment (browser tools tend to leave it collapsed when inspecting the page), and it causes problems with events (which do not cross the shadow DOM boundary unless you alter your dispatch code).

There's also no equivalent in Web Components for the state management that's core to most modern frameworks. If you want to pass information down to child components from the parent, it either needs to be set through attributes (meaning you only get strings) or properties (more bookkeeping during the render step). I suspect if I were building something larger than this simple list-of-lists, I'd want to add something like Redux to manage data. This would tie my components to that framework, but it would substantially clean up the code.

Ironically, the biggest hassle in the development process was not from a new browser feature, but from a very old one: while it's very easy to create an audio tag and set its source to any sound clip on the web, actually getting the list of audio files is often impossible, thanks to CORS. Most podcasts do not publish their episode feeds with the cross-origin header set, so the browser's security settings shut down the AJAX requests completely. It's wild that in 2019, there's still no good way to make a secure request (say, one that transmits no cookies or custom headers) to another domain. I ended up running the final app on Glitch, which provides basic Node hosting, so that I could deploy a simple proxy for feed data.
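
The proxy doesn't have to be anything fancy. A rough sketch of the idea (not the actual Glitch app; the route and query parameter are invented) is just a pass-through that adds a permissive CORS header:

    // sketch, not the actual Glitch app: fetch the requested feed server-side
    // and relay it with a header the browser will accept
    var http = require("http");
    var https = require("https");

    http.createServer(function(request, response) {
      // e.g. /proxy?url=https://example.com/feed.xml
      var target = new URL(request.url, "http://localhost").searchParams.get("url");
      if (!target) {
        response.writeHead(400);
        return response.end("missing url parameter");
      }
      https.get(target, function(feed) {
        response.writeHead(feed.statusCode, {
          "Content-Type": feed.headers["content-type"] || "text/xml",
          // this header is what lets the browser accept the response
          "Access-Control-Allow-Origin": "*"
        });
        feed.pipe(response);
      }).on("error", function() {
        response.writeHead(502);
        response.end();
      });
    }).listen(process.env.PORT || 8080);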

For me, the neat thing about this project was how it brought back the feeling of hackability on the web, something I haven't really felt since I first built Caret years ago. It's so easy to get something spun up this way, and that's a huge incentive for creating little personal apps. I love being able to make an ugly little app for myself in only a few hours, instead of needing to evaluate between a bunch of native apps run by people I don't entirely trust. And I really appreciated the ways that Glitch made that easy to do, and emphasized that in its design. It helps that podcasting, so far, is still a platform built on open web tech: XML and MP3. More of this, please!
