It seems vaguely ridiculous to spend my days working on a computer, and then come home and write assembly code for an hour or two, much less enjoy it. That's how good TIS-100 is: a deranged simulator for a broken alien computer, it's the kind of game where the solutions are half inspiration, half desperate improvisation.
Here's the idea: the game boots up a fake computer, which immediately fails POST and dumps you into the debugger. Inside, it's made up of chips arranged in a grid, each of which can be programmed with up to 15 instructions in an invented assembly language. In a series of puzzles, you're given a set of inputs and a list of expected output, and it's up to you to write the transformation in the middle.
Complicating matters is the fact that TIS-100 processors are designed to be as eccentric as possible. Nodes have no RAM, just a single addressable register and a second backup register that can be swapped out. They're also severely limited in what they can do, but very good at communicating with each other. So each solution usually involves figuring out how to split up logic and shuffle values between nodes without deadlocking the CPU or writing yourself (literally) into a corner. There's a bunch of tricks you learn very quickly, like using a side node as a spare register or sending bits back and forth to synchronize "threads."
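To give a flavor of it, here's a sketch of a node program (written from memory, so don't hold the game to my exact syntax): it reads pairs of values from the node above, sends each sum downward, and passes the first value of each pair off to the right, using SAV and SWP to juggle the backup register.

    MOV UP, ACC    # block until the node above sends a value
    SAV            # stash a copy of ACC in the BAK register
    ADD UP         # wait for a second value and add it to ACC
    MOV ACC, DOWN  # send the sum to the node below
    SWP            # swap BAK back into ACC
    MOV ACC, RIGHT # hand the first value off to a neighbor

Every one of those MOV instructions will happily block forever if a neighbor never reads or writes, which is where the deadlocks come from.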
It's not perfect. Once I beat TIS-100, I didn't feel an urge to start back up the way I do for something more "game-like" (say, XCOM or Mass Effect). But it does have the wonderful quality that you can solve most of its puzzles while you're away from the computer, and actually typing them in is a bit of an anticlimax. For all its exaggeration and contrivance, I'm not sure you could get a better simulation of programming than that.
Earlier today I took the wraps off of the private repo for our Chambers Bay interactive flyover. You can find the source code here, and a post on our dev blog about it here. It was a really fun challenge, and a rare example of using WebGL in a news capacity (the NYT did the Dawn Wall, but that's the only one I can remember recently).
From a technical standpoint, this was my first three.js project, and the experience was largely positive. I think there's a strong case to be made that three.js is basically jQuery for WebGL: sure, you don't need it, but it only takes a couple of features to make it worthwhile. In this case, I didn't particularly feel like writing a model loader or a scene graph. There are still plenty of hooks to write the parts that I do enjoy, like the fragment shader for the landscape (check out that sweet dithering), or the UI for directing the camera. Sure, three.js is a relatively large library, but I'm loading 4MB of textures and another 4MB of gzipped landscape model, so what's another few hundred KB of code?
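For the uninitiated, the basic shape of a three.js app looks something like this — a minimal sketch, not the actual flyover code, with the model filename and shader script tags as stand-ins:

    // a minimal three.js scene: renderer, scene graph, camera
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(960, 540);
    document.body.appendChild(renderer.domElement);

    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 1000);
    camera.position.z = 50;

    // the stock loader means never writing your own model parser
    var loader = new THREE.JSONLoader();
    loader.load("landscape.json", function(geometry) {
      // you still write the fun parts, like the fragment shader
      var material = new THREE.ShaderMaterial({
        vertexShader: document.querySelector("#vertex").textContent,
        fragmentShader: document.querySelector("#fragment").textContent
      });
      scene.add(new THREE.Mesh(geometry, material));
      renderer.render(scene, camera);
    });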
WebGL itself runs surprisingly well these days, although failure modes do not seem to be its strong suit. For example, the browser may have WebGL support enabled, but then crash when it tries to render (or it may be lying about support, as with the remote VM sessions used in Times meeting rooms). That said, I was astonished to find that pretty much everything (mobile included) could run the landscape at a solid frame rate, despite the fact that it's a badly-optimized mesh with 150,000 triangles. Even iOS, which usually falls over and dies when WebGL pushes past its skimpy RAM limits, was able to run smoothly once I added a low-res texture for it to use.
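The upshot is that you can't just feature-test once and trust the result. A defensive check looks something like this (mapCanvas and showFallbackImage are stand-ins, not real project code):

    // getContext() can return null, throw, or "succeed" and fail later
    function webglSupport() {
      var canvas = document.createElement("canvas");
      var gl = null;
      try {
        gl = canvas.getContext("webgl") ||
             canvas.getContext("experimental-webgl");
      } catch (e) {
        // some browsers throw instead of returning null
      }
      return !!gl;
    }

    // even a passing check isn't a guarantee: listen for context loss
    // on the real canvas and be ready to swap in a static image
    mapCanvas.addEventListener("webglcontextlost", showFallbackImage);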
This was an ambitious project using some pretty cutting-edge web technology, which makes it interesting in light of arguments that the web suffers from "featuritis". After all, when you're talking about feature overkill, WebGL is a barnstormer. But this story would have been tough to tell another way, and it would have never had the same reach siloed in an app store.
Or take Paul Ford's mind-boggling What is Code? in Businessweek this month: behind Bloomberg's Trapper Keeper design aesthetic, it's a powerful article that integrates animations, videos, and interactive demonstrations with the textual message. Ironically, I saw many of the same people that criticize web apps going wild for Ford's piece, a stance that I can only attribute to sophistry.
My team at the Seattle Times has gradually abandoned the term "news apps," since everyone who hears it assumes we're actually writing iPhone clients for the paper. As a term of art, it has always been clumsy. But it does strike at a crucial quality of what we do, which occupies a gray area between "text" and "program." And if it seems like I'm touchy about pundits who think we should abandon the web, this is in large part the reason why.
Arguments that browsers should just go back to being document viewers ignore the fact that HTML is not just a text format: it's a hypermedia format, and those have always blurred the already-fuzzy lines between data and code (see also: Excel, HyperCard, and IPython notebooks). It's true that the features of the web platform are often abused. Nobody likes slow navigation, ad popups, or user tracking scripts. But it's those same features that make new kinds of storytelling possible — my journalism is built on the same heavily-structured, "over-tooled" web platform that critics find so objectionable. I wouldn't give that up for all the native apps in the world.
It's ironic, I guess, that I was so busy at the Seattle Times a couple of weeks ago I forgot to write about my one-year anniversary here. Anyway: it's been quite a year! I've done real estate visualizations, provided an overview of Oso Valley development, and covered the Washington state elections. I did much of the development on our major investigative pieces, Loaded with Lead and Sell Block (not to mention graphs and narrative interactives for the Warren Buffett mobile home investigation). I made a Seahawks fan map so good that the team outright stole it for themselves. For the local architecture buff, I worked on a building quiz, and for the beer fans I helped build the landing page for our Brew with Us project. Want to know where the May Day protests went? I built a map for that. And this is just the big stuff.
In addition to the externally-facing development, I've been working on building tools that are used by the rest of the newsroom. I think our news app scaffolding is as good as anyone's in this business. We're leading the industry in custom element development, with responsive frames, Leaflet maps, and more. The watermarking tool I made on my second day is still in use, and will probably outlive me entirely at this point.
I have always had a low threshold for boredom, a character flaw that's led to overpacking for every trip I've ever taken and a general inability to read literary fiction. I love working in a newsroom for many reasons, but one of the greatest has always been that I am rarely bored here, and it never lasts more than two weeks when it does happen. I cannot recommend this job highly enough for technical people who want to have an impact, or journalists who want to break out of a single beat. Working at the Seattle Times has been the most fun I've had at a job in a long time. I can't wait to see what the next year brings.
"As I look back over a misspent life, I find myself more and more convinced that I had more fun doing news reporting than in any other enterprise. It is really the life of kings."
Like all of Facebook's attempts to absorb the news industry, there's a probable timeline their new Instant Articles will follow, and it's the same arc every Facebook "platform" has traced: enthusiasm, then exasperation, then abandonment.
Instant Articles is not the first time Facebook has tried to take over the web, and it won't be the last. They're very bad at it, probably because they're the original kings of empty promises: working with Facebook is a constant stream of exasperation, until either you realize that they're incapable of maintaining a stable API/business relationship, or you slit your wrists. They've done it to game developers (goodbye, Farmville), to other newsrooms (remember Washington Post Social Reader?), and to anyone else who's tried to build on the various Facebook "platforms."
Lots of people have written very smart reactions to the Instant Articles announcement — I'm partial to Josh Marshall's behind-the-scenes take, John Herrman's spiral of bemused horror, and Zeynep Tufekci's reminder that Facebook cannot be trusted to engage honestly with its role as gatekeeper.
It's probably more fun to engage with the self-proclaimed "controversial" opinions, like this profoundly dumb thought-leadering from MG Siegler:
"With Instant Articles, Facebook has not only done a 180 from what Mark Zuckerberg has called the company's biggest mistake, they've now done another lap just to prove a point."
They did a 180, and then took a lap, so... they ran the race backwards, which is a good thing? Somewhere, Tom Friedman feels a twinge of jealousy.
"Not only is the web not fast enough for apps, it's not fast enough for text either. And you know what, they're right."
"They're right" that an app loading pre-cached text can be faster than a web browser downloading that same text from the network, yes. Apparently our plan now is just to restrict your reading material to what Facebook can download ahead of time. I hope you like Upworthy lists.
"Though, in a way, Facebook itself really is just a web browser. It's just a different, newfangled one for a new era. A mobile era."
A different, newfangled web browser that only goes to Facebook, apparently. Who would want to read anything else? In the future, all websites are Facebook. (Ironically, according to the Instant Articles FAQ, they're fed from HTML anyway, so they're not even really that "new." But it's probably too much to expect Siegler to do research.)
Even without bringing in ideology, the "native apps instead of the web" idea faces a tremendous number of problems once you think about it for more than thirty seconds. How do new publications like The Toast or FiveThirtyEight get traction when you have to manually download them from an app store to read them? If they get popular through the web first, why bother transitioning to native? Nobody makes "reader" apps for desktops and laptops, so what happens to them? Does anyone really want to write long-form on Facebook, a service that only recently added an "edit post" button? Who cares: punditry is hard, let's go shopping!
It's easy to pick on shallow people who think Instant Articles represent a grand utopian state, but I'd also like to celebrate people who are actually building in the opposite direction. This weekend, I went to a Knight-Mozilla code convening in Portland, which included a ticket to the Write the Docs convention. I'm not a documentation writer, really, so most of the conference went right past me. But the keynote on the second day was by Ward Cunningham, inventor of the wiki, and it was a fascinating look at what it would really look like to reinvent the web.
For the past few years, Cunningham's been working on "federated" wikis, which store content on multiple servers instead of using a single database. If you link to another person's wiki page and you want to change the content, you fork it a la GitHub, and edit the new local copy (which remembers its origin) right there in your browser. You can also drag-and-drop content into a new page, if you want to merge text from multiple sources. It's pretty neat. The talk isn't online, but he did another presentation at New Relic that covers similar material.
Parts of Cunningham's pitch can sound kind of crankish, although I'm sure I would have said the same thing about the original wiki. But other parts are really interesting, such as the idea of creating a forkable attribution trail for data and reporting. Federated wikis are another attempt to decentralize and diversify the Internet, instead of walling it up behind a corporation's control. And a lot of it is inspired by the main insight that wikis had in the first place: on a wiki, you create a page by first creating a hyperlink to it, then following that link.
And that's why it's ultimately ridiculous to act like some pre-cached news articles are the herald of a new media age. What the web gives us — a freedom for anyone to publish to everyone, a wildly cross-platform programming environment, a rich multimedia container where your plain-text article can live right next to my complex news app — is not going to be superseded by a bunch of native apps, and certainly not by Facebook. Instant Articles won't even be the future of news. Future of the web? Give me a break.
I'm not sure what it says about Seattle that one of our biggest yearly events is a May Day protest that wreaks havoc across big chunks of downtown. What even competes? The Blue Angels shut down traffic on the bridges once a summer, and there's Seafair downtown, but reception to those is always pretty muted in my experience. International Workers Day is the big show.
The May Day map I put together to track our reporters has quickly become one of my favorite projects for the Seattle Times. It was real-time, it posed interesting data challenges, and it really exploited our <leaflet-map> element more than anything else we've done so far. While I also wrote a post on it for our dev blog at work, I wanted to call out a couple of other interesting points here.
The most interesting technical detail here is the use of the Twitter streaming API, which delivers nearly instant updates for a search query (either on users, geolocation, or keyword). Node is a great fit for this, with the twitter module offering a readable stream that fires events as new items come in. Our scaffolding, on the other hand, is not intended to be run as a long-running process, and I didn't really want to retrofit Grunt into a general-purpose application framework. I ended up writing the Twitter part of the app as a completely separate, continuous Node process, which then dumped out its data as a JSON file and started a standard build/deploy in a child process whenever new data arrived.
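Boiled down (and with the credentials, file paths, and task name invented for illustration), the watcher process looked something like this:

    // sketch of the streaming watcher, simplified
    var Twitter = require("twitter");
    var fs = require("fs");
    var child = require("child_process");

    var client = new Twitter({
      consumer_key: "...",
      consumer_secret: "...",
      access_token_key: "...",
      access_token_secret: "..."
    });

    var tweets = [];
    var reporters = "12345,67890"; // comma-separated user IDs

    client.stream("statuses/filter", { follow: reporters }, function(stream) {
      stream.on("data", function(tweet) {
        tweets.push(tweet);
        // dump the data where the build can see it...
        fs.writeFileSync("data/tweets.json", JSON.stringify(tweets));
        // ...then kick off a standard build/deploy in a child process
        child.spawn("grunt", ["publish"], { stdio: "inherit" });
      });
      stream.on("error", console.error);
    });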
To store the tweets from the stream, the application uses a SQLite3 database, since that's the easiest way to query and update data. A local data store like this is not something we've used on projects before, and I don't know that I'd use it again. Using SQLite itself is always a pleasure, but relying on a local database meant that I couldn't just clone the project from home and update it when I wanted to change the coloring on Saturday morning. Cloud storage, like Google Sheets, has a lot of advantages for distributed and remote development.
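For the record, the storage layer is only a few lines with the sqlite3 module — this is a sketch with a made-up schema, not the production table:

    // storing incoming tweets with node-sqlite3
    var sqlite3 = require("sqlite3");
    var db = new sqlite3.Database("mayday.db");

    db.serialize(function() {
      db.run("CREATE TABLE IF NOT EXISTS tweets " +
             "(id TEXT PRIMARY KEY, body TEXT, lat REAL, lng REAL)");
    });

    var store = function(tweet, lat, lng) {
      // INSERT OR REPLACE makes re-processing the stream idempotent
      db.run("INSERT OR REPLACE INTO tweets VALUES (?, ?, ?, ?)",
        tweet.id_str, tweet.text, lat, lng);
    };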
Working with Twitter itself is an interesting problem, because it's clear that the company has no real coherent plan for outside developers. Over the last few years, the API for user access has been increasingly limited and broken as Twitter tried to drive third-party clients (which don't show ads and don't make money) out of existence. On the other hand, if you are building a Twitter bot, which our map effectively is, it remains a pretty useful and effective service for pub/sub communication. I'm not sure it says very much about Twitter's strategy that they'll let bots run wild while ordinary people are locked into a client monoculture, but that's honestly the least of my frustrations with them at this point.
All that said, I would personally use this stack again in a heartbeat. Twitter is not the highest social traffic source for the Seattle Times, but almost all of our reporters use it anyway, and it's much nicer to program against compared to Facebook. The impending dilemma is if (or when) Twitter will decide to switch to a "curated" (read: algorithmically-tampered) stream a la Facebook's timeline. When that happens, its value to me as a news developer drops basically to nothing, because I won't be able to guarantee message delivery any more.
Which brings me to the most boring but probably most profound lesson of this project: we need a better build server. The May Day map ran on a box in the office we've affectionately dubbed "Cronda," which also currently tests our traffic alert application and previously powered the Seahawks fan map. In each of those cases, we've jury-rigged a solution for pulling the latest source code and running builds at regular intervals (the cron Grunt task), but it's not optimal. We can't check on those builds remotely, or restart them if something goes wrong.
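Boiled down, that task is just an interval timer around a shell command — this sketch is simplified, and the "publish" task name is invented:

    // the gist of the jury-rigged "cron" Grunt task
    module.exports = function(grunt) {
      grunt.registerTask("cron", function() {
        this.async(); // never resolved, so the process runs forever
        var exec = require("child_process").exec;
        setInterval(function() {
          // pull the latest source and re-run the build/deploy
          exec("git pull && grunt publish", function(err, stdout) {
            if (err) grunt.log.error(err);
          });
        }, 5 * 60 * 1000); // every five minutes
      });
    };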
At some point, we'll probably move our builds from Cronda to an EC2 box that we can access remotely, but doing so doesn't honestly solve the problem — it just makes it less fragile. Eventually, I think we'll need to look into a real build monitor like Jenkins, which can automate deployments, track error logs, and respond to queries in our team chat. I'm not entirely looking forward to that, since it feels like a very heavyweight solution, but the more complex our applications get, the more a little up-front rigor will pay off.
It has been a busy week, but I wanted to take a moment to recognize my colleagues at The Seattle Times for their tremendous work, resulting in a 2015 Pulitzer Prize for breaking news journalism. Their coverage of the Oso landslide was clear, comprehensive, and accurate, and followup work continues to this day (including one of my first projects for the paper). It's very cool to be working in a newsroom that's the winner of 10 Pulitzer Prizes over the years, and I'm looking forward to being here when we win #11.
We have sent several people from the newsroom, including myself, to journalism conferences over the last few months. Most conferences are about 50% inspirational and 50% crap (tilted heavily crap-wards in the keynotes), but you meet good people and you get to see the nuts and bolts behind the scenes of some of the best interactive news stories published.
It's natural to come back from a conference with a kind of inferiority complex, and equally easy to conclude that we're not making similar rich presentations because we don't have the cool tools that those other (richer, more tech-savvy) newsrooms have. We too, according to this train of thought, need to be coding elaborate visualization generators and complicated new CMS features — or, as Ryan Pitts from Mozilla said to me last weekend at the Society for News Design workshop, "let's not rest until every paper in the country has built its own charting application."
I think better newsroom tech is important, but let's play devil's advocate for a bit with an unpopular hypothesis: developing tools for the editors and reporters at your newspaper is a waste of your time, and a distraction from the journalism you should be doing.
Why a waste of your time? Partly because newsroom tools get a lot less uptake than you probably think they do (certainly less than we'd hope they would). I've written a lot of internal applications in my time, and they've never been particularly popular, because most reporters and editors don't care. They're too busy doing journalism to use your solution (which is as it should be), and they are probably not big on technology anyway (I have a lot of reporters who can't use Excel, which pains me greatly). Creating tools for reporters is, most of the time, attacking the problem at the wrong point.
For many newsrooms, that wasted time will end up being twice as expensive, because development resources are scarce and UI is hard. Building a polished, feature-filled chart generator that the average journalist can use will take at least a couple of programmer months, which is time those developers aren't working on stories and visualizations that readers want. Are you willing to sacrifice that time, especially if you can't guarantee that it'll actually get used? That's a pretty big gamble, unless you have the resources of the New York Times. You're probably better off just going with an off-the-shelf package, or even finding a simpler solution.
I don't think it's a coincidence that, for all the noise people make about the new data journalism startups like Vox and FiveThirtyEight, 99% of their chart output does not come from a fancy tool or a complex interactive: they post JPEGs. And that's fine! No actual reader has ever complained about having to look at a picture of a graph instead of a souped-up vector rendering (in Vox's case, they're too busy complaining that the graph was stolen from someone else, but that's another story). JPEG is a perfectly decent solution when it comes to simply telling the story across the entire web platform — in fact, it's a great embodiment of "do the simplest thing that works," which has served me well as a guiding motto in life.
So, as a rule of thumb: don't build charting libraries. Don't build general-purpose databases. Don't build drag-and-drop slideshows. Leave these things to other people, who have time and energy to build them for a living. Does this mean you shouldn't create tools at all? No, but the target audience should be you, the news developer, and other semi-technical newsroom staff like the web producers. In other words, make technology for the people who will actually use it, and can handle something that's not polished to a mirror sheen.
I believe this is the big strength of web components, and one reason I'm so bullish on them at the Seattle Times. They're not glossy, end-user products, but they are a great balance between power and accessibility for people with a little technical skill, and they're very fast to build. If the day comes when we do choose to invest in a slicker newsroom app, we can leverage them anyway, the same way that the NYT's fancy chart-design tools are all built on the developer-oriented D3 library.
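As an example of "fast to build": registering a bare-bones map element with the current (v0) custom element API is about a screenful of code. This is a sketch, not our production <leaflet-map>, and the attribute names are invented; it also assumes your CSS gives the element a height.

    // a minimal v0 custom element wrapping a Leaflet map
    var proto = Object.create(HTMLElement.prototype);

    // wait for attachment, so Leaflet can measure the element
    proto.attachedCallback = function() {
      var lat = parseFloat(this.getAttribute("lat")) || 47.6;
      var lng = parseFloat(this.getAttribute("lng")) || -122.3;
      this.map = L.map(this).setView([lat, lng], 12);
      L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png")
        .addTo(this.map);
    };

    document.registerElement("leaflet-map", { prototype: proto });

    // producers just write markup:
    // <leaflet-map lat="47.61" lng="-122.33"></leaflet-map>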
In the meantime, while I would consider an anti-tool stance a "strong opinion weakly held," I think there's a workable philosophy there. These days, I feel two concerns very strongly (outside of my normal news/editorial production, of course): how to get the newsroom to make use of our skills, and how to best use the limited developer resources we have. A "no tools" guideline is not an absolute rule, but it serves as a useful heuristic to weed out the kinds of projects that might otherwise take over our time.
The honest answer is "about five years of practice," but that's not the whole story (nor is it something I can take to a curriculum planning committee). I think there are two areas of growth that students need to be aware of, and that I'm planning on stressing for this quarter: tooling and functional programming.
Learning their way around this trio of environments — the command line, the server, and the browser — is going to be a huge challenge for my students, most of whom still live in a world where individual files are edited and sent to the browser as-is (possibly with a PHP include or two). They haven't built applications with RESTful routes, or written client-side code in a module system. SCC hasn't typically stressed those techniques, which is a shame.
I'm happy to be the person who forces students into the deep end, but I do want to make sure they have a good, structured experience. Throwing everything at students at once is a quick way to make sure that they get overwhelmed and give up (not a hypothetical scenario: the previous ITC 298 class had exactly that problem, and ended poorly). To ease them in, our directed lab sessions will build up a sequence of exercises one layer at a time.
The progression starts with Node, and then builds out gradually so that each step conceptually depends on a previous lesson. Along the way, students will learn a lot about how to structure an application across all three of these environments — which brings us to the second, and probably harder, focus of the class.
The hard part of writing for Node is that you must embrace some degree of functional programming: the continuation-passing style used in the core APIs makes it inescapable. But the great part of writing for Node (especially as the first section of the course) is that it's actually a fairly gentle ramp-up. Callback functions are not that far from event listeners, and the ubiquitous async library softens the difficulty of mapping an array functionally. Between the two, there's no shortage of practice, since there's literally no other way to write a Node program.
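One of the earliest lab exercises pairs fs.readFile with async.map, which is the callback-flavored, functional bread and butter of Node code (the filenames here are placeholders):

    // read several files in parallel, collecting results in order
    var fs = require("fs");
    var async = require("async");

    var files = ["intro.txt", "chapter1.txt", "chapter2.txt"];

    async.map(files, fs.readFile, function(err, contents) {
      if (err) return console.error("a read failed:", err);
      // contents is an array of Buffers, in the same order as files
      contents.forEach(function(buffer, i) {
        console.log(files[i], "->", buffer.length, "bytes");
      });
    });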
That's the other strategy behind the tooling sequence I've laid out. We'll start from Node, and then build toward increasingly complex functional constructs, like modules, constructors, and promises. By the time the class has finished their final projects, they should be old hands at callbacks and closures, which will serve them well in almost any language.
The specifics of this quarter are still a little bit in flux, and will likely remain so, since I think it's good to be flexible the first time teaching a class. But if you're interested in following along, feel free to check out the class repo, which contains the syllabus, supporting materials, and example code so far. Issues and pull requests are also welcome!
In the last couple of weeks, a few more of my Seattle Times projects have gone live — namely, the animated graph in this story about EB-5 visa growth, and the Seattle architecture quiz. Both use the FLIP animation technique I wrote about a few weeks ago, although it's much more elaborate in the EB-5 graph, which animates roughly 150 elements at 60fps on older mobile devices.
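If you missed that earlier post, the core of FLIP (First, Last, Invert, Play) fits in one function. This is a minimal sketch, not the production code from the graph:

    // measure First and Last, Invert with a transform, then Play
    function flip(element, mutate) {
      var first = element.getBoundingClientRect();
      mutate(); // apply the real layout change
      var last = element.getBoundingClientRect();
      // jump back to the old position, instantly
      element.style.transition = "none";
      element.style.transform = "translate(" +
        (first.left - last.left) + "px, " +
        (first.top - last.top) + "px)";
      element.offsetWidth; // force a synchronous style flush
      // then let the cheap, GPU-friendly transform animate to zero
      element.style.transition = "transform 300ms ease-out";
      element.style.transform = "";
    }

    // usage: flip(el, function() { el.classList.add("sorted"); });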
I saw a lot of shocked reactions when Nintendo announced it would be partnering with another company to make smartphone games. The company was quick to stress that it wouldn't be moving entirely to app stores controlled by third parties: these games will not be re-releases of existing titles, and Nintendo is still working on new dedicated console hardware for the next generation. You shouldn't expect New Super Mario on your phone anytime soon. Basically, their smartphone games will serve as ads for the "real" games.
Unlike a lot of people, I've never really rooted for Nintendo to become a software-only company. Other companies that make that jump often do so to their detriment — look at Sega, which lost a real creative spark when they got out of the hardware business — and it's even more true for Nintendo, which has always explored the physical aspects of gaming as much as the virtual. The playful design of the GameCube controller buttons, or the weirdness of a double-screened handheld, or the runaway popularity of Wii Sports, are the result of designers who are encouraged to hold strong opinions. A touchscreen, on the other hand, is a weak opinion — even no opinion, as it imitates (but never really emulates) physical controls like buttons or joysticks.
But here's the other thing: what Nintendo represents on dedicated handheld hardware, as much as wacky design chops, is a sustainable market. I play a lot of Android games, I own a Shield, I'm generally positive on the idea of microconsoles. Even given those facts, a lot of the games I play on the go are either emulators or console ports, because the app store model simply does not support development beyond a single mechanic or a few hours of gameplay. The race to the bottom, and the resulting crash of mobile game prices, means that you will almost never see a phone game with the kind of lifespan and complexity you'd get out of even the lamest Nintendo title (Yoshi Touch & Go aside).
I don't think everything Nintendo produces is golden, but they're reliable. People buy Nintendo games because you're pretty much guaranteed a polished, enjoyable experience, to the point where they can start with an expanded riff on a gimmick level and still end up with a solid gameplay hit. They're the Pixar of games. And as a result of that consistency, people will pay $40 for first-party Nintendo titles, largely sight-unseen. This creates a virtuous cycle: the revenue from a relatively-expensive gaming market lets them make the kind of games that justify that cost. It's almost impossible to imagine Nintendo being able to sustain the same halo in a $1-5 game market.
There's room for both experiences in the gaming ecosystem. Microsoft, Sony, and Steam will all provide big-budget, adult-oriented games. The app stores are overflowing with shorter, quirkier, free-to-play fare. Nintendo's niche is that they crossed those lines: oddball software for all ages that was polished to a mirror sheen. Luckily, even though observers seem convinced that Nintendo is doomed, the company itself seems well aware of where their value lies — and it's not on someone else's platform.