this space intentionally left blank

July 23, 2014

Filed under: journalism»new_media

The Landslide

We've just released a new interactive I've been working on for a couple of weeks, this time exploring the Oso landslide earlier this year. Our timeline (source) shows... well, I'll let the intro text explain it:

The decades preceding the deadly landslide near Oso reflect a shifting landscape with one human constant: Even as warnings mounted, people kept moving in. This interactive graphic tells that story, starting in 1887. Thirteen aerial photographs from the 1930s on capture the geographical changes; the hill is scarred by a succession of major slides while the river at its base gets pushed away, only to fight its way back. This graphic lets you go back in time and track the warnings from scientists; the failed attempts to stabilize the hill; the logging on or near the unstable slope; and the 37 homes that were built below the hill only to be destroyed.

The design of this news app is one of those cases where inspiration struck after letting the idea percolate for a while. We really wanted to showcase the aerial photos, originally intending to sync them up with a horizontal timeline. I don't particularly care for timelines — they're basically listicles that you can't scan easily — so I wasn't thrilled with this solution. It also didn't work well on mobile, and that's a no-go for my Seattle Times projects.

One day, while reading through the patterns at Bocoup's Mobile Vis site, it occurred to me that a vertical timeline would answer many of these problems. On mobile, a vertical scroll is a natural, inviting motion. On desktop, it was easier to arrange the elements side-by-side than stacked vertically. Swapping the axes turned out to be a huge breakthrough for the "feel" of the interactive — on phones and tablets that support inertial scrolling for overflow (Chrome and IE), users can even "throw" the timeline up the page to rapidly jump through the images, almost like a flipbook. On desktop, the mouse wheel serves much the same purpose.

On a technical level, this project made heavy use of the app template's ability to read and process CSV files. The reporters could work in Excel, mostly, and their changes would be seamlessly integrated into the presentation, which made copy editing a cinch. I also added live reload to the scaffolding on this project — it's a small tweak, but in group design sessions it's much easier to keep the focus on my editor for tweaks, but let the browser refresh on another monitor for feedback. I used Ractive to build the timeline itself, but that was mostly just for ease of templating and to get a feel for it — my next projects will probably return to Angular.
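The CSV-to-presentation flow described above can be sketched in a few lines of Node. This is a minimal, hypothetical version: it assumes simple comma-only CSV with no quoted fields, and the helper names are mine, not the template's actual API.

```javascript
// Parse CSV text into an array of objects keyed by the header row.
// Naive split: assumes no commas inside fields, unlike a real parser.
function parseCSV(text) {
  var lines = text.trim().split("\n");
  var headers = lines.shift().split(",");
  return lines.map(function(line) {
    var row = {};
    line.split(",").forEach(function(cell, i) {
      row[headers[i]] = cell.trim();
    });
    return row;
  });
}

// Merge each row into an HTML fragment, the way the build step
// feeds spreadsheet edits directly into the page templates.
function renderRows(rows) {
  return rows.map(function(row) {
    return "<li><b>" + row.year + "</b> " + row.event + "</li>";
  }).join("\n");
}

var csv = "year,event\n1887,First recorded slide\n1937,Aerial survey flown";
console.log(renderRows(parseCSV(csv)));
```

The point of the pattern is that reporters edit the spreadsheet, and a rebuild regenerates the page with no hand-editing of markup.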

All in all, I'm extremely happy with the way this feature turned out. The reporting is deep (in a traditional story, it would probably be at least 5,000 words), but we've managed to tell this story visually in an intuitive, at-a-glance format, across multiple device formats. Casual readers can flip through the photos and see the movement of the river (as well as the 2014 devastation), while the curious can dig into individual construction events and warning signs. It's a pretty serious chunk of interactive storytelling, but we're just getting started. If you or someone you know would like to work on projects like this, feel free to apply to our open news app designer and developer positions.

July 9, 2014

Filed under: culture»internet

Outward Facing

Hey, remember that huge Facebook controversy? No, not when they tried to emotionally manipulate thousands of people for a study of dubious worth. Also not when they leaked random purchases to all your friends, thus exposing your secret purchases to the world (what? It's a Seattle thing, all right?). Probably not when they complained about the news culture that they themselves had created. Maybe it was when Facebook removed privacy options (or just changed them around in one of the site's near-constant redesigns).

Actually, I'm not really sure what I'm talking about either. To put it in the Upworthy headline-speak that clogs your news feed until you sigh and grudgingly switch it back to "most recent" mode again, "This social network has a bad habit of treating its users like lab rats. You won't believe how little they care!"

At some point, I resigned myself to the fact that I'm not going to quit Facebook anytime soon, no matter how bad their behavior is. Most people won't. In my case, even if I don't use it often, it's the only place where local dance events get publicized — without it, my ability to participate gets curtailed sharply. For other friends of mine, Facebook messages are a primary means of communication, over e-mail or even SMS. It's how we keep in touch with each other, even just in an ephemeral, transitory sense.

You may remember, however, that when Facebook first started along the path of "we know what you should see better than you do," there was a scrappy crowdfunding effort in response, for a decentralized social network run by users, for users. Diaspora took in a huge amount of money, showed no progress for a couple years, shut down, open sourced itself, and is now puttering along with roughly 15 thousand users. Which is, to be honest, what most reasonable people probably saw happening anyway, but we live in a world where people collectively donate $30,000 for potato salad, so maybe a little perspective is in order.

Even though this is all fairly predictable, as someone with some interest in self-hosted versions of cloud services, I'm intrigued by the question of why we don't have a popular, decentralized, open-source alternative to Facebook. I don't think it's a difficult question to answer — rather, it's interesting because there are actually multiple reasons that it doesn't happen, and they point out bigger problems with cloud-based computing for everyday people. Facebook is a great case study for this, because many programmers have a habit (particularly when a little tipsy) of pointing out that they could write a simple Facebook clone in a weekend. This is both true, and entirely missing the point.

To see why, we have to look at a seemingly unrelated incident: Facebook's $1 billion purchase of Instagram. Why so much? It's not because it was an equal competitor: filtered photos don't compete with the Facebook news feed directly. But it had a hook that would pull users — easy sharing using your phone camera instead of a keyboard — and once the audience is there, upgrading to a Facebook-like feature set is easier. In other words, Instagram wasn't valuable because it was like Facebook. It was valuable because it was different enough to get users' attention, but close enough to serve the same social needs.

Every existing competitor to Facebook has a compelling hook. If Instagram has sharing, Snapchat has its (supposed) privacy features, and LinkedIn has a ton of annoying recruiters sending e-mails to random users. Unfortunately, open source is not good at figuring out product hooks — it tends to excel at imitation and evolution. An open-source social network would probably be better written than Facebook, but without that showcase feature, nobody will join. And a social network with no people in it is worthless. Lesson one: find a hook that's not "is open source." I suspect gaming is a possible contender, but it's proven elusive so far (both for small players like OpenFeint and the big vendors).

Assume we have our hook: how do users join up? Other social networks are free, they have nice onboarding procedures, and they don't require you to do any deep thinking about anything other than your relationship status. By contrast, when you join Diaspora, you need to choose a "pod" based on its physical location, size, and software version before you're allowed to sign up. I am shocked — shocked — that this has not taken off.

Home-built social networks tend to be decentralized, and they're often proud of that fact: letting users choose where to put their data, and how it's used, is a huge win for privacy. But decentralized services are more complicated, and require more work from their users. It's even worse if people are expected to actually self-host: the most successful self-hosted web app on the Internet today is Wordpress, and yet being forced to install Wordpress on a cheap, randomly chosen hosting provider is punishment for shoplifting in some countries. Lesson two: web app installation shouldn't be a trial.

There's a lot of thought going into this problem — Docker, for example, is a system for baking web apps into "containers" that can be installed and uninstalled almost like mobile apps. These containers travel with all their dependencies and configuration, so there's no need to worry about what your host does or doesn't support, or what their particular weird setup is. But until then, the answer for most people tends to be a hosted solution, which tends to defeat the purpose of decentralized software.

Without learning these lessons, cloud computing stays out of the hands of regular people, and the hope of a personal Facebook with it. Of course, they're hardly a panacea, and they're certainly not a solution for social networking anyway. At this point, it almost doesn't make a difference what Facebook does, or how badly it abuses people. I think this is part of why people get so angry about it: that feeling of helplessness. We can't code our way out of this problem, or leave our friends and family behind. All we can do is hold our noses and soldier on.

June 19, 2014

Filed under: journalism»new_media

Move Fast, Make News

As I mentioned last week, the project scaffolding I'm using for news apps at the Seattle Times has been open sourced. It assumes some proficiency with NodeJS, and is built on top of the grunt-init command.

There are many other newsrooms that have their own scaffolding: NPR has one, and the Tribune often builds its projects on top of Tarbell. Common threads include loading data from CSV or Google Sheets, minifying and templating HTML with that data, and publishing to S3. My template also does those things, but with some slight differences.

  • It runs on top of NodeJS. This means, in turn, that it runs everywhere, unlike Tarbell, which will not work on Windows.
  • It has no dependencies outside of itself. I find this helpful: where the NPR template has to call out to an external lessc command to do its CSS processing, I can just load the LESS module directly.
  • It is opinionated, but flexible. It assumes you're using AMD modules for your JavaScript, and starts its build at predetermined paths. But it comes with no libraries, for example: instead, Bower is set up so that each project can pull only what it needs, and always have the latest versions.

What do you get from the scaffolding? Out of the box, it sets up a project folder that loads local data, feeds it to powerful templating, starts up a local development server, and watches all your files, rebuilding them whenever you make changes. It'll compile your JavaScript into a single file, with practically no work on your part, and do the same for your LESS files. Once you're done, it'll publish it to S3 for you, too. I've been using it for a project this week, and honestly: it's pretty slick.
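A Gruntfile along those lines might look something like the following. This is a hypothetical config fragment, not the template's actual setup: the file paths and the choice of grunt-contrib plugins are illustrative.

```javascript
// Illustrative Gruntfile: LESS compilation, AMD build to a single
// JavaScript file, and a watch task that rebuilds on any change.
module.exports = function(grunt) {
  grunt.initConfig({
    less: {
      build: { files: { "build/style.css": "src/css/seed.less" } }
    },
    requirejs: {
      build: {
        options: { baseUrl: "src/js", name: "main", out: "build/app.js" }
      }
    },
    watch: {
      src: { files: ["src/**/*"], tasks: ["default"] }
    }
  });

  grunt.loadNpmTasks("grunt-contrib-less");
  grunt.loadNpmTasks("grunt-contrib-requirejs");
  grunt.loadNpmTasks("grunt-contrib-watch");

  // "grunt" builds everything; a publish task would then push build/ to S3.
  grunt.registerTask("default", ["less", "requirejs"]);
};
```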

If you're working on newsroom development, or static app development in general, please feel free to check it out. I'd appreciate any feedback you might have.

June 10, 2014

Filed under: journalism»new_media

Top Companies 2014

My first interactive feature for the Seattle Times just went live: our Top Northwest Companies features some of the most successful companies from the Pacific Northwest. It's not anything mind-blowing, but it's a good start, and it helped me test out some of the processes I'm planning on using for future news applications. It also has a few interesting technical tricks of its own.

When this piece was originally prototyped by one of the web producers, it used an off-the-shelf library to do the parallax effect via CSS background positions. We quickly found out that it didn't let us position the backgrounds effectively so that you could see the whole image, partly because of the plugin and partly because CSS backgrounds are a pain. We thought about just dropping the parallax, but that bugged me. So I went home, looked around at how other sites (particularly Medium) were accomplishing similar effects, and came up with a different, potentially more interesting solution.

When you load the page in a modern desktop browser now, there aren't actually any images at all. Instead, there's a fixed-position canvas backdrop, and the images are drawn to it via JavaScript as you scroll. Since these are simple blits, with no filtering or fancy effects, this is generally fast enough for a smooth experience, although it churns a little when transferring between two images. I suspect I could have faster rendering in those portions if I updated the code to only render the portions of the image that are visible, or rescaled the image beforehand, but considering that it works well enough on a Chromebook, I'm willing to leave well enough alone.
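The core of that technique is deciding, from the scroll position, which image to blit to the fixed canvas. Here's a rough sketch; the section height and image handling are hypothetical, and the real page's drawing code does more (cross-fades, scaling) than this shows.

```javascript
// Given a scroll offset, pick which background image is visible and
// how far through its section the viewport has traveled (0 to 1).
function pickFrame(scrollY, sectionHeight, frameCount) {
  var index = Math.min(frameCount - 1, Math.floor(scrollY / sectionHeight));
  var progress = (scrollY - index * sectionHeight) / sectionHeight;
  return { index: index, progress: progress };
}

// In the browser, repaint on scroll with a simple blit, no filters:
// window.addEventListener("scroll", function() {
//   var frame = pickFrame(window.pageYOffset, 1200, images.length);
//   context.drawImage(images[frame.index], 0, 0, canvas.width, canvas.height);
// });
```

Because drawImage with no transforms is cheap, repainting on every scroll event stays smooth on most hardware.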

The table at the bottom of the page is written as an Angular app, and is kind of a perfect showcase for what Angular does well. Wiring up the table to be sortable and filterable was literally only a few minutes of work. The sparklines in the last column are custom elements, and Angular's filters make presenting formatted data a snap. Development for this table was incredibly fast, and the performance is really very good. There are still some issues with this presentation, such as the annoying sticky header, but it was by far the most painless part of the development process.

The most important part of this graphic, however, is not in the scroll or in the table. It's in the workflow. As I've said before, one of the things that I really learned at ArenaNet was the importance of a good build process. You don't build games like Guild Wars 2 without a serious build toolchain, and the web team was no different. The build tool that we used, Dullard, was used to compile templates, create CSS from LESS, hash filenames for CDN busting, start and stop development servers, and generate the module definitions for our JavaScript loader. When all that happens automatically, you get better pages and faster development.

I'm not planning on using Dullard at the Times (sorry, Pat!) only because I want to be able to bring people onboard quickly. So I'm going with the standard Grunt task runner, but breaking up its tasks in a very Dullard-like way and using it to automate as much as possible. There's no hand-edited code in the Top Companies graphic — only templates and data merged via the build process. Reproducing these stories, or updating them later, is as simple as pulling the repo (or, in this case, both repos) and running the Grunt task again.

That simplicity also extends to the publication process. Fast deployment means fast development and fewer mistakes hanging out in the wild when bugs occur. For Seattle Times news apps, I'm planning to host them as flat files on Amazon S3, which is dirt-cheap and rock-solid (NPR and the Chicago Tribune use the same model). Running a deployment is as simple as grunt publish. In testing last night, I could deploy a fixed version of the page faster than people could switch to their browser and press refresh. As a client-side kind of person, I'm a huge fan of the static app model anyway, but the speed and simplicity of this solution exceeded even my expectations.

Going forward, I want all my news apps to benefit from this kind of automation, without having to copy a bunch of files around. I looked at Yeoman for creating app skeletons, but it seemed like overkill, so I'm setting up a template with Grunt's project scaffolding with all the boilerplate already installed. Once that's done, I'll be able to run one command and create a blank project for news apps that includes LESS compilation, JavaScript concatenation, minification, templating, and S3 publishing. Automating all of that boilerplate means faster startup time, and that means more projects to make the newsroom happy.

As I work on these story templates, I'll be open-sourcing them and sharing my ideas. The long and the short of it is that working in a newsroom is unpredictable: crazy deadlines, no requirements to speak of, and wildly different subject matter. This kind of technical architecture may seem unrelated to the act of journalism, but its goal is to lay the groundwork so that there are no distractions from the hard part: telling creative news stories online. I want to worry about making our online journalism better, not debugging servers. And while I don't know what the final solution for that is, I think we're off to a good start.

May 28, 2014

Filed under: tech»education

Lessons in Security

This quarter, I've been teaching ITC 240 at SCC, which is the first of three "web apps" classes. They're in PHP, and the idea is that we start students off with the basics of simple pages, then add frameworks, and finally graduate them to doing full project development sprints, QA and all. As the opening act for all this, I've decided to make a foundational part of the class focused on security.

Teaching security to students is hard, because security itself is hard. Web security depends on a kind of generalized principle that everyone is out to get you at all times: don't trust the database, the URL, user input, user output, JavaScript, the browser, or yourself. This kind of wariness does not come naturally to people. Eventually everything gets broken.

I've done my best to cultivate paranoia in my students, both by telling them horror stories (the time that Google clicked all the delete links on a badly-hidden admin page, that time when the World Bank got hacked and replaced with pictures of Wolfowitz's socks) and by threatening to attack their homework every time I grade it. I'm not sure that it's actually working. I think you may need to be on the other end of something fairly horrific before it really sinks in how bad a break-in can be. The fact that their homework usually involves tracking personal information for my cat is probably not helping them take it seriously, either.

The thing is, PHP doesn't make it easy to keep users safe. There's a short tag for automatically echoing values out, but it does no escaping of HTML, so it's one memory lapse away from being a cross-site scripting bug. Why the <?= $foo ?> tag doesn't call htmlentities() for you like every other template engine on the planet, I'll never know. The result is that it's trivial to forget to sanitize your outputs — I myself forgot for an entire week, so I can hardly blame students for their slipups.
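The class itself uses PHP's htmlentities(), but the escape-your-outputs rule is the same everywhere; here's the idea sketched in JavaScript. The function name is mine, and this covers only the basic HTML-body context, not attributes or URLs.

```javascript
// Escape the five significant HTML characters before echoing any
// user-supplied value into a page. Ampersand must be replaced first,
// or the later entities would be double-escaped.
function escapeHTML(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A malicious "username" is rendered inert:
console.log(escapeHTML('<script>alert("xss")</script>'));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Forgetting a single call like this is exactly the memory lapse that turns a template into a cross-site scripting bug.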

MySQL also makes this a miserable experience. Coming from a PostgreSQL background, I was unprepared (ha!) for this. Executing a prepared query in MySQL takes at least twice as many lines as in its counterpart, and is conceptually more difficult. You also can't quote table or column names in MySQL, which means that mysqli_real_escape_string is useless for queries with an ORDER BY clause — I've had to teach students about whitelists instead, and I suspect it's going in one ear and out the other.
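The whitelist idea is simple enough to show in a few lines. The class is in PHP, so this is the same concept sketched in JavaScript, with hypothetical column names: since escaping functions can't protect identifiers, only names on a known-good list are ever spliced into the query.

```javascript
// Columns the user is allowed to sort by; anything else falls back
// to a safe default instead of reaching the SQL string.
var SORTABLE = ["name", "created", "score"];

function orderByClause(requested) {
  var column = SORTABLE.indexOf(requested) !== -1 ? requested : "name";
  return "ORDER BY " + column;
}

console.log(orderByClause("score"));            // legitimate column passes through
console.log(orderByClause("1; DROP TABLE x"));  // injection attempt falls back to default
```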

It may be asking a little much of them anyway. Most of my students are still struggling with source control and editors, much less thinking in terms of security. Several of them have checked their passwords into GitHub, requiring a "password amnesty" where everyone got reset. I'd probably be more upset if I didn't think it was kind of funny, and if I wasn't pretty sure that I'd done the same thing in the past.

But even if they're a little bit overwhelmed, I still believe that students should be learning this stuff from the start, if for no other reason than that some of them are going to get jobs working on products that I use, and I would prefer they didn't give my banking information away to hackers in some godforsaken place like Cleveland. Every week, someone sends me a note to let me know that my information got leaked because they couldn't write a secure website — even companies like eBay, Dropbox, and Sony that should know better. We have to be more secure as an industry. That starts with introducing people to the issues early, so they have time to learn the right way as they improve their skills.

May 15, 2014

Filed under: journalism»professional

In These Times

On Monday, I'll be joining the Seattle Times as a newsroom web developer, working with the editorial staff on data journalism and web projects there. It's a great opportunity, and I'm thrilled to be making the shift. I'm also sad to be leaving ArenaNet, where I've worked for almost two years.

While at ArenaNet, I never really got to work on the kinds of big-data projects that drew me to the company, but that doesn't mean my time here was a loss. I had my hands in almost every product, ranging from the account site to the leaderboards to the main marketing site. I contributed to a rewrite of some of the in-game web UI as a cutting-edge single-page application, which will go out later this year and looks tremendously exciting. A short period as the interim team lead gave me a deep appreciation for our build system and server setup, which I fully intend to carry forward. And as a part of the basecamp team, I got to build a data query tool that incorporated heatmapping and WebGL graphing, which served as a testbed for a bunch of experimental techniques.

I also learned a tremendous amount about development in this job. It's possible to argue that ArenaNet is as much a web company as it is a game company, with a high level of quality in both. ArenaNet's web team is an (almost) all-JavaScript shop, and hires brilliant specialists in the language to write full-stack web apps. It's hard to think of another place where I'd have a chance to work with other people who know as much about the web ecosystem, and where I'd be exposed to really interesting code on a daily basis. The conversations I had here were often both infuriating and educational, in the way the best programming discussions should be.

Still, at heart, I'm not a coder: I'm a journalist who publishes with code. When we moved to Seattle in late 2011, I figured that I'd never get the chance to work in a newsroom again: the Times wasn't hiring for my skill set, and there aren't a lot of other opportunities in the area. But I kept my hand in wherever possible, and when the Times' news apps editor took a job in New York, I put out some feelers to see if they were looking for a replacement.

This position is a new one for the Times — they've never had a developer embedded in their newsroom before. Of course, that's familiar territory for me, since it's much the same situation I was in at CQ when I was hired to be a multimedia producer even though no-one there had a firm idea of what "multimedia" meant, and that ended up as one of the best jobs I've ever had. The Seattle Times is another chance to figure out how embedded data journalism can work effectively in a newsroom, but this time at a local paper instead of a political trade publication: covering a wider range of issues across a bigger geographic area, all under a new kind of deadline pressure. I can't wait to meet the challenge.

May 8, 2014

Filed under: gaming»hardware»android

Select Start

I've owned an Nvidia Shield for a little under a year now. The situation hasn't entirely changed: I use it most often as a portable emulator, and it's wonderful for that. I beat Mother 3, Drill Dozer, and Super Metroid a while back, and I'm working my way through Final Fantasy 6 now.

But there are more Android games that natively support physical controls now, especially as the Ouya and add-on joysticks have raised the profile for Android gaming. It's not a huge library, but between Humble Bundles and what's in the Google Play store, I certainly don't feel cheated. If you're thinking about picking one up, here's what's good (and what's merely playable) so far.

Aquaria may be the best value for the dollar on Shield, which makes it weird that apparently you can't buy it for Android anymore. A huge, sprawling Metroid-alike set underwater, with a beautifully-painted art style, it's the first game that I played where the Shield's controls not only worked, they worked really well (which figures, since it was developed for XBox controls alongside mouse and touch). If you managed to nab this in an old Humble Bundle, it's well worth the installation.

Actually the third in a series of "tower offense" games, where you send a small group of tanks through a path filled with obstacles, Anomaly 2 is one of the weird cases where the physical controls work, and are very well-tuned, but you still kind of wish the game was on a touchscreen. Playing the earlier Anomaly titles on a phone, you'd get into a groove of tapping icons to balance your resources, targeting, and path. It had a nice Google Maps fluidity to it, and that kind of speed suffers a little bit when panning around via thumbstick. It's still worth a look, but probably better played on a touch device.

In contrast, Badlands seems like a poor match for the Shield: it's a single-press game similar to any number of other smartphone titles (Flappy Bird, Tiny Wings, etc). But there's one distinguishing factor, which is that the triggers on the Shield (which are mapped to the "flap" action) are fully analog, so the harder you pull the faster the onscreen character flies. It's a small change, but it completely alters the feel of the game for the better. The layered, 2D art style is also gorgeous, and the sound design is beautiful, but on the other hand I actually have no idea what's going on, or why some little black blobby creature is trying to travel from left to right.

Most of these games come from the Humble Bundles, which are almost always worth throwing $5 at, but I actually bought Clarc from the Google Play store. It plays a bit like Sokoban, mixed with Portal 2's laser puzzles and Catherine's block/enemy entrapment. Previously released on Ouya, the controls are still solid on the Shield, and the puzzles follow a nice pattern of seeming impossible, then seeming obvious once they're worked out. A super-fast checkpoint system also helps. It's cute, funny, and good for about 6 hours of serious play.

I'm in favor of anything that puts Crazy Taxi on every platform in existence, but only if it's coded well. The problem is that while this port supports the gamepad, it's hamstrung by the adaptations made for phones — namely, the Crazy Drift can't be triggered manually, and the Crazy Dash feels sluggish. In a game where you need to be drifting or dashing almost all the time, this pretty much ruins your ability to run the map. I'd say to skip this unless it's on sale.

Gunman Clive was originally released on the PS Vita, and it shows: a cel-shaded platformer with a strong Contra influence, this is another bite-sized chunk of gameplay. It does seem to be missing some of the bonus features from the original release, but there's still plenty of variety (and some huge, fun bosses) to fight. Considering that it's only a couple of bucks, it's well worth the price if you're in the mood for some neo-retro shooting.

Speaking of retro, one of my favorite discoveries is the games that Orange Pixel has been tossing out for all kinds of platforms, particularly Gunslugs and Heroes of Loot. Both are procedurally-generated takes on classic games (Metal Slug and Gauntlet respectively) with a pixel-art design and a goofy sense of humor. The rogue-like randomization of the levels makes both of them compulsively playable, too. They're great time-wasters.

One of Gameloft's derivative mobile clones, NOVA 3 is trying very hard to be either Crysis or Halo. It doesn't really matter which, since the result is just boring man-in-suit shooting with sloppy, ill-configured thumbstick controls. All that, and it's still one of the more expensive titles in this list. Definitely skip this one.

Rochard was released on Steam a while back, and then re-released just for Shield this spring. It's a clever little puzzle-platformer that's based around a Half-Life gravity gun, but also some light combat. It would probably be better without the latter: the AI is generally terrible, and the weapons aren't inspiring. At its best, Rochard has you toggling low-gravity jumps, stacking crates, and juggling power cells to disable force fields, and those are the parts that make it worth playing.

Finally, fans of shooters have plenty of options (including remakes of R-Types I and II), but there's something to be said for time-travel epic Sine Mora. Although it makes no sense whatsoever, it's a great bullet-hell shmup with a strong emphasis on replayability through different ships and abilities, score attack modes, and boss fights. I love a good shooter, even if I'm terrible at them, and this is no exception.

What's missing from the games on Shield so far? I'd like to see more tactical options, a la Advance Wars or XCOM (which has a port, but doesn't understand gamepads). I'd appreciate a good RPG. And I'd love to see a real, serious shooter that's not a tossed-off Wolfenstein demake. But it's worth also understanding why these games don't exist: the economics of the mobile market don't support them. When your software sells for $5 a pop, maximum, you can't afford to do a lot of content development or design.

The result, except for ports from more sustainable platforms, is a bunch of quick hits instead of real investments. Almost all the games above, the ones that are worth playing at least, were either released on PC/console first, or simultaneously. The good news is that tools like Unity and Unreal Engine 4 promote simultaneous mobile/PC development. The bad news is that getting better games for mobile may mean cheapening development on the big platforms. If you thought that consoles were ruining PC game design before, wait until phones start to make an impact.

April 30, 2014

Filed under: tech»mobile

Company Towns

Pretend, for a second, that you opened up your web browser one day to buy yourself socks and deodorant from your favorite online retailer. You fill your cart, click buy, and 70% of your money actually goes toward foot coverings and fragrance. The other portion goes to Microsoft, because you're using a computer running Windows.

You'd probably be upset about this, especially since the shop raised prices to compensate for that fee. After all, Microsoft didn't build the store. They don't handle the shipping. They didn't knit the socks. It's unlikely that they've moved into personal care products. Why should they get a cut of your hard-earned footwear budget just because they wrote an operating system?

That's an excellent question. Bear it in mind when reading about how Comixology removed in-app purchases from their comic apps on Apple devices. I've seen a lot of people writing about how awful this is, but everyone seems to be blaming Comixology (or, more accurately, their new owners: Amazon). As far as I can tell, however, they don't have much of a choice.

Consider the strict requirements for in-app purchases on Apple's mobile hardware:

  • All payments made inside the app must go through Apple, who will take 30% off the top. This is true even if the developer handles their own distribution, archival, and account management: Apple takes 30% just for acting as a payment processor.
  • No other payment methods are allowed in the App Store — no Paypal, no Google Wallet, no Amazon payments. (No Bitcoin, of course, but no Monopoly money either, so that's probably fair.) Developers can't process their own payments or accept credit cards. It's Apple or nothing.
  • Vendors can run a web storefront and then download content purchased online in the app... but they can't link to the site or acknowledge its existence in any way. They can't even write a description of how to buy content. Better hope users can figure it out!

Apple didn't write the Comixology app. They didn't build the infrastructure that powers it, or sign the deals that fill it with content. They don't store the comics, and they don't handle the digital conversion. But they want 30 cents out of every dollar that Comixology makes, just for the privilege of manufacturing the screen you're reading on. If Microsoft had tried to pull this trick in the 90s, can you imagine the hue and cry?

This is classic, harmful rent-seeking behavior: Apple controls everything about their platform, including its only software distribution mechanism, and they can (and do) enforce rules to effectively tax everything that platform touches. There was enough developer protest to allow the online store exception, but even then Apple ensures that it's a cumbersome, ungainly experience. The deck is always stacked against the competition.

Unfortunately, that water has been boiling for a few years now, so most people don't seem to notice they're being cooked. Indeed, you get pieces like this one instead, which manages to describe the situation with reasonable accuracy and then (with a straight face) proposes that Apple should have more market power as a solution. It's like listening to miners in a company town complain that they have to travel a long way for shopping. If only we could just buy everything from the boss at a high markup — that scrip sure is a handy currency!

It's a shame that Comixology was bought by Amazon, because it distorts the narrative: Apple was found guilty of collusion and price fixing after they worked with book publishers to force Amazon onto an agency model for e-books, so now this can all be framed as a rivalry. If a small company had made this stand, we might be able to have a real conversation about how terrible this artificial marketplace actually is, and how much value is lost. Of course, if a small company did this, nobody would pay attention: for better or worse, it takes an Amazon to opt out of Apple's rules successfully (and I suspect it will be successful — it's worked for them on Kindle).

I get tired of saying it over and over again, but this is why the open web is important. If anyone charged 30% for purchases through your browser, there would be riots in the street (and rightly so). For all its flaws and annoyances, the only real competition to the closed, exploitative mobile marketplaces is the web. The only place where a small company can have equal standing with the tech giants is in your browser. In the short term, pushing companies out of walled gardens for payments is annoying for consumers. But in the long term, these policies might even be doing us a favor by sending people out of the app and onto the web: that's where we need to be anyway.

April 25, 2014

Filed under: tech

Service as a Service

If you've ever wanted to get in touch with more people who are either unhinged or incredibly needy (or both), by all means, start a modestly successful open source project.

That sounds bitter. Let me rephrase: one of the surprising aspects of open-sourcing Caret has been that much of the time I spend on it does not involve coding at all. Instead, it's community management that absorbs my energy. Don't get me wrong: I'm happy to have an audience. Caret users seem like a great group of people, in general. But in my grumpier moments, after closing issue requests and answering clueless user questions (sample, and I am not making this up: "how do I save a file?"), there are times I really sympathize with project leaders who simply abandon their code. You got this editor for free, I want to say: and now you expect me to work miracles too?

Take a pull request, for example. That's when someone else does the work to implement a feature, then sends me a note on GitHub with a button I can press to automatically merge it in. Sounds easy, right? The problem is that someone may have written that code, but it's almost guaranteed that they won't be the one maintaining it (that would be me). Before I accept a pull request, I have to read through the whole thing to make sure it doesn't do anything crazy, check the code style against the rest of Caret, and keep an eye out for how these changes will fit in with future plans. In some cases, the end result has to be a nicely-worded rejection note, which feels terrible to write and to receive. Either way, it's often hours of work for something as simple as a new tab button.

These are not new problems, and I'm not the first person to comment on them. Steve Klabnik compares the process to being an "open source gardener," which horrifies me a little since I have yet to meet a plant I can't kill. But it is surprising to me how badly "social" code sites handle the social part of open source. For example, GitHub finally added a "block" feature, but there are no fine-grained permissions: it's all or nothing, on a per-user basis. All projects there also automatically get a wiki that's editable by any user, whether they own the repo or not, which seems ripe for abuse.

Ultimately, the burden of community management falls on me, not on the tools. Oddly enough, there don't seem to be a lot of written guides for improving open source management skills. A quick search turned up Producing Open Source Software by Karl Fogel, but otherwise everyone seems to learn on their own. That would do a lot to explain the wide difference in tone between projects, like the gap I see between Chromium (always pleasant) and Mozilla (surprisingly abrasive, even before the Eich fiasco).

If I had a chance to do it all again, despite all the hassle, I would probably keep all the communication channels open. I think it's important to be nice to people, and to offer help instead of just dumping a tool on the world. And I like to think that being responsive has helped account for the nearly 60,000 people who use Caret weekly. But I would also set rules for myself, to keep the problem manageable. I'd set times for when I answer e-mails, or when I close issues each day. I'd probably disable the e-mail subscription feature for the repository. I'd spend some time early on writing up a style guide for contributors.

All of these are ways of setting boundaries, but they're also the way a project gets a healthy culture. I have a tremendous amount of respect for projects like Chromium that manage to be both successful and — whenever I talk to their organizers — pleasant and understanding. Other people may be able to maintain that kind of demeanor full-time, but I'm too grumpy, and nobody's compensating me for being nice (apart from the one person who sends me a quarter on Gittip every week). So if you're in contact with me about Caret, and I seem to be taking a little longer these days to get back to you, just remember what you're paying for service.

April 10, 2014

Filed under: fiction»reviews»kindle

Digital Bookshelf: No Intro Edition

Hild, by Nicola Griffith

This book is a weird beast. Set in Britain around the year 600 AD, when the island was converting to Christianity, it follows a woman who would eventually become St. Hilda of Whitby (no, I don't know who she is either). Hild is a seer from an early age, not really because she has any mystical powers but more because she's been raised by her mother to be a highly-trained political operator, surrounded by people who aren't looking much past their own self-interest. Caught between the Catholic church, Irish war parties, and her own hostile king, Hild spends much of the book trying to figure out how to keep herself and her family safe by predicting events before anyone else realizes what's going on.

The elevator pitch for this — Dune if Paul Atreides was a woman in the middle ages — is so good, it's all the more annoying that Hild herself comes across as one-dimensional and unrealistic. She's setting policy by the age of ten, and running large chunks of the country by 16. It's not really a Mary Sue — Hild has plenty of flaws, and regularly makes mistakes — so much as it's merely undramatic. The narration tends to tell, rather than show, with little in the way of suspense or surprise. Griffith's goal, at least in part, seems to be to use Hild as a critique of passive female characters in fantasy literature, which is a fine goal. It's frustrating that she seems to have forgotten to make her very interesting in the process.

Precision Journalism, by Philip Meyer

This book is often cited on the NICAR discussion list as the go-to textbook for data journalists, but I'd never read it. The Kindle version is the 2002 4th edition, which seems to be the most recent. As a result, parts of it are dated or a little "quaint," but for the most part I think it actually holds up to its reputation. Meyer keeps a light touch throughout the book, walking reporters through standard statistical tests, surveys and polling, and databases without getting bogged down in too much operational detail. There's a lot of "here's the formula, and here's where to go to learn more," which seems reasonable.

Inadvertently, being a textbook for an undergraduate audience, Precision Journalism is revealing as much for what it thinks students won't know as it is for what it explicitly teaches. For example, there's an early chapter that covers probability, which makes sense: probability is confusing, and many people get it wrong even after a statistics class. I'm a little snobbier about the following chapter, in which Meyer details how to figure percentage change and change in percentage (subtly different concepts). Part of me is glad that it's being covered. Another part is annoyed that students don't know it already.
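The distinction Meyer covers is easy to illustrate: percentage change is relative to the starting value, while a change in percentage (percentage points) is a simple difference between two rates. A quick sketch, with numbers of my own invention:

```python
def percent_change(old, new):
    """Relative change: how much `new` differs from `old`, as a share of `old`."""
    return (new - old) / old * 100

def percentage_point_change(old_rate, new_rate):
    """Absolute difference between two rates already expressed in percent."""
    return new_rate - old_rate

# A tax rate rising from 4% to 5%:
print(percent_change(4, 5))           # 25.0 -> "up 25 percent"
print(percentage_point_change(4, 5))  # 1    -> "up 1 percentage point"
```

Conflating the two is a classic newsroom error: the same rate hike is "up 25 percent" in one framing and "up 1 point" in the other, and both are correct.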

That said, Meyer's enthusiasm and practical outlook on what we now call "data journalism" really resonated with me. I'd like to have seen more emphasis on SQL instead of SAS, but that's nitpicking. For the most part, Precision Journalism does a great job of covering the strengths and weaknesses of computer-assisted reporting, with lots of examples and wry humor. I guess there's a reason it's a classic.

Debt, by David Graeber

Everything in Debt is kind of a letdown after its second chapter. That would be the section where Graeber disembowels the common economic myth of a "barter economy" — the idea that in some mythical village, one person had chickens but wanted shoes, and the other person had shoes but didn't want chickens, and so to enable them both to trade despite their conflicting desires, we invented money. How convenient!

Turns out it's also a complete fabrication, despite the efforts of decades of anthropologists trying to find such a barter society. Instead, the historical record shows that people in non-money societies are linked by an interwoven network of casual debts and favors, not strict one-for-one exchanges. We invented money not to supplant barter, but when we needed a method of exchange that didn't involve trust — usually to give soldiers a way to pay for things when they camped somewhere, given that they were only temporary occupiers and not accountable for the same kind of debts as a neighbor.

This is not new research, apparently — Graeber complains that anthropologists have been trying to convince economists to find a new origin story for years — but it was new to me. The realization that the foundational mythology of economics is a fairy tale doesn't disprove its validity as a field, but it does raise a lot of really interesting questions. Graeber, a former leader within the Occupy movement, certainly pulls no punches in his criticisms.

The rest of the book is good and similarly thought-provoking, but it can't help but seem a bit underwhelming. Graeber works his way forward methodically through all the ways that we conceptualize obligations, then through the history of debt and payment up through the modern age. At times, this is fascinating, especially when he discusses "reversions" from a monetary economy to an informal debt economy. Ultimately, the book builds to a theory of international politics that ties debt to "tribute." Is it convincing? For my part, not entirely, no. But it's a fascinating and deeply-researched argument.

Halo: Kilo-Five trilogy, by Karen Traviss

Karen Traviss is one of those writers who makes me resent the licensed-property industry a little bit. A talented genre writer — her Wess'har books are a sharp and unsettling rumination on politics and veganism — Traviss gets tapped a lot to write tie-in novels for movies and games. She's good enough that the result sometimes transcends its origin, so every now and then I'll give one a shot. The Kilo-Five books are basically what you get if you cross Halo's backstory with a spy yarn.

Set between the third and fourth games, the Kilo-Five books bear little resemblance to the action of the source material. There aren't a lot of firefights on offer: instead, the plot bears more resemblance to Operation Mincemeat, the WWII deception in which the Allies planted fake invasion plans on a corpse to mislead the Nazis. Having won a war against hostile aliens, the books' human protagonists are working covertly to keep them destabilized by creating civil unrest and sabotaging infrastructure. It's also a subversive take on the macho warrior spirit of the Halo franchise, which makes the Amazon reviews from wounded fans almost worth the price of admission. I'm still glad Traviss is getting back to original fiction, though.

Girl Sleuth, by Melanie Rehak

When I was a kid, my dad went to a second-hand bookstore and bought ten or fifteen of the Tom Swift Jr. pulp novels for me. Even though at that point they were probably thirty years old, dated with golly-gee-whiz references to the wonders of atomic power (oh, to have lived in the uncomplicated world before Three Mile Island), I read them cover to cover multiple times. Tom Swift, of course, was a product of the Stratemeyer Syndicate and its potboiler formula — the same one that powered the Hardy Boys and Nancy Drew, neither of which I read but which I'm sure I would have found equally compelling.

Girl Sleuth is nominally a history of Nancy Drew, but it also serves as a look at the Stratemeyer dynasty: started by an enterprising writer named Edward Stratemeyer, then carried on by his daughter Harriet when he passed away. It's also the story of Mildred Wirt, the woman who wrote almost all of the original Nancy Drew books, but was for years hidden behind the syndicate's pen name, Carolyn Keene. Rehak traces the evolution of the character, as well as the parallel tension between the younger Stratemeyer, who wrote many of the series outlines, and Wirt, an adventurous newspaper journalist who churned out an unthinkable number of pages for the series. Both women believed, not without reason, that they were the real author of Nancy Drew.

As much as anything else, Rehak's re-telling is a fascinating look at the lifecycle of pop culture. Nancy Drew began as a semi-disreputable pulp sensation: hated by librarians, but a hot commodity among kids. For whatever reason, the series took off, and was beloved enough that (like my Tom Swifts) it was passed on to a new generation, who took the old stories and found new contemporary values in them. In a way, it could be argued that she was as much a creation of the readers as of either of her "authors." Transformed by the changing youth culture of the 20th century, Nancy Drew became a proto-feminist icon, then an American tradition, and is now an article of nostalgia. Rehak seems optimistic that she can adapt even further, but I wonder if that's not belaboring the point. Sometimes a good story should just end.
