this space intentionally left blank

April 10, 2014

Filed under: fiction»reviews»kindle

Digital Bookshelf: No Intro Edition

Hild, by Nicola Griffith

This book is a weird beast. Set in Britain around 600 AD, when the island was converting to Christianity, it follows a woman who would eventually become St. Hilda of Whitby (no, I don't know who she is either). Hild is a seer from an early age, not really because she has any mystical powers but because she's been raised by her mother to be a highly trained political operator, surrounded by people who aren't looking much past their own self-interest. Caught between the Catholic church, Irish war parties, and her own hostile king, Hild spends much of the book trying to figure out how to keep herself and her family safe by predicting events before anyone else realizes what's going on.

The elevator pitch for this — Dune if Paul Atreides were a woman in the middle ages — is so good that it's all the more annoying that Hild herself comes across as one-dimensional and unrealistic. She's setting policy by the age of ten, and running large chunks of the country by 16. She's not really a Mary Sue — Hild has plenty of flaws, and regularly makes mistakes — so much as she's merely undramatic. The narration tends to tell rather than show, with little in the way of suspense or surprise. Griffith's goal, at least in part, seems to be to use Hild as a critique of passive female characters in fantasy literature, which is a fine goal. It's frustrating that she seems to have forgotten to make Hild herself interesting in the process.

Precision Journalism, by Philip Meyer

This book is often cited on the NICAR discussion list as the go-to textbook for data journalists, but I'd never read it. The Kindle version is the 2002 4th edition, which seems to be the most recent. As a result, parts of it are dated or a little "quaint," but for the most part I think it holds up to its reputation. Meyer keeps a light touch throughout, walking reporters through standard statistical tests, surveys and polling, and databases without getting bogged down in too much operational detail. There's a lot of "here's the formula, and here's where to go to learn more," which seems reasonable.

As a textbook written for an undergraduate audience, Precision Journalism is inadvertently revealing as much for what it assumes students won't know as for what it explicitly teaches. For example, there's an early chapter that covers probability, which makes sense: probability is confusing, and many people get it wrong even after a statistics class. I'm a little snobbier about the following chapter, in which Meyer details how to figure percentage change and change in percentage (subtly different concepts: a poll that moves from 10 percent to 15 percent has risen five percentage points, but that's a 50 percent change). Part of me is glad that it's being covered. Another part is annoyed that students don't know it already.

That said, Meyer's enthusiasm and practical outlook on what we now call "data journalism" really resonated with me. I'd like to have seen more emphasis on SQL instead of SAS, but that's nitpicking. For the most part, Precision Journalism does a great job of covering the strengths and weaknesses of computer-assisted reporting, with lots of examples and wry humor. I guess there's a reason it's a classic.

Debt, by David Graeber

Everything in Debt is kind of a letdown after its second chapter. That's the section where Graeber disembowels the common economic myth of a "barter economy" — the idea that in some mythical village, one person had chickens but wanted shoes, and the other person had shoes but didn't want chickens, and so to let them both trade despite their conflicting desires, we invented money. How convenient!

Turns out it's also a complete fabrication, despite decades of anthropologists' efforts to find such a barter society. Instead, the historical record shows that people in non-money societies are linked by an interwoven network of casual debts and favors, not strict one-for-one exchanges. We invented money not to supplant barter, but when we needed a method of exchange that didn't involve trust — usually to give soldiers a way to pay for things when they camped somewhere, given that they were only temporary occupiers and not accountable for the same kinds of debts as a neighbor.

This is not new research, apparently — Graeber complains that anthropologists have been trying to convince economists to find a new origin story for years — but it was new to me. The realization that the foundational mythology of economics is a fairy tale doesn't disprove its validity as a field, but it does raise a lot of really interesting questions. Graeber, a former leader within the Occupy movement, certainly pulls no punches in his criticisms.

The rest of the book is good and similarly thought-provoking, but it can't help but seem a bit underwhelming. Graeber works his way forward methodically through all the ways that we conceptualize obligations, then through the history of debt and payment up through the modern age. At times, this is fascinating, especially when he discusses "reversions" from a monetary economy to an informal debt economy. Ultimately, the book builds to a theory of international politics that ties debt to "tribute." Is it convincing? For my part, not entirely, no. But it's a fascinating and deeply-researched argument.

Halo: Kilo-Five trilogy, by Karen Traviss

Karen Traviss is one of those writers who makes me resent the licensed-property industry a little bit. A talented genre writer — her Wess'har books are a sharp and unsettling rumination on politics and veganism — Traviss gets tapped a lot to write tie-in novels for movies and games. She's good enough that the result sometimes transcends its origin, so every now and then I'll give one a shot. The Kilo-Five books are basically what you get if you cross Halo's backstory with a spy yarn.

Set between the third and fourth games, the Kilo-Five books bear little resemblance to the action of the source material. There aren't a lot of firefights on offer: instead, the plot reads more like Operation Mincemeat, the WWII deception operation that planted fake invasion plans on a corpse to mislead the Nazis about Allied intentions. Having won a war against hostile aliens, the books' human protagonists are working covertly to keep those aliens destabilized by fomenting civil unrest and sabotaging infrastructure. It's also a subversive take on the macho warrior spirit of the Halo franchise, which makes the Amazon reviews from wounded fans almost worth the price of admission. I'm still glad Traviss is getting back to original fiction, though.

Girl Sleuth, by Melanie Rehak

When I was a kid, my dad went to a second-hand bookstore and bought ten or fifteen of the Tom Swift Jr. pulp novels for me. Even though at that point they were probably thirty years old, dated with golly-gee-whiz references to the wonders of atomic power (oh, to have lived in the uncomplicated world before Three Mile Island), I read them cover to cover multiple times. Tom Swift, of course, was a product of the Stratemeyer Syndicate and its potboiler formula — the same one that powered the Hardy Boys and Nancy Drew, neither of which I read but which I'm sure I would have found equally compelling.

Girl Sleuth is nominally a history of Nancy Drew, but it also serves as a look at the Stratemeyer dynasty: started by an enterprising writer named Edward Stratemeyer, then carried on by his daughter Harriet after he passed away. It's also the story of Mildred Wirt, the woman who wrote almost all of the original Nancy Drew books but was for years hidden behind the syndicate's pen name, Carolyn Keene. Rehak traces the evolution of the character, as well as the parallel tension between the younger Stratemeyer, who wrote many of the series outlines, and Wirt, an adventurous newspaper journalist who churned out an unthinkable number of pages for the series. Both women believed, not without reason, that they were the real author of Nancy Drew.

As much as anything else, Rehak's re-telling is a fascinating look at the lifecycle of pop culture. Nancy Drew began as a semi-disreputable pulp sensation: hated by librarians, but a hot commodity among kids. For whatever reason, the series took off, and was beloved enough that (like my Tom Swifts) it was passed on to a new generation, who took the old stories and found new contemporary values in them. In a way, it could be argued that she was as much a creation of the readers as of either of her "authors." Transformed by the changing youth culture of the 20th century, Nancy Drew became a proto-feminist icon, then an American tradition, and is now an article of nostalgia. Rehak seems optimistic that she can adapt even further, but I wonder if that's not belaboring the point. Sometimes a good story should just end.

April 3, 2014

Filed under: tech»education

Teaching with Git: Lessons Learned

Last quarter, for the first time, I taught Intro to JavaScript at SCC (previously SCCC) using Git as the primary method for turning in homework. They say that the best way to learn something is to teach it, and based on my experience I'd say that's true, particularly if the "it" in question is "how many ways can Git go wrong in a classroom?"

My students were sturdy and patient guinea pigs: source control must have been a shock, since many of them had only recently learned about FTP and remote filesystems. Some of them seemed suspicious about the whole "files" thing to begin with, and for them I could only offer my sympathies. I was asking a lot of people who were simultaneously learning a new language with unfamiliar constraints of its own.

Midway through the quarter, though, workflows developed and people adjusted. I was no longer spending my time answering Git questions and debugging commit issues. As an instructor, I found it hugely successful: pulling source code is much easier than using "view source" on hosted pages, and commenting line-by-line on GitHub commits is far superior to code critique via e-mail. I have no qualms about using Git in class again, but shortening the adjustment period is a priority for me.

Using software in a classroom is an amazing way to discover failure cases you would otherwise never see in a million years, and this was no exception. Add the fact that I was teaching it for the first time, and some fun obstacles cropped up. Here's a short list of issues students hit during the first few weeks of class:

  • GitHub's desktop software works fine until you hit a snag, and then it throws up its hands and surrenders completely.
  • The desktop client also has a lovely Metro-inspired design that requires an updated .NET installation to sync with remote repos. Guess what the lab computers and many of my students' machines were missing?
  • Students didn't understand the difference between GitHub the program and GitHub the web site, which led to a lot of confusion.
  • Students tried to move the Git directories around, which meant the client lost track of them and broke.
  • One student still maintained revisions to files manually by renaming them, then committed all those revisions to the repo as separate files.
  • People tried to use the commit messages on GitHub as if they were folder descriptions, then felt bad if a revision touched files in multiple folders and screwed up their nice, neat labels.
  • SCC students, like most computer users, are scared of the command line, which is a problem since most good Git advice involves the shell.
These problems are troubling, but hardly insurmountable. Indeed, I already have a plan for addressing them next quarter in my Web Apps 1 class, though it's not so much a "plan" as a radical re-imagining. To put it bluntly, I'm throwing most of my previous strategy away and starting over with three guiding principles: tools, concepts, and context.

For a start, students will be connecting to their servers over SSH to debug and edit their PHP, so I'll be teaching Git from the command line instead of using graphical tools like GitHub for Windows. This sounds more complicated, but it means that the experience is consistent for all students and across all operations. It also means that students will be able to use Pro Git as a textbook and search the web for advice on commands, instead of relying on the generally abysmal help files that come with graphical Git clients and tutorials that I throw together before each quarter.
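
For reference, the day-to-day loop students need is small. A minimal sketch, with a hypothetical repository and file:

    # get a copy of the class repository (the URL is hypothetical)
    git clone https://github.com/example/intro-js.git
    cd intro-js

    # ...edit files, then review and stage the changes...
    git status
    git add week1/homework.js

    # record the change and send it back to GitHub
    git commit -m "Finish week 1 homework"
    git push origin master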

Of course, Pro Git isn't just valuable because it's a free book that walks users through the basics of source control in a friendly manner. It also does a great job of explaining what Git is actually doing at each stage of the way — it explains the concepts behind every command. Treating Git as a black box last quarter ultimately caused more problems than it solved, and it left people scared of what they were doing. It's worth sacrificing a week of advanced topics like object-orientation (especially in the entry-level class) if it means students actually understand what happens when they stage and commit.

Finally, and perhaps most importantly, I'm going to provide an origin repo for students to clone, and then walk them through setting up a deploy repo as well, with an eye to providing the larger development context. The takeaway is not "here are Git commands you should know," but "this is how and why we use source control to make our lives easier." Using Git in class the same way that people use it in the field is experience that students can take with them.
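
As a sketch of what that deploy setup might look like (assuming each student has SSH access and a public_html web root; all names and paths here are hypothetical), a bare repository with a post-receive hook does the trick:

    # on the server: a bare repo that exists only to be pushed to
    git init --bare ~/deploy.git

    # a post-receive hook that checks each push out into the web root
    cat > ~/deploy.git/hooks/post-receive <<'EOF'
    #!/bin/sh
    GIT_WORK_TREE=$HOME/public_html git checkout -f master
    EOF
    chmod +x ~/deploy.git/hooks/post-receive

    # on the student's machine: add a second remote and deploy via push
    git remote add deploy student@server.example.edu:deploy.git
    git push deploy master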

What do these three parts of my strategy — tooling, concepts, and context — have in common? They're all about process. This is probably unsurprising, as process and workflow have been hobbyhorses of mine since I taught a disastrous capstone class last year. In retrospect, it seems obvious that the last class of the web development program is not an appropriate time for students to be introduced to group development. They were unfamiliar with feature planning, source control, and QA testing — worse, I didn't recognize this in time to turn it into a crash course in project management. As a result, teams spent the entire quarter drifting in and out of crisis.

Best practices, it turns out, are a little like safety protocols around power tools. Granted, my students are a little less likely to lose a finger, but writing code without a plan or a collaboration workflow can still be deadly for a team's progress. I'm proud that the Web Apps class sequence I helped redesign stresses process in addition to raw coding. Git is useful for a lot of reasons, like its ecosystem, but the fact that it gives us a way to introduce basic project management in the very first class of the sequence is high on the list.

March 19, 2014

Filed under: tech»web

Spoiled for Choice

Paul Kinlan's post, Add-to-homescreen Is Not What the Web Needs, is only the most recent in a long-running debate surrounding "apps" on mobile, but it is thought-provoking. Kinlan, who cheerleads for the Web Intents integration system in Chrome, naturally thinks that having an "add-to-homescreen" option misses the point:

I want to see something much more fundamental. The web offers something far richer: it encourages lightweight usage with no required installation and interaction with on-demand permissions. I never want to see an install button or the requirement to understand all the potential permissions required before trying the app. The system should understand that I am using an app and how frequently I use it and it should then automatically integrate with the launch points in the OS.

Kinlan has a great point, in that reducing the web to "just another app" is kind of a shame. The kinds of deeper integration he wants would probably be prone to abuse, but they're not at all impossible. Mozilla wants to do something similar with Firefox OS, although it probably gets lost in the vague muddle of its current state. Worse, Firefox OS illustrates the fundamental problem with web "apps" on mobile, and it's probably going to take a lot more than a clever bookmark to solve the problem. That's because the real problem with the web on mobile is URLs, and nobody wants to admit that.

As a web developer, I love URLs. They're the command line of the web: a powerful tool for organizing information and streaming it from place to place. Unfortunately, they're also like the command line in other ways: they're arbitrary, much-abused, and ultimately difficult to type on mobile. More importantly, nobody who isn't a developer really understands them.

There is a now-infamous example of the fact that people don't understand URLs, which you may remember as the Facebook login fiasco of 2010. That was the point at which the web community realized that, for a lot of users, logging into Facebook went a lot like this:

  1. Search Google for "facebook login"
  2. Click the first link
  3. Look for the password box

As a process, this was fine until ReadWriteWeb actually published a story about Facebook's unified login that rose to the top spot in the Google search listings, at which point hundreds of people began commenting on the article thinking that it was a new Facebook design. As long as they got to Facebook in the end, to these people, one skinny textbox was basically as good as another. I've actually seen people do this in my classes, and just about ground my teeth to nubs watching it happen.

In other words, the problem is discovery. An app store gives you a way to flip through the listings, see what's popular, and try it out. You don't need to search, and you certainly don't need to remember a cryptic address (all these clever .io and .ly addresses are, I'm pretty sure, much harder to remember than plain old .com). For most of the apps people use, they probably don't even scroll very far: the important stuff, like Facebook and Candy Crush, is almost certainly at the top of the store anyway. Creating add-to-homescreen mechanisms addresses the wrong problem. It's not useless, but the real issue is not that people don't know how to make bookmarks; it's that they can't find your web app in the first place.

The current Firefox OS launcher isn't perfect, but it at least shows someone thinking about the problem. When you start the device, it initially shows a search box titled "I'm thinking of...". Tap into the box, and even before you start typing it'll instantly show a set of curated sites sorted into categories like "social" and "games." If what you want isn't there, you can continue to search the web as a whole. Sites launched from this view start in "app mode" with no URL bar, even though they're still just web sites and nothing's technically been installed. Press the bookmark button, and it's added to your homescreen. It's exactly as seamless as we've always claimed the web could be.

On top of this, sadly, Mozilla adds the Marketplace app, which can install "packaged" apps similar to Chrome OS. It's an attempt to solve the discoverability problem, but it lacks the elegant fluidity of the curated results from the launcher search (not to mention that it's kind of confusing). I'm not wild about curation at the best of times — app stores are a personal pet peeve — but it serves a purpose. We need both: an open web, because that's the spirit of things, and a market destination, because it solves the URL discovery problem.

What we're left with is a tragedy of the commons. Mozilla's marketplace can't serve the purpose of the open web, because it's a curated and little-loved space that's only for Firefox OS users. Google is preoccupied with its own Chrome web store, even though it's certainly in a position to organically track the usage of web apps via user searches. Apple couldn't care less. In the meantime, web app discovery gets left with the scraps: URLs and search. There's basically no way, other than word of mouth, that your app will be discovered by normal people unless it comes from an app store. And that, not add-to-homescreen flaws, is why we can't have nice things on the web.

February 27, 2014

Filed under: tech»coding

Just Use Ed

There's a regular, recurring movement to replace text-based programming with some kind of graphical version. These range from Scratch (offering "blocks" to make text syntax more friendly) to Pure Data (node-based dataflow programming). Rarely do any of them take off (Scratch and pd are successful within education and audio, respectively, but little-used elsewhere), but that doesn't stop anyone from trying.

It may be the fact that I started as a writer, or that I was a language nut in college, but I've always felt that text-based programming doesn't get a lot of respect. The written word is one of the great advances of civilization. You can pack a lot of meaning into a line of text, and code is no different. Good source code can range from whimsical to workmanlike, a gamut that's hard to imagine existing in the nest of wiring that is the graphical languages.

As a result, text editing is important to me. It's important to a lot of people, but most of them don't write an editor, and I ended up doing that. I figured I'd write up some notes on the different ways people have written their editors, and why I picked one model in particular for Caret. It may be news to many people that there are even multiple models to consider, but that's programming for you: there are at least four ways to put letters into a document, and bitter wars between factions for each of them.

The weirdest editor still in common usage, of course, is Vim. Born in the days when network connections were too slow to update text in realtime, Vim uses a shorthand language for text editing. In Vim, you don't hold delete until some amount of text is gone — instead, you type "d2w", meaning "delete two words." You also can't type directly until you switch into "insert" mode with the "i" or "a" commands. Like members of many abusive subcultures, people who learn this shorthand will swear up and down that it's the only way to work, even though it's clearly a relic of a savage, bygone age.

(Vim and Emacs are often mentioned in comparison to each other, because they tend to be used by very similar kinds of people who, nevertheless, insist that they're very different. I don't really know very much about Emacs, other than it's written in Lisp and it's not as eyeball-rolling weird as Vim, so I'm ignoring it for the purposes of this discussion.)

Acme tends to look a little more traditional, but it is actually (I think) more radical than Vim, because it redefines the relationship between interface and editor. Acme turns all documents into hypertext: right-clicking a filename opens that file, and middle-clicking a word (like "copy" or "paste") actually runs that command (either in a shell, or in Acme). There's no fixed interface in Acme, just a set of menu bars that are also text fields. I love the elegance of this idea, where a person builds a text editor's UI just by... editing text.

Which brings us to Sublime. I've been very clear that Caret is modeled closely on Sublime, with a few changes to account for quirks of the platform and my own preferences. That's partly because it's generally considered the tool of choice for web developers, and partly because it's genuinely the editor that has my favorite workflow tools. Insofar as Sublime has a philosophy, it is to prioritize clarity and transparency over power. That's not to say it's not powerful — it certainly is. But it tries to be obvious in a way that other editors do not.

For example, say you need to change a variable name throughout a function. Instead of immediately writing a regex or a macro, Sublime lets you select all the instances of that variable with the mouse or keyboard, which creates multiple cursors. Then you just type the new name. It's not as powerful as a regular expression, but 90% of the time, it's probably what you wanted to do anyway. Sublime's command/go-to palette is another smart-but-obvious idea: instead of hunting through the menu or the filesystem, open the palette and type to fuzzy-filter the list. It's the speed of a command line without the hostility.

To paraphrase an old saw, the best feature is the one you have with you. That's why putting the command palette in Caret was a must, since it puts all the menu items just a few keystrokes away. Even now, I don't always remember where a given menu item is in the toolbar in my own editor, because I hardly ever use the mouse. There was a good week when menus looked completely wrong, and I never even noticed.

The reason I've started looking over other editors now is that I think Caret can reach for more than just parity with Sublime. I'm intrigued by the ways that Acme makes it easy to jump around files, and lately I've been thinking about what it means to be an editor built in "web technology." Adding the ability to open links from a URL is a given, but it's only the start: given that OAuth provides a simple, standard method of authenticating against a remote server, a File implementation for Caret could easily open files against service endpoints for something like Github or Ghost in a generic way. It would be a universal cloud editor, but easily capable of running locally.
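
As a sketch of the idea (none of these names come from Caret's actual code, and the GitHub contents API is just one possible endpoint), a remote file source only needs a couple of methods:

    // hypothetical pluggable file source: the editor asks for bytes and
    // doesn't care whether they come from disk or a web service
    var githubSource = {
      token: null, // OAuth token, acquired during sign-in
      read: function(repo, path, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "https://api.github.com/repos/" + repo + "/contents/" + path);
        xhr.setRequestHeader("Authorization", "token " + this.token);
        xhr.onload = function() {
          // the contents API returns JSON with base64-encoded content
          var response = JSON.parse(xhr.responseText);
          callback(atob(response.content.replace(/\n/g, "")));
        };
        xhr.send();
      }
    };
    // usage: githubSource.read("user/repo", "notes.txt", loadIntoBuffer);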

Of course, Caret won't be the last editor to try something different (just this week, Github announced their own effort), but it's still pretty amazing how many ways we have to solve a simple problem like "typing letters into a file." As a writer and a coder, I love being spoiled for choice.

February 20, 2014

Filed under: music»business

The Grind Date

Lots of musicians have given their work away for free, but De La Soul is different. On February 14th, to celebrate the 25th anniversary of 3 Feet High and Rising, they uploaded their back catalog and made it available to anyone who signed up for their mailing list. There are at least three really interesting things about De La's Valentine's Day gift, especially given that the albums on offer have never been available digitally before.

Of course, they almost weren't available last week, either. The original links sent out that morning went to a Dropbox account, which (no surprise) was almost immediately shut down for excessive bandwidth use when everyone on the Internet went to download the free tracks. A new solution was soon found, but it just goes to show that even a band you'd think would absolutely have a nerdy, Internet-savvy friend apparently didn't. I kind of like that, though. It gives the whole affair a charming, straight-from-the-garage feel.

The first interesting thing is the question of why the albums were released for free in the first place. Reports are vague, but the gist is that De La Soul's label, Warner Brothers, hasn't cleared the samples on the albums, so they can't be sold online. Due to the weirdness of music contracts, you can still buy a physical copy of 3 Feet High — it's even been re-released with bonus material a couple of times — but you can't buy the MP3s. While it's true that people still buy CDs, I'm guessing that number doesn't include most of De La's fanbase.

But that leads us to the second twist in the story, which is that what De La Soul did is probably illegal. Like a lot of musicians, they own the songs, but they don't own the music: the master recordings of those albums are owned by the label instead. The fact that De La Soul could be sued for pirating their own albums explains a lot about both the weird, exploitative world of music contracts, as well as the ambivalence a lot of musicians feel for labels.

Let's say that nobody sues, however, and Warner Bros. decides to tacitly endorse the giveaway. De La Soul still doesn't have access to the masters, so how did they get the songs to distribute? Interesting fact number three: when people examined the metadata for the tracks, they turned out to be from a Russian file-sharing site of dubious legality. Basically, the band really did pirate their own work. I'm a little disappointed they didn't rip their own CDs, but considering that they didn't have anyone around to tell them not to use Dropbox as a CDN, we probably shouldn't be surprised. It was probably easier this way, anyway — which says a lot about the music industry, as well.

If what De La did was legal, does that make the pirated copies also legal? Would it have been legal for me to download the exact same files from Russian servers while the "official" songs were available? And now that the campaign is over and you still can't buy Stakes Is High from Amazon MP3, are the pirate sites back to being illegal? Nothing I can remember from the Napster days answers these questions for me — although to be fair, all I really remember from Napster is a number of novelty punk covers and making fun of Lars Ulrich.

Assuming they're not sued (and so far they've gotten away with it), the download promotion should be good for De La Soul. Or to put it more bluntly, they probably figured it couldn't hurt, and they're likely right: if these songs were never going to end up for sale online, most of their remaining value is promotional (for shows and other albums) anyway. So it's a savvy move, but one unlike the free releases from other artists (Nine Inch Nails, Radiohead): those bands were issuing new material, unencumbered by sample clearance, and working in an entirely different genre. I suspect a lot of classic hip-hop artists in similar situations may be watching this promotion with a lot of interest. Chances are, that's just the way De La likes it.

February 12, 2014

Filed under: journalism»industry

Last Against the Wall

I think most of us can imagine the frustrating experience of sharing a newspaper with the New York Times op-ed page. It must burn to do good reporting work, knowing that it'll all be lumped in with Friedman's Mighty Mustache of Commerce and his latest taxi driver. Let's face it: the op-ed section is long overdue for amputation, given that there's an entire Internet of opinion out there for free, and almost all of it is more coherent than whatever white-bread panic David Brooks is in this week.

But even I was surprised by the story in the New York Observer last week, detailing just how bad the anger between the journalists and the pundits has gotten:

The Times declined to provide exact staffing numbers, but that too is a source of resentment. Said one staffer, “Andy’s got 14 or 15 people plus a whole bevy of assistants working on these three unsigned editorials every day. They’re completely reflexively liberal, utterly predictable, usually poorly written and totally ineffectual. I mean, just try and remember the last time that anybody was talking about one of those editorials. You know, I can think of one time recently, which is with the [Edward] Snowden stuff, but mostly nobody pays attention, and millions of dollars is being spent on that stuff.”

First of all, the Times still runs unsigned editorials? And it takes more than ten people to write them? Sweet mother of mercy, that's insane. I thought the only outlet these days with an actual "from the editors" editorial was the Onion, and even they think it's an old joke. You might as well include an AOL keyword at the end.

And yet it's worth reading on, once you pick your jaw up off the floor, to see the weird, awkward cronyism that shapes not just the visible portions of the op-ed page, but its entire structure. Why is the editorial section so bad? In part, apparently, because it's ruled by the entitled, petty son of a former managing editor, who reports directly to the paper's publisher (and not the executive editor) because of a family debt. Could anything be more appropriate? As The Baffler notes:

What a perfect way to boil tapioca. Dynasties kill flavor. A page edited by a son because dad was kind of a big deal is a page edited with an eye to status and credentials. Hey, Friedman must be good—he won some Pulitzers. That’s a prize, you see, that Pulitzer thing. Big, big prize. We put it up on the wall. (Pause) Anyway, ready for a cocktail?

The Observer argues that the complaints from the newsroom at large are professional, not budgetary: reporters are angry about shoddy work being published under the same masthead as their stories. But it's hard to imagine that money doesn't enter into it at all. A staff of ten or more people, plus hundreds of thousands of dollars for each of the featured op-ed writers, would translate into serious money for journalism. It would hire a lot of staff, pay for a lot of equipment. You could use it to give interns a living wage, or institute a program for boosting minority participation in media. Arguably, you could put it into a sack and sink it into the Hudson, and still end up ahead of what it's currently funding.

Of course, most papers don't maintain a costly op-ed section, so it's not like this is an industry-wide problem. I don't know that I would even care, normally, beyond the sense of schadenfreude, except for the fact that it's such a perfect little chunk of journalistic mismanagement: when finances get strained, the cuts don't get made from politically-connected fiefdoms, or from upper-level salaries. They get taken from the one place that should be protected, which is the newsroom itself.

Call me an anarchist, but the most depressing part of the whole debate is that it's focused on how big the op-ed budget should be, or how it should be run, instead of whether it should exist at all. What's the point of keeping it around? Or, at the very least, why populate it with the same bland, predictable voices every day? One of the things I respect about the New York Times is the paper's forays into bucking conventional wisdom, from the porous subscription paywall to its legitimately innovative interactive storytelling. There's a lot of romance and tradition in the newsroom, but the op-ed page shouldn't be a part of it. I say burn it to the ground, and let's see what we can grow on the ashes.

February 6, 2014

Filed under: tech»web

Chromecastic

After a busy couple of weeks, Seattle went and won the Super Bowl, leading to the world's most polite celebration in our neighborhood.

There was another prize for the weekend: a friend of ours gifted us a Chromecast, which will be much appreciated since there's currently no way to watch HBO on the PS4. On Monday, Google released the public SDK for the platform, so I decided to poke around a bit.

Chromecast has a decidedly odd way of loading content. The device itself is just a thin shell around a Chrome window, and it loads web pages like any other browser. But there's no keyboard of any kind, so how does it know which page to load? The answer is that each "app" has an ID listed with Google, corresponding to a set of URLs that the developer provides. When a mobile app or a computer running Chrome triggers the Chromecast, it sends the app ID, which the device then sends to Google and gets a URL in return (or, if the app hasn't been listed, it does nothing). From that point on, you can send messages to the page via Google's cloud, and your page can do whatever you want it to do. Getting your pages linked to an application ID on the Chromecast lookup servers costs $5.
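
As I understand the receiver side, the registered URL points at an ordinary page of HTML and JavaScript that listens for those messages. Here's a rough sketch based on my reading of the v2 receiver API (the namespace and markup are hypothetical, and the library URL may vary):

    <script src="https://www.gstatic.com/cast/sdk/libs/receiver/2.0.0/cast_receiver.js"></script>
    <div id="display"></div>
    <script>
      var manager = cast.receiver.CastReceiverManager.getInstance();
      // messages from phones and laptops arrive on a namespaced bus
      var bus = manager.getCastMessageBus("urn:x-cast:com.example.demo",
          cast.receiver.CastMessageBus.MessageType.STRING);
      bus.onMessage = function(event) {
        // event.data is whatever string the sender application sent
        document.getElementById("display").textContent = event.data;
      };
      manager.start();
    </script>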

Five dollars is a low price, but it's more than I really want to pay for what amounts to a glorified DNS entry. I'm a little dismayed by the restrictions on the open web — I'd like the option to just send a URL directly. I'm also holding out for a pure JavaScript API, instead of piggybacking on the Chrome extension. So I probably won't be writing any Chromecast apps any time soon. But it's certainly not for lack of ideas. The interaction model that Chromecast uses — where the screen is just a dumb display, but it can receive commands from other web-accessible devices — is strikingly similar to Microsoft's SmartGlass model. And where Microsoft seems to see it as a way to create companion apps for the Xbox, I think it's interesting to think about how this "distributed I/O" model could be used for standalone applications.

  • The Chromecast isn't going to rival any consoles, but it wouldn't have to be for a lot of group gaming experiences. Just having a screen that could be used as a scoreboard, or a trivia question where phones are used as buzzers, would be a cool usage that doesn't require precise controls or rich graphics. Turn-based games could easily use the screen as a board overview, while letting people zoom in and move their pieces from their local touchscreen. It also provides an interesting split between public and private information for players that many video games (excepting the Wii U and Dreamcast) couldn't duplicate.
  • I love maps. I think they're the real face of augmented reality, as any regular traveler can attest these days. But they don't have to be mobile. A Chromecast could easily serve as a map up on your wall, updated with whatever information you find interesting. Maybe that's as simple as the weather, but imagine being able to tag it with RFID information or last-known positions for people in your household. Systems like Google Now, which learn from your schedule, could even post notifications for the buses that are coming or traffic problems that you're likely to face.
  • Along those same lines, a simple dashboard could be helpful for businesses and individuals. Being able to throw metrics up on the wall with a web browser is not a new thing, but tying it to a smart, feed-aware service would open up all kinds of new tricks, like being able to leave yourself notes via a hashtag on social networks. There's not really any input needed: it's just a passive display of whatever you want to keep yourself caught up on, in an easy at-a-glance format.
  • Finally, it's probably just all the public speaking I've been doing lately, but it's tempting to think that a presentation app for Chromecast would be super-helpful for speakers. A lot of times, when I go to a meetup or a new classroom, it's hard to predict what kind of video hookups the projector will have, assuming that they even have a projector. But many times, there will be a big-screen LCD TV, with a handy HDMI input. Being able to carry a Chromecast with me to make my presentations, especially if the speaker notes can be viewed separately, would be awesome.
When we talk about the web being device-agnostic, the Chromecast is a perfect example of what we're talking about. It's radically different from other web clients: low DPI on a big screen, no local input, and unpredictable performance. But that's the power of the platform — as a toolkit, its reach is unparalleled. And the restrictions prove to be exciting inspiration for new uses, just as touchscreens came with their own unique challenges and advantages. I don't know if Chromecast is going to be successful, but the hacks for it are going to be really interesting.

January 23, 2014

Filed under: random»personal

This Week

...is ridiculously busy. Class at SCCC has ramped up, I've been prepping for the University of Washington workshop, and of course I've got my everyday work at ArenaNet as well. In lieu of a more substantial post, here are some quick notes about what's on my plate.

  • The homepage for the news apps workshop (now formally COM499B at UW) is located here. It's not terribly comprehensive yet, but my goal is to update it with my presentations, things that I mention during class, and in-class work as I go. In other words, it's not a textbook, it's a record that students can refer back to later. I'd welcome suggestions for other additions to it.
  • While uploading the COM499 page, I accidentally overwrote the root index.html on my portfolio, which led to A) a panicky few minutes finding the Google cache of my page and recovering it, and B) the realization that I had no way to recreate my portfolio page, now that it's no longer generated via Blosxom. As Mike Bostock wrote in his ode to Make, automating any production process is important because it formalizes things and makes them reproducible. After an hour's work or so, ThomasWilburn.net is still a static page, but it's built from a template and a Node-based recipe (see the sketch after this list), so I'll never lose it entirely again. Once I've cleaned things up a little, I may open-source the tool for other ex-Blosxom users.
  • Caret will break 30,000 users today or tomorrow. This is, to be honest, mind-boggling to me. The last time I wrote about it here, it had 1,500 users. Since that time, I've added a ton of new functionality, accepted contributions from other developers who want to help add features, found out that members of the Chrome team are actively using it (although, I assume, not to write Chrome itself), and filed several bugs against the chrome.* APIs. Caret's my daily editor for class work and personal projects, and I'm incredibly pleased by how well it compares against Sublime and other native editors.
  • Dates have been locked in for Urban Artistry's Soul Society festival in DC this year: April 14-20, with the main event toward the end of the week as usual. We'll be adding specific event schedules in the next couple days, so keep an eye out.
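
For the curious, the Node recipe mentioned above doesn't need to be fancy. A minimal sketch, with hypothetical file names:

    // build.js: regenerate the static portfolio page from a template
    var fs = require("fs");

    var template = fs.readFileSync("template.html", "utf8");
    var content = fs.readFileSync("content.html", "utf8");

    // drop the content into a placeholder and write out the final page
    fs.writeFileSync("index.html", template.replace("{{content}}", content));
    console.log("index.html rebuilt");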

January 16, 2014

Filed under: gaming»software

Field of Streams

At the Consumer Electronics Show, Sony showed off the fruits of their acquisition of Gaikai, a company that streams video games over the Internet. This is something with which I have a little experience: in 2012, I worked for Big Fish on the now-discontinued Big Fish Unlimited, which did the same thing but for casual games. I was pretty sure that it was doomed then, and I'm pretty sure that similar platforms — including Sony's Playstation Now and OnLive — are doomed now, and for the foreseeable future. The reason is simple: streaming just doesn't scale.

Let's say you're playing a game by streaming it to yourself from another computer you own, as in Nvidia's Shield or Valve's SteamOS. To do this you need two boxes: one to actually run the game, and one to act as a thin client, providing the display and input support. There are lots of things that can cause problems here — network connectivity, slow hardware, latency — but at the very least you're always going to have enough hardware to run the game, because you own both ends of it. Streaming scales in a linear fashion.

Now pretend you're doing the same thing, but instead of running the host machine yourself, it lives in a remote datacenter. For each person who's playing, the streaming service needs another computer to run the game — the scaling is still linear. You can't cache a game on a cheap edge node like you can a regular file, because it's an interactive program. And there's no benefit to running all those games simultaneously, the way that Google can leverage millions of GMail customers to lower e-mail transmission costs and make spam processing cheaper. No, you're stuck with a simple equation: n players = n servers. And those servers are not cheap: they need hefty graphics cards, local storage, cooling systems, sound cards, etc.

And it gets worse, because those players are not going to obligingly spread their playtime around the clock to keep the load constant. No, they're going to pile in every evening in much higher numbers — much higher — than at any other time of day. League of Legends, a single (albeit very popular) game, has had more than 5 million concurrent players. During peak hours, streaming providers will struggle to run enough host boxes. During off hours, all that expensive hardware just sits idle, completely unused.
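
To put rough numbers on it (these figures are invented for illustration, not drawn from any actual provider), the peak-versus-average problem looks like this:

    // back-of-the-envelope provisioning math, with made-up numbers
    var peakConcurrent = 1000000;  // players during the evening rush
    var avgConcurrent = 250000;    // average concurrency over a full day
    var playersPerServer = 1;      // one host per player for AAA streaming

    var serversNeeded = peakConcurrent / playersPerServer;
    var utilization = avgConcurrent / serversNeeded;

    // 1000000 servers, 25% average use: most of the farm idles
    console.log(serversNeeded + " servers, " + (utilization * 100) + "% average use");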

At first, these problems seem solvable. When you don't have a lot of customers, it's not that bad to add new hosts to compensate for growth. Those earlier players may be more forgiving of wait times, attributing them to growing pains. But consider the endgame: if something like Playstation Now achieves real, widespread success (despite all the other network latency and quality of service issues these services always face), Sony's ultimate scenario is having literally millions of rack-mounted PS4s in datacenters around the country, many of them running at peak capacity for hours on end. That's more servers than Google, Microsoft, and Facebook put together.

At Big Fish, the business argument was that casual games could be run with multiple applications to a machine, so the scaling pressure was lower. But it's still linear: you've lowered the total number of machines you might need in the long run, but there's still a direct relationship between that number and your number of players. The only way to scale online gaming gracefully is to either find a way that games can share state (i.e., MMOs), or offload more of it to the client. As a front-end JavaScript specialist, I always thought Big Fish would have better luck porting its games to the browser instead of streaming them to a Java applet.

But there's only so much you can move to the client when your selling point is "next-gen games without next-gen hardware." In the case of Sony and OnLive, no amount of browser wizardry or Playstation branding is going to solve that fundamental scaling problem. It may be workable for instant demos, or beta previews. But for mainstream gaming, without a miraculous breakthrough in the way games are built and executed, the math just isn't there.

January 10, 2014

Filed under: journalism»new_media

App-y New Year

At the end of January, I'll be teaching a workshop at the University of Washington on "news apps," thanks to an offer from the outgoing news app editor at the Seattle Times. It's a great opportunity, and a chance to revisit my more editorial skills. From the description:

This bootcamp will introduce students to the basic components of creating news applications, which are data-powered digital stories tied together through design, programming and journalism. We’ll walk through all the components of creating a news application, look at industry examples of what works and what doesn’t, and learn the basic coding skills required to build a news app.

Sounds cool, but it's still a wide-open field — "data-powered digital stories" covers a huge range of approaches. What do you even teach, and how do you do it in two 4-hour workshops?

It turns out that for almost any definition of "news app," there's an exception. NPR's presidential election board is a data-powered news app, but it's not interactive beyond an auto-update. Snow Fall is certainly a news app, but it's hard to call it "data-powered." How can we craft a category that includes these, but also includes traditional, data-oriented interactives like The Atlantic's Netflix Genre Generator and the Seattle Times mayoral race comparison? More importantly, how do we get young journalists to be able to think both expansively and productively about telling stories online?

That said, I think there is, actually, a unifying principle for news apps. In fact, I think it cuts to the heart of what draws me to web journalism, and the web in general. News apps are journalistic stories told via hypermedia — or, to put it simply, they have links.

A link seems like a small thing after years on the web, so it's good to revisit just how fundamentally groundbreaking they are. Links can support or subvert their anchor, creating new rhetorical devices of their own. At the most basic level, they contextualize a story. More abstractly, they create non-linearity: users explore a news app at their own pace and with their own priorities, rather than the direct stream of narrative from a text story.

A link is a simple starting place. But it starts us down a path of thinking about more complicated applications and usage. I'm fond of saying that an interactive visualization is constructed in many layers, with users peeling open the onion as far as they may want. If we're thinking in terms of other hypertext documents (a.k.a., the TV Tropes Rabbit Hole) from the start, we're already prepared when readers use similar interaction patterns to browse data-based interactives — either by shallowly skipping around, or diving in depth for a specific feature.

By reconceptualizing news apps as hypermedia, rather than as a specific technology or group of technologies such as mapping or graphing, it becomes much easier to introduce students to web storytelling — particularly since I won't have time to teach them much beyond some basic HTML and CSS (in the first workshop) and a little scripting (in the second).

It also leaves them plenty of room to think creatively when presenting stories. I'd love for budding news app developers to be as interested in wikis and Twine as they are in D3 and PostGIS. Most importantly, I'd love for an appreciation of hypertext to leak into their writing in general, if only to reduce the number of print die-hards in newsrooms around the country. You don't have to end up a programmer to create new, interesting journalism that's really native to the web.
