September 19, 2013

Filed under: gaming»software»aquaria

Cave Stories

There's a fine line between nonchalance and disregard for the player, and I'm not sure that Aquaria doesn't cross over it. It's one of the best games on the Shield right now, so I've been playing a lot of it — or, rather, alternating between playing it and looking up clues online. In a way, I respect the sheer amount of content the developers have put together, and the confidence they have in players to discover it, but I could use a little more signposting and, to be honest, a bit more challenge.

For example, the middle section of Aquaria is mostly non-linear: certain areas are locked away until you've beaten a few bosses and taken their abilities, but the order is still mostly flexible. Although it sounds great in theory, in practice this just means you're repeatedly lost and without a real goal. Having enormous maps just exacerbates the problem, because it means you'll wander one way across the world only to find out that you're not quite ready yet and need to hunt down another boss somewhere — probably all the way at the other end.

I'm goal-oriented in games, so this kind of ambiguity has always bugged me. The Castlevania titles post-Symphony of the Night suffer from this to some extent, but they usually offer something to do during the trip that makes it feel productive--levelling up your character, or doling out random weapon drops. Aquaria has a limited cooking system, but it's only really necessary in boss fights and it rarely does anything besides offer healing and specific boosts, so it's not very compelling.

According to an interview with the developers, Aquaria was originally controlled with keyboard and mouse, and they eventually moved it to mouse-only (which came in handy when it was ported to touch devices). Every now and then the original design peeks through, like when certain enemies fire projectiles in a bullet-hell shooter pattern. The Shield's twin-stick controls make this really easy (and fun) to dodge, but since the game was intended for touch, these enemies are relatively rare, and the lengthy travel through the game tends toward the monotonous.

Look, I get that we have entered a brave new world of touch-based control schemes. For the most part, I am in favor of that — I'm always happy to see innovation and experimentation. But playing Aquaria on the Shield makes it clear that there's a lot of tension between physical and touch controls, and it's easy to lose something in the transition from the former to the latter. Aquaria designed around a gamepad (and an un-obstructed screen) could be a much more interesting game. Yes, it would be harder and less accessible — but the existing game leaves us with "easy and tedious," which is arguably a worse crime.

I'm starting to think that in our rush to embrace casual, touch experiences (in no small part because of the rise of touch-only devices), we may be making assumptions about the audience that aren't true — such as the idea that it's the buttons themselves that were scary — and it's not always a net positive for game design. At its heart, Aquaria is a "core" game, not a casual game: it's just too big, and the bosses are too rough, for this to be in the same genre as Angry Birds or whatever. Compare this to Cave Story (its obvious inspiration), a game that was free to cram a ridiculous amount of non-linear content into its setting because its traditional platforming gameplay was so solid.

There is a disturbing tendency for many people to insist that there must be a winner and a loser in any choice. In the last two weeks, every tech site on the planet decided that the loser was Nintendo: why don't they just close up shop and make iPhone games? I think it's a silly idea — anyone measuring Nintendo's success now against their performance with the Wii is grading them on the wrong end of a ridiculous curve — and Aquaria only makes me feel stronger about that. For all that smartphone gaming brings us, there are some experiences that are just going to be better with buttons and real gaming hardware. As long as that's the case, consoles are in no danger of extinction.

September 12, 2013

Filed under: tech»web

Beta Caret-ene

At this point, Caret has been in the Chrome Web Store for about a week. I think that's long enough to say that the store is a pretty miserable experience for developers.

When I first uploaded it last week, Caret had these terrible promo tiles that I threw together, mostly involving a big pile of carrots (ba dum bum). At some point, I made some slightly less terrible promo tiles, stripping them down to just bold colors and typography. Something set off the store's automated review process, and my new images got stuck in review for four days — during which time Caret languished at the very bottom of the store page and nobody saw it.

On Tuesday, I uploaded the first version of Caret that includes the go-to/command palette. That release is kind of a big deal--the palette is one of the things that people really love about Sublime, and I definitely wanted it in my editor. For some reason, this has triggered another automatic review, this one applied to the entire application. I can unpublish Caret, but I can't edit anything — or upload new versions — until someone checks off a box and approves my changes. No information has been provided on why it was flagged, or what I can do to prevent these delays in the future.

Even at the best of times, the store takes roughly 30 minutes to publish a new version. I'm used to pushing out changes continuously on the web, so slow updates drive me crazy. Between this and the approval hijinks, it feels like I'm developing for iOS, but without the sense of baseless moral superiority. What makes it really frustrating is the fact that the Play store for Android has none of these problems, so I know that they can be solved. There's just no indication that the Chrome team cares.

I was planning on publishing a separate, Google-free version of the app anyway, so I worked out how to deploy a standalone .crx file. The installation experience for these isn't great — the file has to be dragged onto the Chrome extensions list, and can't just be installed from the link — and it introduces another fun twist: even though Google has promised the capability for years, there's no way to download the private key used to sign packages in the Chrome store, meaning that the store and standalone builds are treated as completely different applications when installed.

Fair enough: I'll just make the standalone version the "edge" release with a different icon, and let the web store lag behind a little bit. As a last twist of the knife, generating a .crx package as part of a process that A) won't include my entire Git history, and B) will work reliably across platforms, is a nightmare. Granted, this is partly due to issues with Grunt, but Chrome's not helping matters with its wacky packaging system.

All drama aside, everything's now set up in a way that, if not efficient, is at least not actively harmful. The new Caret home page is here, including a link to the preview channel file (currently 3 releases ahead of the store). As soon as Google decides I'm not a menace to society, I'll make it the default website for the store entry as well.

The problems with Google's web store bug me, not so much because they're annoying in and of themselves, but because they feel like they miss the point of what packaged web apps should be. Installing Caret is fast, secure, and easy to update, just as regular web apps are. Developing Caret, likewise, is exactly as easy and simple as writing a web app (easier, actually: I abuse flexbox like crazy for layout, because I know my users have a modern browser). Introducing this opaque, delay-ridden publication step in between development and installation just seems perverse. It won't stop people from using the store (if nothing else, external installation is too much of a pain not to go through the official channel), but it's certainly going to keep me from enjoying it.

September 5, 2013

Filed under: tech»web

Caret

As I mentioned in my Chromebook notes, one of the weak points for using Chrome OS as a developer is the total lack of a good graphical editor. You can install Crouton, which lets you run Vim from the command line or even run a full graphical stack. But there aren't very many good pure text editors that run within Chrome OS proper — most of the ones that do exist are tied to hosted services like Cloud9 or Nitrous. If you just want to write local files without a lot of hassle, you're out of luck.

I don't particularly want to waste what little RAM the Chromebook has running a whole desktop environment just for a notepad, and I'm increasingly convinced that Vim is a practical joke perpetuated by sadists. So I built the Chrome OS editor I wanted to have as a packaged app (just in time!), and posted it up in the store this weekend. It's 100% open source, of course, and contributions are welcome.

Caret is a shell around the Ace code editor, which also powers the editor for Cloud9. I'm extremely impressed with Ace: it's a slick package that provides a lot of must-have features, like syntax highlighting, multiple cursors, and search/replace, while still maintaining typing responsiveness. On top of that base, Caret adds support for tabbed editing, local file support, cloud settings storage, and Sublime-compatible keystrokes.

In fact, Sublime has served as a major inspiration during the development of Caret. In part, this is just because it's the standard for web developers that must be met, but also because it got a lot of things right in very under-appreciated ways. For example, instead of having a settings dialog that adds development complexity, all of Sublime's settings are stored in JSON files and edited through the same window as any other text files — the average Sublime user probably finds this as natural as a graphical interface (if not more so). Caret uses the same concept for its settings, although it saves the files to Chrome's sync service, so all your computers can share your preferences automatically.
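That settings-files-as-documents concept is simple enough to sketch in a few lines. To be clear, this is an illustration, not Caret's actual code--the function name and defaults here are invented, and the real thing persists to Chrome's sync storage rather than a plain object:

```javascript
// Sublime-style settings: defaults shipped with the app, overridden by
// whatever the user typed into their settings file. The user's settings
// arrive as raw JSON text, edited in a tab like any other document.
var defaults = {
  fontSize: 12,
  wordWrap: false,
  theme: "textmate"
};

function applyUserSettings(json) {
  var user;
  try {
    user = JSON.parse(json);
  } catch (e) {
    // A typo in the settings file shouldn't break the editor --
    // fall back to the defaults until the JSON parses again.
    return defaults;
  }
  // Shallow merge: user values win, missing keys keep their defaults.
  var merged = {};
  for (var key in defaults) merged[key] = defaults[key];
  for (var key in user) merged[key] = user[key];
  return merged;
}

var settings = applyUserSettings('{"fontSize": 14, "wordWrap": true}');
```

The payoff is that "preferences UI" becomes zero extra code: the editor you already built is the settings dialog.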

The current release of Caret, 0.0.10, is usable enough that I think you could do serious editing with it — I've certainly done professional work with less effective tools, including the initial development on Caret itself — but I'm on a roll adding features and expect to have a lot of improvements made by the end of next week. My first priorities are getting the keybindings into full working condition and adding a command palette, but from that point on it's mostly just polish, bugfixes, and investigating how to get plugin support past Chrome's content security policy. Once I'm at 1.0, I'll also be posting a standalone CRX package that you can use to install Caret without needing a Google account (it'll even auto-update for you).

Working with Chrome's new packaged app support has been rough at times: there are still a lot of missing capabilities, and calling the documentation "patchy" is an insult to quilts everywhere. But I am impressed with what packaged apps can do, not the least of which is the ease of installation: if you have Chrome, you can now pretty much instantly have a professional-grade text editor available, no matter your operating system of choice. This has always been a strong point for web apps anyway, but Chrome apps combine that with the kinds of features that have typically been reserved for native programs: local file access, real network sockets, or hardware device access. There's a lot of potential there.

If you'd like to help, even something as simple as giving Caret a chance and commenting with your impressions would be great. Filing bugs would be even better. Even if you're not a programmer, having a solid document editor may be something you'd find handy, and together we can make that happen.

August 30, 2013

Filed under: gaming»hardware»android

Shield's Up

Let's say that you're making a new game console, and you're not one of the big three (Sony, Microsoft, and Nintendo). You can't afford to take time for developers to get up to speed, because you're already at a mindshare deficit. So you pick a commodity middleware that runs on a lot of hardware, preferably one that already has lots of software and a decent SDK. These days that means using Android, which is why most of the new microconsoles (Ouya, Gamestick, Mojo) are just running re-skinned versions of Android 4.x.

Nvidia's Shield is no different in terms of the underlying OS, but it does change the form factor compared to the other Android microconsoles. Instead of a set-top box or HDMI stick, it effectively crams the company's ridiculously powerful Tegra 4 chipset into an Xbox controller, and then bolts on an LCD screen. I like Android, I like buttons, and I spend a lot of time bored on a bus during my commute, so I bought one late last week.

It's a bulky chunk of plastic, for sure. I don't particularly want to try throwing both it and the Chromebook into the same small Timbuk2 bag. But in the hand it feels almost exactly like an Xbox 360 controller — meaning it's very comfortable, and not at all cumbersome. It's definitely the best package I've ever used for emulators: playing GBA games feels pretty much like the real thing, except with a much larger, prettier screen. I'd have bought it just for emulation, which is well-supported on Android these days.

Actual Android games are kind of a mixed bag. I own a fair number of them, between the occasional Play Store purchase and all the Humble Bundles, and most of them aren't designed for gamepad controls. The Shield does have a touchscreen (as well as the ability to use the right thumbstick as a mouse cursor), but the way it's set up doesn't promote touch-only gaming: there's no good way to hold the screen while the body of the controller sits in the way, and portrait mode is even more awkward.

But if the developer has added gamepad support, the experience is really, really good. I've been playing Asphalt 8, Aquaria, and No Gravity lately, and feeling pretty satisfied. For a lot of games, particularly traditional genres like racing or shooters that require multiple simultaneous inputs, you just can't beat having joysticks and physical buttons. It also helps showcase the kinds of graphics that phones/tablets can pump out if your thumbs aren't always blocking the screen.

So the overall software situation looks a little lopsided: lots of great emulators, but only a few native titles that really take advantage of the hardware. I'm okay with this, and I actually expect it to get better. Since almost all the new microconsoles are Android-based, and almost all of them use gamepads (for which there's a standard API), it's only going to be natural for developers to add controller support to their games. I think the real question is going to be whether Android (or any mobile OS) can support the kinds of lengthy, high-quality titles that have been the standard on traditional, $40/game consoles.

If Android manages to become a home for decent "core" games, it'll probably be due to what Chris Pruett, a game developer and former Android team member, calls out in this interview: the implicit creation of a "standardized" console platform. Instead of developers needing to learn completely new systems with every console generation, they can write for a PC-like operating system across many devices (cue "fragmentation" panics). Systems like the Shield, which push the envelope for portable graphics, are going to play a serious role in that transition, whether or not the device is successful in and of itself.

The other interesting question, if microconsoles take off, will be whether there's a driver for innovation there. In the full-sized console space, it's been relatively easy for the big three companies to throw out crazy ideas from time to time, ranging from Kinect and EyeToy to pretty much everything Nintendo's done for the last decade. PCs have been much slower to change, a fact that has frustrated some designers. Are microconsoles more like desktop computers, in that they have a standard OS and commodity hardware? Or are they more like regular consoles, since they're cheap enough to make crazy gambles affordable?

The Shield, perhaps unsurprisingly from Nvidia, points to the former. It's an unabashedly traditional console experience, from the emphasis on graphics to the eight-button controller. It's good at playing the kind of games that you'd find on a set-top box (or indeed, emulating those boxes themselves), but it's probably not the next Wii: you're buying iteration, not innovation--technologically, at least. It just so happens that after a couple of years of trying to play games with only a touchscreen, sometimes that's exactly what I want.

August 22, 2013

Filed under: politics»national»agencies»nsa

The Revolution Will Not Be Encrypted

This week, Internet law commentary site Groklaw shut down, citing the lack of privacy in a world where the government is (maybe, possibly) reading all your e-mail. On the one hand, you can argue that this is evidence of dangerous chilling effects from surveillance on the fourth estate. On the other hand, shutting down a public blog (one that's focused on publicly-available legal filings) because the NSA can read your correspondence seems... ill-considered, but sadly not atypical.

In the initial wake of the NSA wiretapping stories, David Simon, author of The Wire, wrote a series of essays saying, effectively, "Welcome to the security state, white people."

Those arguing about scope are saying, in a backhanded way, that thousands of Baltimoreans, predominantly black, can have their data collected for weeks or months on end because they happened to use a string of North Avenue payphones, because they have the geographic misfortune to live where they do. And it’s the same thing when it’s tens of thousands of Baltimoreans, predominantly black, using a westside cell tower and having their phone data captured. That’s cool, too. That’s law and order, and constitutionally sound law and order, at that. But wait: Now, for the sake of another common societal goal — in this case, counter-terror operations — when it’s time for all Americans to ante in with the same, exact legal intrusion, the white folks, the middle-class, the affluent go righteously, batshit, Patrick-Henry quoting crazy? Really?

Whether you find these situations comparable will probably indicate how credible you find Simon's argument in general. It's important to note that he's not trying to say we should just roll over for the NSA. The question he raises is one of social justice: when we talk about fixing these problems, are we worried about strengthening protections for everyone? Or just in ways that will preserve privacy for people who can afford it? What Simon doesn't say is that technological solutions are mutually exclusive with social justice — without fail, they always fall into the latter category.

By this point, there's been a lot of ink spilled on how to "protect yourself" from the NSA. People write long how-to guides on setting up a secure mail server (like one hilariously long "two hour" guide) or using PGP encryption. None of this is manageable by normal human beings: speaking as someone who has actually set up a private, unencrypted mail server, it's completely out of reach for all but the most devoted shut-ins. You could not pay me enough to edit my Postfix config again, much less try to add encryption to it.

Okay, so the open-source situation is rough at best. That scratching sound you hear is a million start-ups raiding their trust funds to create the new Shiny, User-Friendly Crypto Solution. None of them will answer the following questions:

  • Does this make Facebook/Twitter/Social Network X secure? Because those aren't going away.
  • How much does it cost? If it's not free, it's already only for the privileged, and an ad-supported privacy program is a contradiction in terms.
  • How do you know it's safe — as in, how do you really, really know? Even if I trust the app, can I trust the random number generator that powers its cryptography? Can I trust the OS that provides that generator? Can I trust the chip running the OS (or the baseband chip running the radios)?
But see what happened there? We got distracted by the technical issues again, forgetting that there are no technological solutions to political problems.

I am increasingly uncomfortable with all of this technocratic rhetoric — "the solution to our political problem is more software" — because it sounds an awful lot like "the solution to a dangerous government is more guns (and particularly more guns for white people)" from the NRA. Both arguments are misguided, but more importantly they both invoke a siege mentality. They assume that nothing can be done as a community, or even at all. Instead, their response is to hole up in a bunker and look out for number one.

Personally, I think the great thing about our system of government is that it is designed to be rebuilt on a regular basis. There is no law in the USA that can't be changed. Everything up to and including the Constitution is under debate, if you can convince enough people. Granted, activism requires participation and cooperation, and both of those (especially compared to buying a firearm or coding a protocol) are hard. But they are robust solutions that address the wider problem for everyone, instead of merely fulfilling someone's resistance fighter fantasy.

It's easier to look for loopholes and clever fixes. It's easier to write manifestos for (just to pick on a single random example that popped up while I was writing this) "a better web" through framework improvements or decentralized software. But neither of those actually changes anything. At best, they're workarounds. At worst, they're snake oil. Take whatever actions you want online--write new code, sign petitions, or unpublish your blog. Until that energy is matched offline, with old-fashioned, inefficient politics, you're just wasting your time.

August 15, 2013

Filed under: movies»television

Summer Streaming

It's been a beautiful summer, even by Seattle standards, and Belle and I have gotten at least some good out of it. We've been camping, traveling, and lately we even broke out the grill. Take that, state-wide burn ban!

Indoors, of course, a lot of the broadcast TV we watch takes the summer off. We've been picking up a few shows via Netflix and Amazon instead. I'm not quite ready to write off our TiVo yet, but I'm impressed with the choices we've had.

Orange is the New Black

Surprisingly good. Shockingly good, even. There's none of the lazy writing and faux-transgressiveness that marked Jenji Kohan's previous show, Weeds. It's got a rich cast of characters without feeling contrived, it's funny without going broad, and it's comfortable mining a deep vein of dark humor from its setting. There have been a few comparisons between this and The Wire. Orange is the New Black isn't quite that good--what is?--but it's not an inapt pairing. Like its predecessor, Orange features a diverse cast filled with actors of color. Both shows also lack marquee names (but feature stellar performances from little-known actors). And of course, the subject material in both cases is fascinating in and of itself.

Beyond the confines of the show, it's interesting to see Netflix so clearly taking a page from HBO's book. Lots of networks have halo shows--it's only thanks to Mad Men that I can differentiate between AMC and A&E--but it was really HBO that realized shows were "stickier" than movies. And unlike HBO, Netflix doesn't force you to haggle with your local cable overlords. If they can put out more material at this quality level, their constant battles over licensing big film titles for streaming look a lot less troubling. I could definitely see keeping a Netflix subscription just for a couple of shows like this.

Alphas

A show that never really found an audience on the SciFi channel, Alphas folded after a couple of seasons, and Amazon snagged it as one of their early exclusives. It's not groundbreaking television: the special effects are decidedly bargain-basement, the writers can't decide if they want to steal from Heroes or X-Men, and the direction ranges from competent to not terrible. It's a good summer show, though, with a more thoughtful core than either of its inspirations would lead you to believe.

Alphas has three things going for it. The first is David Strathairn, an actor who is way too good to be doing a superhero show on basic cable. The second is a genuine rapport between the actors, who really sell the workplace chemistry--especially between Gary, the autistic electro-telepath, and Bill, the temperamental bruiser. Finally, Alphas does manage a single clever twist on its formula: the idea that its superpowers are basically neuroses, for which most of the cast are in therapy (if nothing else, this is a wry joke at the expense of the Xavier Academy for Gifted Youth). I'm not sure it ever really embraces that fully--there hasn't been a single hero-on-a-couch scene that I remember--but it does make me feel better about my own psychological tics.

The Fall

The Fall doesn't try to hide its villain: you'll know whodunnit by the end of the first episode. Instead, it serves as a kind of character study for its chilly detective, Stella Gibson, played by Gillian Anderson. In many ways it reminds me of the BBC's prototypical female detective drama, Prime Suspect: Gibson spends as much time fighting a sexist bureaucracy as she does hunting the actual murderer.

When it's good, The Fall is very good, but it takes its time getting there. It's odd that, for a season that's only six episodes long, so much of it feels like padding. But I think part of that comes down to the delivery method. Streaming (and DVD, as well) makes it easy to burn through a show in a matter of hours. That's great for hook-driven puzzlers like Fringe or monster-of-the-week shows like Doctor Who, but it might not work so well for atmosphere-driven dramas.

It makes me wonder if we'll see a change in how people write narratives as streaming TV-on-demand becomes more common. Some people consider the non-Netflix Arrested Development to be designed for obsessive DVD rewatching. Is streaming different? More social? More portable?

August 1, 2013

Filed under: tech»web

Learning WebGL

Once Microsoft announced that IE 11 would offer WebGL, that was pretty much the last straw: Apple may drag their feet at enabling it in Safari, but everyone else seems to have decided that it's secure enough and capable enough for production. I still think it's a little nutty, but I don't really have an excuse to avoid it anymore. So when an EaselJS-based visualization at work started having performance issues, I wrote a WebGL shim as a learning project.

I stand by my earlier impressions of the WebGL API--it's clumsy and ill-suited to JavaScript--but the performance is undeniably there. Rendering through EaselGL is often orders of magnitude faster than vanilla Easel, particularly when it comes to mouseover responsiveness. It's worth struggling through the learning process if you find that canvas is becoming a bottleneck for your application. I think a more interesting question is why so many WebGL tutorials are awful. And they are awful:

  • They hide the WebGL boilerplate behind library code--say, a function call that loads shaders or does matrix math--and force readers to dig through the source to figure out what mat4mult() or loadShader() does.
  • Even when they leave the code all in one place, they treat the often-confusing GL code as boilerplate, and don't explain it. Why do I need to call bindBuffer()? What do the six (!) parameters of vertexAttribPointer() actually control? What's the deal with all the constants, like gl.STATIC_DRAW?
  • Since WebGL itself is not a 3D API (it's closer to a low-level rasterizer), 3D tutorials require a lot of matrix math. Most people teach this badly, when they don't double down on the mistakes above and just hide it behind calls to library functions. As a result, it's easy to get lost, and hard to reach the "putting shapes on the screen" feedback loop that keeps students engaged.
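To show what the second complaint looks like in practice, here's the kind of annotation I wish those tutorials included. The gl object below is a hand-rolled stub (using the real WebGL constant values) standing in for an actual context from canvas.getContext("webgl"), so the calls can run anywhere; the parameter values are typical ones for flat 2D geometry, not taken from any particular tutorial:

```javascript
// Stub recording calls in place of a real WebGL context, purely so the
// annotated calls below can execute outside a browser.
var calls = [];
var gl = {
  ARRAY_BUFFER: 34962, // 0x8892
  FLOAT: 5126,         // 0x1406
  STATIC_DRAW: 35044,  // 0x88E4
  bindBuffer: function(target, buffer) {
    // bindBuffer makes `buffer` the active buffer for `target`, and
    // later buffer operations implicitly act on whatever is bound.
    // WebGL is a state machine: almost nothing takes a buffer argument.
    calls.push(["bindBuffer", target]);
  },
  vertexAttribPointer: function(index, size, type, normalized, stride, offset) {
    // index:      which shader attribute slot to feed (getAttribLocation)
    // size:       components per vertex -- 2 for flat x/y coordinates
    // type:       how to read the raw bytes (gl.FLOAT = 32-bit floats)
    // normalized: whether integer values get scaled into the 0..1 range
    // stride:     bytes from the start of one vertex to the next
    //             (0 means "tightly packed, derive it from size/type")
    // offset:     bytes to skip before the first vertex in the buffer
    calls.push(["vertexAttribPointer", index, size, type, normalized, stride, offset]);
  }
};

gl.bindBuffer(gl.ARRAY_BUFFER, {});                  // make our buffer active
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0); // 2D positions, packed
```

Two lines of actual GL, six comments: that ratio is roughly what a beginner needs, and roughly the inverse of what most tutorials provide.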

Of course, these are not uncommon mistakes in programming tutorials. In fact, they're extremely common--I just haven't had to learn anything from scratch in a while, and had forgotten how confusing the process could be. Anyone writing for beginners would do well to keep these errors in mind.

There are a few walkthroughs that I found more helpful. As I've mentioned, Gregg Tavares's series on WebGL was eye-opening, and Brandon Jones provided the only worthwhile explanation of attribute array setup that I found. Between those two, and countless Google searches, I managed to cobble together a basic understanding of how the GL state machine actually works.

As a way of distilling out that knowledge, I've assembled the WebGL demo script that I would have wanted when I started out. It uses no external code--everything's right there on the same page. It explains each parameter that's used, and what each function call does. And it's only concerned with drawing a basic 2D shape--no matrix math is involved. It's stored in a Github Gist, so feel free to file pull requests against anything you find confusing. Also, feel free to look through EaselGL: it's a bit more advanced and I need to add more comments, but as a 2D API I think it's quite a bit easier to understand than the typical game library, particularly for ex-ActionScript developers like myself.

July 24, 2013

Filed under: journalism»new_media

The Narrative

I had planned on writing a post about Nate Silver's departure from the New York Times this week, but Lance pretty much beat me to it:

Silver is now legendary for being a numbers guy. But there aren't going to be any useful numbers for analyzing the next Presidential election until the middle of 2015 at the earliest. The circumstances under which the election will take place---the state of the economy, whether we're at war or peace, the President's popularity and if and how that will transfer to the Democratic nominee, what issues are galvanizing which voters, etc.---won't make themselves known and so won't show up as numbers in polls at least until then. And until then, everything said about the election is idle speculation, and we know how Silver feels about idly speculating.

But we also know that the most incorrigible idle speculators believe idle speculation is the point.

It's well worth the time to read the whole thing.

I've seen some people assert, in light of this departure, that lots of people could do what Silver did for the Times: his models weren't that complicated, after all, and how hard can it be to write about them? I think this dramatically underestimates the uniqueness of FiveThirtyEight and, to some extent, signifies how threatening it really was to political pundits.

There are, no doubt, a few journalists who could put together Nate Silver's models, and then write about them with clarity. I don't think anyone doubted that evidence-driven political reporting was possible. What he did was show that it could be successful, and that it could draw eyeballs. I think it was John Rogers who said that the best thing about blogging was not the enabling effect for amateurs, but for experts. Suddenly people with actual skills--economists, historians, political scientists, statisticians--could have the kind of audience that op-ed pages commanded.

This should not have been a surprise for newspapers, except that the industry has spent years convincing itself that investigative teams and deep expertise in a beat aren't worth funding. To be fair, the New York Times has put money behind a lot of data journalism in the past few years. If they can't keep the attention of someone like Silver, who can? I guess we're going to find out.

July 19, 2013

Filed under: tech»web

Over the Top

By the time you read this, I'll have been running Weir as my full-time RSS reader for two and a half weeks, starting on July 1. It's going well! Having just added OPML export, so that I can switch if it stops being worth the trouble, I've had a chance to sit back and consider some lessons learned from the project.

  1. Eight megabytes does not seem like a lot of data these days, but it adds up. Before I started culling feeds (and before I added Gzip support to the request service), Weir was pulling down roughly eight megs of data with each fetch, at ten minute intervals. That's 48MB per hour, a small amount that adds up to over a gigabyte per day. By default, I get 24 gigs per month of transfer allowance on my server. Something had to go.

    It's interesting to note, by the way, that this is something that RSS services like Newsblur or Feedly don't worry so much about, because the cost of each feed is spread across all subscribers. I didn't cost Google Reader as much traffic as Weir requires on its own.

  2. So I started unsubscribing. The majority of my original subscription list in Google Reader came from Paul Irish's front-end feed collection, and while I had already unsubscribed from the crazy people, it turns out most of the other blogs in that collection were dead. Even with 304 support added in, Weir was downloading a ton of RSS, only to discard much of it as being past the configured expiration date. I don't think this means blogging is a thing of the past, personally, but it's clearly down from its heyday in favor of social services, particularly (in the technical community) Google+.
  3. That said, using Feedburner seems to be a clear indication that you weren't that interested in blogging anyway, because it makes up a disproportionate amount of the abandoned or simply broken RSS feeds on my list. I suspect this is because it signals a lack of ownership. If you care about your feed, you maintain it yourself.
  4. Even with feeds that work, sometimes connections fail, or things break, just because it's the wild and crazy web out there. Taking that into consideration from the start, and tracking the last result for every feed, was one of the smarter things I did. I should probably be tracking more, but I'm too lazy to do real logging.
  5. Feeds are messy, and sanitization is hard. People inject all kinds of styles into their RSS. They include height and width attributes that don't play well with mobile. They put things into tables. They load scripts that I don't want to run, and images that I'd like to defer until their containing post is activated. Right now, I'm using document.implementation.createHTMLDocument() to make a functional (but "dead") DOM, then running a sanitization task over that, but figuring out that process--and making it watertight--has not been easy.
  6. In fact, working with RSS--ostensibly a "machine-readable" format--tends to drive home just how porous the web can be, and how amazing it is that it works at all. Take, for example, the menagerie of RSS date formats discovered by the developer of another web-based reader. I'm relatively isolated from the actual parsing, but there's code in Weir to work around buggy HTTP responses, missing feed information, and weird characters. Postel's Law has a lot to answer for, in my opinion.
  7. "Worse-is-better" really works for me as a personal development philosophy. My priority has been to get things running, no matter how badly--hacks get added to the .plan file and addressed later on, when I have time to figure out a graceful solution. This has kept my momentum high, and ensured that I don't get bogged down with architecture that I might not even need.
  8. The value of open-source software for this project can't be overstated. There's no way I could have built Weir on my own. In addition to Node, of course, I'm using a number of open-source modules for parsing feeds, handling two-factor auth, and storing posts in the database. Because of open source, I can patch those various libraries together, add my own code on top, and have a newsreader application that does everything I need. Weir doesn't stand on the shoulders of giants--it stands on the shoulders of countless other people, each giving a little bit back to the wider community.
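The arithmetic in item 1 is worth spelling out, since "eight megs every ten minutes" sounds harmless until you multiply it out. A quick back-of-the-envelope in Node (the numbers are the ones from the post; the 30-day month is my rounding):

```javascript
// Back-of-the-envelope bandwidth cost of polling feeds.
// 8 MB per fetch cycle, one cycle every 10 minutes.
const mbPerFetch = 8;
const fetchesPerHour = 60 / 10;                 // 6 cycles an hour
const mbPerHour = mbPerFetch * fetchesPerHour;  // 48 MB
const gbPerDay = (mbPerHour * 24) / 1024;       // ~1.1 GB
const gbPerMonth = gbPerDay * 30;               // ~34 GB -- well past a 24 GB cap

console.log(`${mbPerHour} MB/hour, ${gbPerDay.toFixed(2)} GB/day, ${gbPerMonth.toFixed(1)} GB/month`);
```

At that rate the monthly transfer allowance runs out before the month does, which is why the Gzip support and the unsubscribing in item 2 weren't optional.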
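The 304 support mentioned in item 2 boils down to echoing a feed's cache validators back to the server on the next fetch. Here's a minimal sketch of that bookkeeping; the feed-state field names (`etag`, `lastModified`, `changed`) are my own illustration, not necessarily what Weir stores:

```javascript
// Build conditional-request headers from the validators saved after
// the last successful fetch of a feed. A server that honors them
// replies 304 Not Modified with an empty body, so we skip the
// download entirely.
function conditionalHeaders(feedState) {
  const headers = { "accept-encoding": "gzip" }; // also ask for compression
  if (feedState.etag) headers["if-none-match"] = feedState.etag;
  if (feedState.lastModified) headers["if-modified-since"] = feedState.lastModified;
  return headers;
}

// Decide what to do with a response: on 304 keep the old validators,
// on 200 store the new ones for next time.
function updateFeedState(feedState, res) {
  if (res.statusCode === 304) return { ...feedState, changed: false };
  return {
    etag: res.headers["etag"] || null,
    lastModified: res.headers["last-modified"] || null,
    changed: true
  };
}
```

In the real fetch loop these would wrap something like `http.get(url, { headers })`, but the validator bookkeeping is the part that saves the bandwidth--and, as item 3 notes, plenty of abandoned feeds never send validators at all.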
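The sanitization pass in item 5 is, at heart, a recursive walk that drops dangerous elements and strips non-allowlisted attributes. The sketch below runs over a plain object tree so the filtering logic stands alone; in Weir it would walk the dead DOM produced by `createHTMLDocument()`, and both allowlists here are illustrative, not Weir's actual configuration:

```javascript
// Tags to drop outright, and attributes that are safe to keep.
// Stripping everything else kills inline styles and the height/width
// attributes that break mobile layouts.
const BANNED_TAGS = new Set(["script", "style", "iframe"]);
const ALLOWED_ATTRS = new Set(["href", "src", "title", "alt"]);

// Recursively sanitize a node of the form { tag, attrs, children }.
// Returns null for banned elements so parents can filter them out.
function sanitize(node) {
  if (BANNED_TAGS.has(node.tag)) return null;
  const attrs = {};
  for (const [name, value] of Object.entries(node.attrs || {})) {
    if (ALLOWED_ATTRS.has(name)) attrs[name] = value;
  }
  const children = (node.children || [])
    .map(sanitize)
    .filter((child) => child !== null);
  return { tag: node.tag, attrs, children };
}
```

Deferring images until a post is activated could fit the same walk--rewrite `src` to a `data-src` placeholder and restore it on demand--but making any of this watertight against the full mess of real-world feeds is, as the post says, the hard part.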
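The date mess in item 6 is typically handled with a fallback chain rather than a single strict parser. This is a sketch of that approach--not Weir's actual code--with the "append a timezone and retry" repair standing in for whatever specific workarounds a real reader accumulates:

```javascript
// Feeds are supposed to use RFC 822 dates (RSS) or ISO 8601 (Atom),
// but in the wild you see missing timezones, two-digit years, and
// outright garbage. Try the built-in parser, attempt a cheap repair,
// and fall back to "now" so one bad date can't wedge a whole feed.
function parseFeedDate(raw) {
  if (!raw) return new Date();
  let parsed = new Date(raw);
  if (!isNaN(parsed)) return parsed;
  // Common repair: an otherwise-valid RFC 822 date missing its zone.
  parsed = new Date(raw + " GMT");
  if (!isNaN(parsed)) return parsed;
  return new Date(); // give up gracefully
}
```

Postel's Law in miniature: be liberal in what you accept, then spend the rest of the project paying for everyone else's liberality in what they send.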

July 16, 2013

Filed under: politics»issues»firearms

Trigger Happy

I wish I could say I'm surprised by the verdict in the Trayvon Martin case. It would have been nice to see the manslaughter charge stick--even Florida should be able to prosecute the poor man's murder--but that was probably a long shot. The fix was in from the moment that the police had to be embarrassed into even charging Zimmerman in the first place.

There were a lot of ugly parts of the American character wrapped up in the case. There was the casual, almost off-handed racism of the whole affair, but there was also the clownishness of our national fixation on firearms. That's the narrative that drove George Zimmerman, after all: a one-man neighborhood watch, following whatever "punks" were unlucky enough to find themselves the antagonists of his inner screening of Death Wish. I imagine that there are a lot of people who knew Zimmerman, who thought he was a nut and a caricature--almost a joke. After all, I have known people who were a hair's breadth from being George Zimmerman, and I've laughed at them. They're a lot less funny now.

Over the last year, since the shooting in Newtown, Josh Marshall has reposted stories to the TPM Editor's Blog whenever there's a story on the wire services of child deaths caused by guns. On average, there's probably one a week. It's a powerful, if understated, kind of journalism, like one of those Family Guy hanging gags: at first it's horrific, then it becomes routine, and then the normality of that routine becomes devastating in and of itself. There have been a lot of kids killed in the last eight months, with surprisingly little outcry.

It's as if, across the country, we've decided to raise our kids in a tank filled with deadly scorpions. Even though we lose children on a regular basis, discussing the obvious solution--getting rid of the scorpions, maybe buying a puppy instead--doesn't seem to be an option. To the contrary: more scorpions, screams the NRA! Scorpions for everyone! Only when everyone is covered in poisonous arthropods will we truly be safe!

(Of course, as various people have commented, the NRA is oddly silent on whether or not Trayvon Martin would have been safe from assault if he had been packing heat. This differs markedly from their usual argument that the solution is always more, and more powerful, weaponry. I can't imagine what's different in this case.)

With every recent shooting, there's been a sense on the left that this time, people will see how terrible our firearms fetish has become: Tucson, Aurora, and Newtown each brought a fresh sense of unreality to the whole debate. And now George Zimmerman walks, after shooting an unarmed black teenager for the simple crime of being where George Zimmerman didn't think he belonged. Maybe, finally, this will be the case when we start to think about what all these guns actually mean as a society, but I doubt it. That gives us too much credit: the deadlier our guns, the more we cling to them for comfort. Heaven help us all.
