
June 27, 2013

Filed under: movies»reviews»scifi

Magic Missile

I'm not entirely sure why you would make films based on a franchise that you never liked. I'm on record as believing that the first JJ Abrams Star Trek flick was a reasonable popcorn movie, but it didn't share anything with the original material except some character names. That's not true for the second movie. Into Darkness (to use its weird, not-really-a-subtitle subtitle) isn't just bad Trek, it's loathsome filmmaking.

The low-hanging fruit is that the plot doesn't even try to make sense for more than five minutes at a time, but since the original series was hardly airtight, I have a number of other bones to pick, including:

  • The Enterprise is not a submarine.
  • In a franchise known for its progressivism, it's painful to see all of the women reduced to either needy girlfriends or passive sex objects.
  • Along the same lines, I like Benedict Cumberbatch just fine (actually, I think most of the actors do a decent job), but he is surely one of the whitest people on earth and should not be playing Khan Noonien Singh.
  • The Enterprise is not a submarine.
  • Scotty's Magical Transporter and Plot Hole Device can now send people all the way across to the Klingon empire, but our heroes still get in a ship to follow him because there wouldn't be a chance for a pointless shootout otherwise.
  • Starfleet dress uniforms that bear an uncanny resemblance to Death Star formalwear.
  • Warp speed is now basically Rainbow Road, complete with starships spinning out into space with skidding sounds when they get hit with a blue shell magical laser beam.

Sure, much of this probably seems like nitpicks and nerd rage. I've watched a lot of Star Trek, probably more than most people, and so there are a lot of things that instinctively feel wrong to me, even if they're not necessarily invalid choices. I think it's a shame to lose those parts of the Trek canon (and I tend to think that Abrams' alterations are worse than the material he's replacing), but I'm hardly objective. Lance believes that he's just trolling us, and I'm not sure that's wrong.

I find the movie's general incoherence to be frustrating. But that's not what actually makes me angry.

At the end of Star Trek Into Grim Serious Incoherence, Khan crashes his spaceship into San Francisco. Hundreds of thousands, if not millions, of people are killed, but that's okay because they're not the protagonists and presumably their psychological issues were less attractive. This is, to put it lightly, not really what Gene Roddenberry had in mind when he pitched "wagon train in space" to some bored Desilu executives.

Speaking personally, I'm getting a little sick of the whole "it's been a decade since 9/11, so let's crash a flying vehicle into a city and call it emotional resonance" thing that every hack director with a render farm has been on lately. Abrams is doing it, apparently the new Superman movie does it, The Avengers did it. It's a cheap, transparent ploy to make otherwise airy summer entertainment seem important, so that critics can write that your otherwise incoherent summer tentpole flick has "real-world allusions" in it. Blowing up a planet in the first reboot movie wasn't enough, I guess.

Nowhere is that more true than in Star Trek No Subtitles Just Darkness. Khan doesn't really have a good reason to crash his ship into a major city. It doesn't particularly help him achieve his goals. He just does it because, as with every other reason that anyone does anything in a JJ Abrams movie, it's part of the story checklist they wrote before actually getting to outmoded concerns like "dialogue" or "motivation" or "character." City destroyed: tragedy achieved. On to the next setpiece!

Reboot or not, there are some things that a Star Trek movie shouldn't do, and mass murder is one of them. I'm under no illusions about the ideological purity of Star Trek, especially under Paramount's management, but I like to think that Roddenberry's vision should mean something regardless. As it is, there must be a little whirlwind somewhere around the ionosphere where his ashes are spinning. If JJ Abrams wants to participate in a little cinematic disaster porn, he's welcome to do so, but I wish he'd restrict it to some other, less established franchise. It's probably just as well that he's moving on to Star Wars: this kind of bankrupt cheesiness will fit right in there.

June 19, 2013

Filed under: random»linky

Remember the Linkblog!

Obviously I've been a little obsessed with RSS the past couple of weeks (get used to it: it'll be everyone else's turn come July 1). Along the way, I've been trimming my subscription list: I've been blogging for more than nine years now (!), and collecting feeds for nearly as long. A lot of those URLs are now broken, which is a little sad. In a precursor to the whole Google Reader situation, if you were on Feedburner, there's a pretty good chance I'm not reading you anymore.

Speaking of things that people don't really do in a post-Twitter world, I was reminded this week that I need to post another set of links--not so much because anyone else is interested, but because between the dismal searchability of social media and the death of bookmark services like Delicious, it's the only way I can be sure to find anything more than three months from now. And so:

  • A lot of people linked to Jeremy Keith's defense of RSS-as-API this week. Indeed, when I was at CQ, getting RSS running for our various services and reports was one of my constant campaigns. In many ways, it's one of the purest expressions of the web: a machine-readable format of human-centric information.
  • What reminded me of link-blogging in the first place was this study of privacy and de-anonymization, which I knew I'd posted to one service or another but could not for the life of me locate when I wanted it. It's a fascinating case of matching health records to individuals through obscured metadata and demographics--food for thought in light of the NSA metadata hubbub.
  • Earlier than expected, and all too soon, Iain Banks died last week. Ken MacLeod has a passionate remembrance in the Guardian.
  • I have always been skeptical of WebGL, but it looks like it'll graduate to legitimate technology with a rumored inclusion in IE11. I still think it's a terrible API. That said, this article by Greg Tavares (one of the Chrome coders on WebGL) got me more excited about it than any other tutorial has ever done. Tavares points out that it's not actually a 3D API, but a 2D drawing API with decent tools for projection math. In that light, and given my love for 2D, I've actually started screwing around with WebGL a little.
  • If you are interested in using WebGL for 3D, though, this presentation does a great job of presenting both the what and the why of the math involved. It almost made me care about matrices again.
  • It is taking years, but people are finally realizing that the web is not killing long-form journalism. If anything, it may be enhancing its chances.
  • I really enjoyed this retrospective on the Portal 2 alternate reality game. The section on false clues and coincidence is a testament to people's ability to match patterns, whether they exist or not. It sounds like a fun gig.

June 12, 2013

Filed under: tech»web

Outward Vectors

I'm happy to say that Weir is now in a beta-ready state. You'll need a server capable of running NodeJS and PostgreSQL (for now), and you'll need an OPML file to populate the feed list (Google Takeout will accommodate you nicely with a subscriptions.xml if you're fleeing Reader). But if you pull from the repo and then follow the instructions in the readme file, everything should be in a good-enough state to fetch, read, and mark stories as read. Feedback would be awesome.

The front end for Weir is written using AngularJS, because it's supposed to be great for rapid development and I'm all about failing fast on this project. Indeed, getting the client-side application up and running has gone very quickly, but Angular itself takes some adjustment, especially if you're used to other JavaScript frameworks.

I'm not convinced that this is a bad thing. Predictions are a mug's game, but I suspect that future libraries are going to look a lot more like Angular than its competitors. Before I can explain why, we have to first look at the way client-side JavaScript has been traditionally organized, and then see how Angular works differently.

JavaScript MVC libraries, from Backbone to Ember, find themselves confronted with a language that's very different from the languages where Model-View-Controller philosophies evolved:

  • JavaScript has no privacy, and (until recently) no getters and setters. Between the two, it's hard to know if a given object has changed since the last redraw.
  • The DOM is not designed to be strongly linked with JavaScript data structures.
  • Multi-level inheritance of values is fine, but inheritance of behavior is a mess.

Despite these quirks, libraries are still designed as if JavaScript were similar to Smalltalk. They work around the differences by using manual getter and setter functions on Model classes, registering for DOM events inside View classes, and re-rendering templates whenever one or the other changes.

This works--and is certainly a million times better than writing jQuery spaghetti code--but it's not what you'd call "clean." For example, here's some code written in an imaginary (but typical) library, just to update a simple list view:

    var Song = Vertebrae.Model.extend({
      title: { value: "" },
      listens: { value: 0 },
      file: { value: "" },
      starred: { value: false }
    });

    var SongView = Vertebrae.View.extend({
      render: function() {
        var model = this.get("model");
        var el = this.get("element");
        el.find(".can-template").html(
          templates.song(model.toJSON()));
        var rev = model.get("review");
        el.find(".cannot").val(rev);
      }
    });

That is a lot of boilerplate just to display a song (and it doesn't even include the templates, or loading the actual data). Heavy object classes are necessary so that the framework can be notified of changes--hence all the extend and get calls, as well as the awkward way of defining default values. In places, we can at least use templates, but we're still having to place them manually into the DOM. It's like a terrible parody of Java's worst bits glued onto jQuery.

In contrast, Angular uses regular JavaScript objects, written with normal JavaScript syntax, for its models. There are no getter or setter functions, unless you really want them: change an object, and if it is attached to the $scope variable, it will be scanned for changes automatically. And while you're not discouraged from using inheritance, you're not really encouraged to do so, either. Angular uses prototypal inheritance to manage values under the hood, but its developer-facing APIs tend to bear more resemblance to AMD or CommonJS modules. It feels like JavaScript, in other words.
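To make the contrast concrete, here's a rough sketch of the same song example done the Angular way (the module, controller, and property names are placeholders for illustration, not Weir's actual code):

    // A plain object attached to $scope is all Angular needs.
    var app = angular.module("songApp", []);

    app.controller("SongController", function($scope) {
      // No Model class, no getters or setters--just data.
      $scope.song = {
        title: "",
        listens: 0,
        file: "",
        starred: false
      };

      // Mutating the object directly is fine: Angular's digest cycle
      // notices the change and updates any bindings on the next pass.
      $scope.star = function() {
        $scope.song.starred = true;
      };
    });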

On the other hand, Angular is all about augmenting HTML: although templates are available to ease re-use, an Angular page actually gets marked up using custom tags and attributes, then compiled and linked into components that respond instantly to the application's backing data. This is very forward-thinking--in fact, it's not dissimilar from the Extensible Web Manifesto, and I can dig that--but it definitely comes across as "magic" the first time that you use it. After years of logic-less template engines being popular, Angular stakes out a very different position.
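Continuing the sketch from above (hypothetical markup, not Weir's actual templates), the view for that song isn't so much a separate template file as it is the page itself, decorated with Angular's attributes:

    <!-- ng-controller attaches the controller; the {{ }} bindings and
         ng-click are compiled and linked against $scope, and they
         update automatically whenever the song object changes. -->
    <div ng-controller="SongController">
      <h2>{{song.title}}</h2>
      <p>{{song.listens}} listens</p>
      <button ng-click="star()" ng-hide="song.starred">Star this song</button>
    </div>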

Normally, I'm not a fan of magic in programming: it's hard to debug what you don't understand. In this case, the novelty of Angular's approach--and its undeniable effectiveness--overcame my skepticism, to the point where it's really grown on me. Using Angular makes me much more aware of the boilerplate that's required by the traditional MVC frameworks I use in my day job. Simple tasks require less code, and I don't feel like I'm fighting my way through thick layers of abstraction.

If there's a place where Angular still feels awkward, it's anything to do with the DOM. Angular will let you get access to elements of your page, but only reluctantly--it would really prefer that you only alter your model data and let the DOM react. Most of the time, this is fine: the less page manipulation I have to do, the happier I am. But there are some times when it is inevitable, such as when I'd like to perform deferred image loading, and those are definitely the ugliest parts of Weir's client code so far.
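When DOM work is unavoidable, the sanctioned escape hatch is to wrap it in a directive. Here's a minimal sketch of deferred image loading done that way--the directive name and the visibility check are my own illustrative assumptions, not Weir's actual implementation:

    // Directives are the one place Angular expects you to touch elements.
    var app = angular.module("lazyImages", []);

    app.directive("deferredSrc", function() {
      return {
        link: function(scope, element, attrs) {
          // Swap in the real image URL only once the element is on screen.
          var loadIfVisible = function() {
            var rect = element[0].getBoundingClientRect();
            if (rect.top < window.innerHeight && rect.bottom > 0) {
              element.attr("src", attrs.deferredSrc);
              angular.element(window).unbind("scroll", loadIfVisible);
            }
          };
          angular.element(window).bind("scroll", loadIfVisible);
          loadIfVisible();
        }
      };
    });

Markup would then use an img tag with a deferred-src attribute instead of src, and each image only loads when it scrolls into view.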

But here's the rub: if the web ecosystem teaches us anything, it's that you can always make a simple framework faster and more powerful, but people won't use an API that's clumsy and tiresome (see also: jQuery vs. pretty much everything else). Yes, DOM manipulation isn't great in Angular--they'll have to write some new directives, to cover the edge cases and holes. Yes, the object polling that Angular does is kind of scary, but browsers will add features like Object.observe() to make it faster overnight. Meanwhile, nothing's going to make those heavy Model and View classes any more fun to use.

A lot of time has been (and still is being) spent in the JavaScript community trying to make the language work like something more familiar. That's how you end up with CoffeeScript, or YUI, or all these MVC frameworks. Those projects have a place, and there are certainly times when I want something familiar, but it's also good to see tools (like Angular, Node, or D3) that are built around JavaScript's weirdness. There hasn't been an oddball language with a profile this high in a long time, so let's shake things up while we've got the chance.

May 30, 2013

Filed under: tech»coding

Project Seymour

A month from now, Google will shut down Reader, leaving RSS addicts in the lurch. I suspect this will be both more and less disruptive than anticipated: expect replacement services to go through another set of growing pains, but RSS isn't exactly a high lock-in situation, and most people will find a new status quo fairly quickly.

I am not eager to move from one hosted service to another (once burned, twice shy), nor do I want to go back to native applications that can't share progress, so as soon as the shutdown was announced I started working on a self-hosted RSS reader. I applied the same techniques I'd used for Big Fish Unlimited: an easy-to-configure router, a series of views talking to the database only through model classes, and heavy use of closures for dependency management and callbacks. I built a wrapper around PHP's dismal cURL library. It was a nice piece of architecture.

It also bogged down very, very quickly. My goal was a single-page application with straightforward database queries, but I was building the foundation for a sprawling, multi-page site. Any time I started to dip in and add functionality, I found myself frustrated by how much plumbing I needed in order to do it "the right way." I was also annoyed by the difficulty of safely requesting a large number of feeds in parallel in PHP. The language just isn't built for that kind of task, even with the adaptations and improvements that have been pasted on.

This week I decided to start over, this time using Node.js and adopting a strict "worse is better" philosophy. When I use Reader, 99% of my time is spent in "All Items" pressing the spacebar (or, on mobile, clicking "Mark Items as Read") to advance the stream. So I made that functionality my primary concern, and wrote only as much as I needed to (both in terms of code size and elegance) to make that happen. In two days, I've gotten farther than I had with the PHP version, and I'm much happier with the underlying platform as well--Node is unsurprisingly well suited to firing off tens and hundreds of concurrent requests.
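For a sense of why that's true, here's a hedged sketch (placeholder URLs, and nothing like Weir's actual fetch code) of requesting a batch of feeds in parallel with nothing but Node's core modules:

    // Fire off every request at once; callbacks handle completion and errors.
    var http = require("http");
    var url = require("url");

    var feeds = [
      "http://example.com/feed.xml",
      "http://example.org/rss"
    ];

    var remaining = feeds.length;
    var done = function() {
      if (--remaining === 0) console.log("All feeds fetched.");
    };

    feeds.forEach(function(feedUrl) {
      http.get(url.parse(feedUrl), function(response) {
        var body = "";
        response.on("data", function(chunk) { body += chunk; });
        response.on("end", function() {
          console.log(feedUrl, "->", body.length, "bytes");
          done();
        });
      }).on("error", function(err) {
        // A dead feed just logs its error instead of blocking the others.
        console.error(feedUrl, "failed:", err.message);
        done();
      });
    });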

I've just posted the work-in-progress code for the application, which I'm calling Weir (just barely winning out over "Audrey II"), to a public GitHub repo. It is currently ugly, badly-documented, and patchy in places. The Angular code I'm using for the front-end is obviously written by someone with very little experience using the library. There's lots of room for improvement. On the other hand, my momentum is very good. By next week, I expect Weir will be good enough for me to dogfood it full time, and at that point improvements will come naturally whenever I need to smooth out the rough edges.

I like this way of working--"worse is better"--quite a bit. It's not always pretty, but it seems effective so far. It also fits in well with my general coding style, which is (perhaps unsurprisingly) on the left-ish side of Steve Yegge's developer politics. I like elegance and architecture as much as the next person, but when it all comes down to it, there's no point in elegant code that never gets used.

Writing my own Reader alternative is also proving educational. The conventional wisdom is that RSS readers benefit greatly from running at scale: operations like feed retrieval can be performed once for all subscribers, spreading the costs out. The flip side is that you're at the mercy of the server bot for when you get updates. High-frequency feeds, such as politics or news, get batched up instead of coming in as they're posted. I'm also able to get a lot more feedback on which feeds are dead, which came as a surprise: Reader just swallowed the errors whole. All in all, I doubt the experience will be any worse.

Currently, Weir isn't much good for public consumption. I've made a sanitized copy of my config file in the repo, but there's no setup script for the database, and no import step for getting your subscriptions loaded up. I hope to have that ready soon. The code is licensed under the GPL, so pull requests and feature suggestions are welcome as it becomes usable for other people.

May 22, 2013

Filed under: tech»education

Equal Opportunity

Last Friday, I gave a short presentation for a workshop run by the SCCC Byte Club called "Technical Interview Mastery for Women." Despite the name, it was attended by both men and women. Most of my advice was non-gender specific, anyway: I wanted to encourage people to interview productively by taking into account the perspective from the other side of the table, and seeing the process more as a dialog instead of a confrontation.

Still, during the question and answer period, several people asked about being women in the interview process. Given that my co-presenter has many years more experience being a woman, I deferred to her whenever possible, but I did chime in when the conversation turned to interaction styles. One participant said she was ignored if she wasn't assertive enough, but was then considered unpleasant if she stuck up for herself--what could she do about this?

It's one thing, I said, to suggest ways that women should adapt their communications for a male-dominated workplace--that kind of pragmatic code-switching may well do the trick. But I think it's unfair to put all the burden on women to adapt to men. There needs to be a way to remind men that it's their responsibility to act reasonably.

The problem is that it's often difficult to have that conversation without falling afoul of the same double-standard that says women in the workplace shouldn't be too loud. Complaining about sexism tends to raise hackles--meaning that the offending statement not only goes uncorrected, but dialog gets shut down. I don't know that I have any good solutions to that, but I suggested finding ways to phrase the issue akin to Jay Smooth's presentations on How To Tell People They Sound Racist. I like to think that most people aren't trying to be sexist, they're just not very self-aware. This may be a faulty assumption.

There are still people who argue that the tech industry isn't sexist--that women just aren't as inherently good at coding (this is often hidden behind comments that it's a "meritocracy"--in which, conveniently, women somehow just haven't had merit). From my point of view, I don't see any way that could be correct. My best JavaScript students are split 50/50 between men and women (so are the worst students). I trained equal numbers of men and women on the multimedia team at CQ (and probably would have given the effectiveness prize to the women in a pinch). Moreover, I've never seen any evidence that the skills I use in day-to-day work--spatial reasoning, some basic math, navigating abstraction--are gender-exclusive (or, indeed, required for all programming: the job of a web programmer is markedly different from a systems coder or security investigator, and yet those also suffer from serious inequality issues).

My talk at the workshop was specifically about interviewing, but obviously this is an issue that goes beyond hiring. Something is happening between the classroom and the workplace that causes this disparity. We have a word for this--sexism--regardless of the specific mechanics. And I would love to have more discussions of those specifics, but it's like climate change: every time there's a decent conversation in a public forum about solutions, it gets derailed by people who insist loudly that they don't think there's a problem in the first place.

That said, assuming that people just don't realize when they've done something wrong, there are doubtless ways to address the topic without defensiveness. If the description "sexist" derails, I'm personally happy to use other terms, like "unprofessional" or "rude"--I'm just embarrassed that I (and others) need to resort to euphemism. We need to change the culture around this discussion--to make it clear that we (both men and women) take this seriously, including respectful responses to criticism. We can do better, and I'd like to be able to tell future workshops that we're trying.

May 16, 2013

Filed under: tech»web

Why the Web Wins

Last year, Google spent most of its I/O conference keynote talking about hardware: Android, Glass, and tablets. This year, someone seems to have reminded Google that they're a web company, since most of the new announcements were running in a browser, and in many cases (like the photo editing and WebGL maps) pushing the envelope for what's possible. As much as I like Android, I'm really happy to see the web getting some love.

There's been a drumbeat for several years now, particularly as smartphones got more powerful, to move away from web apps, and Google's focus on Android lent credence to that perspective. A conventional wisdom has emerged: web apps were a misstep, but we're past that now, and it'll be all native from this point out. I couldn't disagree with that more, and Google's clearly staking its claim as well.

The reason the web wins (such that anything will) is not, ultimately, because of its elegance or its purity (it's not big on either) but because of its ubiquity. The browser is the worst cross-platform API except for all the other ones, and (more importantly) it offers persistence. I can turn on any computer with an Internet connection and have near-instant access to files and applications without installing anything or worrying about compatibility. Every computer is my computer on the web.

For context, there was a time in my high school years when Java was on fire. As a cross-platform language with a network-savvy runtime, it was going to revive thin clients: I remember talking to people about the idea that I could log into any computer and load my desktop (with all my software) over the Internet connection. There wouldn't be any point to having your own dedicated hardware in a world like that, because you'd just grab whatever was handy and use it as a host. It was going to be like living in a William Gibson novel.

Java ended up being too heavy and too slow to make that actually happen. Instead, this weird combination of JavaScript, HTML, and CSS took over, like weeds springing up and somehow forming a fully-furnished apartment block. The surprise was that the ad-hoc web platform turned out to be competitive with Java on the front-end. Even though it's meant to be a document viewer, the browser is pretty good at building UI, and it's getting a lot better. I've been creating some web apps lately without worrying about backwards compatibility, and it's been remarkably pleasant, both as a developer and a user.

I don't believe that native programs will ever entirely go away. But I do think we'll see web applications spreading their tentacles over time, because if something is possible in the browser--if it's a decent user experience, plus it has the web's advantages of instant, no-install launch and sharing across devices--there's not much point in keeping it native. It's better to have your e-mail on any device. It's better for me to do presentations from a browser, instead of carrying a PowerPoint file around. It's better to keep my RSS reader in the cloud, instead of tying its state to individual machines. As browsers improve, this will be true of more and more applications, just as it was true of the Java applets that web technology replaced.

Google and I disagree about where those applications should be hosted, of course. Google thinks it should run them (which for many people is perfectly okay), and I want to run them myself. But that's a difference of degree, not principle. We both think the basic foundation--an open, hackable, portable web--is an important priority.

I like to look at it in terms of "design fiction"--the dramatic endpoint that proponents of each approach are aiming to achieve. With native apps, devices themselves are valuable, because native code is heavy: it takes time to install, it stores data locally, and it's probably locked to a given OS or architecture. Web apps don't give us the same immediate power, but their ultimate goal is a world where your local hardware doesn't matter--walk up to any web-capable surface, and your applications are there. Software in the web-centric viewpoint follows you, not your stuff. There are lots of reasons why I'm bullish on the web, but that particular vision is, for me, the most compelling one.

May 8, 2013

Filed under: music»performance»dance

That's Guerrilla With a U

Soul Society is here again, and so am I. If you're in the DC area this weekend, check it out.

May 2, 2013

Filed under: gaming»design

POV

There's a common complaint about the Bioshock games, which is that they're not very good shooters. People writing about Bioshock Infinite tend to mention this, saying that the story is interesting and the writing is sharp but the actual game is poor. And this is true: it's not a very good first-person shooter, and it's arguably much worse than its predecessors. But the implication of most of these comments, from Kotaku's essay on its violence to Brainy Gamer's naming it the "apotheosis of FPS," is that Infinite is bad in many ways because it's a first-person shooter--that it's shackled to its point of view. In the process, Infinite has become a sort of stand-in for the whole genre, from Call of Duty to Halo.

I sympathize with the people who feel like the game's violence is incoherent (it is), and who are sick of the whole console-inspired manshooting genre. But I love shooters, and it bugs me a little to see them saddled with the burden of everything that's wrong with American media.

Set aside Infinite's themes and its apparent belief that the best superpower is the ability to literally generate plot holes--when we say that it's not a good FPS, what does that mean? What is it, mechanically, that separates a good shooter from a bad one? I'm not a designer, but as an avid FPS player, I can point to three basic rules that Infinite breaks.

First of all, the enemy progression can't be just about "bigger lifebars." A good shooter increases difficulty by forcing players to change their patterns because they're not able to rely on the same rote strategy. Halo, for all its flaws, gets this right: few of its enemies are actually "tough," but each of them has a different method of avoiding damage, and a different weapon style. By throwing in different combinations, players are forced to change up their tactics for each encounter, or even at multiple points during the encounter. Almost all of Infinite's enemies, on the other hand, are the same walking tanks, with similar (dim-witted) behaviors and hitscan weaponry. I never had to change my approach, only the amount of ammo I used.

Along those lines, weapons need strengths and weaknesses. Each one should have situations where it feels thrillingly powerful, as well as a larger set of situations where it's relatively useless. This doesn't have to conflict with a limited inventory--I loved Crysis 2's sniper rifle, spending the entire game sneaking between cover positions in stealth mode, but it was always paired with a strong close-in gun for when I was overrun. A good game forces you to change weapons for reasons other than "out of ammunition." Infinite's close-range weapons feel identical, and its sniper rifle is rarely useful, since a single shot alerts everyone to your position.

Finally, not every fight can simply be about shooting. Most shooters are actually about navigating space and territory, and the shooting becomes a way of altering the priorities for movement. Do you take cover, or dodge in the open? Do you need more range, or need to close on an enemy? The original Bioshock made the interplay between the environment and your abilities one of its most compelling features: electrifying pools of water, setting fire to flammable objects, flinging scenery around with telekinesis. But at the very least, you need an objective from time to time with more complexity than "kill everything," both as a player and in terms of narrative.

Bioshock Infinite has, in all seriousness, no period I can remember when my objective was not reduced to "kill everything." Combined with a bland arsenal and blander enemies, this makes it a tedious game, but it also puts it at complete odds with its characters. The writing in Infinite is unusually good for a shooter, but it's hard not to notice that Elizabeth freaks out (rightfully) during one of Booker's murderous rampages, comes to a cheery acceptance of it a few minutes later, and then spends the rest of the game tossing helpful items to you under fire. That's writing that makes both the narrative and the mechanics worse, by drawing attention to the worst parts of both.

It's not the only shooter with those flaws--people just had higher expectations for it. The average FPS is badly written, and it's a favorite genre for warmongering propaganda pieces. But that's true of many games, and yet we don't see pieces talking about the "apotheosis of platformers," or talking about RTS as though they're emblematic of wider ills just because Starcraft II is kind of a mess. And there's still interesting stuff being done in the genre: Portal and Thirty Flights of Loving come to mind. To say that FPS have reached their limits, ironically, seems like a pretty limited perspective.

April 25, 2013

Filed under: culture»internet

Network Affect

A couple of years ago, I spent the money for a subscription to Ars Technica, because I really liked their Anonymous/HBGary reporting, and wanted the full RSS feeds. Along with that, every now and then they'll send out a message about a coupon or special offer, which is how I ended up with a free account on App.net, the for-pay Twitter clone. Then I forgot about it, because the last thing I need is a way to find more people that annoy me.

And then someone linked me to this blog post, which made my week. It's a pitch for App.net in the most overwrought, let-them-eat-cake way. I'm going to excerpt a bit, but you should click through: it's better when you can just soak up the majesty of the whole thing:

The difference between a public and a private golf course is so profound that it's hard to play a public course after being a member of a private course. It's like flying coach your entire life, and then getting a first class seat on Asiana — it's damned hard to go back.

That's the difference between Twitter and App.net to me. Twitter is the public golf course, the coach seat. It's where everyone is, and that's exactly the problem. App.net is where a few people that are invested in the product, its direction, and the overall health of the service, go to socialize online.

[paragraph of awkward self-promotion removed]

Welcome to the first-class Twitter experience.

I actually don't know if I could write a parody of upper-class snobbery that good. If you hold your hand up to the screen, you can almost feel the warmth of his self-regard--but not too close! They don't let just anyone into this country club, you know.

Seriously, though: while I've amused myself endlessly trying to come up with even-less-relatable metaphors for things ("Twitter is the black truffle, as opposed to the finer white truffles I eat at my summer home in Tuscany"), one random doofus with a blog is not cause for comment. Silly as it is, that post made me reconsider the way I look at internet advertising and ownership--if only to avoid agreeing with him.

In general, I'm not a big fan of advertising or ad-supported services. On Android, I usually buy apps instead of using the free versions, and I believe that people should own their content on the Internet. But let's be realistic: most people will not pay for their own server or software, and many people can't--whether because they don't have the money, or because they don't have access to the infrastructure (bank account, credit card, etc.) that's required. Owning your stuff on the internet is both a privilege and a visible signifier of that privilege.

This creates hierarchies between users, and even non-savvy people pick up on that. When Instagram finally decided to release an Android client, the moaning from a number of users about "those people" invading their clean, tasteful, iPhone-only service was a sight to behold. The irony of Instagram snobbery is that the company was only valuable because of its huge audience. It only got that userbase because it was free. Therein lies the catch-22 of these kinds of services: the scale that makes them useful and valuable also makes them profoundly expensive to run. Subscription-based or self-hosted business models are more sustainable, but they're never going to get as big.

Meanwhile, the technical people who think they could do something about these problems--"a few people that are invested in the product, its direction, and the overall health of the service"--are off building their own special first-class seating. Not that I think they'll make it, personally--it's a perpetual tragedy that the people threatening to Go Galt never do, since that would require them to stop bothering the rest of us.

I often see people expressing distaste for ad-supported sites with the oft-quoted line "you're not the customer, you're the product." That's nice when you have the option of paying for your own e-mail, and running your own blog, in the same way that minimalism looks awfully nice when you have the credit rating to afford it. People without money have to live with clutter. If we're interested in an internet that offers opportunity to everyone, we have to accept a more forgiving view of ad-supported business, and focus on how to make it safer for people who have no other option. Otherwise we're just congratulating each other on getting into the country club.

April 17, 2013

Filed under: meta»announce»delays

The Cruelty of One's Early Thirties

Normally, I try to have something written and posted here by Wednesday night each week, because I feel like that's the minimum I can write and still call myself a blogger. This week, unfortunately, between writing my textbook (highly recommended!) and trudging through Bioshock Infinite (not at all recommended!), my right wrist is probably in the worst shape it's been in for about five years now. To recover, I'm giving myself the week off from computers outside of work.

I figure you don't really need to know this, but if I write it up here, I'm more likely to stick to it.

While I'm complaining, my knees hurt and these kids won't stay off my lawn.
