Belle and I were planning on getting a Roku to replace our five-year-old XBox this Christmas, since the games are drying up and it doesn't make any sense to pay for a Live subscription just to watch Netflix and HBO. I still kind of bear a grudge against Sony for the CD rootkit they passed around years ago, but then my employers at ArenaNet bought everyone a PS4 as a holiday bonus. I am, it turns out, not above being a hypocrite when it comes to free stuff.
You can explain a lot about the last three generations of consoles by remembering that, at heart, Microsoft is a software company and Sony is a hardware company. Why did the XBox 360 suffer regular heat failures? Why does the PS Vita interface look like an After Dark screensaver? Our 360 was clearly on the edge of another DVD failure, so I bear them no particular good will. But you have to admit: up to the point that a given XBox malfunctions in one way or another, Microsoft knows how to build a usable operating system. Sony... well, it's not so much a core skill of theirs.
For example, after you turn on the PS4, and after the hundreds of megabytes of updates are done downloading and installing themselves a few times, you're greeted with a row of boxes:
Apparently I'm a little grumpy about the menus.
Anyway, for us this is a media player, which means we'd like to have a remote control, but those don't exist for PS4 yet and it can't use regular IR remotes. The controller layout may make sense to someone who owned a PS3, but it's just baffling to me: why is the button normally used to go backwards assigned here to play/pause duties? To be fair, the XBox never really had a great controller story for DVDs either (both of them put fast-forward on the triggers, where you're guaranteed to accidentally hit it while setting the controller down), but at least it tried to be consistent with the rest of the OS.
You can pair a smartphone with the PS4, which one would think could be a chance to show custom controls for media, what with the touchscreen and all. You'd be wrong: the PS4 app dedicates 90% of its surface to a swipeable touchpad, apparently on the assumption that the three directional inputs on the actual controller are insufficient.
The whole time you're watching a movie, of course, the controller will glow like some sort of demented blue firefly, which helps the camera (which I don't have) to see where I am (hint: the couch). Since you can't just turn off the LED, I've got the whole controller set to shut itself off after ten minutes. This solves the glow, and keeps the batteries from draining themselves at an alarming rate, but now when I want to actually use the controller for something — say, to pause the movie because our dog has started making that special "I'm going to throw up" face — it interrupts with a bright blue screen, every single time, to ask me who I am. Meanwhile, my movie keeps playing in the background.
This is worth some emphasis: on the XBox, a console where we actually had multiple accounts, each new controller that was activated would either log in as the current user or just kind of wait in "guest" mode until the player actually signed in. On the PS4, a console where we have one account, to which I was already signed in with our only controller 20 minutes ago, Sony needs to know my identity before I can perform the critical, account-bound task of pausing a movie. Meanwhile, the dog is now standing sheepishly in front of a vomit-stained rug.
I'm a little grumpy about the media functions, too.
I'm well aware it's a little ridiculous to gripe this much about a free game system. It's not that the PS4 is a bad machine — it's on par with your average DVD player in terms of usability — but I tend to feel like maybe they should aim a little higher. I'm really hoping that these kinds of fixes will be easy to update, since most of the UI is apparently built using web technology instead of painstakingly coded native widgets.
What's really interesting about comparing consoles from both companies is that the kinds of things I really miss from the XBox (pinned items, Kinect voice commands, good media apps) weren't there from the start. Microsoft has gone through at least three major revisions since they released the 360 in 2005. Even though there have been regressions (and the ads have certainly gotten bigger over time), the overall trend has been for the better — in part because they've been effectively allowed to throw the whole thing away and start over. As far as I can tell, the PS3 was also improved, even if it wasn't reinvented in the same way. It takes a lot of nerve to make sweeping changes like that, as well as a conviction that the physical box is not what you're selling — a philosophy that's well-suited to Microsoft's software background, but that even hardware companies can no longer ignore.
I've been so embedded in a constantly-shifting web environment for so long that I sometimes forget that not everything updates on a monthly basis. Sony will be more conservative than Microsoft, but even they will be rolling out patches to the PS4, many of which will probably address my complaints. We live in a world where you can turn around and find that your DVD player, or your phone, or your browser suddenly looks and acts completely differently. That's great for people like me who thrive on novelty, but it now occurs to me just how disorienting this might be for ordinary people. It may be worth considering whether a little stability might be good for us — even if it means preserving the bad with the good — and whether the technical community might benefit from a little sympathy for users overwhelmed by our love of change.
This is me, thinking about plugins for Caret, as I find myself doing these days. In theory, extensibility is my focus for the next major release, because I think it's a big deal and a natural next step for a code editor. In practice, it's not that simple.
Chrome has a pretty tight security model applied to packaged apps, not the least of which is a strict content security policy. You can't run code from outside your application. You can't construct code using eval (that's good!) or new Function (that's bad). You can't add new files to your application (mostly).
Chrome does expose an inter-app messaging system similar to postMessage, and I initially thought about using this to create a series of hooks that external applications could use. Caret would broadcast notifications to registered listeners when it did something, and those listeners could respond. They could also trigger Caret commands via message (I do still plan to add this, it's too handy not to have).
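As a sketch of what that might look like, here's a hypothetical dispatcher for incoming command messages. The command names and the message shape here are my inventions, not an existing Caret API; in a real Chrome app this function would be wired up to chrome.runtime.onMessageExternal, but it's written as a plain function so the dispatch logic stands on its own.

```javascript
// Hypothetical registry of commands a plugin could trigger by message.
// These names are illustrative, not Caret's actual command list.
var commands = {
  "session:open-file": function (path) { return "opened " + path; },
  "editor:toggle-comment": function () { return "toggled"; }
};

// Handle a message of the form { command: "...", args: [...] }.
// In a packaged app, this would be registered on
// chrome.runtime.onMessageExternal and reply via sendResponse.
function onExternalMessage(message) {
  var handler = commands[message.command];
  if (!handler) {
    return { error: "Unknown command: " + message.command };
  }
  return { result: handler.apply(null, message.args || []) };
}
```

The appeal of this shape is that triggering commands is a one-way, fire-and-forget operation, which is exactly the kind of thing message passing handles well; it's the richer, request/response hooks that get painful.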
Writing plugins this way is neatly encapsulated and secure, but it's also going to be intensely frustrating. It would require auditing much of Caret's code to make sure that it's all okay with asynchronous operation, which is not usually the case right now. I'd have to make sure that Caret is sufficiently chatty, because we'd need hooks everywhere, which would clutter the code with broadcast/response blocks. And it would probably mean writing a helper app to serve as a patchboard between applications, and as a debugging tool.
I'm not wild about this one.
I've been trying to think of a way around the whole inter-app messaging paradigm for about a month now. At the same time, I've been responding to requests for Git and remote filesystem support, which will not be a core Caret feature. For some reason, thinking about the two in close proximity started me thinking along a new track: what if there were a way to work around the security policy using the HTML5 file system? I decided to run some tests.
It turns out this is absolutely possible: Chrome apps can download a script from any server that's whitelisted in their manifest, write that out to the filesystem, and then get a special URL to load that file into a <script> tag. I assume this has survived security audits because it involves jumping through too many hoops to be anything other than deliberate.
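Here's roughly what that flow looks like, with the Chrome-specific I/O injected as plain functions so the sequence stands on its own. In a real app, download would be an XHR to a host whitelisted in the manifest, write would use the HTML5 FileSystem API, and toURL would be the file entry's toURL() method; the io object and its method names are my assumptions for the sketch.

```javascript
// Sketch of the CSP workaround: fetch remote code, cache it in the
// sandboxed filesystem, and hand back a loadable URL.
function cacheRemoteScript(url, io, done) {
  // 1. download the plugin code from a whitelisted server
  io.download(url, function (code) {
    // 2. write it into the app's HTML5 filesystem
    io.write(url, code, function (fileEntry) {
      // 3. the resulting filesystem: URL is exempt from the content
      //    security policy, so an ordinary <script> tag can load it
      done(io.toURL(fileEntry));
    });
  });
}
```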
The advantages of this approach are numerous. Plugin code would operate directly alongside Caret's source, able to access the same functions and modules and call the same APIs that I use. It would be powerful, and would not require users to publish plugins to the Chrome store as if they were full applications. And it would scale well--all I would need to do is maintain the index and provide some helper functions for developers to use when downloading and caching their code.
Unfortunately, it is also apparently forbidden by the Chrome Web Store policies, which state:
Packaged apps should not ... Download or execute scripts dynamically outside a sandboxed environment such as a webview or a sandboxed iframe.

At that point, we're back to postMessage unless I want to be banned from the store. So much for the workaround.
So how can I make plugins work for end users? Well, honestly, maybe I don't. One of the nice things about writing developer tools, particularly oddball developer tools, is that the people using them and wanting to expand on them are expected to have some degree of technical knowledge. They can be trusted to figure out processes that wouldn't necessarily be acceptable for average computer users. In this case, that might mean running Caret as an unpacked app.
Loading Caret from source is not difficult--I do it all the time while I'm testing. Right now, if someone wants to fork Caret and add their own features, that's easy enough to do (and actually, a couple of people have done so already). What it lacks is a simple entry point for people who want to contribute functionality without digging into all the modules I've already written.
By setting up a plugins directory and a little bit of infrastructure, it's possible to reach a middle ground. Developers who really want extra packages can load Caret from source, dump their code into a designated location, and have their code bootstrapped automatically. It's not as friendly as having web store distribution, and it's not as elegant as allowing for a central repo, but it does deliver power without requiring major rewrites.
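A minimal sketch of that bootstrap, under my own assumptions about the layout (a designated plugins directory, represented here as a list of paths, and a loadScript hook that would append a script tag in the real app):

```javascript
// Hypothetical bootstrap for an unpacked install: walk the designated
// plugins directory and load each file found there.
function bootstrapPlugins(pluginPaths, loadScript) {
  var loaded = [];
  pluginPaths.forEach(function (path) {
    // loadScript is injected so the bookkeeping can be exercised on its
    // own; in the app it would create a <script> tag for the local file
    loadScript(path);
    loaded.push(path);
  });
  return loaded;
}
```

Since unpacked apps aren't bound by web store policy, the plugin files can be ordinary scripts dropped into the directory, and the bootstrap just has to find and load them at startup.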
Working through all these different approaches has given me a new appreciation for insecurity, which sounds funny but is true. Obviously I'm in favor of secure computing, but working with mobile operating systems and Chrome OS, which strongly sandbox their code, tends to make a person aware of how helpful a few security holes can be, and vice versa: the same openings that allow for easy extension and flexibility are also weak points that can be exploited by an attacker. At times like this, even though I should maybe know better, that tradeoff seems absolutely worth it.
Assuming that the hamster powering the Chrome web store stats is just resting, Caret clicked over to 10,000 installations sometime on Monday. That's a lot of downloads. At a buck apiece, even if only a fraction of those people had bought a for-pay version, that might be a lot of money. So why is Caret free? More importantly, why is it free and open source? Ultimately, there are three reasons:
Originally, I had planned on writing about how I reconcile being a passionate supporter of paid writing while giving away my hobby code, but I don't actually see any conflict. I expect a paycheck for freelance coding the same way I expect it for journalism — writing here (and coding Caret) doesn't directly benefit anyone but me, and it doesn't really cost me anything.
In fact, it turns out that both industries also share some uncomfortable habits when it comes to labor. Ashe Dryden writes:
Statistically, we expect that the demographic breakdown of people contributing to OSS would be about the same as the people who are participating in the OSS community, but we aren't seeing that. Ethnicity of computing and the US population breaks down what we would hope to see as far as ethnicity goes. As far as gender, women make up 24% of the industry, according to the same paper that gave us the 1.5% OSS contributor statistic.
Dryden was responding to a sentiment that I've seen myself (and even been guilty of, from time to time): using a person's open source record on sites like GitHub as a proxy for hireability. As she points out, however, building an open source portfolio is something that's a lot easier for white men. We're more likely to have free time, more often have jobs that will pay for open source contributions, and far less likely to be harassed or dismissed. I was aware of those factors, but I was still shocked to see that diversity numbers in open source are so low. We need to do better.
As eye-opening as that is, I think Dryden's middle section centers around a really interesting question: who profits?
I'd argue that the people who benefit the most from the unpaid labor of OSS as well as the underpaid labor of marginalized people in technology are business owners and stakeholders in these companies. Having to pay additional hundreds of thousands or millions of dollars for this labor would mean smaller profit margins. Technology is one of the most profitable industries in the US and certainly could support at least pay equality, especially considering how low our current participation is from marginalized people.

...Open source originally broke us free from the shackles of proprietary software which forced us to "pay to play" and gave us little in the way of choices for customization. Without realizing it, we've ended up in a similar scenario where we are now paying for the development of software that large companies financially benefit from with little cost to them.

Her conclusion — that the community benefits, but it's mostly businesses who boost their profits from free software — should be unsettling for anyone who contributes to open source, and particularly those of us who see it as a way to spread a little socialist good will. For this reason, if nothing else, I'll always prefer the GPL and other "copyleft" licenses, forcing businesses to play ball if they want to use my code.
Now that Apple and Microsoft (with a few others) have formed a shell company in order to sue Google over a bunch of old Nortel patents, and IBM has accused Twitter of infringing on a similar set of bogus "inventions," it's probably time again to talk about software patents in general.
Q. When it comes to software patents...
Q. Hang on. Are they a valid thing? Are any of these suits valid?
A. See above: no, and no.
Q. Why not?
A. Because software patents, as a description of software, are roughly like saying that I could get a patent for the automobile by writing "four tires and an engine" on a piece of paper. They're vague to the point of uselessness, and generally obvious to anyone who thinks about a problem for more than thirty seconds.
Q. Yeah, but so what? Isn't the point of patents to create innovation? Haven't we had lots of innovation in software thanks to their protection?
A. True, the point of patents is to make sure that people let other people use their inventions in exchange for licensing fees, which is supposed to incentivize innovation. In patents for physical inventions, that makes sense: I need to know how to build something if I want to use it as a part of my product, and the patent application will tell me how to do that. But software patents are not descriptive in this way: nobody could write software based on their claims, because they're written in dense legal jargon, not in code.
Let's take one of the patents in question, IBM's "Programmatic discovery of common contacts." This patent covers the case where you have two contact lists for separate people, and you'd like to find out which contacts they have in common. The claims describe that process at great length, in exactly the kind of dense legal language that nobody could actually build software from.
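For contrast, here's the underlying operation as actual code: a set intersection, the kind of thing any working programmer writes without thinking twice. (The email field is my invention for illustration; the patent doesn't specify a data model this concrete.)

```javascript
// Two contact lists in, shared contacts out: the whole "invention."
function commonContacts(listA, listB) {
  var seen = new Set(listA.map(function (contact) { return contact.email; }));
  return listB.filter(function (contact) { return seen.has(contact.email); });
}
```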
As for the innovation argument, it's impossible to prove a negative: I can't show you that it wasn't increased by patents. But consider this: most of the companies that we think of as Internet innovators are strikingly low on patent holdings. Reportedly, Twitter owns nine, and has pledged not to sue anyone over them. Google's only applied for a few, although they have purchased many as a defensive tactic. Facebook is not known for licensing them or taking them to court (indeed, just the opposite: enter the Winklevii). For the most part, patent litigation is limited to companies who are either no longer trailblazers (Microsoft), are trying to suppress market competition (Apple), or don't invent anything at all (Intellectual Ventures). Where's the innovation here?
This American Life actually did a pair of shows on patents, strongly arguing that they've been harmful: companies have been driven out of business by patent trolls. Podcasters have been sued for the undeniably disruptive act of "putting sound files on the Internet." The costs to the industry are in the billions of dollars, and it disproportionately affects new players — exactly the kind of people that patents are meant to protect.
Q. So there's no such thing as a valid software patent?
A. That's actually a really interesting question. For example, last week I was reading The Code Book, by Simon Singh. Much of the book is taken up by the story of public key encryption, which underlies huge swathes of information technology. The standard algorithm was invented by a trio of researchers: Ron Rivest, Adi Shamir, and Len Adleman. As RSA, the three patented their invention and successfully licensed it to software firms all over the world.
The thing about the RSA patent is that, unlike most software patents, it is non-obvious. It's extremely non-obvious, in fact, to the degree that Rivest, Shamir, and Adleman literally spent years thinking about the problem before they invented their solution, based on even more years of thinking on key exchange solutions by Whitfield Diffie, Martin Hellman, and Ralph Merkle. RSA is genuinely innovative work.
It is also work that was independently invented by the espionage community several years before (although obviously they weren't allowed to apply for patents). Moreover, a lot of the interesting parts of RSA are in the math, and math is not generally considered patentable. Nevertheless, if there's anything that would be a worthy software patent, RSA should qualify.
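To see what that patent actually covered, here's a toy version of the RSA scheme in modern JavaScript, using the standard textbook-sized numbers (these particular primes are the classic worked example, and are uselessly small for real security — this is an illustration of the math, not an implementation anyone should use):

```javascript
// Square-and-multiply modular exponentiation, using BigInt.
function modPow(base, exp, mod) {
  var result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

var p = 61n, q = 53n;   // two (tiny) secret primes
var n = p * q;          // 3233n: the public modulus
var e = 17n;            // public exponent
var d = 2753n;          // private exponent: e * d ≡ 1 mod (p-1)(q-1)

function encrypt(m) { return modPow(m, e, n); }
function decrypt(c) { return modPow(c, d, n); }
```

The trick — that you can publish n and e to the world, and recovering d requires factoring n — is exactly the non-obvious leap that took years of work, and it's what separates this from a "four tires and an engine" patent.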
It goes without saying that matching contacts, or showing search ads, or scrolling a list based on touch, are not quite in the same league. And it's clear that patents are not creators of value in software. People aren't buying Windows computers or iPhones because they're patented. They're buying them because they run the software people want, or run on good-looking hardware, or any number of other reasons. In other words: software is valuable because of what it does, not because of how.
Q. So software shouldn't have any protection?
A. Sure, software should be protected. In fact, it already is. Code can be copyrighted, and you can sue people who take it and use it without your permission. But that's a different matter. Copyright says you can sue people who publish your romance novel. Software patents would be like suing anyone who writes about boy-meets-girl.
Q. Okay, fine. What's the answer?
A. Ultimately, we need patent reform. The steps for this are the same as any other reform, unfortunately: writing letters, asking your political candidates, and putting pressure on the government. It's not a sexy recommendation, but it's effective. If we could frame these as another type of frivolous lawsuit, the issue may even get some traction.
Personally, though, I'm trying to vote with my wallet. I'd like to not give money to companies that use patents offensively. Incidentally, this is why I'm cautiously optimistic for Valve's Steam Machines: it's much harder for me to not give money to Microsoft, since I play a lot of games on Windows (and XBox). A Linux box with a good game library and a not-terrible window manager would make my day.
Finally, there's a community site that can help. Ask Patents is a forum set up by the people who run the Stack Overflow help group for programmers. It takes advantage of a law that says regular people can submit "prior art" — previously-existing examples of the patented invention — that invalidate patents. Ask Patents has successfully blocked bad software patents from being granted in the first place, which means that they can't be used for infringement claims. Over time, success from finding prior art makes patents more expensive to file (because they're not automatically granted), which means fewer stupid ones will be filed and companies will need to compete in the market, not the courtroom.
It is never a bad time to remember that Orson Scott Card is a terrible person. But this week, as millions of people will go to theaters to see a movie based on his most famed work (sorry, Lost Boys), it is good to also remind ourselves: Ender's Game is not a good book. It's barely even a bad one. Consider the following three essays, ranked in descending order of plausibility:
Williams' story is unlikely, I think, but it's too much fun not to mention (and for a long time, his account was the only place you could read about the Nazi connection). Radford makes a stronger case, but chances are much of Ender's similarity to Hitler is just coincidence: Ender ends up on a planet of Brazilians because Card is a hack who went on a Mormon mission to Brazil as a young adult, he's a misogynist because his author is one, and he justifies his genocide with a lot of blather about "intention" because Card chickened out on the clear implication of the first book: that his protagonist really was a psychopath who wiped out an entire civilization based on an elaborate self-deception.
It's Kessel's essay that's been the most quoted over the years, and for good reason. It's a brutal deconstruction of the tropes used to build Ender's Game, and ends in a deft examination of why the book remains so popular:
It offers revenge without guilt. If you ever as a child felt unloved, if you ever feared that at some level you might deserve any abuse you suffered, Ender’s story tells you that you do not. In your soul, you are good. You are specially gifted, and better than anyone else. Your mistreatment is the evidence of your gifts. You are morally superior. Your turn will come, and then you may severely punish others, yet remain blameless. You are the hero.
Ender never loses a single battle, even when every circumstance is stacked against him. And in the end, as he wanders the lonely universe dispensing compassion for the undeserving who think him evil, he can feel sorry for himself at the same time he knows he is twice over a savior of the entire human race.
God, how I would have loved this book in seventh grade! It’s almost as good as having a nuclear device.
Like a lot of people, I did have this book in seventh grade (or earlier — I'm pretty sure I read it while attending junior high in Indiana). And I did love it as a kid, for most of the reasons that Kessel states: I was a bright kid who didn't have a lot of friends, felt persecuted and misunderstood, and struggled to find a way to express those feelings. Eventually, I grew up. Looking back on it, Ender's Game didn't really do any harm — like a lot of kids, I wasn't actually reading that critically. It's just kind of embarrassing now, and I definitely don't want to go to a theater and relive it.
Feeling embarrassed by your childhood reading material is a common rite of passage for many people, and science fiction readers probably more than others. Jo Walton refers to this as the Suck Fairy. It's tempting, when this happens, to wish we could go back in time and take these books off the shelves — or stop readers now from encountering them in the first place — but it's probably a better idea to foster discussion (a happy side effect of an active adult readership for "young adult" titles) or have alternatives ready on hand.
Recently I re-read another beloved book from my childhood: The Westing Game by Ellen Raskin. If you haven't taken a look at it lately, you really should. Apart from the titles, the two books have aged in radically different ways — in fact, it's probably better now than it was then. I remember reading it mostly as a puzzle: first to solve it, and then again to appreciate the little clues that Raskin works in. But as for the warmth, the sympathetic characterization, and most of all the humor (seriously, it's an uproariously funny book): I missed out on all of these things when I was a precocious youngster identifying with Turtle and her shin-kicking ways, just like I missed Ender's fascist tendencies.
And so ultimately, I'm not worried about young people reading Ender's Game and being influenced for the worse, because I suspect that what they take from it is not what Card actually wants them to take. It's sometimes difficult — but also crucial — to remember that the reader creates the story while reading, almost as much as the author does. Should we speak out against hateful works, and try not to give money to hatemongers? Sure. Will I be going to see Ender's Game at the local cinema? Definitely not. But I'll always understand people who have a soft spot for it anyway. Despite my bravado, despite the fact that I dislike everything it has come to stand for, I'm one of them, and I'm not going to let Card make me feel bad about that.
In the time since I last wrote about Caret, it's jumped up to 1.0 (and then 1.1). I've added tab memory, lots of palette search options, file modification watches, and all kinds of other features — making it legitimately comparable with Sublime. I've been developing the application using Caret itself since version 0.0.16, and I haven't really missed anything from native editors. Other people seem to agree: it's one of the top dev tools in the Chrome web store, with more than 1,500 users (and growing rapidly) and a 4.5/5 star rating. I'm beating out some of Google's own apps, at this point.
Belle's reaction: "Now you charge twenty bucks for it and make millions!" It's good to know one of us has some solid business sense.
Caret is already designed around message-passing for its internal APIs (as is Ace, the editing component I use), so it won't be too difficult to add external hooks, but it'll never have the same power as something like Sublime, which embeds its own Python interpreter. I can understand why Google made the security decisions they did, but I wish there was a way to relax them in this case.
I figure I have roughly six months to a year before Caret has any serious competition on Chrome OS. Most of the other editors aren't interested in offline editing or are poorly-positioned to do so for architectural reasons. The closest thing to Caret from the established players would be Brackets, which still relies on NodeJS for its back-end and can't yet run the front-end in a regular browser. They're working on the latter, and the former will be shimmable, but the delay gives me a pretty good head start. Google also has an app in the works, but theirs looks markedly less general-purpose (i.e. it seems aimed specifically at people building Chrome extensions only). Mostly, though, it's just hard to believe that someone hadn't really jumped in with something before I got there.
Between Caret, Weir, and my textbook, this has been a pretty productive year for me. I'm actually thinking that for my next project I may write another short book — one on writing Chrome apps using what I've learned. The documentation from Google's end is terrible, and I hate to think of other people having to stumble through the APIs the way I did. It might also be a good way to get some more benefit out of the small developer community that's forming around Caret, and to find out if there's actually a healthy market there or not. I'm hoping there is: writing Caret has been fun, and I'd love to have the chance to do more of this kind of development in the future.
A few years ago, I started blacklisting web sites that I didn't think were healthy: gadget sites, some of the more strident political sites, game blogs that just churned crappy content all day long. If it didn't leave me better informed, or I felt like my traffic there was supporting bad content, or if the only reason I visited was for the rush of outrage, I tried to cut it out (or at least cut it down). All in all, I think it was a good decision. I felt better about myself, at least.
This week, I added Hacker News to the list of sites I don't visit. HN is the current hotspot for tech community news--kind of a modern-day Slashdot. Unfortunately (possibly by virtue of being run by venture capital firm Y Combinator), it's also equally targeted at A) terrible startup company brogrammers and B) libertarian bitcoin enthusiasts. Browsing the links submitted by the community there is kind of like eating at dive restaurants in a new city: you'll find some winners, but the price is a fair amount of food poisoning.
For a while, I've been running a Greasemonkey script that tries to filter out the worst of the listings (sample search terms: lisp, techcrunch, hustle). This is not as easy as it sounds, because HN is written using the cutting-edge technologies of 1995: a bunch of nested tables with inline styles, served via a Lisp variant that causes constant timeouts on anything other than the front page. But even with a workable filter, at some point it's time to hang up the scripts and admit that the HN community is toxic. There's only so long you can not read the comments, especially on any thread involving sexism, racism, and other real problems that Silicon Valley would like to pretend don't exist.
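The heart of that kind of filter is simple enough. Here's a minimal sketch of the title-matching logic; the sample terms come from the paragraph above, but the function shape is mine, not the actual script (which also has to fight its way through those nested tables):

```javascript
// Terms that get a story hidden. Word boundary on "lisp" so that
// e.g. "Crisp" doesn't trigger it.
var blacklist = [/\blisp\b/i, /techcrunch/i, /hustle/i];

// Return only the story titles that don't match any blacklisted term.
function filterStories(titles) {
  return titles.filter(function (title) {
    return !blacklist.some(function (term) { return term.test(title); });
  });
}
```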
For example, here's some of the things I've been trying to ignore:
The tech bubble isn't just financial: these are signs of a community that's isolated from difference — of gender, of opinion, and of class. The venture capital system even protects them from consequences: how much money will Twitter lose this year? The fog of arrested development that hangs over Hacker News is its own argument for increased diversity in the tech industry. And it affects more than just a few comment threads, unless you also think the best use of smart people's time is the development of a $130 smoke detector that talks to your iPad.
Leaving a well-known watering hole like this is a little scary — HN is how I've stayed current on a lot of new developments in the field. It's frustrating, feeling like good information is being held hostage by a bunch of creeps. But given a choice between reading an article a couple of days after everyone else or feeling like I constantly need a shower, I'm happy to work on my patience.
John: Hey, Bush is now at 37% approval. I feel much less like Kevin McCarthy screaming in traffic. But I wonder what his base is --
John: ... you said that immediately, and with some authority.
Tyrone: Obama vs. Alan Keyes. Keyes was from out of state, so you can eliminate any established political base; both candidates were black, so you can factor out racism; and Keyes was plainly, obviously, completely crazy. Batshit crazy. Head-trauma crazy. But 27% of the population of Illinois voted for him. They put party identification, personal prejudice, whatever ahead of rational judgement. Hell, even like 5% of Democrats voted for him. That's crazy behaviour. I think you have to assume a 27% Crazification Factor in any population.
John: Objectively crazy or crazy vis-a-vis my own inertial reference frame for rational behaviour? I mean, are you creating the Theory of Special Crazification or General Crazification?
Tyrone: Hadn't thought about it. Let's split the difference. Half just have worldviews which lead them to disagree with what you consider rationality even though they arrive at their positions through rational means, and the other half are the core of the Crazification -- either genuinely crazy; or so woefully misinformed about how the world works, the bases for their decision making is so flawed they may as well be crazy.
John: You realize this leads to there being over 30 million crazy people in the US?
Tyrone: Does that seem wrong?
John: ... a bit low, actually.
I saw a CBS poll this morning stating that 25% of the public favors the shutdown of the federal government. 80 representatives (that's 18.3%, one third of the Republican caucus in the House and representing roughly 18% of the total population) signed the original manifesto leading to the shutdown. Even if the numbers are a little low, is there any remaining doubt that John Rogers' Crazification Factor remains more accurate and revealing than most of Politico on any given day?
This is what you get when you elect people who don't believe in government to political office. You cannot deal with the Suicide Caucus, because they don't recognize the legitimacy of the rules that the Congress is supposed to operate under (thus the endless parade of funding delays and filibusters over the last seven years). Besides, they don't want to negotiate. They've gotten what they wanted: the government is basically closed for business, and they couldn't be more thrilled about it.
Everybody has one style of dance that resonates with them. They may see house, or locking, or waacking, and immediately know that's what they want to do. For me, strutting clicked. I'm not particularly good at it, because I don't practice enough, but I'm tall and have a good memory for shapes and angles. The "feel" of strutting, too, is something I seem to grasp more easily than I did when I was first learning b-boy toprock. In DC, I had a pair of knowledgeable mentors, Rashaad and Future. But in Seattle, there aren't a lot of people for me to crib from, so a few weeks ago I went to San Francisco to learn from the original strutters.
Strutting is not particularly well-known, even in the dance community. You're certainly not going to see it on "So You Think You Can Dance" any time soon. But it was hugely influential in its day — it was one of the precursors to popping, and from there a lot of hip hop movement — and it's made a bit of a comeback in recent years, due in part to the advocacy of a dancer named Lonnie "Pop Tart" Greene.
The descendant of a San Francisco style called boogaloo, strutting combines party dancing and "posing" with its own particular attitude to create something different: it emphasizes strong shapes and angles formed at punctuated stops. Strutters don't pop; they "dime-stop" by halting their motion right on the beat. If you do this fast enough, or with enough force, your muscles tend to contract hard enough that your body shakes a little — that's where the pop originally comes from.
You can perform solo, but strutting's defining feature is that it's a group activity. Standing shoulder-to-shoulder in a line, strutters competed in neighborhood talent shows and dance competitions for the length of an entire song: long, complex displays of synchronized and syncopated rhythm. Even today, while certain moves like the Fresno and the Fillmore have broken out into solo form, the best way to watch strutting is in its group form.
(Warning: the audience in these clips is extremely enthusiastic.)
Rashaad and Future also showed up with something they've been working on. It's a little rough, but I really like how they combine their newer styles with strutting. It's especially interesting the way they put more three-dimensional movement into their routines to compensate for only being two people.
A lot of what I learned on the trip, technique-wise, is evident in these two videos. It's not a specific movement — it's the feel of strutting, which combines a party swagger with sharp precision. As I mentioned, strutters and boogaloo dancers don't "pop" the way we think about it now. Instead, they just stop moving so precisely with the music that it creates the illusion of popping.
Along with that incredible dime-stop proficiency comes a real intentionality for all their movements. When the really good strutters make a movement, they commit to it completely. Their gaze extends along the arm or leg, and their body leans into the motion. I've always known that this was important, but seeing people whose dancing was so stripped-down, without all the surrounding technique that poppers have built up, was revelatory (and a lot of fun to watch).
On first (or second, or third) glance, it's easy to think that Pop Tart is a little crazy. He gets names wrong in funny ways, and he's prone to outbursts about hip hop, which he feels took over and obscured the history of strutting. He's obsessed with his own biography, a relentless self-promoter who has written, directed, and filmed a movie in which assassins from the future are sent back to kill him and keep him from teaching other people the original Oakland styles. But to fixate on these things, which are undeniably a little nutty, is to misjudge the man.
Like almost all American folk dance, strutting and boogaloo come from poverty. Unlike b-boying, which had a period of exploitation that its pioneers managed (with varying degrees of success) to turn into sustainable business, strutting stayed poor, and so did its innovators. For a long time, Pop Tart was forgotten. He and the other members of his crew, PT-3000, performed on boxes in Fisherman's Wharf as living statues and robot men. This is not a career that puts you in touch with a lot of other successful artists. You don't pick up a lot of social media tips.
If I shake off some of my deeply-ingrained prejudices as a middle class, white, East coast person, Pop Tart's eccentricities look less like craziness and more like ambition. I don't think he knows exactly how to get from where he is now to the kind of fame and influence he'd like to have — but then, who does? In the meantime, he's hustling as hard as he can, and the results are not unimpressive. Sure, his movies are shot on what looks like an old VHS camcorder, but he's working to document his culture the best way he can. He digs up footage of groups that everyone else has forgotten. He records interviews with the dancers that are still around. In fact, at the BRS Alliance dance celebration, he made a point of bringing back the original dancers, having them tell their stories, and presenting a bunch of them with awards to recognize their influence, even if only in a small way.
If anything, I learned as much from the stories these dancers told as I did from watching them move. They lend context to the movements, like learning that the distinctive cross-stepping motion used during a strutting routine comes from old Meow Mix commercials, or hearing how inventions like waving and popping traveled out of Oakland and into LA. I heard from the first dancer to use Kraftwerk as a backing track, which (given the dominance of electronica in modern popping) is kind of a big deal. Indeed, that context reaches beyond the dance itself, because strutting and boogaloo are very much the product of their times.
But it's easy to imagine a time when Randolph would not have been seen that way by mainstream America, and not just in the sense of being a black man from Oakland, CA. Look at the names of the boogaloo groups: Black Resurgents, Black Messengers, Medea Sirkas, Demons of the Mind... these are names that reflect the black power movement in which they were created. The dancers weren't necessarily political, except in the sense that W. Kamau Bell once commented: "If you're black and you have opinions that don't rhyme, you're political." Their costumes and movements took inspiration from TV and movies, but also from their surroundings (there's a lot of pimp- and gang-inspired moves in the strutting repertoire).
Now, of course, these are just old guys from a bad neighborhood, trying to figure out where they fit and ride the (admittedly small) wave of rediscovery. They're still proud of where they come from, and simultaneously frustrated at having to be "rediscovered" in the first place. Lots of the speakers spent part of their time griping about Soul Train, which was kind of hilarious, when you think about it: dancers in most of the country see Soul Train as the program that helped bring African-American dance and music to a wider audience, but the Oakland dancers couldn't afford to travel down to Hollywood and dance in a studio for free, which means that strutting and boogaloo never reached the same prominence as LA styles like locking.
The boogaloos have a strong sense of regret about being passed over, even though there's probably nothing they could have done about it. Pop Tart even made a mini-documentary about the groups that never left San Francisco, called The Day Before Hip Hop. It's really obvious to them that history is written by the victors — except, can you have victors if there wasn't really a war? Nobody fought against strutting, it's just that nobody at the time really fought for it, for a whole variety of reasons only tangentially related to the dance itself.
We might as well ask how much of this history is reliable in the first place. How much can we believe? Was Oakland really the original home of huge swathes of hip hop dance? Or is it just myth-making in progress? At times like this, I like to remember the approach taken by Joe Schloss, NYU professor and late-blooming b-boy, in his groundbreaking work of hip hop dance history, Foundation: B-boys, B-girls, and Hip Hop Culture in New York:
The uprock debate embodies the benefits and liabilities of the b-boy approach to history. Full of mystery and apparent contradictions, it was never meant to be comprehensive. Each person has his or her own perspective, and each perspective is an important part of the overall fabric of urban dance history. If these stories resist being assimilated and smoothed over, perhaps that itself is where the significance lies. I would argue that b-boy history, like b-boying itself, has to be contentious. Any history that pleases everybody would, by that fact alone, lack important elements of b-boying: competition, ego, self-aggrandizement, battling. The goal of b-boy histories, like the goal of b-boying itself, is to represent yourself and your community. Is the Bronx more significant than Brooklyn? Are African Americans more important than Latinos? Is uprocking a gang dance or an anti-gang dance? It depends on where you stand, and it should.
In a way, I think it will almost be a shame for the woolly oral history of strutting to be tamed into a single, conventional narrative — even though such a simplification will probably help preserve the dance for the future. Strutting should always be a little unsettling, I think. True to the name, maybe it should strut its stuff, strike its poses, and then — when the song ends — step back into dangerous obscurity.
And as for me? Where, as Schloss says, do I stand? I have no particular authority on strutting, of course, but that doesn't mean I'm not invested. There's a lyric from Yasiin Bey's "Fear Not of Man" that I love, where he says:
People be asking me all the time, "Yo Mos, what's gettin' ready to happen with hip hop?" / ("Where do you think hip hop is goin'?") / I tell 'em, "You know what's gonna happen with hip hop: whatever's happening with us" / If we smoked out, hip hop is gonna be smoked out / If we doin' all right, hip hop is gonna be doin' all right / People talk about hip hop like it's some giant / Living in the hillside / Comin' down to visit the townspeople / We are hip hop
Sometimes it's hard for me to tell where I stand in regards to dance. Unlike a lot of people in Urban Artistry, I don't really like going to clubs. I don't battle as much as I probably should. I'm a little introverted. But while I'm not a part of strutting's history, it is part of mine. Its context — from black power to funk music to urban sprawl — is my context, as an American. And so while it's sometimes difficult for me to figure out how to represent strutting and popping respectfully, the journey is near and dear to my heart. I came back from Oakland a little more knowledgeable, a little more uncertain, and a little closer to understanding. What more could I ask?
There's a fine line between nonchalance and disregard for the player, and I'm not sure that Aquaria doesn't cross over it. It's one of the best games on the Shield right now, so I've been playing a lot of it — or, rather, alternating between playing it and looking up clues online. In a way, I respect the sheer amount of content the developers have put together, and the confidence they have in players to discover it, but I could use a little more signposting and, to be honest, a bit more challenge.
For example, the middle section of Aquaria is mostly non-linear: certain areas are locked away until you've beaten a few bosses and taken their abilities, but the order is still mostly flexible. Although it sounds great in theory, in practice this just means you're repeatedly lost and without a real goal. Having enormous maps just exacerbates the problem, because it means you'll wander one way across the world only to find out that you're not quite ready yet and need to hunt down another boss somewhere — probably all the way at the other end.
I'm goal-oriented in games, so this kind of ambiguity has always bugged me. The Castlevania titles post-Symphony of the Night suffer from this to some extent, but they usually offer something to do during the trip that makes it feel productive: levelling up your character, or random weapon drops. Aquaria has a limited cooking system, but it's only really necessary in boss fights and it rarely does anything besides offer healing and specific boosts, so it's not very compelling.
According to an interview with the developers, Aquaria was originally controlled with keyboard and mouse, and they eventually moved it to mouse-only (which came in handy when it was ported to touch devices). Every now and then the original design peeks through, like when certain enemies fire projectiles in a bullet-hell shooter pattern. The Shield's twin-stick controls make this really easy (and fun) to dodge, but since the game was intended for touch, these enemies are relatively rare, and the lengthy travel through the game tends toward the monotonous.
Look, I get that we have entered a brave new world of touch-based control schemes. For the most part, I am in favor of that — I'm always happy to see innovation and experimentation. But playing Aquaria on the Shield makes it clear that there's a lot of tension between physical and touch controls, and it's easy to lose something in the transition from the former to the latter. Aquaria designed around a gamepad (and an unobstructed screen) could be a much more interesting game. Yes, it would be harder and less accessible — but the existing game leaves us with "easy and tedious," which is arguably a worse crime.
I'm starting to think that in our rush to embrace casual, touch experiences (in no small part because of the rise of touch-only devices), we may be making assumptions about the audience that aren't true — such as the idea that it was the buttons themselves that were scary — and it's not always a net positive for game design. At its heart, Aquaria is a "core" game, not a casual game: it's just too big, and the bosses are too rough, for this to be in the same genre as Angry Birds or whatever. Compare this to Cave Story (its obvious inspiration), a game that was free to cram a ridiculous amount of non-linear content into its setting because its traditional platforming gameplay was so solid.
There is a disturbing tendency for many people to insist that there must be a winner and a loser in any choice. In the last two weeks, every tech site on the planet decided that the loser was Nintendo: why don't they just close up shop and make iPhone games? I think it's a silly idea — anyone measuring Nintendo's success now against their performance with the Wii is grading them on the wrong end of a ridiculous curve — and Aquaria only makes me feel more strongly about that. For all that smartphone gaming brings us, there are some experiences that are just going to be better with buttons and real gaming hardware. As long as that's the case, consoles are in no danger of extinction.