It is never a bad time to remember that Orson Scott Card is a terrible person. But this week, as millions of people will go to theaters to see a movie based on his most famed work (sorry, Lost Boys), it is good to also remind ourselves: Ender's Game is not a good book. It's barely even a bad one. Consider the following three essays, ranked in descending order of plausibility:
Williams' story is unlikely, I think, but it's too much fun not to mention (and for a long time, his account was the only place you could read about the Nazi connection). Radford makes a stronger case, but chances are much of Ender's similarity to Hitler is just coincidence: Ender ends up on a planet of Brazilians because Card is a hack who went on a Mormon mission to Brazil as a young adult, he's a misogynist because his author is one, and he justifies his genocide with a lot of blather about "intention" because Card chickened out on the clear implication of the first book: that his protagonist really was a psychopath who wiped out an entire civilization based on an elaborate self-deception.
It's Kessel's essay that's been the most quoted over the years, and for good reason. It's a brutal deconstruction of the tropes used to build Ender's Game, and ends in a deft examination of why the book remains so popular:
It offers revenge without guilt. If you ever as a child felt unloved, if you ever feared that at some level you might deserve any abuse you suffered, Ender’s story tells you that you do not. In your soul, you are good. You are specially gifted, and better than anyone else. Your mistreatment is the evidence of your gifts. You are morally superior. Your turn will come, and then you may severely punish others, yet remain blameless. You are the hero.
Ender never loses a single battle, even when every circumstance is stacked against him. And in the end, as he wanders the lonely universe dispensing compassion for the undeserving who think him evil, he can feel sorry for himself at the same time he knows he is twice over a savior of the entire human race.
God, how I would have loved this book in seventh grade! It’s almost as good as having a nuclear device.
Like a lot of people, I did have this book in seventh grade (or earlier — I'm pretty sure I read it while attending junior high in Indiana). And I did love it as a kid, for most of the reasons that Kessel states: I was a bright kid who didn't have a lot of friends, felt persecuted and misunderstood, and struggled to find a way to express those feelings. Eventually, I grew up. Looking back on it, Ender's Game didn't really do any harm — like a lot of kids, I wasn't actually reading that critically. It's just kind of embarrassing now, and I definitely don't want to go to a theater and relive it.
Feeling embarrassed by your childhood reading material is a common rite of passage for many people, and science fiction readers probably more than others. Jo Walton refers to this as the Suck Fairy. It's tempting, when this happens, to wish we could go back in time and take these books off the shelves — or stop readers now from encountering them in the first place — but it's probably a better idea to foster discussion (a happy side effect of an active adult readership for "young adult" titles) or have alternatives ready on hand.
Recently I re-read another beloved book from my childhood: The Westing Game by Ellen Raskin. If you haven't taken a look at it lately, you really should. Apart from the titles, the two books have aged in radically different ways — in fact, The Westing Game is probably better now than it was then. I remember reading it mostly as a puzzle: first to solve it, and then again to appreciate the little clues that Raskin works in. But as for the warmth, the sympathetic characterization, and most of all the humor (seriously, it's an uproariously funny book): I missed out on all of these things when I was a precocious youngster identifying with Turtle and her shin-kicking ways, just like I missed Ender's fascist tendencies.
And so ultimately, I'm not worried about young people reading Ender's Game and being influenced for the worse, because I suspect that what they take from it is not what Card actually wants them to take. It's sometimes difficult — but also crucial — to remember that the reader creates the story while reading, almost as much as the author does. Should we speak out against hateful works, and try not to give money to hatemongers? Sure. Will I be going to see Ender's Game at the local cinema? Definitely not. But I'll always understand people who have a soft spot for it anyway. Despite my bravado, despite the fact that I dislike everything it has come to stand for, I'm one of them, and I'm not going to let Card make me feel bad about that.
In the time since I last wrote about Caret, it's jumped up to 1.0 (and then 1.1). I've added tab memory, lots of palette search options, file modification watches, and all kinds of other features — making it legitimately comparable with Sublime. I've been developing the application using Caret itself since version 0.0.16, and I haven't really missed anything from native editors. Other people seem to agree: it's one of the top dev tools in the Chrome web store, with more than 1,500 users (and growing rapidly) and a 4.5/5 star rating. I'm beating out some of Google's own apps, at this point.
Belle's reaction: "Now you charge twenty bucks for it and make millions!" It's good to know one of us has some solid business sense.
Caret is already designed around message-passing for its internal APIs (as is Ace, the editing component I use), so it won't be too difficult to add external hooks, but it'll never have the same power as something like Sublime, which embeds its own Python interpreter. I can understand why Google made the security decisions they did, but I wish there was a way to relax them in this case.
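To make the idea concrete, here's a minimal sketch of what a message-passing command bus looks like — the names (`register`, `dispatch`, `editor:insert`) are illustrative, not Caret's actual API. An external hook would just forward messages from something like `chrome.runtime.onMessageExternal` into the same dispatcher:

```javascript
// Hypothetical sketch of a message-passing command bus, in the spirit
// of Caret's internal APIs. All identifiers here are made up.
var commands = {};

// Internal modules subscribe to named commands.
function register(name, callback) {
  commands[name] = callback;
}

// Messages are plain objects; an external plugin hook would arrive
// through the same funnel, just relayed from Chrome's messaging API.
function dispatch(message) {
  var handler = commands[message.command];
  if (!handler) {
    return { error: "Unknown command: " + message.command };
  }
  return handler(message.argument);
}

register("editor:insert", function(text) {
  return "inserted " + text;
});

dispatch({ command: "editor:insert", argument: "hello" });
```

The appeal of this design is that internal and external callers look identical to the dispatcher, so adding plugin hooks is mostly a matter of wiring, not architecture.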
I figure I have roughly six months to a year before Caret has any serious competition on Chrome OS. Most of the other editors aren't interested in offline editing or are poorly-positioned to do so for architectural reasons. The closest thing to Caret from the established players would be Brackets, which still relies on NodeJS for its back-end and can't yet run the front-end in a regular browser. They're working on the latter, and the former will be shimmable, but the delay gives me a pretty good head start. Google also has an app in the works, but theirs looks markedly less general-purpose (i.e. it seems aimed specifically at people building Chrome extensions only). Mostly, though, it's just hard to believe that someone hadn't already jumped in with something before I got there.
Between Caret, Weir, and my textbook, this has been a pretty productive year for me. I'm actually thinking that for my next project I may write another short book — one on writing Chrome apps using what I've learned. The documentation from Google's end is terrible, and I hate to think of other people having to stumble through the APIs the way I did. It might also be a good way to get some more benefit out of the small developer community that's forming around Caret, and to find out if there's actually a healthy market there or not. I'm hoping there is: writing Caret has been fun, and I'd love to have the chance to do more of this kind of development in the future.
A few years ago, I started blacklisting web sites that I didn't think were healthy: gadget sites, some of the more strident political sites, game blogs that just churned crappy content all day long. If it didn't leave me better informed, or I felt like my traffic there was supporting bad content, or if the only reason I visited was for the rush of outrage, I tried to cut it out (or at least cut it down). All in all, I think it was a good decision. I felt better about myself, at least.
This week, I added Hacker News to the list of sites I don't visit. HN is the current hotspot for tech community news--kind of a modern-day Slashdot. Unfortunately (possibly by virtue of being run by venture capital firm Y Combinator), it's equally targeted at A) terrible startup company brogrammers and B) libertarian bitcoin enthusiasts. Browsing the links submitted by the community there is kind of like eating at dive restaurants in a new city: you'll find some winners, but the price is a fair amount of food poisoning.
For a while, I've been running a Greasemonkey script that tries to filter out the worst of the listings (sample search terms: lisp, techcrunch, hustle). This is not as easy as it sounds, because HN is written using the cutting-edge technologies of 1995: a bunch of nested tables with inline styles, served via a Lisp variant that causes constant timeouts on anything other than the front page. But even with a workable filter from a technical perspective, at some point it's time to hang up the scripts and admit that the HN community is toxic. There's only so long you can not read the comments, especially on any thread involving sexism, racism, or other real problems that Silicon Valley would like to pretend don't exist.
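The core of that kind of userscript is trivial — the hard part is HN's table-soup markup, not the logic. Here's a stripped-down sketch of the keyword check (this is illustrative, not my actual Greasemonkey script):

```javascript
// Minimal sketch of a blacklist filter for listing titles.
// The term list here matches the sample search terms mentioned above.
var blacklist = ["lisp", "techcrunch", "hustle"];

function isBlocked(title) {
  var lower = title.toLowerCase();
  return blacklist.some(function(term) {
    return lower.indexOf(term) !== -1;
  });
}

// In the userscript, this check would gate a style change per row,
// something like: if (isBlocked(row.textContent)) row.style.display = "none";
```

The actual script spends most of its effort walking HN's nested tables to find the row that corresponds to each title, which is where the "technologies of 1995" tax gets paid.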
For example, here are some of the things I've been trying to ignore:
The tech bubble isn't just financial: these are signs of a community that's isolated from difference — of gender, of opinion, and of class. The venture capital system even protects them from consequences: how much money will Twitter lose this year? The fog of arrested development that hangs over Hacker News is its own argument for increased diversity in the tech industry. And it affects more than just a few comment threads, unless you also think the best use of smart people's time is the development of a $130 smoke detector that talks to your iPad.
Leaving a well-known watering hole like this is a little scary — HN is how I've stayed current on a lot of new developments in the field. It's frustrating, feeling like good information is being held hostage by a bunch of creeps. But given a choice between reading an article a couple of days after everyone else or feeling like I constantly need a shower, I'm happy to work on my patience.
John: Hey, Bush is now at 37% approval. I feel much less like Kevin McCarthy screaming in traffic. But I wonder what his base is --
John: ... you said that immediately, and with some authority.
Tyrone: Obama vs. Alan Keyes. Keyes was from out of state, so you can eliminate any established political base; both candidates were black, so you can factor out racism; and Keyes was plainly, obviously, completely crazy. Batshit crazy. Head-trauma crazy. But 27% of the population of Illinois voted for him. They put party identification, personal prejudice, whatever ahead of rational judgement. Hell, even like 5% of Democrats voted for him. That's crazy behaviour. I think you have to assume a 27% Crazification Factor in any population.
John: Objectively crazy or crazy vis-a-vis my own inertial reference frame for rational behaviour? I mean, are you creating the Theory of Special Crazification or General Crazification?
Tyrone: Hadn't thought about it. Let's split the difference. Half just have worldviews which lead them to disagree with what you consider rationality even though they arrive at their positions through rational means, and the other half are the core of the Crazification -- either genuinely crazy; or so woefully misinformed about how the world works, the bases for their decision making is so flawed they may as well be crazy.
John: You realize this leads to there being over 30 million crazy people in the US?
Tyrone: Does that seem wrong?
John: ... a bit low, actually.
I saw a CBS poll this morning stating that 25% of the public favors the shutdown of the federal government. 80 representatives (that's 18.3%, one third of the Republican caucus in the House and representing roughly 18% of the total population) signed the original manifesto leading to the shutdown. Even if the numbers are a little low, is there any remaining doubt that John Rogers' Crazification Factor remains more accurate and revealing than most of Politico on any given day?
This is what you get when you elect people who don't believe in government to political office. You cannot deal with the Suicide Caucus, because they don't recognize the legitimacy of the rules that the Congress is supposed to operate under (thus the endless parade of funding delays and filibusters over the last seven years). Besides, they don't want to negotiate. They've gotten what they wanted: the government is basically closed for business, and they couldn't be more thrilled about it.
Everybody has one style of dance that resonates with them. They may see house, or locking, or waacking, and immediately know that's what they want to do. For me, strutting clicked. I'm not particularly good at it, because I don't practice enough, but I'm tall and have a good memory for shapes and angles. The "feel" of strutting, too, is something I seem to grasp more easily than I did when I was first learning b-boy toprock. In DC, I had a pair of knowledgeable mentors, Rashaad and Future. But in Seattle, there aren't a lot of people for me to crib from, so a few weeks ago I went to San Francisco to learn from the original strutters.
Strutting is not particularly well-known, even in the dance community. You're certainly not going to see it on "So You Think You Can Dance" any time soon. But it was hugely influential in its day — it was one of the precursors to popping, and from there a lot of hip hop movement — and it's made a bit of a comeback in recent years, due in part to the advocacy of a dancer named Lonnie "Pop Tart" Greene.
The descendent of a San Francisco style called boogaloo, strutting combines party dancing and "posing" with its own particular attitude to create something different: it emphasizes strong shapes and angles formed at punctuated stops. Strutters don't pop, they "dime-stop" by halting their motion right on the beat. If you do this fast enough, or with enough force, your muscles tend to contract hard enough that your body shakes a little — that's where the pop originally comes from.
You can perform solo, but strutting's defining feature is that it's a group activity. Standing shoulder-to-shoulder in a line, strutters competed in neighborhood talent shows and dance competitions for the length of an entire song: long, complex displays of synchronized and syncopated rhythm. Even today, while certain moves like the Fresno and the Fillmore have broken out into solo form, the best way to watch strutting is in its group form.
(Warning: the audience in these clips is extremely enthusiastic.)
Rashaad and Future also showed up with something they've been working on. It's a little rough, but I really like how they combine their newer styles with strutting. It's especially interesting the way they put more three-dimensional movement into their routines to compensate for only being two people.
A lot of what I learned on the trip, technique-wise, is evident in these two videos. It's not a specific movement — it's the feel of strutting, which combines a party swagger with sharp precision. As I mentioned, strutters and boogaloo dancers don't "pop" the way we think about it now. Instead, they just stop moving so precisely with the music that it creates the illusion of popping.
Along with that incredible dime-stop proficiency comes a real intentionality for all their movements. When the really good strutters make a movement, they commit to it completely. Their gaze extends along the arm or leg, and their body leans into the motion. I've always known that this was important, but seeing people whose dancing was so stripped-down, without all the surrounding technique that poppers have built up, was revelatory (and a lot of fun to watch).
On first (or second, or third) glance, it's easy to think that Pop Tart is a little crazy. He gets names wrong in funny ways, and he's prone to outbursts about hip hop, which he feels took over and obscured the history of strutting. He's obsessed with his own biography, a relentless self-promoter who has written, directed, and filmed a movie in which assassins from the future are sent back to kill him and keep him from teaching other people the original Oakland styles. But to fixate on these things, which are undeniably a little nutty, is to misjudge the man.
Like almost all American folk dance, strutting and boogaloo come from poverty. Unlike b-boying, which had a period of exploitation that its pioneers managed (with varying degrees of success) to turn into sustainable business, strutting stayed poor, and so did its innovators. For a long time, Pop Tart was forgotten. He and the other members of his crew, PT-3000, performed on boxes in Fisherman's Wharf as living statues and robot men. This is not a career that puts you in touch with a lot of other successful artists. You don't pick up a lot of social media tips.
If I shake off some of my deeply-ingrained prejudices as a middle class, white, East coast person, Pop Tart's eccentricities look less like craziness and more like ambition. I don't think he knows exactly how to get from where he is now to the kind of fame and influence he'd like to have — but then, who does? In the meantime, he's hustling as hard as he can, and the results are not unimpressive. Sure, his movies are shot on what looks like an old VHS camcorder, but he's working to document his culture the best way he can. He digs up footage of groups that everyone else has forgotten. He records interviews with the dancers that are still around. In fact, at the BRS Alliance dance celebration, he made a point of bringing back the original dancers, having them tell their stories, and presenting a bunch of them with awards to recognize their influence, even in just a small way.
If anything, I learned as much from the stories these dancers told as I did from watching them move. It lends context to the movements, like learning that the distinctive cross-stepping motion used during a strutting routine comes from old Meow Mix commercials, or hearing how inventions like waving and popping traveled out of Oakland and into LA. I heard from the first dancer to use Kraftwerk as a backing track, which (given the dominance of electronica in modern popping) is kind of a big deal. Indeed, that context reaches beyond the dance itself, because strutting and boogaloo are very much the product of their times.
But it's easy to imagine a time when Randolph would not have been seen that way by mainstream America, and not just in the sense of being a black man from Oakland, CA. Look at the names of the boogaloo groups: Black Resurgents, Black Messengers, Medea Sirkas, Demons of the Mind... these are names that reflect the black power movement in which they were created. The dancers weren't necessarily political, except in the sense that W. Kamau Bell once commented: "If you're black and you have opinions that don't rhyme, you're political." Their costumes and movements took inspiration from TV and movies, but also from their surroundings (there's a lot of pimp- and gang-inspired moves in the strutting repertoire).
Now, of course, these are just old guys from a bad neighborhood, trying to figure out where they fit and ride the (admittedly small) wave of rediscovery. They're still proud of where they come from, and simultaneously frustrated at having to be "rediscovered" in the first place. Lots of the speakers spent part of their time griping about Soul Train, which was kind of hilarious, when you think about it: dancers in most of the country see Soul Train as the program that helped bring African-American dance and music to a wider audience, but the Oakland dancers couldn't afford to travel down to Hollywood and dance in a studio for free, which means that strutting and boogaloo never reached the same prominence as LA styles like locking.
The boogaloos have a strong sense of regret about being passed over, even though there's probably nothing they could have done about it. Pop Tart even made a mini-documentary about the groups that never left San Francisco, called The Day Before Hip Hop. It's really obvious to them that history is written by the victors — except, can you have victors if there wasn't really a war? Nobody fought against strutting, it's just that nobody at the time really fought for it, for a whole variety of reasons only tangentially related to the dance itself.
We might as well ask how much of this history is reliable in the first place. How much can we believe? Was Oakland really the original home of huge swathes of hip hop dance? Or is it just myth-making in progress? At times like this, I like to remember the approach taken by Joe Schloss, NYU professor and late-blooming b-boy, in his groundbreaking work of hip hop dance history, Foundation: B-boys, B-girls, and Hip Hop Culture in New York:
The uprock debate embodies the benefits and liabilities of the b-boy approach to history. Full of mystery and apparent contradictions, it was never meant to be comprehensive. Each person has his or her own perspective, and each perspective is an important part of the overall fabric of urban dance history. If these stories resist being assimilated and smoothed over, perhaps that itself is where the significance lies. I would argue that b-boy history, like b-boying itself, has to be contentious. Any history that pleases everybody would — by that fact alone — lack important elements of b-boying: competition, ego, self-aggrandizement, battling. The goal of b-boy histories, like the goal of b-boying itself, is to represent yourself and your community. Is the Bronx more significant than Brooklyn? Are African Americans more important than Latinos? Is uprocking a gang dance or an anti-gang dance? It depends on where you stand, and it should.
In a way, I think it will almost be a shame for the woolly oral history of strutting to be tamed into a single, conventional narrative — even though such a simplification will probably help preserve the dance for the future. Strutting should always be a little unsettling, I think. True to the name, maybe it should strut its stuff, strike its poses, and then — when the song ends — step back into dangerous obscurity.
And as for me? Where, as Schloss says, do I stand? I have no particular authority on strutting, of course, but that doesn't mean I'm not invested. There's a lyric from Yasiin Bey's "Fear Not of Man" that I love, where he says:
People be asking me all the time, Yo Mos, what's gettin' ready to happen with hip hop? (Where do you think hip hop is goin'?) I tell 'em, You know what's gonna happen with hip hop: Whatever's happening with us. If we smoked out, hip hop is gonna be smoked out. If we doin' all right, hip hop is gonna be doin' all right. People talk about hip hop like it's some giant Living in the hillside Comin' down to visit the townspeople. We are hip hop.
Sometimes it's hard for me to tell where I stand in regards to dance. Unlike a lot of people in Urban Artistry, I don't really like going to clubs. I don't battle as much as I probably should. I'm a little introverted. But while I'm not a part of strutting's history, it is part of mine. Its context — from black power to funk music to urban sprawl — is my context, as an American. And so while it's sometimes difficult for me to figure out how to represent strutting and popping respectfully, the journey is near and dear to my heart. I came back from Oakland a little more knowledgeable, a little more uncertain, and a little closer to understanding. What more could I ask?
There's a fine line between nonchalance and disregard for the player, and I'm not sure that Aquaria doesn't cross over it. As one of the best games on the Shield right now, I've been playing a lot of it — or, rather, alternating between playing it and looking up clues online. In a way, I respect the sheer amount of content the developers have put together, and the confidence they have in players to discover it, but I could use a little more signposting and, to be honest, a bit more challenge.
For example, the middle section of Aquaria is mostly non-linear: certain areas are locked away until you've beaten a few bosses and taken their abilities, but the order is still mostly flexible. Although it sounds great in theory, in practice this just means you're repeatedly lost and without a real goal. Having enormous maps just exacerbates the problem, because it means you'll wander one way across the world only to find out that you're not quite ready yet and need to hunt down another boss somewhere — probably all the way at the other end.
I'm goal-oriented in games, so this kind of ambiguity has always bugged me. The Castlevania titles post-Symphony of the Night suffer from this to some extent, but they usually offered something to do during the trip that made it feel productive--levelling up your character, or random weapon drops. Aquaria has a limited cooking system, but it's only really necessary in boss fights and it rarely does anything besides offer healing and specific boosts, so it's not very compelling.
According to an interview with the developers, Aquaria was originally controlled with keyboard and mouse, and they eventually moved it to mouse-only (which came in handy when it was ported to touch devices). Every now and then the original design peeks through, like when certain enemies fire projectiles in a bullet-hell shooter pattern. The Shield's twin-stick controls make this really easy (and fun) to dodge, but since the game was intended for touch, these enemies are relatively rare, and the lengthy travel through the game tends toward the monotonous.
Look, I get that we have entered a brave new world of touch-based control schemes. For the most part, I am in favor of that — I'm always happy to see innovation and experimentation. But playing Aquaria on the Shield makes it clear that there's a lot of tension between physical and touch controls, and it's easy to lose something in the transition from the former to the latter. Aquaria designed around a gamepad (and an unobstructed screen) could be a much more interesting game. Yes, it would be harder and less accessible — but the existing game leaves us with "easy and tedious," which is arguably a worse crime.
I'm starting to think that in our rush to embrace casual, touch experiences (in no small part because of the rise of touch-only devices), we may be making assumptions about the audience that aren't true — such as the idea that it's the buttons themselves that were scary — and it's not always a net positive for game design. At its heart, Aquaria is a "core" game, not a casual game: it's just too big, and the bosses are too rough, for this to be in the same genre as Angry Birds or whatever. Compare this to Cave Story (its obvious inspiration), a game that was free to cram a ridiculous amount of non-linear content into its setting because its traditional platforming gameplay was so solid.
There is a disturbing tendency for many people to insist that there must be a winner and a loser in any choice. In the last two weeks, every tech site on the planet decided that the loser was Nintendo: why don't they just close up shop and make iPhone games? I think it's a silly idea — anyone measuring Nintendo's success now against their performance with the Wii is grading them on the wrong end of a ridiculous curve — and Aquaria only makes me feel stronger about that. For all that smartphone gaming brings us, there are some experiences that are just going to be better with buttons and real gaming hardware. As long as that's the case, consoles are in no danger of extinction.
At this point, Caret has been in the Chrome Web Store for about a week. I think that's long enough to say that the store is a pretty miserable experience for developers.
When I first uploaded it last week, Caret had these terrible promo tiles that I threw together, mostly involving a big pile of carrots (ba dum bum). At some point, I made some slightly less terrible promo tiles, stripping it down to just bold colors and typography. Something set off the store's automated review process, and my new images got stuck in review for four days — during which time it was stuck at the very bottom of the store page and nobody saw it.
On Tuesday, I uploaded the first version of Caret that includes the go-to/command palette. That release is kind of a big deal--the palette is one of the things that people really love about Sublime, and I definitely wanted it in my editor. For some reason, this has triggered another automatic review, this one applied to the entire application. I can unpublish Caret, but I can't edit anything — or upload new versions — until someone checks off a box and approves my changes. No information has been provided on why it was flagged, or what I can do to prevent these delays in the future.
Even at the best of times, the store takes roughly 30 minutes to publish a new version. I'm used to pushing out changes continuously on the web, so slow updates drive me crazy. Between this and the approval hijinks, it feels like I'm developing for iOS, but without the sense of baseless moral superiority. What makes it really frustrating is the fact that the Play store for Android has none of these problems, so I know that they can be solved. There's just no indication that the Chrome team cares.
I was planning on publishing a separate, Google-free version of the app anyway, so I worked out how to deploy a standalone .crx file. The installation experience for these isn't great — the file has to be dragged onto the Chrome extensions list, and can't just be installed from the link — and it introduces another fun twist: despite years of promises from Google, there's no way to download the private key used to sign the Chrome store version, meaning that the two installations are treated as completely different applications.
Fair enough: I'll just make the standalone version the "edge" release with a different icon, and let the web store lag behind a little bit. As a last twist of the knife, generating a .crx package as part of a process that A) won't include my entire Git history, and B) will work reliably across platforms, is a nightmare. Granted, this is partly due to issues with Grunt, but Chrome's not helping matters with its wacky packaging system.
All drama aside, everything's now set up in a way that, if not efficient, is at least not actively harmful. The new Caret home page is here, including a link to the preview channel file (currently 3 releases ahead of the store). As soon as Google decides I'm not a menace to society, I'll make it the default website for the store entry as well.
The problems with Google's web store bug me, not so much because they're annoying in and of themselves, but because they feel like they miss the point of what packaged web apps should be. Installing Caret is fast, secure, and easy to update, just as regular web apps are. Developing Caret, likewise, is exactly as easy and simple as writing a web app (easier, actually: I abuse flexbox like crazy for layout, because I know my users have a modern browser). Introducing this opaque, delay-ridden publication step in between development and installation just seems perverse. It won't stop people from using the store (if nothing else, external installation is too much of a pain not to go through the official channel), but it's certainly going to keep me from enjoying it.
As I mentioned in my Chromebook notes, one of the weak points for using Chrome OS as a developer is the total lack of a good graphical editor. You can install Crouton, which lets you run Vim from the command line or even run a full graphical stack. But there aren't very many good pure text editors that run within Chrome OS proper — most of the ones that do exist are tied to hosted services like Cloud9 or Nitrous. If you just want to write local files without a lot of hassle, you're out of luck.
I don't particularly want to waste what little RAM the Chromebook has running a whole desktop environment just for a notepad, and I'm increasingly convinced that Vim is a practical joke perpetuated by sadists. So I built the Chrome OS editor I wanted to have as a packaged app (just in time!), and posted it up in the store this weekend. It's 100% open source, of course, and contributions are welcome.
Caret is a shell around the Ace code editor, which also powers the editor for Cloud9. I'm extremely impressed with Ace: it's a slick package that provides a lot of must-have features, like syntax highlighting, multiple cursors, and search/replace, while still maintaining typing responsiveness. On top of that base, Caret adds support for tabbed editing, local file support, cloud settings storage, and Sublime-compatible keystrokes.
In fact, Sublime has served as a major inspiration during the development of Caret. In part, this is just because it's the standard that web developers expect to be met, but also because it got a lot of things right in very under-appreciated ways. For example, instead of having a settings dialog that adds development complexity, all of Sublime's settings are stored in JSON files and edited through the same window as any other text file — the average Sublime user probably finds this as natural as a graphical interface (if not more so). Caret uses the same concept for its settings, although it saves the files to Chrome's sync service, so all your computers can share your preferences automatically.
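A minimal sketch of how JSON settings plus sync storage fit together: chrome.storage.sync is Chrome's real API for synced app data, but the key name, the defaults, and the helper below are illustrative assumptions, not Caret's actual code.

```javascript
// Illustrative defaults — not Caret's real settings schema.
var defaults = {
  fontSize: 12,
  useSoftTabs: true,
  theme: "ace/theme/chrome"
};

// Pure helper: overlay the user's overrides onto the defaults.
function mergeSettings(base, overrides) {
  var merged = {};
  Object.keys(base).forEach(function(key) { merged[key] = base[key]; });
  Object.keys(overrides).forEach(function(key) { merged[key] = overrides[key]; });
  return merged;
}

// In the app, the user's JSON file would come from the sync service, so every
// signed-in Chrome instance shares it (the "user.json" key is an assumption):
// chrome.storage.sync.get("user.json", function(data) {
//   var settings = mergeSettings(defaults, JSON.parse(data["user.json"] || "{}"));
//   // ...apply settings to the editor...
// });
```

Because the settings file is just text, saving it in the editor and pushing the parsed result through a merge like this is all the "preferences UI" you need.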
The current release of Caret, 0.0.10, is usable enough that I think you could do serious editing with it — I've certainly done professional work with less effective tools, including the initial development on Caret itself — but I'm on a roll adding features and expect to have a lot of improvements made by the end of next week. My first priorities are getting the keybindings into full working condition and adding a command palette, but from that point on it's mostly just polish, bugfixes, and investigating how to get plugin support past Chrome's content security policy. Once I'm at 1.0, I'll also be posting a standalone CRX package that you can use to install Caret without needing a Google account (it'll even auto-update for you).
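For flavor, a Sublime-style command palette usually boils down to subsequence matching over command names: every letter of the query must appear in the candidate, in order, but not necessarily adjacent. This is a generic sketch of that technique, not Caret's planned implementation.

```javascript
// Returns true if every character of query appears in candidate, in order.
function fuzzyMatch(query, candidate) {
  var q = query.toLowerCase();
  var c = candidate.toLowerCase();
  var qi = 0;
  for (var ci = 0; ci < c.length && qi < q.length; ci++) {
    if (c[ci] === q[qi]) qi++;
  }
  return qi === q.length;
}

// Filter a list of command names down to the fuzzy matches.
function filterCommands(query, commands) {
  return commands.filter(function(name) {
    return fuzzyMatch(query, name);
  });
}
```

Typing "sf" against a list like ["Save File", "Open File", "Close Tab"] surfaces "Save File" while dropping commands that don't contain those letters in order; a real palette would also rank the matches, but the filtering core looks like this.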
Working with Chrome's new packaged app support has been rough at times: there are still a lot of missing capabilities, and calling the documentation "patchy" is an insult to quilts everywhere. But I am impressed with what packaged apps can do, not least the ease of installation: if you have Chrome, you can now pretty much instantly have a professional-grade text editor available, no matter your operating system of choice. This has always been a strong point for web apps anyway, but Chrome apps combine that with the kinds of features that have typically been reserved for native programs: local file access, real network sockets, or hardware device access. There's a lot of potential there.
If you'd like to help, even something as simple as giving Caret a chance and commenting with your impressions would be great. Filing bugs would be even better. Even if you're not a programmer, having a solid document editor may be something you'd find handy, and together we can make that happen.
Let's say that you're making a new game console, and you're not one of the big three (Sony, Microsoft, and Nintendo). You can't afford to take time for developers to get up to speed, because you're already at a mindshare deficit. So you pick a commodity middleware that runs on a lot of hardware, preferably one that already has lots of software and a decent SDK. These days that means using Android, which is why most of the new microconsoles (Ouya, Gamestick, Mojo) are just running re-skinned versions of Android 4.x.
Nvidia's Shield is no different in terms of the underlying OS, but it does change the form factor compared to the other Android microconsoles. Instead of a set-top box or HDMI stick, it effectively crams the company's ridiculously powerful Tegra 4 chipset into an Xbox controller, and then bolts on an LCD screen. I like Android, I like buttons, and I spend a lot of time bored on a bus during my commute, so I bought one late last week.
It's a bulky chunk of plastic, for sure. I don't particularly want to try throwing both it and the Chromebook into the same small Timbuk2 bag. But in the hand it feels almost exactly like an Xbox 360 controller — meaning it's very comfortable, and not at all cumbersome. It's definitely the best package I've ever used for emulators: playing GBA games feels pretty much like the real thing, except with a much larger, prettier screen. I'd have bought it just for emulation, which is well-supported on Android these days.
Actual Android games are kind of a mixed bag. I own a fair number of them, between the occasional Play Store purchase and all the Humble Bundles, and most of them aren't designed for gamepad controls. The Shield does have a touchscreen (as well as the ability to use the right thumbstick as a mouse cursor), but the way it's set up doesn't promote touch-only gaming: there's no good way to hold the screen while the body of the controller sits in the way, and portrait mode is even more awkward.
But if the developer has added gamepad support, the experience is really, really good. I've been playing Asphalt 8, Aquaria, and No Gravity lately, and feeling pretty satisfied. For a lot of games, particularly traditional genres like racing or shooters that require multiple simultaneous inputs, you just can't beat having joysticks and physical buttons. It also helps showcase the kinds of graphics that phones/tablets can pump out if your thumbs aren't always blocking the screen.
So the overall software situation looks a little lopsided: lots of great emulators, but only a few native titles that really take advantage of the hardware. I'm okay with this, and I actually expect it to get better. Since almost all the new microconsoles are Android-based, and almost all of them use gamepads (for which there's a standard API), it's only going to be natural for developers to add controller support to their games. I think the real question is going to be whether Android (or any mobile OS) can support the kinds of lengthy, high-quality titles that have been the standard on traditional, $40/game consoles.
If Android manages to become a home for decent "core" games, it'll probably be due to what Chris Pruett, a game developer and former Android team member, calls out in this interview: the implicit creation of a "standardized" console platform. Instead of developers needing to learn completely new systems with every console generation, they can write for a PC-like operating system across many devices (cue "fragmentation" panics). Systems like the Shield, which push the envelope for portable graphics, are going to play a serious role in that transition, whether or not the device is successful in and of itself.
The other interesting question if microconsoles take off will be whether there's a driver for innovation there. In the full-sized console space, it's been relatively easy for the big three companies to throw out crazy ideas from time to time, ranging from Kinect and Eyetoy to pretty much everything Nintendo's done for the last decade. PCs have been much slower to change, a fact that has frustrated some designers. Are microconsoles more like desktop computers, in that they have a standard OS and commodity hardware? Or are they more like regular consoles, since they're cheap enough to make crazy gambles affordable?
The Shield, perhaps unsurprisingly from Nvidia, points to the former. It's an unabashedly traditional console experience, from the emphasis on graphics to the eight-button controller. It's good at playing the kind of games that you'd find on a set-top box (or indeed, emulating those boxes themselves), but it's probably not the next Wii: you're buying iteration, not innovation--technologically, at least. It just so happens that after a couple of years of trying to play games with only a touchscreen, sometimes that's exactly what I want.
This week, Internet law commentary site Groklaw shut down, citing the lack of privacy in a world where the government is (maybe, possibly) reading all your e-mail. On the one hand, you can argue that this is evidence of dangerous chilling effects from surveillance on the fourth estate. On the other hand, shutting down a public blog (one that's focused on publicly-available legal filings) because the NSA can read your correspondence seems... ill-considered, but sadly not atypical.
In the initial wake of the NSA wiretapping stories, David Simon, author of The Wire, wrote a series of essays saying, effectively, "Welcome to the security state, white people."
Those arguing about scope are saying, in a backhanded way, that thousands of Baltimoreans, predominantly black, can have their data collected for weeks or months on end because they happened to use a string of North Avenue payphones, because they have the geographic misfortune to live where they do. And it’s the same thing when it’s tens of thousands of Baltimoreans, predominantly black, using a westside cell tower and having their phone data captured. That’s cool, too. That’s law and order, and constitutionally sound law and order, at that. But wait: Now, for the sake of another common societal goal — in this case, counter-terror operations — when it’s time for all Americans to ante in with the same, exact legal intrusion, the white folks, the middle-class, the affluent go righteously, batshit, Patrick-Henry quoting crazy? Really?
Whether you find these situations comparable will probably indicate how credible you find Simon's argument in general. It's important to note that he's not trying to say we should just roll over for the NSA. The question he raises is one of social justice: when we talk about fixing these problems, are we worried about strengthening protections for everyone? Or just in ways that will preserve privacy for people who can afford it? What Simon doesn't say is that technological solutions are mutually exclusive with that social justice: without fail, they fall into the latter category, protecting only those with the money and expertise to use them.
By this point, there's been a lot of ink spilled on how to "protect yourself" from the NSA. People write long how-to guides on setting up a secure mail server (like this hilariously long "two hour" guide) or using PGP encryption. None of this is manageable by normal human beings: speaking as someone who has actually set up a private, unencrypted mail server, it's completely out of reach for all but the most devoted shut-ins. You could not pay me enough to edit my Postfix config again, much less try to add encryption to it.
Okay, so the open-source situation is rough at best. That scratching sound you hear is a million start-ups raiding their trust funds to create the new Shiny, User-Friendly Crypto Solution. None of them will answer the following questions:
I am increasingly uncomfortable with all of this technocratic rhetoric — "the solution to our political problem is more software" — because it sounds an awful lot like "the solution to a dangerous government is more guns (and particularly more guns for white people)" from the NRA. Both arguments are misguided, but more importantly they both invoke a siege mentality. They assume that nothing can be done as a community, or even at all. Instead, their response is to hole up in a bunker and look out for number one.
Personally, I think the great thing about our system of government is that it is designed to be rebuilt on a regular basis. There is no law in the USA that can't be changed. Everything up to and including the Constitution is under debate, if you can convince enough people. Granted, activism requires participation and cooperation, and both of those (especially compared to buying a firearm or coding a protocol) are hard. But they are robust solutions that address the wider problem for everyone, instead of merely fulfilling someone's resistance fighter fantasy.
It's easier to look for loopholes and clever fixes. It's easier to write manifestos for (just to pick on a single random example that popped up while I was writing this) "a better web" through framework improvements or decentralized software. But neither of those actually changes anything. At best, they're workarounds. At worst, they're snake oil. Take whatever actions you want online--write new code, sign petitions, or unpublish your blog. Until that energy is matched offline, with old-fashioned, inefficient politics, you're just wasting your time.