On Monday, I'll be joining the Seattle Times as a newsroom web developer, working with the editorial staff on data journalism and web projects there. It's a great opportunity, and I'm thrilled to be making the shift. I'm also sad to be leaving ArenaNet, where I've worked for almost two years.
While at ArenaNet, I never really got to work on the kinds of big-data projects that drew me to the company, but that doesn't mean my time here was a loss. I had my hands in almost every guildwars2.com product, ranging from the account site to the leaderboards to the main marketing site. I contributed to a rewrite of some of the in-game web UI as a cutting-edge single-page application, which will go out later this year and looks tremendously exciting. A short period as the interim team lead gave me a deep appreciation for our build system and server setup, which I fully intend to carry forward. And as a part of the basecamp team, I got to build a data query tool that incorporated heatmapping and WebGL graphing, which served as a testbed for a bunch of experimental techniques.
Still, at heart, I'm not a coder: I'm a journalist who publishes with code. When we moved to Seattle in late 2011, I figured that I'd never get the chance to work in a newsroom again: the Times wasn't hiring for my skill set, and there aren't a lot of other opportunities in the area. But I kept my hand in wherever possible, and when the Times' news apps editor took a job in New York, I put out some feelers to see if they were looking for a replacement.
This position is a new one for the Times — they've never had a developer embedded in their newsroom before. Of course, that's familiar territory for me, since it's much the same situation I was in at CQ when I was hired to be a multimedia producer even though no one there had a firm idea of what "multimedia" meant, and that ended up as one of the best jobs I've ever had. The Seattle Times is another chance to figure out how embedded data journalism can work effectively in a newsroom, but this time at a local paper instead of a political trade publication: covering a wider range of issues across a bigger geographic area, all under a new kind of deadline pressure. I can't wait to meet the challenge.
I've owned an Nvidia Shield for a little under a year now, and my impressions haven't entirely changed: I use it most often as a portable emulator, and it's wonderful for that. I beat Mother 3, Drill Dozer, and Super Metroid a while back, and I'm working my way through Final Fantasy 6 now.
But there are more Android games that natively support physical controls now, especially as the Ouya and add-on joysticks have raised the profile for Android gaming. It's not a huge library, but between Humble Bundles and what's in the Google Play store, I certainly don't feel cheated. If you're thinking about picking one up, here's what's good (and what's merely playable) so far.
Aquaria may be the best value for the dollar on Shield, which makes it weird that apparently you can't buy it for Android anymore. A huge, sprawling Metroid-alike set underwater, with a beautifully-painted art style, it's the first game that I played where the Shield's controls not only worked, they worked really well (which figures, since it was developed for Xbox controls alongside mouse and touch). If you managed to nab this in an old Humble Bundle, it's well worth the installation.
Actually the third in a series of "tower offense" games, where you send a small group of tanks through a path filled with obstacles, Anomaly 2 is one of the weird cases where the physical controls work, and are very well-tuned, but you still kind of wish the game was on a touchscreen. Playing the earlier Anomaly titles on a phone, you'd get into a groove of tapping icons to balance your resources, targeting, and path. It had a nice Google Maps fluidity to it, and that kind of speed suffers a little bit when panning around via thumbstick. It's still worth a look, but probably better played on a touch device.
In contrast, Badlands seems like a poor match for the Shield — it's a single-press game similar to any number of other smartphone titles (Flappy Bird, Tiny Wings, etc). But there's one distinguishing factor, which is that the triggers on the Shield (which are mapped to the "flap" action) are fully analog, so the harder you pull the faster the onscreen character flies. It's a small change, but it completely alters the feel of the game for the better. The layered, 2D art style is also gorgeous, and the sound design is beautiful, but on the other hand I actually have no idea what's going on, or why some little black blobby creature is trying to travel from left to right.
Most of these games come from the Humble Bundles, which are almost always worth throwing $5 at, but I actually bought Clarc from the Google Play store. It plays a bit like Sokoban, mixed with Portal 2's laser puzzles and Catherine's block/enemy entrapment. Previously released on Ouya, the controls are still solid on the Shield, and the puzzles follow a nice pattern of seeming impossible, then seeming obvious once they're worked out. A super-fast checkpoint system also helps. It's cute, funny, and good for about 6 hours of serious play.
I'm in favor of anything that puts Crazy Taxi on every platform in existence, but only if it's coded well. The problem is that while this port supports the gamepad, it's hamstrung by the adaptations made for phones — namely, the Crazy Drift can't be triggered manually, and the Crazy Dash feels sluggish. In a game where you need to be drifting or dashing almost all the time, this pretty much ruins your ability to run the map. I'd say to skip this unless it's on sale.
Gunman Clive was originally released on the PS Vita, and it shows: a cel-shaded platformer with a strong Contra influence, this is another bite-sized chunk of gameplay. It does seem to be missing some of the bonus features from the original release, but there's still plenty of variety, and some huge, fun bosses to fight. Considering that it's only a couple of bucks, it's well worth the price if you're in the mood for some neo-retro shooting.
Speaking of retro, one of my favorite discoveries is the games that Orange Pixel has been tossing out for all kinds of platforms, particularly Gunslugs and Heroes of Loot. Both are procedurally-generated takes on classic games (Metal Slug and Gauntlet respectively) with a pixel-art design and a goofy sense of humor. The rogue-like randomization of the levels makes both of them compulsively playable, too. They're great time-wasters.
One of Gameloft's derivative mobile clones, NOVA 3 is trying very hard to either be Crysis or Halo. It doesn't really matter which since the result is just boring man-in-suit shooting, with sloppy, ill-configured thumbstick controls. All that, and it's still one of the more expensive titles in this list. Definitely skip this one.
Rochard was released on Steam a while back, and then re-released just for Shield this spring. It's a clever little puzzle-platformer that's based around a Half-Life 2-style gravity gun, with some light combat mixed in. It would probably be better without the latter: the AI is generally terrible, and the weapons aren't inspiring. At its best, Rochard has you toggling low-gravity jumps, stacking crates, and juggling power cells to disable force fields, and those are the parts that make it worth playing.
Finally, fans of shooters have plenty of options (including remakes of R-Types I and II), but there's something to be said for time-travel epic Sine Mora. Although it makes no sense whatsoever, it's a great bullet-hell shmup with a strong emphasis on replayability through different ships and abilities, score attack modes, and boss fights. I love a good shooter, even if I'm terrible at them, and this is no exception.
What's missing from the games on Shield so far? I'd like to see more tactical options, a la Advance Wars or XCOM (which has a port, but doesn't understand gamepads). I'd appreciate a good RPG. And I'd love to see a real, serious shooter that's not a tossed-off Wolfenstein demake. But it's worth also understanding why these games don't exist: the economics of the mobile market don't support them. When your software sells for $5 a pop, maximum, you can't afford to do a lot of content development or design.
The result, except for ports from more sustainable platforms, is a bunch of quick hits instead of real investments. Almost all the games above, the ones that are worth playing at least, were either released on PC/console first, or simultaneously. The good news is that tools like Unity and Unreal Engine 4 promote simultaneous mobile/PC development. The bad news is that getting better games for mobile may mean cheapening development on the big platforms. If you thought that consoles were ruining PC game design before, wait until phones start to make an impact.
Pretend, for a second, that you opened up your web browser one day to buy yourself socks and deodorant from your favorite online retailer (SocksAndSmells.com, maybe). You fill your cart, click buy, and 70% of your money actually goes toward foot coverings and fragrance. The other portion goes to Microsoft, because you're using a computer running Windows.
You'd probably be upset about this, especially since the shop raised prices to compensate for that fee. After all, Microsoft didn't build the store. They don't handle the shipping. They didn't knit the socks. It's unlikely that they've moved into personal care products. Why should they get a cut of your hard-earned footwear budget just because they wrote an operating system?
That's an excellent question. Bear it in mind when reading about how Comixology removed in-app purchases from their comic apps on Apple devices. I've seen a lot of people writing about how awful this is, but everyone seems to be blaming Comixology (or, more accurately, their new owners: Amazon). As far as I can tell, however, they don't have much of a choice.
Consider the strict requirements for in-app purchases on Apple's mobile hardware: every digital purchase has to go through Apple's own payment system, apps aren't allowed to link to (or even mention) an outside store, and Apple takes a 30% cut of each sale.
Apple didn't write the Comixology app. They didn't build the infrastructure that powers it, or sign the deals that fill it with content. They don't store the comics, and they don't handle the digital conversion. But they want 30 cents out of every dollar that Comixology makes, just for the privilege of manufacturing the screen you're reading on. If Microsoft had tried to pull this trick in the 90s, can you imagine the hue and cry?
This is classic, harmful rent-seeking behavior: Apple controls everything about their platform, including its only software distribution mechanism, and they can (and do) enforce rules to effectively tax everything that platform touches. There was enough developer protest to allow the online store exception, but even then Apple ensures that it's a cumbersome, ungainly experience. The deck is always stacked against the competition.
Unfortunately, like the proverbial frog in slowly-heating water, most people have been simmering in this arrangement for a few years now, and they don't seem to notice they're being cooked. Indeed, you get pieces like this one instead, which manages to describe the situation with reasonable accuracy and then (with a straight face) proposes that Apple should have more market power as a solution. It's like listening to miners in a company town complain that they have to travel a long way for shopping. If only we could just buy everything from the boss at a high markup — that scrip sure is a handy currency!
It's a shame that Comixology was bought by Amazon, because it distorts the narrative: Apple was found guilty of collusion and price fixing after they worked with book publishers to force Amazon onto an agency model for e-books, so now this can all be framed as a rivalry. If a small company had made this stand, we might be able to have a real conversation about how terrible this artificial marketplace actually is, and how much value is lost. Of course, if a small company did this, nobody would pay attention: for better or worse, it takes an Amazon to opt out of Apple's rules successfully (and I suspect it will be successful — it's worked for them on Kindle).
I get tired of saying it over and over again, but this is why the open web is important. If anyone charged 30% for purchases through your browser, there would be riots in the street (and rightly so). For all its flaws and annoyances, the only real competition to the closed, exploitative mobile marketplaces is the web. The only place where a small company can have equal standing with the tech giants is in your browser. In the short term, pushing companies out of walled gardens for payments is annoying for consumers. But in the long term, these policies might even be doing us a favor by sending people out of the app and onto the web: that's where we need to be anyway.
If you've ever wanted to get in touch with more people who are either unhinged or incredibly needy (or both), by all means, start a modestly successful open source project.
That sounds bitter. Let me rephrase: one of the surprising aspects of open-sourcing Caret has been that much of the time I spend on it does not involve coding at all. Instead, it's community management that absorbs my energy. Don't get me wrong: I'm happy to have an audience. Caret users seem like a great group of people, in general. But in my grumpier moments, after closing issue requests and answering clueless user questions (sample, and I am not making this up: "how do I save a file?"), there are times I really sympathize with project leaders who simply abandon their code. You got this editor for free, I want to say: and now you expect me to work miracles too?
Take a pull request, for example. That's when someone else does the work to implement a feature, then sends me a note on GitHub with a button I can press to automatically merge it in. Sounds easy, right? The problem is that someone may have written that code, but it's almost guaranteed that they won't be the one maintaining it (that would be me). Before I accept a pull request, I have to read through the whole thing to make sure it doesn't do anything crazy, check the code style against the rest of Caret, and keep an eye out for how these changes will fit in with future plans. In some cases, the end result has to be a nicely-worded rejection note, which feels terrible to write and to receive. Either way, it's often hours of work for something as simple as a new tab button.
These are not new problems, and I'm not the first person to comment on them. Steve Klabnik compares the process to being an "open source gardener," which horrifies me a little since I have yet to meet a plant I can't kill. But it is surprising to me how badly "social" code sites handle the social part of open source. For example, on GitHub, they finally added a "block" feature, but there are no fine-grained permissions — it's all or nothing, on a per-user basis. All projects there also automatically get a wiki that's editable by any user, whether they own the repo or not, which seems ripe for abuse.
Ultimately, the burden of community management falls on me, not on the tools. Oddly enough, there don't seem to be a lot of written guides for improving open source management skills. A quick search turned up Producing Open Source Software by Karl Fogel, but otherwise everyone seems to learn on their own. That would do a lot to explain the wide difference in tone between projects, like the gap I see between Chromium (always pleasant) and Mozilla (surprisingly abrasive, even before the Eich fiasco).
If I had a chance to do it all again, despite all the hassle, I would probably keep all the communication channels open. I think it's important to be nice to people, and to offer help instead of just dumping a tool on the world. And I like to think that being responsive has helped account for the nearly 60,000 people who use Caret weekly. But I would also set rules for myself, to keep the problem manageable. I'd set times for when I answer e-mails, or when I close issues each day. I'd probably disable the e-mail subscription feature for the repository. I'd spend some time early on writing up a style guide for contributors.
All of these are ways of setting boundaries, but they're also the way a project gets a healthy culture. I have a tremendous amount of respect for projects like Chromium that manage to be both successful and — whenever I talk to their organizers — pleasant and understanding. Other people may be able to maintain that kind of demeanor full-time, but I'm too grumpy, and nobody's compensating me for being nice (apart from the one person who sends me a quarter on Gittip every week). So if you're in contact with me about Caret, and I seem to be taking a little longer these days to get back to you, just remember what you're paying for the service.
This book is a weird beast. Set in Britain around 600 AD, around the time that the island was converting to Christianity, it follows a woman who would eventually become St. Hilda of Whitby (no, I don't know who she is either). Hild is a seer from an early age, not really because she has any mystical powers but more because she's been raised by her mother to be a highly-trained political operator, surrounded by people who aren't looking much past their own self-interest. Caught between the Catholic church, Irish war parties, and her own hostile king, Hild spends much of the book trying to figure out how to keep herself and her family safe by predicting events before anyone else realizes what's going on.
The elevator pitch for this — Dune if Paul Atreides was a woman in the middle ages — is so good, it's all the more annoying that Hild herself comes across as one-dimensional and unrealistic. She's setting policy by the age of ten, and running large chunks of the country by 16. It's not really a Mary Sue — Hild has plenty of flaws, and regularly makes mistakes — so much as it's merely undramatic. The narration tends to tell, rather than show, with little in the way of suspense or surprise. Griffith's goal, at least in part, seems to be to use Hild as a critique of passive female characters in fantasy literature, which is a fine goal. It's frustrating that she seems to have forgotten to make her very interesting in the process.
This book is often cited on the NICAR discussion list as the go-to textbook for data journalists, but I'd never read it. The Kindle version is the 2002 4th edition, which seems to be the newest copy. As a result, parts of it are dated or a little "quaint," but for the most part I think it actually holds up to its reputation. Meyer keeps a light touch throughout the book, walking reporters through standard statistical tests, surveys and polling, and databases without getting bogged down into too much operational detail. There's a lot of "here's the formula, and here's where to go to learn more," which seems reasonable.
Being a textbook aimed at an undergraduate audience, Precision Journalism is inadvertently revealing as much for what it thinks students won't know as for what it explicitly teaches. For example, there's an early chapter that covers probability, which makes sense: probability is confusing, and many people get it wrong even after a statistics class. I'm a little snobbier about the following chapter, in which Meyer details how to figure percentage change and change in percentage (subtly different concepts). Part of me is glad that it's being covered. Another part is annoyed that students don't know it already.
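The distinction Meyer draws is easy to show with a few lines of code — the figures below are invented purely for illustration:

```python
def pct_change(old, new):
    """Relative change, expressed as a percentage of the old value."""
    return (new - old) / old * 100

# A budget that grows from $200M to $250M changed by 25 percent:
print(pct_change(200, 250))  # 25.0

# But if a candidate's support rises from 40% to 50%, a reporter should
# say it rose 10 percentage *points* -- even though, relative to the
# starting value, that is also a 25 percent increase:
old_rate, new_rate = 40, 50
print(new_rate - old_rate)             # 10 (percentage points)
print(pct_change(old_rate, new_rate))  # 25.0 (percent)
```

Conflating the two — reporting a 10-point rise as "10 percent" — is exactly the kind of small numerical slip the book is trying to train out of reporters.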
That said, Meyer's enthusiasm and practical outlook on what we now call "data journalism" really resonated with me. I'd like to have seen more emphasis on SQL instead of SAS, but that's nitpicking. For the most part, Precision Journalism does a great job of covering the strengths and weaknesses of computer-assisted reporting, with lots of examples and wry humor. I guess there's a reason it's a classic.
The classic economics origin story — that money was invented because barter was too clumsy — turns out to be a complete fabrication, despite the efforts of decades of anthropologists trying to find such a barter society. Instead, the historical record shows that people in non-money societies are linked by an interwoven network of casual debts and favors, not strict one-for-one exchanges. We invented money not to supplant barter, but when we needed a method of exchange that didn't involve trust — usually to give soldiers a way to pay for things when they camped somewhere, given that they were only temporary occupiers and not accountable for the same kind of debts as a neighbor.
This is not new research, apparently — Graeber complains that anthropologists have been trying to convince economists to find a new origin story for years — but it was new to me. The realization that the foundational mythology of economics is a fairy tale doesn't disprove its validity as a field, but it does raise a lot of really interesting questions. Graeber, a former leader within the Occupy movement, certainly pulls no punches in his criticisms.
The rest of the book is good and similarly thought-provoking, but it can't help but seem a bit underwhelming. Graeber works his way forward methodically through all the ways that we conceptualize obligations, then through the history of debt and payment up through the modern age. At times, this is fascinating, especially when he discusses "reversions" from a monetary economy to an informal debt economy. Ultimately, the book builds to a theory of international politics that ties debt to "tribute." Is it convincing? For my part, not entirely, no. But it's a fascinating and deeply-researched argument.
Karen Traviss is one of those writers who makes me resent the licensed-property industry a little bit. A talented genre writer — her Wess'har books are a sharp and unsettling rumination on politics and veganism — Traviss gets tapped a lot to write tie-in novels for movies and games. She's good enough that the result sometimes transcends its origin, so every now and then I'll give one a shot. The Kilo-Five books are basically what you get if you cross Halo's backstory with a spy yarn.
Set between the third and fourth games, the Kilo-Five books bear little resemblance to the action of the source material. There aren't a lot of firefights on offer: instead, the plot bears more resemblance to Operation Mincemeat, the WWII deception in which the Allies planted fake invasion plans on a corpse to draw German attention away from the landing in Sicily. Having won a war against hostile aliens, the books' human protagonists are working covertly to keep them destabilized by creating civil unrest and sabotaging infrastructure. It's also a subversive take on the macho warrior spirit of the Halo franchise, which makes the Amazon reviews from wounded fans almost worth the price of admission. I'm still glad Traviss is getting back to original fiction, though.
When I was a kid, my dad went to a second-hand bookstore and bought ten or fifteen of the Tom Swift Jr. pulp novels for me. Even though at that point they were probably thirty years old, dated with golly-gee-whiz references to the wonders of atomic power (oh, to have lived in the uncomplicated world before Three Mile Island), I read them cover to cover multiple times. Tom Swift, of course, was a product of the Stratemeyer Syndicate and its potboiler formula — the same one that powered the Hardy Boys and Nancy Drew, neither of which I read but which I'm sure I would have found equally compelling.
Girl Sleuth is nominally a history of Nancy Drew, but it also serves as a look at the Stratemeyer dynasty: started by an enterprising writer named Edward Stratemeyer, then carried on by his daughter Harriet when he passed away. It's also the story of Mildred Wirt, the woman who wrote almost all of the original Nancy Drew books, but was for years hidden behind the syndicate's pen name, Carolyn Keene. Rehak traces the evolution of the character, as well as the parallel tension between the younger Stratemeyer, who wrote many of the series outlines, and Wirt, an adventurous newspaper journalist who churned out an unthinkable number of pages for the series. Both women believed, not without reason, that they were the real author of Nancy Drew.
As much as anything else, Rehak's re-telling is a fascinating look at the lifecycle of pop culture. Nancy Drew began as a semi-disreputable pulp sensation: hated by librarians, but a hot commodity among kids. For whatever reason, the series took off, and was beloved enough that (like my Tom Swifts) it was passed on to a new generation, who took the old stories and found new contemporary values in them. In a way, it could be argued that she was as much a creation of the readers as of either of her "authors." Transformed by the changing youth culture of the 20th century, Nancy Drew became a proto-feminist icon, then an American tradition, and is now an article of nostalgia. Rehak seems optimistic that she can adapt even further, but I wonder if that's not belaboring the point. Sometimes a good story should just end.
My students were sturdy and patient guinea pigs: source control must have been a shock since many of them had only recently learned about FTP and remote filesystems. Some of them seemed suspicious about the whole "files" thing to begin with, and for them I could only offer my sympathies. I was asking a lot, on top of learning a new language with unfamiliar constraints of its own.
Midway through the quarter, though, workflows developed and people adjusted. I was no longer spending my time answering Git questions and debugging commit issues. As an instructor, it was hugely successful: pulling source code is much easier than using "view source" on hosted pages, and commenting line-by-line on GitHub commits is far superior to code critique via e-mail. I have no qualms about using Git in class again, but shortening the adjustment period is a priority for me.
Using software in a classroom is an amazing way to discover failure cases you would otherwise never see in a million years, and this was no exception. Add the fact that I was teaching it for the first time, and some fun obstacles cropped up during the first few weeks of class. With those early stumbles in mind, I'm changing my approach next quarter.
For a start, students will be connecting to their servers over SSH to debug and edit their PHP, so I'll be teaching Git from the command line instead of using graphical tools like GitHub for Windows. This sounds more complicated, but it means that the experience is consistent for all students and across all operations. It also means that students will be able to use Pro Git as a textbook and search the web for advice on commands, instead of relying on the generally abysmal help files that come with graphical Git clients and tutorials that I throw together before each quarter.
Of course, Pro Git isn't just valuable because it's a free book that walks users through basics of source control in a friendly manner. It also does a great job of explaining what Git is actually doing at each stage of the way — it explains the concepts behind every command. Treating Git as a black box last quarter ultimately caused more problems than it was worth, and it left people scared of what they were doing. It's worth sacrificing a week of advanced topics like object-orientation (especially in the entry-level class) if it means students actually understand what happens when they stage and commit.
Finally, and perhaps most importantly, I'm going to provide an origin repo for students to clone, and then walk them through setting up a deploy repo as well, with an eye to providing the larger development context. The takeaway is not "here are Git commands you should know," but "this is how and why we use source control to make our lives easier." Using Git in class the same way that people use it in the field is experience that students can take with them.
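That origin-plus-deploy setup can be sketched end-to-end with plain Git commands. This is a local simulation, not the real class environment — the bare repos under /tmp/gitdemo stand in for the starter repo and the web server's publish target:

```shell
set -e
rm -rf /tmp/gitdemo && mkdir -p /tmp/gitdemo && cd /tmp/gitdemo

# Two bare repos play the server roles: "origin.git" is the starter
# repo students clone, "deploy.git" is the publish target.
git init -q --bare origin.git
git init -q --bare deploy.git

# The student clones the starter repo...
git clone -q origin.git work
cd work
git config user.email "student@example.com"
git config user.name "Student"

# ...adds the deploy repo as a second remote...
git remote add deploy ../deploy.git

# ...and publishes by committing and pushing to that remote.
echo "<?php echo 'hello'; ?>" > index.php
git add index.php
git commit -q -m "first assignment"
git push -q deploy HEAD
git ls-remote deploy
```

On a real server, the deploy remote would point at a bare repo over SSH, typically with a post-receive hook that checks the pushed files out into the web root — but the commands the students run stay exactly the same.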
What do these three parts of my strategy — tooling, concepts, and context — have in common? They're all about process. This is probably unsurprising, as process and workflow have been hobbyhorses of mine since I taught a disastrous capstone class last year. In retrospect, it seems obvious that the last class of the web development program is not an appropriate time for students to be introduced to group development. They were unfamiliar with feature planning, source control, and QA testing — worse, I didn't recognize this in time to turn it into a crash course in project management. As a result, teams spent the entire quarter drifting in and out of crisis.
Best practices, it turns out, are a little like safety protocols around power tools. Granted, my students are a little less likely to lose a finger, but writing code without a plan or a collaboration workflow can still be deadly for a team's progress. I'm proud that the Web Apps class sequence I helped redesign stresses process in addition to raw coding. Git is useful for a lot of reasons, like its ecosystem, but the fact that it gives us a way to introduce basic project management in the very first class of the sequence is high on the list.
Paul Kinlan's post, Add-to-homescreen Is Not What the Web Needs, is only the most recent in a long-running debate surrounding "apps" on mobile, but it is thought-provoking. Kinlan, who cheerleads for the Web Intents integration system in Chrome, naturally thinks that having an "add-to-homescreen" option misses the point:
I want to see something much more fundamental. The web offers something far richer: it encourages lightweight usage with no required installation and interaction with on-demand permissions. I never want to see an install button or the requirement to understand all the potential permissions required before trying the app. The system should understand that I am using an app and how frequently that I use it and it should then automatically integrate with the launch points in the OS.
Kinlan has a great point, in that reducing the web to "just another app" is kind of a shame. The kinds of deeper integration he wants would probably be prone to abuse, but they're not at all impossible. Mozilla wants to do something similar with Firefox OS, although it probably gets lost in the vague muddle of its current state. Worse, Firefox OS illustrates the fundamental problem with web "apps" on mobile, and it's probably going to take a lot more than a clever bookmark to solve the problem. That's because the real problem with the web on mobile is URLs, and nobody wants to admit that.
As a web developer, I love URLs. They're the command line of the web: a powerful tool for organizing information and streaming it from place to place. Unfortunately, they're also like the command line in other ways: they're arbitrary, much-abused, and ultimately difficult to type on mobile. More importantly, nobody who isn't a developer really understands them.
There is a now-infamous demonstration of the fact that people don't understand URLs, which you may remember as the Facebook login fiasco of 2010. That was the point at which the web community realized that, for a lot of users, logging into Facebook went a lot like this: open the browser, type "facebook login" into the search box, and click the first result.
As a process, this was fine until ReadWriteWeb actually published a story about Facebook's unified login that rose to the top spot in the Google search listings, at which point hundreds of people began commenting on the article thinking that it was a new Facebook design. As long as they got to Facebook in the end, to these people, one skinny textbox was basically as good as another. I've actually seen people do this in my classes, and just about ground my teeth to nubs watching it happen.
In other words, the problem is discovery. An app store gives you a way to flip through the listings, see what's popular, and try it out. You don't need to search, and you certainly don't need to remember a cryptic address (all these clever .io and .ly addresses are, I'm pretty sure, much harder to remember than plain old .com). For most of the apps people use, they probably don't even scroll very far: the important stuff, like Facebook and Candy Crush, is almost certainly at the top of the store anyway. Creating add-to-homescreen mechanisms is addressing the wrong problem. It's not useless, but the real problem is not that people don't know how to make bookmarks, it's that they can't find your web app in the first place.
The current Firefox OS launcher isn't perfect, but it at least shows someone thinking about the problem. When you start the device, it initially shows a search box titled "I'm thinking of...". Tap into the box and, even before you start typing, it'll instantly show a set of curated sites sorted into categories like "social" and "games." If what you want isn't there, you can continue to search the web as a whole. Sites launched from this view start in "app mode" with no URL bar, even though they're still just web sites and nothing has technically been installed. Press the bookmark button, and it's added to your homescreen. It's exactly as seamless as we've always claimed the web could be.
On top of this, sadly, Mozilla adds the Marketplace app, which can install "packaged" apps similar to Chrome OS. It's an attempt to solve the discoverability problem, but it lacks the elegant fluidity of the launcher's curated search results (not to mention that it's kind of confusing). I'm not wild about curation at the best of times — app stores are a personal pet peeve — but it serves a purpose. We need both: an open web, because that's the spirit of things, and a market destination, because it solves the discovery problem that URLs don't.
What we're left with is a tragedy of the commons. Mozilla's marketplace can't serve the purpose of the open web, because it's a curated and little-loved space that's only for Firefox OS users. Google is preoccupied with its own Chrome web store, even though it's certainly in a position to organically track the usage of web apps via user searches. Apple couldn't care less. In the meantime, web app discovery gets left with the scraps: URLs and search. There's basically no way, other than word of mouth, that your app will be discovered by normal people unless it comes from an app store. And that, not add-to-homescreen flaws, is why we can't have nice things on the web.
There's a regular, recurring movement to replace text-based programming with some kind of graphical version. These range from Scratch (offering "blocks" to make text syntax more friendly) to Pure Data (node-based dataflow programming). Rarely do any of them take off (Scratch and pd are successful within education and audio, respectively, but little-used elsewhere), but that doesn't stop anyone from trying.
It may be the fact that I started as a writer, or that I was a language nut in college, but I've always felt that text-based programming doesn't get a lot of respect. The written word is one of the great advances of civilization. You can pack a lot of meaning into a line of text, and code is no different. Good source code can range from whimsical to workmanlike, a gamut that's hard to imagine in the nest of wiring that is a graphical language.
As a result, text editing is important to me. It's important to a lot of people, but most of them never write their own editor; I ended up doing exactly that. I figured I'd write up some notes on the different ways people have built their editors, and why I picked one model in particular for Caret. It may be news to many people that there are even multiple models to consider, but that's programming for you: there are at least four ways to put letters into a document, and bitter wars between factions for each of them.
The weirdest editor still in common usage, of course, is Vim. Born from the days when network connections were too slow to actually update text in realtime, Vim uses a shorthand language for text editing. You don't hold delete until some amount of text is gone in Vim — instead, you type "d2w", meaning "delete two words." You also can't type directly until you switch into the "insert" mode with the "i" or "a" commands. Like many abusive subcultures, people who learn this shorthand will swear up and down that it's the only way to work, even though it's clearly a relic of a savage, bygone age.
(Vim and Emacs are often mentioned in comparison to each other, because they tend to be used by very similar kinds of people who, nevertheless, insist that they're very different. I don't really know very much about Emacs, other than it's written in Lisp and it's not as eyeball-rolling weird as Vim, so I'm ignoring it for the purposes of this discussion.)
Acme tends to look a little more traditional, but it is actually (I think) more radical than Vim, because it redefines the relationship between interface and editor. Acme turns all documents into hypertext: middle clicking a filename opens that file, and clicking a word (like "copy" or "paste") actually runs that command (either in a shell, or in Acme). There's no fixed interface in Acme, just a set of menu bars that are also text fields. I love the elegance of this idea, where a person builds a text editor's UI just by... editing text.
Which brings us to Sublime. I've been very clear that Caret is modeled closely on Sublime, with a few changes to account for quirks of the platform and my own preferences. That's partly because it's generally considered the tool of choice for web developers, and partly because it's genuinely the editor that has my favorite workflow tools. Insofar as Sublime has a philosophy, it is to prioritize clarity and transparency over power. That's not to say it's not powerful — it certainly is. But it tries to be obvious in a way that other editors do not.
For example, say you need to change a variable name throughout a function. Instead of immediately writing a regex or a macro, Sublime lets you select all the instances of that variable with the mouse or keyboard, which creates multiple cursors. Then you just type the new name. It's not as powerful as a regular expression, but 90% of the time, it's probably what you wanted to do anyway. Sublime's command/go-to palette is another smart-but-obvious idea: instead of hunting through the menu or the filesystem, open the palette and type to fuzzy-filter the list. It's the speed of a command line without the hostility.
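The fuzzy filtering behind a palette like that is simple enough to sketch. The version below illustrates the general technique, not Sublime's or Caret's actual matcher: a query matches an item if the query's characters appear in the item in order, though not necessarily adjacent.

```javascript
// Minimal command-palette fuzzy filter: a query matches a candidate
// if its characters appear in the candidate in order, case-insensitively.
function fuzzyMatch(query, candidate) {
  var q = query.toLowerCase();
  var c = candidate.toLowerCase();
  var index = 0;
  for (var i = 0; i < q.length; i++) {
    index = c.indexOf(q[i], index);
    if (index === -1) return false;
    index++;
  }
  return true;
}

// Narrow a list of commands down to the ones matching the query.
function fuzzyFilter(query, items) {
  return items.filter(function(item) {
    return fuzzyMatch(query, item);
  });
}

// "gtl" narrows this list to just "Go To Line"
console.log(fuzzyFilter("gtl", ["Go To Line", "Save As", "Toggle Linter"]));
```

Real palettes also rank their matches (consecutive characters and word boundaries score higher), but even this naive version captures why the feature feels fast: typing three or four letters usually narrows hundreds of commands down to a handful.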
To paraphrase an old saw, the best feature is the one you have with you. That's why putting the command palette in Caret was a must, since it puts all the menu items just a few keystrokes away. Even now, I don't always remember where a given menu item is in the toolbar in my own editor, because I hardly ever use the mouse. There was a good week when menus looked completely wrong, and I never even noticed.
The reason I've started looking over other editors now is that I think Caret can reach for more than just parity with Sublime. I'm intrigued by the ways that Acme makes it easy to jump around files, and lately I've been thinking about what it means to be an editor built in "web technology." Adding the ability to open links from a URL is a given, but it's only the start: given that OAuth provides a simple, standard method of authenticating against a remote server, a File implementation for Caret could easily open files against service endpoints for something like Github or Ghost in a generic way. It would be a universal cloud editor, but easily capable of running locally.
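As a sketch of what that could look like, a generic remote file really only needs a service URL and an OAuth bearer token. Everything here is hypothetical for illustration — the URL scheme, method names, and token handling are assumptions, not a real GitHub or Ghost API:

```javascript
// Sketch of a generic "remote file" backed by an HTTP endpoint.
// The service URL layout and token are hypothetical placeholders.
function RemoteFile(service, path, token) {
  this.url = service.replace(/\/$/, "") + "/" + encodeURIComponent(path);
  this.token = token;
}

// Read and write are just authenticated HTTP calls, which is what makes
// a "universal cloud editor" plausible: any service that speaks HTTP and
// OAuth looks the same to the editor.
RemoteFile.prototype.read = function() {
  return fetch(this.url, {
    headers: { "Authorization": "Bearer " + this.token }
  }).then(function(response) {
    return response.text();
  });
};

RemoteFile.prototype.write = function(contents) {
  return fetch(this.url, {
    method: "PUT",
    headers: { "Authorization": "Bearer " + this.token },
    body: contents
  });
};
```

The point of the sketch is the shape, not the details: once authentication is reduced to a standard header, the same File interface can sit in front of local storage or any number of remote services.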
Of course, Caret won't be the last editor to try something different (just this week, Github announced their own effort), but it's still pretty amazing how many ways we have to solve a simple problem like "typing letters into a file." As a writer and a coder, I love being spoiled for choice.
Lots of musicians have given their work away for free, but De La Soul is different. On February 14th, to celebrate the 25th anniversary of Three Feet High and Rising, they uploaded their back catalog and made it available to anyone who signed up for their mailing list. There are at least three really interesting things about De La's Valentine's Day gift, especially given the fact that the albums on offer have never been available digitally before.
Of course, they almost weren't available last week, either. The original links sent out that morning went to a Dropbox account, which (no surprise) was almost immediately shut down for excessive bandwidth use when everyone on the Internet went to download the free tracks. A new solution was soon found, but it just goes to show that even a band you'd think would absolutely have a nerdy, Internet-savvy friend, didn't. I kind of like that, though. It gives the whole affair a charming, straight-from-the-garage feel.
The first interesting thing is the question of why the albums were released for free in the first place. Reports are vague, but the gist is that De La Soul's label, Warner Brothers, hasn't cleared the samples on the albums, so they can't be sold online. Due to the weirdness of music contracts, you can still buy a physical copy of Three Feet High — it's even been re-released with bonus material a couple of times — but you can't buy the MP3. While it's true that people still buy CDs, I'm guessing that number doesn't include most of De La's fanbase.
But that leads us to the second twist in the story, which is that what De La Soul did is probably illegal. Like a lot of musicians, they own the songs, but they don't own the music: the master recordings of those albums are owned by the label instead. The fact that De La Soul could be sued for pirating their own albums explains a lot about both the weird, exploitative world of music contracts and the ambivalence a lot of musicians feel toward labels.
Let's say that nobody sues, however, and Warner Bros. decides to tacitly endorse the giveaway. De La Soul still doesn't have access to the masters, so how did they get the songs to distribute? Interesting fact number three: when people examined the metadata for the tracks, they turned out to be from a Russian file-sharing site of dubious legality. Basically, the band really did pirate their own work. I'm a little disappointed they didn't rip their own CDs, but considering that they didn't have anyone around to tell them not to use Dropbox as a CDN, we probably shouldn't be surprised. It was probably easier this way, anyway — which says a lot about the music industry, as well.
If what De La did was legal, does that make the pirated copies also legal? Would it have been legal for me to download the exact same files from Russian servers while the "official" songs were available? And now that the campaign is over and you still can't buy Stakes Is High from Amazon MP3, are the pirate sites back to being illegal? Nothing I can remember from the Napster days answers these questions for me — although to be fair, all I really remember from Napster is a number of novelty punk covers and making fun of Lars Ulrich.
Assuming they're not sued, and so far they've gotten away with it, the download promotion should be good for De La Soul. Or to put it more bluntly, they probably figured it couldn't hurt, and they're likely right: if these songs were never going to end up for sale online, most of their remaining value is promotional (for shows and other albums) anyway. So it's a savvy move, but one unlike the giveaways from other artists (Nine Inch Nails, Radiohead) who have offered their music for free online. Those bands were releasing new material, unencumbered by sample clearance, and in an entirely different genre. I suspect a lot of classic hip-hop artists in similar situations may be watching this promotion with a lot of interest. Chances are, that's just the way De La likes it.
I think most of us can imagine the frustrating experience of sharing a newspaper with the New York Times op-ed page. It must burn to do good reporting work, knowing that it'll all be lumped in with Friedman's Mighty Mustache of Commerce and his latest taxi driver. Let's face it: the op-ed section is long overdue for amputation, given that there's an entire Internet of opinion out there for free, and almost all of it is more coherent than whatever white-bread panic David Brooks is in this week.
But even I was surprised by the story in the New York Observer last week, detailing just how bad the anger between the journalists and the pundits has gotten:
The Times declined to provide exact staffing numbers, but that too is a source of resentment. Said one staffer, “Andy’s got 14 or 15 people plus a whole bevy of assistants working on these three unsigned editorials every day. They’re completely reflexively liberal, utterly predictable, usually poorly written and totally ineffectual. I mean, just try and remember the last time that anybody was talking about one of those editorials. You know, I can think of one time recently, which is with the [Edward] Snowden stuff, but mostly nobody pays attention, and millions of dollars is being spent on that stuff.”
First of all, the Times still runs unsigned editorials? And it takes more than ten people to write them? Sweet mother of mercy, that's insane. I thought the only outlet these days with an actual "from the editors" editorial was the Onion, and even they think it's an old joke. You might as well include an AOL keyword at the end.
And yet it's worth reading on, once you pick your jaw up off the floor, to see the weird, awkward cronyism that shapes not just the visible portions of the op-ed page, but its entire structure. Why is the editorial section so bad? In part, apparently, because it's ruled by the entitled, petty son of a former managing editor, who reports directly to the paper's publisher (and not the executive editor) because of a family debt. Could anything be more appropriate? As The Baffler notes:
What a perfect way to boil tapioca. Dynasties kill flavor. A page edited by a son because dad was kind of a big deal is a page edited with an eye to status and credentials. Hey, Friedman must be good—he won some Pulitzers. That’s a prize, you see, that Pulitzer thing. Big, big prize. We put it up on the wall. (Pause) Anyway, ready for a cocktail?
The Observer argues that the complaints from the newsroom at large are professional, not budgetary: reporters are angry about shoddy work being published under the same masthead as their stories. But it's hard to imagine that money doesn't enter into it at all. A staff of ten or more people, plus hundreds of thousands of dollars for each of the featured op-ed writers, would translate into serious money for journalism. It would hire a lot of staff, pay for a lot of equipment. You could use it to give interns a living wage, or institute a program for boosting minority participation in media. Arguably, you could put it into a sack and sink it into the Hudson, and still end up ahead of what it's currently funding.
Of course, most papers don't maintain a costly op-ed section, so it's not like this is an industry-wide problem. I don't know that I would even care, normally, beyond the sense of schadenfreude, except for the fact that it's such a perfect little chunk of journalistic mismanagement: when finances get strained, the cuts don't get made from politically-connected fiefdoms, or from upper-level salaries. They get taken from the one place that should be protected, which is the newsroom itself.
Call me an anarchist, but the most depressing part of the whole debate is that it's focused on how big the op-ed budget should be, or how it should be run, instead of whether it should exist at all. What's the point of keeping it around? Or, at the very least, why populate it with the same bland, predictable voices every day? One of the things I respect about the New York Times is the paper's forays into bucking conventional wisdom, from the porous subscription paywall to its legitimately innovative interactive storytelling. There's a lot of romance and tradition in the newsroom, but the op-ed page shouldn't be a part of it. I say burn it to the ground, and let's see what we can grow on the ashes.