
August 28, 2014

Filed under: tech»education

Process Over Programs

This fall, I'll be teaching ITC 210 at Seattle Central College, which is the capstone class of the web development program there. It's taught as a combined class with WEB 210 (the designer's capstone). The last time I taught this course, it didn't go particularly well: although the goal is for students to implement a WordPress site for a real-world client, many of them weren't actually that experienced with the technology.

More importantly, they had never been taught any of the development methods that let teams work together efficiently. I suggested some of the basics — using source control, setting tasks, and using a "waterfall" structure — but I didn't require them, which was a mistake. Under pressure, students fell back on improvised strategies, and many of them ended up in a crunch as a result.

For the upcoming quarter, I plan to remedy those mistakes. But to do so, it's helpful to look at the web development program from a macro level. What is it that we're trying to do here, and what should this capstone class actually mean to students?

Although the name has changed, Seattle Central is still very much a community college, and this is very much a trade program. We need to focus on practical job skills, not on CS theory. And so while the faculty are still working on many of the details, one of our goals for curriculum redesign was to create a simple progression between the three web applications classes: first teach basic programming in ITC 240, followed by an MVC framework in ITC 250, and finish the process with a look at development processes (agile, waterfall, last-minute panic, etc.) in ITC 260. By the end, students should feel like they can take a project from start to finish as part of a team in an organized fashion.

Of course, just because those are our intentions doesn't mean it's working out that way. These changes are large shifts in the SCC curriculum, and like steering an ocean liner, those take time. So while it would be nice to assume that students have been through the basics of project management by the time they reach the capstone, I can't count on it — and even then, they probably won't have put it into practice in teams, since the prior classes are individually-graded.

To bring this back to ITC 210, then, we have two problems. First, students don't know how to manage development, because they've spent most of their time just learning how to code. Second, the structure of the class hasn't historically encouraged them to develop those skills. Assignments on the development side tend to be based around the design milestones, which makes their workload "lumpy": a lot of waiting for design resources, followed by an intense, panicky burst at the end. This may sometimes be an accurate picture of the job, but it's a terrible class experience. Ideally, we want the developers to be working constantly throughout the quarter.

So here's my new plan: this year, ITC 210 will be organized around a series of five agile sprints, just like a real-world coding project. At the start of each sprint, teams will assign time and staff to tasks, and at the end of each sprint they'll hold a retrospective to help determine their velocity. Grades will be based largely on documentation of this process. During the last sprint, they'll pick up another team's site and file bugs against it as QA, while fixing the bugs that are filed against them.

This won't entirely smooth out the development process — devs will still be bottlenecked on design work from time to time — but it will make it clear that I expect them to be working the entire time on laying groundwork. It'll also familiarize them with the ways that real teams coordinate their efforts, and it will force them to fit into a common workflow instead of fragmenting into a million angry swarms of random methodology.

I tend to make fun of programmers for thinking that they're the only ones who can invent a workflow, but it's easy to forget that coordinating a team is hard, and nobody comes by it naturally. I made that mistake last time around, and although we scraped by, there were times when it was rough. This quarter, I'm not giving students a choice: they'll work like a regular software team, or they'll fail the course. It may seem harsh, but I think it'll pay off for them when it comes time to do this for a living.

May 28, 2014

Filed under: tech»education

Lessons in Security

This quarter, I've been teaching ITC 240 at SCC, which is the first of three "web apps" classes. They're in PHP, and the idea is that we start students off with the basics of simple pages, then add frameworks, and finally graduate them to doing full project development sprints, QA and all. As the opening act for all this, I've decided to make security a foundational part of the class.

Teaching security to students is hard, because security itself is hard. Web security depends on a kind of generalized principle that everyone is out to get you at all times: don't trust the database, the URL, user input, user output, JavaScript, the browser, or yourself. This kind of wariness does not come naturally to people. Eventually everything gets broken.

I've done my best to cultivate paranoia in my students, both by telling them horror stories (the time that Google clicked all the delete links on a badly-hidden admin page, that time when the World Bank got hacked and replaced with pictures of Wolfowitz's socks) and by threatening to attack their homework every time I grade it. I'm not sure that it's actually working. I think you may need to be on the other end of something fairly horrific before it really sinks in how bad a break-in can be. The fact that their homework usually involves tracking personal information for my cat is probably not helping them take it seriously, either.

The thing is, PHP doesn't make it easy to keep users safe. There's a short tag for automatically echoing values out, but it does no escaping of HTML, so it's one memory lapse away from being a cross-site scripting bug. Why the <?= $foo ?> tag doesn't call htmlentities() for you like every other template engine on the planet, I'll never know. The result is that it's trivial to forget to sanitize your outputs — I myself forgot for an entire week, so I can hardly blame students for their slipups.
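To make this concrete, here's the kind of helper I end up showing in class (a minimal sketch; the h() function is just illustrative shorthand, not something PHP provides for you):

    <?php
    // Escape a value for output into HTML. htmlentities() converts
    // characters like < > and " into entities, so user-supplied
    // strings can't inject markup or scripts into the page.
    function h($value) {
        return htmlentities($value, ENT_QUOTES, "UTF-8");
    }
    ?>

    <!-- unsafe: whatever is in $foo lands in the page as raw HTML -->
    <p><?= $foo ?></p>

    <!-- safer: the value is entity-encoded first -->
    <p><?= h($foo) ?></p>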

MySQL also makes this a miserable experience. Coming from a PostgreSQL background, I was unprepared (ha!) for this. Executing a prepared query in MySQL takes at least twice as many lines as its PostgreSQL equivalent, and is conceptually more difficult. You also can't parameterize table or column names in MySQL prepared statements (placeholders only work for values), which means that mysqli_real_escape_string is no help for a query with a user-supplied ORDER BY clause. I've had to teach students about whitelists instead, and I suspect it's going in one ear and out the other.
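For comparison, here's roughly what the safe version looks like (a sketch only; the cats table and the connection details are invented, in keeping with the homework):

    <?php
    // placeholder credentials for the example
    $db = new mysqli("localhost", "student", "password", "classdb");

    // Prepared statements protect values: the placeholder is bound
    // separately from the query text, so input can't rewrite the SQL.
    $stmt = $db->prepare("SELECT name, breed FROM cats WHERE owner = ?");
    $owner = $_GET["owner"];
    $stmt->bind_param("s", $owner);
    $stmt->execute();
    $result = $stmt->get_result();

    // But a placeholder can't stand in for a column name, so a
    // user-chosen sort order has to pass through a whitelist of
    // known-good columns instead.
    $allowed = ["name", "breed", "age"];
    $sort = in_array($_GET["sort"], $allowed) ? $_GET["sort"] : "name";
    $result = $db->query("SELECT name, breed FROM cats ORDER BY $sort");
    ?>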

It may be asking a little much of them anyway. Most of my students are still struggling with source control and editors, much less thinking in terms of security. Several of them have checked their passwords into GitHub, requiring a "password amnesty" where everyone got reset. I'd probably be more upset if I didn't think it was kind of funny, and if I wasn't pretty sure that I'd done the same thing in the past.

But even if they're a little bit overwhelmed, I still believe that students should be learning this stuff from the start, if for no other reason than that some of them are going to get jobs working on products that I use, and I would prefer they didn't give my banking information away to hackers in some godforsaken place like Cleveland. Every week, some company sends me a note to let me know that my information got leaked because they couldn't write a secure website — even companies like eBay, Dropbox, and Sony, which should know better. We have to be more secure as an industry. That starts with introducing people to the issues early, so they have time to learn the right way as they improve their skills.

April 30, 2014

Filed under: tech»mobile

Company Towns

Pretend, for a second, that you opened up your web browser one day to buy yourself socks and deodorant from your favorite online retailer (SocksAndSmells.com, maybe). You fill your cart, click buy, and 70% of your money actually goes toward foot coverings and fragrance. The other portion goes to Microsoft, because you're using a computer running Windows.

You'd probably be upset about this, especially since the shop raised prices to compensate for that fee. After all, Microsoft didn't build the store. They don't handle the shipping. They didn't knit the socks. It's unlikely that they've moved into personal care products. Why should they get a cut of your hard-earned footwear budget just because they wrote an operating system?

That's an excellent question. Bear it in mind when reading about how Comixology removed in-app purchases from their comic apps on Apple devices. I've seen a lot of people writing about how awful this is, but everyone seems to be blaming Comixology (or, more accurately, their new owners: Amazon). As far as I can tell, however, they don't have much of a choice.

Consider the strict requirements for in-app purchases on Apple's mobile hardware:

  • All payments made inside the app must go through Apple, who will take 30% off the top. This is true even if the developer handles their own distribution, archival, and account management: Apple takes 30% just for acting as a payment processor.
  • No other payment methods are allowed in the App Store — no Paypal, no Google Wallet, no Amazon payments. (No Bitcoin, of course, but no Monopoly money either, so that's probably fair.) Developers can't process their own payments or accept credit cards. It's Apple or nothing.
  • Vendors can run a web storefront and then download content purchased online in the app... but they can't link to the site or acknowledge its existence in any way. They can't even write a description of how to buy content. Better hope users can figure it out!

Apple didn't write the Comixology app. They didn't build the infrastructure that powers it, or sign the deals that fill it with content. They don't store the comics, and they don't handle the digital conversion. But they want 30 cents out of every dollar that Comixology makes, just for the privilege of manufacturing the screen you're reading on. If Microsoft had tried to pull this trick in the 90s, can you imagine the hue and cry?

This is classic, harmful rent-seeking behavior: Apple controls everything about their platform, including its only software distribution mechanism, and they can (and do) enforce rules to effectively tax everything that platform touches. There was enough developer protest to allow the online store exception, but even then Apple ensures that it's a cumbersome, ungainly experience. The deck is always stacked against the competition.

Unfortunately, that pot of water has been slowly heating for a few years now, so most people don't seem to notice that they're being cooked. Indeed, you get pieces like this one instead, which manages to describe the situation with reasonable accuracy and then (with a straight face) proposes that Apple should have more market power as a solution. It's like listening to miners in a company town complain that they have to travel a long way for shopping. If only we could just buy everything from the boss at a high markup — that scrip sure is a handy currency!

It's a shame that Comixology was bought by Amazon, because it distorts the narrative: Apple was found guilty of collusion and price fixing after they worked with book publishers to force Amazon onto an agency model for e-books, so now this can all be framed as a rivalry. If a small company had made this stand, we might be able to have a real conversation about how terrible this artificial marketplace actually is, and how much value is lost. Of course, if a small company did this, nobody would pay attention: for better or worse, it takes an Amazon to opt out of Apple's rules successfully (and I suspect it will be successful — it's worked for them on Kindle).

I get tired of saying it over and over again, but this is why the open web is important. If anyone charged 30% for purchases through your browser, there would be riots in the street (and rightly so). For all its flaws and annoyances, the only real competition to the closed, exploitative mobile marketplaces is the web. The only place where a small company can have equal standing with the tech giants is in your browser. In the short term, pushing companies out of walled gardens for payments is annoying for consumers. But in the long term, these policies might even be doing us a favor by sending people out of the app and onto the web: that's where we need to be anyway.

April 25, 2014

Filed under: tech

Service as a Service

If you've ever wanted to get in touch with more people who are either unhinged or incredibly needy (or both), by all means, start a modestly successful open source project.

That sounds bitter. Let me rephrase: one of the surprising aspects of open-sourcing Caret has been that much of the time I spend on it does not involve coding at all. Instead, it's community management that absorbs my energy. Don't get me wrong: I'm happy to have an audience. Caret users seem like a great group of people, in general. But in my grumpier moments, after closing issue requests and answering clueless user questions (sample, and I am not making this up: "how do I save a file?"), there are times I really sympathize with project leaders who simply abandon their code. You got this editor for free, I want to say: and now you expect me to work miracles too?

Take a pull request, for example. That's when someone else does the work to implement a feature, then sends me a note on GitHub with a button I can press to automatically merge it in. Sounds easy, right? The problem is that someone may have written that code, but it's almost guaranteed that they won't be the one maintaining it (that would be me). Before I accept a pull request, I have to read through the whole thing to make sure it doesn't do anything crazy, check the code style against the rest of Caret, and keep an eye out for how these changes will fit in with future plans. In some cases, the end result has to be a nicely-worded rejection note, which feels terrible to write and to receive. Either way, it's often hours of work for something as simple as a new tab button.

These are not new problems, and I'm not the first person to comment on them. Steve Klabnik compares the process to being an "open source gardener," which horrifies me a little since I have yet to meet a plant I can't kill. But it is surprising to me how badly "social" code sites handle the social part of open source. For example, GitHub finally added a "block" feature, but there are no fine-grained permissions; it's all or nothing, on a per-user basis. All projects there also automatically get a wiki that's editable by any user, whether they own the repo or not, which seems ripe for abuse.

Ultimately, the burden of community management falls on me, not on the tools. Oddly enough, there don't seem to be a lot of written guides for improving open source management skills. A quick search turned up Producing Open Source Software by Karl Fogel, but otherwise everyone seems to learn on their own. That would do a lot to explain the wide differences in tone from project to project: compare Chromium (always pleasant) with Mozilla (surprisingly abrasive, even before the Eich fiasco).

If I had a chance to do it all again, despite all the hassle, I would probably keep all the communication channels open. I think it's important to be nice to people, and to offer help instead of just dumping a tool on the world. And I like to think that being responsive has helped account for the nearly 60,000 people who use Caret weekly. But I would also set rules for myself, to keep the problem manageable. I'd set times for when I answer e-mails, or when I close issues each day. I'd probably disable the e-mail subscription feature for the repository. I'd spend some time early on writing up a style guide for contributors.

All of these are ways of setting boundaries, but they're also the way a project gets a healthy culture. I have a tremendous amount of respect for projects like Chromium that manage to be both successful and — whenever I talk to their organizers — pleasant and understanding. Other people may be able to maintain that kind of demeanor full-time, but I'm too grumpy, and nobody's compensating me for being nice (apart from the one person who sends me a quarter on Gittip every week). So if you're in contact with me about Caret, and I seem to be taking a little longer these days to get back to you, just remember what you're paying for the service.

April 3, 2014

Filed under: tech»education

Teaching with Git: Lessons Learned

Last quarter, for the first time, I taught Intro to JavaScript at SCC (previously SCCC) using Git as the primary method for turning in homework. They say that the best way to learn something is to teach it, and based on my experience I'd say that's true, particularly if the "it" in question is "how many ways can Git go wrong in a classroom?"

My students were sturdy and patient guinea pigs: source control must have been a shock since many of them had only recently learned about FTP and remote filesystems. Some of them seemed suspicious about the whole "files" thing to begin with, and for them I could only offer my sympathies. I was asking a lot, on top of learning a new language with unfamiliar constraints of its own.

Midway through the quarter, though, workflows developed and people adjusted. I was no longer spending my time answering Git questions and debugging commit issues. As an instructor, it was hugely successful: pulling source code is much easier than using "view source" on hosted pages, and commenting line-by-line on GitHub commits is far superior to code critique via e-mail. I have no qualms about using Git in class again, but shortening the adjustment period is a priority for me.

Using software in a classroom is an amazing way to discover failure cases you would otherwise never see in a million years, and this was no exception. Add the fact that I was teaching it for the first time, and some fun obstacles cropped up. Here's a short list of issues students hit during the first few weeks of class:

  • GitHub's desktop software works fine until you hit a snag, and then it throws up its hands and surrenders completely.
  • The desktop client also has a lovely Metro-inspired design that requires an updated .NET installation to sync with remote repos. Guess what lab computers and many of my students were missing?
  • Students didn't understand the difference between GitHub the program and GitHub the web site, which led to a lot of confusion.
  • Students tried to move the Git directories around, which meant the client lost track of them and broke.
  • One student still maintained revisions to files manually by renaming them, then committed all those revisions to the repo as separate files.
  • People tried to use the commit messages on GitHub as if they were folder descriptions, then felt bad if a revision touched files in multiple folders and screwed up their nice, neat labels.
  • SCC students, like most computer users, are scared of the command line, which is a problem since most good Git advice involves the shell.

These problems are troubling, but hardly insurmountable. Indeed, I already have a plan for addressing them next quarter in my Web Apps 1 class, but it's really not so much a "plan" as a radical re-imagining. To put it bluntly, I'm throwing most of my previous strategy away and starting over with three guiding principles: tools, concepts, and context.

For a start, students will be connecting to their servers over SSH to debug and edit their PHP, so I'll be teaching Git from the command line instead of using graphical tools like GitHub for Windows. This sounds more complicated, but it means that the experience is consistent for all students and across all operations. It also means that students will be able to use Pro Git as a textbook and search the web for advice on commands, instead of relying on the generally abysmal help files that come with graphical Git clients and tutorials that I throw together before each quarter.

Of course, Pro Git isn't just valuable because it's a free book that walks users through the basics of source control in a friendly manner. It also does a great job of explaining what Git is actually doing at every step — it explains the concepts behind every command. Treating Git as a black box last quarter ultimately caused more problems than it was worth, and it left people scared of what they were doing. It's worth sacrificing a week of advanced topics like object-orientation (especially in the entry-level class) if it means students actually understand what happens when they stage and commit.

Finally, and perhaps most importantly, I'm going to provide an origin repo for students to clone, and then walk them through setting up a deploy repo as well, with an eye to providing the larger development context. The takeaway is not "here are Git commands you should know," but "this is how and why we use source control to make our lives easier." Using Git in class the same way that people use it in the field is experience that students can take with them.
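In concrete terms, the setup I'll walk them through looks something like this (the URLs here are placeholders, and a bare repo on the server is just one way to wire up the deploy side):

    # grab the shared starting point for the project
    git clone https://github.com/instructor/webapps-starter.git
    cd webapps-starter

    # add a second remote that represents the live server
    # (a bare repo on the student's host; details vary by setup)
    git remote add deploy student@server.example.edu:site.git

    # the daily rhythm: commit locally, share via origin,
    # publish via deploy
    git commit -am "Style the navigation"
    git push origin master
    git push deploy master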

What do these three parts of my strategy — tooling, concepts, and context — have in common? They're all about process. This is probably unsurprising, as process and workflow have been hobbyhorses of mine since I taught a disastrous capstone class last year. In retrospect, it seems obvious that the last class of the web development program is not an appropriate time for students to be introduced to group development. They were unfamiliar with feature planning, source control, and QA testing — worse, I didn't recognize this in time to turn it into a crash course in project management. As a result, teams spent the entire quarter drifting in and out of crisis.

Best practices, it turns out, are a little like safety protocols around power tools. Granted, my students are a little less likely to lose a finger, but writing code without a plan or a collaboration workflow can still be deadly for a team's progress. I'm proud that the Web Apps class sequence I helped redesign stresses process in addition to raw coding. Git is useful for a lot of reasons, like its ecosystem, but the fact that it gives us a way to introduce basic project management in the very first class of the sequence is high on the list.

March 19, 2014

Filed under: tech»web

Spoiled for Choice

Paul Kinlan's post, Add-to-homescreen Is Not What the Web Needs, is only the most recent in a long-running debate surrounding "apps" on mobile, but it is thought-provoking. Kinlan, who cheerleads for the Web Intents integration system in Chrome, naturally thinks that having an "add-to-homescreen" option misses the point:

I want to see something much more fundamental. The web offers something far richer: it encourages lightweight usage with no required installation and interaction with on-demand permissions. I never want to see an install button or the requirement to understand all the potential permissions required before trying the app. The system should understand that I am using an app and how frequently that I use it and it should then automatically integrate with the launch points in the OS.

Kinlan has a great point, in that reducing the web to "just another app" is kind of a shame. The kinds of deeper integration he wants would probably be prone to abuse, but they're not at all impossible. Mozilla wants to do something similar with Firefox OS, although it probably gets lost in the vague muddle of its current state. Worse, Firefox OS illustrates the fundamental problem with web "apps" on mobile, and it's probably going to take a lot more than a clever bookmark to solve the problem. That's because the real problem with the web on mobile is URLs, and nobody wants to admit that.

As a web developer, I love URLs. They're the command line of the web: a powerful tool for organizing information and streaming it from place to place. Unfortunately, they're also like the command line in other ways: they're arbitrary, much-abused, and ultimately difficult to type on mobile. More importantly, nobody who isn't a developer really understands them.

There is a now-infamous example of the fact that people don't understand URLs, which you may remember as the great Facebook login fiasco of 2010. That was the point at which the web community realized that for a lot of users, logging into Facebook went a lot like this:

  1. Search Google for "facebook login"
  2. Click the first link
  3. Look for the password box

As a process, this was fine until ReadWriteWeb actually published a story about Facebook's unified login that rose to the top spot in the Google search listings, at which point hundreds of people began commenting on the article thinking that it was a new Facebook design. As long as they got to Facebook in the end, to these people, one skinny textbox was basically as good as another. I've actually seen people do this in my classes, and just about ground my teeth to nubs watching it happen.

In other words, the problem is discovery. An app store gives you a way to flip through the listings, see what's popular, and try it out. You don't need to search, and you certainly don't need to remember a cryptic address (all these clever .io and .ly addresses are, I'm pretty sure, much harder to remember than plain old .com). For most of the apps people use, they probably don't even scroll very far: the important stuff, like Facebook and Candy Crush, is almost certainly at the top of the store anyway. Creating add-to-homescreen mechanisms is addressing the wrong problem. It's not useless, but the real problem is not that people don't know how to make bookmarks, it's that they can't find your web app in the first place.

The current Firefox OS launcher isn't perfect, but it at least shows someone thinking about the problem. When you start the device, it initially shows a search box titled "I'm thinking of...". Tap into the box, and even before you start typing it'll instantly show a set of curated sites sorted into categories like "social" and "games." If what you want isn't there, you can continue to search the web as a whole. Sites launched from this view start in "app mode" with no URL bar, even though they're still just web sites and nothing's technically been installed. Press the bookmark button, and it's added to your homescreen. It's exactly as seamless as we've always claimed the web could be.

On top of this, sadly, Mozilla adds the Marketplace app, which can install "packaged" apps similar to Chrome OS. It's an attempt to solve the discoverability problem, but it lacks the elegant fluidity of the curated results from the launcher search (not to mention that it's kind of confusing). I'm not wild about curation at the best of times — app stores are a personal pet peeve — but it serves a purpose. We need both: an open web, because that's the spirit of things, and a market destination, because it solves the URL discovery problem.

What we're left with is a tragedy of the commons. Mozilla's marketplace can't serve the purpose of the open web, because it's a curated and little-loved space that's only for Firefox OS users. Google is preoccupied with its own Chrome web store, even though it's certainly in a position to organically track the usage of web apps via user searches. Apple couldn't care less. In the meantime, web app discovery gets left with the scraps: URLs and search. There's basically no way, other than word of mouth, that your app will be discovered by normal people unless it comes from an app store. And that, not add-to-homescreen flaws, is why we can't have nice things on the web.

February 27, 2014

Filed under: tech»coding

Just Use Ed

There's a regular, recurring movement to replace text-based programming with some kind of graphical version. These range from Scratch (offering "blocks" to make text syntax more friendly) to Pure Data (node-based dataflow programming). Rarely do any of them take off (Scratch and pd are successful within education and audio, respectively, but little-used elsewhere), but that doesn't stop anyone from trying.

It may be the fact that I started as a writer, or that I was a language nut in college, but I've always felt that text-based programming doesn't get a lot of respect. The written word is one of the great advances of civilization. You can pack a lot of meaning into a line of text, and code is no different. Good source code can range from whimsical to workmanlike, a gamut that's hard to imagine existing in the nest of wiring that makes up a graphical language.

As a result, text editing is important to me. It's important to a lot of people, but most of them don't write an editor, and I ended up doing that. I figured I'd write up some notes on the different ways people have written their editors, and why I picked one model in particular for Caret. It may be news to many people that there are even multiple models to consider, but that's programming for you: there are at least four ways to put letters into a document, and bitter wars between factions for each of them.

The weirdest editor still in common usage, of course, is Vim. Born in the days when network connections were too slow to update text in realtime, Vim uses a shorthand language for text editing. You don't hold delete until some amount of text is gone in Vim — instead, you type "d2w", meaning "delete two words." You also can't type directly until you switch into the "insert" mode with the "i" or "a" commands. Like members of many abusive subcultures, people who learn this shorthand will swear up and down that it's the only way to work, even though it's clearly a relic of a savage, bygone age.

(Vim and Emacs are often mentioned in comparison to each other, because they tend to be used by very similar kinds of people who, nevertheless, insist that they're very different. I don't really know very much about Emacs, other than it's written in Lisp and it's not as eyeball-rolling weird as Vim, so I'm ignoring it for the purposes of this discussion.)

Acme tends to look a little more traditional, but it is actually (I think) more radical than Vim, because it redefines the relationship between interface and editor. Acme turns all documents into hypertext: right-clicking a filename opens that file, and middle-clicking a word (like "copy" or "paste") actually runs that command (either in a shell, or in Acme). There's no fixed interface in Acme, just a set of menu bars that are also text fields. I love the elegance of this idea, where a person builds a text editor's UI just by... editing text.

Which brings us to Sublime. I've been very clear that Caret is modeled closely on Sublime, with a few changes to account for quirks of the platform and my own preferences. That's partly because it's generally considered the tool of choice for web developers, and partly because it's genuinely the editor that has my favorite workflow tools. Insofar as Sublime has a philosophy, it is to prioritize clarity and transparency over power. That's not to say it's not powerful — it certainly is. But it tries to be obvious in a way that other editors do not.

For example, say you need to change a variable name throughout a function. Instead of sending you straight to a regex or a macro, Sublime lets you select all the instances of that variable with the mouse or keyboard, which creates multiple cursors. Then you just type the new name. It's not as powerful as a regular expression, but 90% of the time, it's probably what you wanted to do anyway. Sublime's command/go-to palette is another smart-but-obvious idea: instead of hunting through the menus or the filesystem, you open the palette and type to fuzzy-filter the list. It's the speed of a command line without the hostility.

To paraphrase an old saw, the best feature is the one you have with you. That's why putting the command palette in Caret was a must, since it puts all the menu items just a few keystrokes away. Even now, I don't always remember where a given menu item is in the toolbar in my own editor, because I hardly ever use the mouse. There was a good week when menus looked completely wrong, and I never even noticed.

The reason I've started looking over other editors now is that I think Caret can reach for more than just parity with Sublime. I'm intrigued by the ways that Acme makes it easy to jump around files, and lately I've been thinking about what it means to be an editor built in "web technology." Adding the ability to open files from a URL is a given, but it's only the start: given that OAuth provides a simple, standard method of authenticating against a remote server, a File implementation for Caret could easily open files against service endpoints for something like Github or Ghost in a generic way. It would be a universal cloud editor, but easily capable of running locally.

Of course, Caret won't be the last editor to try something different (just this week, Github announced their own effort), but it's still pretty amazing how many ways we have to solve a simple problem like "typing letters into a file." As a writer and a coder, I love being spoiled for choice.

February 5, 2014

Filed under: tech»web

Chromecastic

After a busy couple of weeks, Seattle went and won the Super Bowl, leading to the world's most polite celebration in our neighborhood.

There was another prize for the weekend: a friend of ours gifted us a Chromecast, which will be much appreciated since there's currently no way to watch HBO on the PS4. On Monday, Google released the public SDK for the platform, so I decided to poke around a bit.

Chromecast has a decidedly odd way of loading content. The device itself is just a thin shell around a Chrome window, and it loads web pages like any other browser. But there's no keyboard of any kind, so how does it know which page to load? The answer is that each "app" has an ID listed with Google, corresponding to a set of URLs that the developer provides. When a mobile app or a computer running Chrome triggers the Chromecast, it sends the app ID, which the device then sends to Google and gets a URL in return (or, if the app hasn't been listed, it does nothing). From that point on, you can send messages to the page via Google's cloud, and your page can do whatever you want it to do. Getting your pages linked to an application ID on the Chromecast lookup servers costs $5.

Five dollars is a low price, but it's more than I really want to pay for a glorified DNS. I'm a little dismayed by the restrictions on the open web — I'd like the option to just send a URL directly. I'm also holding out for a pure JavaScript API, instead of piggybacking on the Chrome extension. So I probably won't be writing any Chromecast apps any time soon. But it's certainly not for a lack of ideas. The interaction model that Chromecast uses — where the screen is just a dumb display, but it can receive commands from other web-accessible devices — is strikingly similar to Microsoft's SmartGlass model. And where Microsoft seems to see it as a way to create companion apps for XBox, I think it's interesting to think about how this "distributed I/O" model could be used for standalone applications.

  • The Chromecast isn't going to rival any consoles, but it wouldn't have to be for a lot of group gaming experiences. Just having a screen that could be used as a scoreboard, or a trivia question where phones are used as buzzers, would be a cool usage that doesn't require precise controls or rich graphics. Turn-based games could easily use the screen as a board overview, while letting people zoom in and move their pieces from their local touchscreen. It also provides an interesting split between public and private information for players that many video games (excepting the Wii U and Dreamcast) couldn't duplicate.
  • I love maps. I think they're the real face of augmented reality, as any regular traveler can attest these days. But they don't have to be mobile. A Chromecast could easily serve as a map up on your wall, updated with whatever information you find interesting. Maybe that's as simple as the weather, but imagine being able to tag it with RFID information or last-known positions for people in your household. Systems like Google Now, which learn from your schedule, could even post notifications for the buses that are coming or traffic problems that you're likely to face.
  • Along those same lines, a simple dashboard could be helpful for businesses and individuals. Being able to throw metrics up on the wall with a web browser is not a new thing, but tying it to a smart, feed-aware service would open up all kinds of new tricks, like being able to leave yourself notes via a hashtag on social networks. There's not really any input needed: it's just a passive display of whatever you want to keep yourself caught up on, in an easy at-a-glance format.
  • Finally, it's probably just all the public speaking I've been doing lately, but it's tempting to think that a presentation app for Chromecast would be super-helpful for speakers. A lot of times, when I go to a meetup or a new classroom, it's hard to predict what kind of video hookups the projector will have, assuming that they even have a projector. But many times, there will be a big-screen LCD TV, with a handy HDMI input. Being able to carry a Chromecast with me to make my presentations, especially if the speaker notes can be viewed separately, would be awesome.

When we talk about the web being device-agnostic, the Chromecast is a perfect example of what we're talking about. It's radically different from other web clients: low DPI on a big screen, no local input, and unpredictable performance. But that's the power of the platform — as a toolkit, its reach is unparalleled. And the restrictions prove to be exciting inspiration for new uses, just as touchscreens came with their own unique challenges and advantages. I don't know if Chromecast is going to be successful, but the hacks for it are going to be really interesting.

November 22, 2013

Filed under: tech»coding

Plug In, Turn On, Drop Out

This is me, thinking about plugins for Caret, as I find myself doing these days. In theory, extensibility is my focus for the next major release, because I think it's a big deal and a natural next step for a code editor. In practice, it's not that simple.

Approach #1: Postal Services

Chrome has a pretty tight security model applied to packaged apps, not the least of which is a strict content security policy. You can't run code from outside your application. You can't construct code using eval (that's good!) or new Function (that's bad). You can't add new files to your application (mostly).

Chrome does expose an inter-app messaging system similar to postMessage, and I initially thought about using this to create a series of hooks that external applications could use. Caret would broadcast notifications to registered listeners when it did something, and those listeners could respond. They could also trigger Caret commands via message (I do still plan to add this, it's too handy not to have).

Plugins written this way would be neatly encapsulated and secure, but they'd also be intensely frustrating to write. Supporting them would require auditing much of Caret's code to make sure that it's all okay with asynchronous operation, which is not usually the case right now. I'd have to make sure that Caret is sufficiently chatty, because we'd need hooks everywhere, which would clutter the code with broadcast/response blocks. And it would probably mean writing a helper app to serve as a patchboard between applications, and as a debugging tool.

I'm not wild about this one.

Approach #2: Repo, Man

I've been trying to think of a way around the whole inter-app messaging paradigm for about a month now. At the same time, I've been responding to requests for Git and remote filesystem support, which will not be a core Caret feature. For some reason, thinking about the two in close proximity started me thinking along a new track: what if there were a way to work around the security policy using the HTML5 file system? I decided to run some tests.

It turns out this is absolutely possible: Chrome apps can download a script from any server that's whitelisted in their manifest, write that out to the filesystem, and then get a special URL to load that file into a <script> tag. I assume this has survived security audits because it involves jumping through too many hoops to be anything other than deliberate.

The advantages of this approach are numerous. Plugin code would operate directly alongside Caret's source, able to access the same functions and modules and call the same APIs that I use. It would be powerful, and would not require users to publish plugins to the Chrome store as if they were full applications. And it would scale well: all I would need to do is maintain the plugin index and provide some helper functions for developers to use when downloading and caching their code.

Unfortunately, it is also apparently forbidden by the Chrome Web Store policies, which state:

Packaged apps should not ... Download or execute scripts dynamically outside a sandboxed environment such as a webview or a sandboxed iframe.

At that point, we're back to postMessage unless I want to be banned from the store. So much for the workaround.

Approach #3: Local Hosting

So how can I make plugins work for end users? Well, honestly, maybe I don't. One of the nice things about writing developer tools, particularly oddball developer tools, is that the people using them and wanting to expand on them are expected to have some degree of technical knowledge. They can be trusted to figure out processes that wouldn't necessarily be acceptable for average computer users. In this case, that might mean running Caret as an unpacked app.

Loading Caret from source is not difficult; I do it all the time while I'm testing. Right now, if someone wants to fork Caret and add their own features, that's easy enough to do (and actually, a couple of people have done so already). What it lacks is a simple entry point for people who want to contribute functionality without digging into all the modules I've already written.

By setting up a plugins directory and a little bit of infrastructure, it's possible to reach a middle ground. Developers who really want extra packages can load Caret from source, dump their code into a designated location, and have their code bootstrapped automatically. It's not as friendly as having web store distribution, and it's not as elegant as allowing for a central repo, but it does deliver power without requiring major rewrites.

Working through all these different approaches has given me a new appreciation for insecurity, which sounds funny but it's true. Obviously I'm in favor of secure computing, but working with mobile operating systems and Chrome OS, which strongly sandbox their code, tends to make a person aware of how helpful a few security holes can be, and vice versa: the same openings that make easy extension and flexibility possible are also the weak points that an attacker can exploit. At times like this, even though I should maybe know better, that tradeoff seems absolutely worth it.

November 14, 2013

Filed under: tech»coding

For Free Ninety Nine I'll Beat 99 Acts Down

Assuming that the hamster powering the Chrome web store stats is just resting, Caret clicked over to 10,000 installations sometime on Monday. That's a lot of downloads. At a buck apiece, even if only a fraction of those people had bought a for-pay version, that might be a lot of money. So why is Caret free? More importantly, why is it free and open source? Ultimately, there are three reasons:

  1. I feel like I owe the open source community for the value I've gotten from it (basically, everything on the Internet), and this is a way to repay that debt.
  2. Caret isn't really just mine. It's heavily influenced by Sublime, and builds on another open source project for its text processing. As such, it feels awkward to charge money for other people's work, even if Caret's unique code is significant in its own right.
  3. I think I get more value (i.e. job marketability, reputation, skill practice) out of being the person with a chart-topping Chrome app, in the long term, than I would get from sales.

Originally, I had planned on writing about how I reconcile being a passionate supporter of paid writing with giving away my hobby code, but I don't actually see any conflict. I expect a paycheck for freelance coding the same way I expect it for journalism — writing here (and coding Caret) doesn't directly benefit anyone but me, and it doesn't really cost me anything.

In fact, it turns out that both industries also share some uncomfortable habits when it comes to labor. Ashe Dryden writes:

Statistically, we expect that the demographic breakdown of people contributing to OSS would be about the same as the people who are participating in the OSS community, but we aren't seeing that. Ethnicity of computing and the US population breaks down what we would hope to see as far as ethnicity goes. As far as gender, women make up 24% of the industry, according to the same paper that gave us the 1.5% OSS contributor statistic.

Dryden was responding to a sentiment that I've seen myself (and even been guilty of, from time to time): using a person's open source record on sites like GitHub as a proxy for hireability. As she points out, however, building an open source portfolio is something that's a lot easier for white men. We're more likely to have free time, more likely to have jobs that will pay for open source contributions, and far less likely to be harassed or dismissed. I was aware of those factors, but I was still shocked to see that diversity numbers in open source are so low. We need to do better.

As eye-opening as that is, I think the middle section of Dryden's piece centers on a really interesting question: who profits?

I'd argue that the people who benefit the most from the unpaid labor of OSS as well as the underpaid labor of marginalized people in technology are business owners and stakeholders in these companies. Having to pay additional hundreds of thousands or millions of dollars for this labor would mean smaller profit margins. Technology is one of the most profitable industries in the US and certainly could support at least pay equality, especially considering how low our current participation is from marginalized people.

...Open source originally broke us free from the shackles of proprietary software which forced us to "pay to play" and gave us little in the way of choices for customization. Without realizing it, we've ended up in a similar scenario where we are now paying for the development of software that large companies financially benefit from with little cost to them.

Her conclusion — that the community benefits, but it's mostly businesses who boost their profits from free software — should be unsettling for anyone who contributes to open source, and particularly those of us who see it as a way to spread a little socialist good will. For this reason, if nothing else, I'll always prefer the GPL and other "copyleft" licenses, forcing businesses to play ball if they want to use my code.
