May 30, 2013

Filed under: tech»coding

Project Seymour

A month from now, Google will shut down Reader, leaving RSS addicts in the lurch. I suspect this will be both more and less disruptive than anticipated: expect replacement services to go through another set of growing pains, but RSS isn't exactly a high lock-in situation, and most people will find a new status quo fairly quickly.

I am not eager to move from one hosted service to another (once burned, twice shy), nor do I want to go back to native applications that can't share progress, so as soon as the shutdown was announced I started working on a self-hosted RSS reader. I applied the same techniques I'd used for Big Fish Unlimited: an easy-to-configure router, a series of views talking to the database only through model classes, and heavy use of closures for dependency management and callbacks. I built a wrapper around PHP's dismal cURL library. It was a nice piece of architecture.

It also bogged down very, very quickly. My goal was a single-page application with straightforward database queries, but I was building the foundation for a sprawling, multi-page site. Any time I started to dip in and add functionality, I found myself frustrated by how much plumbing I needed in order to do it "the right way." I was also annoyed by the difficulty of safely requesting a large number of feeds in parallel in PHP. The language just isn't built for that kind of task, even with the adaptations and improvements that have been pasted on.

This week I decided to start over, this time using Node.js and adopting a strict "worse is better" philosophy. When I use Reader, 99% of my time is spent in "All Items" pressing the spacebar (or, on mobile, clicking "Mark Items as Read") to advance the stream. So I made that functionality my primary concern, and wrote only as much as I needed (both in terms of code size and elegance) to make that happen. In two days, I've gotten farther than I had with the PHP version, and I'm much happier with the underlying platform as well--Node is unsurprisingly well suited to firing off tens or hundreds of concurrent requests.
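As a rough illustration of why this plays to Node's strengths, here's a minimal sketch of the fan-out pattern--hypothetical URLs, and not Weir's actual fetching code:

    // A minimal sketch of concurrent feed fetching in Node --
    // illustrative only, not Weir's actual code.
    var http = require("http");

    var feeds = [
      "http://example.com/feed1.xml", // hypothetical feed URLs
      "http://example.com/feed2.xml",
      "http://example.com/feed3.xml"
    ];

    var pending = feeds.length;
    var results = {};

    feeds.forEach(function(url) {
      http.get(url, function(response) {
        var body = "";
        response.on("data", function(chunk) { body += chunk; });
        response.on("end", function() {
          results[url] = body;
          if (--pending === 0) done();
        });
      }).on("error", function() {
        results[url] = null; // flag the failure instead of crashing
        if (--pending === 0) done();
      });
    });

    function done() {
      console.log("fetched", feeds.length, "feeds");
    }

All of the requests are in flight at once, and the callback bookkeeping replaces the thread or process juggling that the same fan-out would require in PHP.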

I've just posted the work-in-progress code for the application, which I'm calling Weir (just barely winning out over "Audrey II"), to a public GitHub repo. It is currently ugly, badly-documented, and patchy in places. The Angular code I'm using for the front-end is obviously written by someone with very little experience using the library. There's lots of room for improvement. On the other hand, my momentum is very good. By next week, I expect Weir will be good enough for me to dogfood it full time, and at that point improvements will come naturally whenever I need to smooth out the rough edges.

I like this way of working--"worse is better"--quite a bit. It's not always pretty, but it seems effective so far. It also fits in well with my general coding style, which is (perhaps unsurprisingly) on the left-ish side of Steve Yegge's developer politics. I like elegance and architecture as much as the next person, but when it all comes down to it, there's no point in elegant code that never gets used.

Writing my own Reader alternative is also proving educational. The conventional wisdom is that RSS readers benefit greatly from running at scale: operations like feed retrieval can be performed once for all subscribers, spreading the costs out. The flip side is that you're at the mercy of the server's fetch schedule for when you get updates. High-frequency feeds, such as politics or news, get batched up instead of coming in as they're posted. I'm also able to get a lot more feedback on which feeds are dead, which came as a surprise: Reader just swallowed the errors whole. All in all, I doubt the experience will be any worse.
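Surfacing those errors doesn't take much machinery, either. A sketch of the kind of bookkeeping involved, using a hypothetical in-memory feed record rather than Weir's actual schema:

    // Hypothetical dead-feed bookkeeping -- not Weir's actual schema.
    var FAILURE_LIMIT = 5;

    function recordFetch(feed, err) {
      if (err) {
        feed.failures = (feed.failures || 0) + 1;
        // after enough consecutive failures, flag the feed for the UI
        feed.dead = feed.failures >= FAILURE_LIMIT;
      } else {
        feed.failures = 0;
        feed.dead = false;
      }
    }

A hosted service tracks all of this too, of course. It just doesn't tell you.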

Currently, Weir isn't much good for public consumption. I've made a sanitized copy of my config file in the repo, but there's no setup script for the database, and no import step for getting your subscriptions loaded up. I hope to have that ready soon, and the code is licensed under the GPL, so pull requests and feature suggestions are welcomed as it becomes usable for other people.

May 22, 2013

Filed under: tech»education

Equal Opportunity

Last Friday, I gave a short presentation for a workshop run by the SCCC Byte Club called "Technical Interview Mastery for Women." Despite the name, it was attended by both men and women. Most of my advice was non-gender specific, anyway: I wanted to encourage people to interview productively by taking into account the perspective from the other side of the table, and seeing the process more as a dialog instead of a confrontation.

Still, during the question and answer period, several people asked about being women in the interview process. Given that my co-presenter has many years more experience being a woman, I deferred to her whenever possible, but I did chime in when the conversation turned to interaction styles. One participant said she was ignored if she wasn't assertive enough, but was then considered unpleasant if she stuck up for herself--what could she do about this?

It's one thing, I said, to suggest ways that women should adapt their communications for a male-dominated workplace--that kind of pragmatic code-switching may well do the trick. But I think it's unfair to put all the burden on women to adapt to men. There needs to be a way to remind men that it's their responsibility to act reasonably.

The problem is that it's often difficult to have that conversation without falling afoul of the same double-standard that says women in the workplace shouldn't be too loud. Complaining about sexism tends to raise hackles--meaning that the offending statement not only goes uncorrected, but dialog gets shut down. I don't know that I have any good solutions to that, but I suggested finding ways to phrase the issue akin to Jay Smooth's presentations on How To Tell People They Sound Racist. I like to think that most people aren't trying to be sexist, they're just not very self-aware. This may be a faulty assumption.

There are still people who argue that the tech industry isn't sexist--that women just aren't as inherently good at coding (this is often hidden behind comments that it's a "meritocracy"--in which, conveniently, women somehow just haven't had merit). From my point of view, I don't see any way that could be correct. My best JavaScript students are split 50/50 between men and women (so are the worst students). I trained equal numbers of men and women on the multimedia team at CQ (and probably would have given the effectiveness prize to the women in a pinch). Moreover, I've never seen any evidence that the skills I use in day-to-day work--spatial reasoning, some basic math, navigating abstraction--are gender-exclusive (or, indeed, required for all programming: the job of a web programmer is markedly different from a systems coder or security investigator, and yet those also suffer from serious inequality issues).

My talk at the workshop was specifically about interviewing, but obviously this is an issue that goes beyond hiring. Something is happening between the classroom and the workplace that causes this disparity. We have a word for this--sexism--regardless of the specific mechanics. And I would love to have more discussions of those specifics, but it's like climate change: every time there's a decent conversation in a public forum about solutions, it gets derailed by people who insist loudly that they don't think there's a problem in the first place.

That said, assuming that people just don't realize when they've done something wrong, there are doubtless ways to address the topic without defensiveness. If the description "sexist" derails, I'm personally happy to use other terms, like "unprofessional" or "rude"--I'm just embarrassed that I (and others) need to resort to euphemism. We need to change the culture around this discussion--to make it clear that we (both men and women) take this seriously, including respectful responses to criticism. We can do better, and I'd like to be able to tell future workshops that we're trying.

May 16, 2013

Filed under: tech»web

Why the Web Wins

Last year, Google spent most of its I/O conference keynote talking about hardware: Android, Glass, and tablets. This year, someone seems to have reminded Google that they're a web company, since most of the new announcements were all running in a browser, and in many cases (like the photo editing and WebGL maps) pushing the envelope for what's possible. As much as I like Android, I'm really happy to see the web getting some love.

There's been a drumbeat for several years now, particularly as smartphones got more powerful, to move away from web apps, and Google's focus on Android lent credence to that perspective. A conventional wisdom has emerged: web apps were a misstep, but we're past that now, and it'll be all native from here on out. I couldn't disagree more, and Google's clearly staking its claim as well.

The reason the web wins (such that anything will) is not, ultimately, because of its elegance or its purity (it's not big on either) but because of its ubiquity. The browser is the worst cross-platform API except for all the other ones, and (more importantly) it offers persistence. I can turn on any computer with an Internet connection and have near-instant access to files and applications without installing anything or worrying about compatibility. Every computer is my computer on the web.

For context, there was a time in my high school years when Java was on fire. As a cross-platform language with a network-savvy runtime, it was going to revive thin clients: I remember talking to people about the idea that I could log into any computer and load my desktop (with all my software) over the Internet connection. There wouldn't be any point to having your own dedicated hardware in a world like that, because you'd just grab whatever was handy and use it as a host. It was going to be like living in a William Gibson novel.

Java ended up being too heavy and too slow to make that actually happen. Instead, this weird combination of JavaScript, HTML, and CSS took over, like weeds springing up and somehow forming a fully-furnished apartment block. The surprise was that the ad-hoc web platform turned out to be competitive with Java on the front-end. Even though it's meant to be a document viewer, the browser is pretty good at building UI, and it's getting a lot better. I've been creating some web apps lately without worrying about backwards compatibility, and it's been remarkably pleasant, both as a developer and a user.

I don't believe that native programs will ever entirely go away. But I do think we'll see web applications spreading their tentacles over time, because if something is possible in the browser--if it's a decent user experience, plus it has the web's advantages of instant, no-install launch and sharing across devices--there's not much point in keeping it native. It's better to have your e-mail on any device. It's better for me to do presentations from a browser, instead of carrying a PowerPoint file around. It's better to keep my RSS reader in the cloud, instead of tying its state to individual machines. As browsers improve, this will be true of more and more applications, just as it was true of the Java applets that web technology replaced.

Google and I disagree about where those applications should be hosted, of course. Google thinks it should run them (which for many people is perfectly okay), and I want to run them myself. But that's a difference of degree, not principle. We both think the basic foundation--an open, hackable, portable web--is an important priority.

I like to look at it in terms of "design fiction"--the dramatic endpoint that proponents of each approach are aiming to achieve. With native apps, devices themselves are valuable, because native code is heavy: it takes time to install, it stores data locally, and it's probably locked to a given OS or architecture. Web apps don't give us the same immediate power, but their ultimate goal is a world where your local hardware doesn't matter--walk up to any web-capable surface, and your applications are there. Software in the web-centric viewpoint follows you, not your stuff. There are lots of reasons why I'm bullish on the web, but that particular vision is, for me, the most compelling one.

May 8, 2013

Filed under: music»performance»dance

That's Guerrilla With a U

Soul Society is here again, and so am I. If you're in the DC area this weekend, check it out.

May 2, 2013

Filed under: gaming»design


There's a common complaint about the Bioshock games, which is that they're not very good shooters. People writing about Bioshock Infinite tend to mention this, saying that the story is interesting and the writing is sharp but the actual game is poor. And this is true: it's not a very good first-person shooter, and it's arguably much worse than its predecessors. But the implication of most of these comments, from Kotaku's essay on its violence to Brainy Gamer's naming it the "apotheosis of FPS," is that Infinite is bad in many ways because it's a first-person shooter--that it's shackled to its point of view. In these critiques, it becomes a sort of stand-in for the whole genre, from Call of Duty to Halo.

I sympathize with the people who feel like the game's violence is incoherent (it is), and who are sick of the whole console-inspired manshooting genre. But I love shooters, and it bugs me a little to see them saddled with the burden of everything that's wrong with American media.

Set aside Infinite's themes and its apparent belief that the best superpower is the ability to literally generate plot holes--when we say that it's not a good FPS, what does that mean? What is it, mechanically, that separates a good shooter from a bad one? I'm not a designer, but as an avid FPS player, there are basically three rules that Infinite breaks.

First of all, the enemy progression can't be just about "bigger lifebars." A good shooter increases difficulty by forcing players to change their patterns because they're not able to rely on the same rote strategy. Halo, for all its flaws, gets this right: few of its enemies are actually "tough," but each of them has a different method of avoiding damage, and a different weapon style. By throwing in different combinations, players are forced to change up their tactics for each encounter, or even at multiple points during the encounter. Almost all of Infinite's enemies, on the other hand, are the same walking tanks, with similar (dim-witted) behaviors and hitscan weaponry. I never had to change my approach, only the amount of ammo I used.

Along those lines, weapons need strengths and weaknesses. Each one should have a situation where they feel thrillingly powerful, as well as a larger set of situations where they're relatively useless. This doesn't have to conflict with a limited inventory--I loved Crysis 2's sniper rifle, spending the entire game sneaking between cover positions in stealth mode, but it was always paired with a strong close-in gun for when I was overrun. A good game forces you to change weapons for reasons other than "out of ammunition." Infinite's close-range weapons feel identical, and its sniper rifle is rarely useful, since a single shot alerts everyone to your position.

Finally, not every fight can simply be about shooting. Most shooters are actually about navigating space and territory, and the shooting becomes a way of altering the priorities for movement. Do you take cover, or dodge in the open? Do you need more range, or need to close on an enemy? The original Bioshock made the interplay between the environment and your abilities one of its most compelling features: electrifying pools of water, setting fire to flammable objects, flinging scenery around with telekinesis. But at the very least, you need an objective from time to time with more complexity than "kill everything," both as a player and in terms of narrative.

Bioshock Infinite has, in all seriousness, no period I can remember when my objective was not reduced to "kill everything." Combined with a bland arsenal and blander enemies, this makes it a tedious game, but it also puts it at complete odds with its characters. The writing in Infinite is unusually good for a shooter, but it's hard not to notice that Elizabeth freaks out (rightfully) during one of Booker's murderous rampages, comes to a cheery acceptance of it a few minutes later, and then spends the rest of the game tossing helpful items to you under fire. That's writing that makes both the narrative and the mechanics worse, by drawing attention to the worst parts of both.

It's not the only shooter with those flaws--people just had higher expectations for it. The average FPS is badly written, and it's a favorite genre for warmongering propaganda pieces. But that's true of many games, and yet we don't see pieces talking about the "apotheosis of platformers," or talking about RTS as though they're emblematic of wider ills just because Starcraft II is kind of a mess. And there's still interesting stuff being done in the genre: Portal and Thirty Flights of Loving come to mind. To say that FPS have reached their limits, ironically, seems like a pretty limited perspective.

April 25, 2013

Filed under: culture»internet

Network Affect

A couple of years ago, I spent the money for a subscription to Ars Technica, because I really liked their Anonymous/HBGary reporting, and wanted the full RSS feeds. Along with that, every now and then they'll send out a message about a coupon or special offer, which is how I ended up with a free account on App.net, the for-pay Twitter clone. Then I forgot about it, because the last thing I need is a way to find more people that annoy me.

And then someone linked me to this blog post, which made my week. It's a pitch for App.net in the most overwrought, let-them-eat-cake way. I'm going to excerpt a bit, but you should click through: it's better when you can just soak up the majesty of the whole thing:

The difference between a public and a private golf course is so profound that it's hard to play a public course after being a member of a private course. It's like flying coach your entire life, and then getting a first class seat on Asiana — it's damned hard to go back.

That's the difference between Twitter and App.net to me. Twitter is the public golf course, the coach seat. It's where everyone is, and that's exactly the problem. App.net is where a few people that are invested in the product, its direction, and the overall health of the service, go to socialize online.

[paragraph of awkward self-promotion removed]

Welcome to the first-class Twitter experience.

I actually don't know if I could write a parody of upper-class snobbery that good. If you hold your hand up to the screen, you can almost feel the warmth of his self-regard--but not too close! They don't let just anyone into this country club, you know.

Seriously, though: while I've amused myself endlessly trying to come up with even-less-relatable metaphors for things ("Twitter is the black truffle, as opposed to the finer white truffles I eat at my summer home in Tuscany"), one random doofus with a blog is not cause for comment. Silly as it is, that post made me reconsider the way I look at internet advertising and ownership--if only to avoid agreeing with him.

In general, I'm not a big fan of advertising or ad-supported services. On Android, I usually buy apps instead of using the free versions, and I believe that people should own their content on the Internet. But let's be realistic: most people will not pay for their own server or software, and many people can't--whether because they don't have the money, or because they don't have access to the infrastructure (bank account, credit card, etc.) that's required. Owning your stuff on the internet is both a privilege and a visible signifier of that privilege.

This creates hierarchies between users, and even non-savvy people pick up on that. When Instagram finally decided to release an Android client, the moaning from a number of users about "those people" invading their clean, tasteful, iPhone-only service was a sight to behold. The irony of Instagram snobbery is that the company was only valuable because of its huge audience. It only got that userbase because it was free. Therein lies the catch-22 of these kinds of services: the scale that makes them useful and valuable also makes them profoundly expensive to run. Subscription-based or self-hosted business models are more sustainable, but they're never going to get as big.

Meanwhile, the technical people who think they could do something about these problems--"a few people that are invested in the product, its direction, and the overall health of the service"--are off building their own special first-class seating. Not that I think they'll make it, personally--it's a perpetual tragedy that the people threatening to Go Galt never do, since that would require them to stop bothering the rest of us.

I often see people expressing distaste for ad-supported sites with the oft-quoted line "you're not the customer, you're the product." That's nice when you have the option of paying for your own e-mail, and running your own blog, in the same way that minimalism looks awfully nice when you have the credit rating to afford it. People without money have to live with clutter. If we're interested in an internet that offers opportunity to everyone, we have to accept a more forgiving view of ad-supported business, and focus on how to make it safer for people who have no other option. Otherwise we're just congratulating each other on getting into the country club.

April 17, 2013

Filed under: meta»announce»delays

The Cruelty of One's Early Thirties

Normally, I try to have something written and posted here by Wednesday night each week, because I feel like that's the minimum I can write and still call myself a blogger. This week, unfortunately, between writing my textbook (highly recommended!) and trudging through Bioshock Infinite (not at all recommended!), my right wrist is probably in the worst shape it's been in for about five years now. To recover, I'm giving myself the week off from computers outside of work.

I figure you don't really need to know this, but if I write it up here, I'm more likely to stick to it.

While I'm complaining, my knees hurt and these kids won't stay off my lawn.

April 8, 2013

Filed under: fiction»litcrit

An Iain Banks Primer

Last week, Iain Banks announced that he has terminal cancer, with probably a year remaining to live. He'll hopefully see the publication of one more book, The Quarry, before he goes.

Banks has long been one of my favorite authors, to the point that our living room bookshelves have several units devoted entirely to his work. I even had Belle bring me back paperbacks of his literary fiction from a trip to England, since those are still hard to find on this side of the pond. I'm tremendously saddened that he's doing so poorly, and I hope his plans to enjoy his remaining time as much as possible are a success.

If you've never really read any of Banks' work, and you'd like to see what the fuss is about now, where should you start? The answer seems to be fairly personal--especially within the science fiction genre, opinions often differ wildly on which books are better. This is my take, sorted between the two genres (literary and SF) that Banks called home.

Literary Fiction

  • The Wasp Factory: His debut novel, this very much introduces two common elements of Banks' fiction: twist endings, and sympathy for characters who are very much unsympathetic. The Wasp Factory centers on a young sociopath living in a Scottish village, who ritually tortures insects as a method of self-therapy. It's better than it sounds, but don't start here.
  • The Business: This is a better place for first-time readers. Banks uses this book to gently satirize capitalism, with its main character being a senior manager for the shadow company that runs most of the world behind the scenes, and would now like to buy its own country for tax purposes. It's a little fluffy, but also tremendously fun.
  • Walking on Glass: Published soon after The Wasp Factory, many of the same tics are present, but this time the story is told from multiple perspectives--one of which is entirely fanciful. I think this is the first of Banks' novels that I read, and it blew me away, but didn't hold up nearly as well on a second reading.
  • The Bridge: Is this science fiction, or literary? The Bridge sees Banks learning how to combine the techniques from his previous two books, while leaving off the twist ending in favor of more character development and discovery. I also love the chapters written in full-Scottish brogue as a parody of Conan-esque barbarian tales. This'll always be one of my favorites, and is a great place to jump in.
  • Dead Air: On the literary side, this is the only title I'd actively skip. Banks can be a bit of a polemicist, which doesn't normally bother me, but in this book about a shock jock he lets the character rail on a bit more than is really justified. If you want a book about character redemption, you're better off with Espedair Street or The Crow Road.

Science Fiction

  • Player of Games: Generally considered the best intro to the Culture books, which is probably about right. It has all the elements of a great Culture yarn: huge set pieces, likeable characters who are dissatisfied with their utopian society, and the manipulations of the Mind AIs that actually run the Culture as a whole. It also serves as a fun, slightly-stacked argument in favor of Banks' socialist, post-scarcity future, with the capitalist aliens serving as skeptical audience stand-ins.
  • Use of Weapons: If The Bridge was just on the literary side of things but had a number of science-fictional elements, Use of Weapons is its counterpart. This is definitely SF, but it has elements of cruelty and experimentation that could easily have come from Walking on Glass. It also has a fascinating structure, since the chapters alternate between two different parts of the main character's life as a Culture mercenary, each shedding light and leaving clues for the other, until they merge together for a devastating conclusion. It also begins Banks' habit of showing how the Culture's utopian surface actually hides a number of much less savory choices being made for the greater good.
  • Against a Dark Background: One of my favorites from outside the Culture books. AADB follows a former soldier named Sharrow who is hired to find one of the Lazy Guns--demented superweapons that destroy their targets with sudden, completely random flights of whimsy. Since there's no continuity to worry about, Banks has a great deal of fun with one-off jokes, like the gang of solipsists that wander in and out, each convinced that everyone else is just a hallucination. It's also a merciless book when it comes to its characters, but not without reason.

In addition to these older titles, you may be interested in my reviews of Banks' newer work, including The Hydrogen Sonata, Surface Detail, Matter, and Transition.

March 27, 2013

Filed under: politics»issues»education

Free 'Til It Hurts

These "Academic Freedom Act" laws seem like a very good idea to me, but I wonder if we're taking them far enough. If the Discovery Institute and all manner of right-wing think tanks want to Teach the Controversy, why limit ourselves to evolution and climate change? With that in mind, I've assembled a new school curriculum that (finally!) acknowledges the complicated world beyond "facts" and "truth."

Social Studies: Students will learn about the checks and balances built into our democratic way of life, of course. But we shouldn't leave them ignorant of competing theories, such as David Icke's "lizard oligarchy," in case the queen of England really does turn out to be a giant space reptile bent on world domination. As high school seniors, students will also spend the semester learning about Ayn Rand's theory of radical selfishness, in the hopes that it will keep them from reading Atlas Shrugged in college and becoming insufferably tedious for about a year and a half.

History: Move over, eurocentric history! Take cover, afrocentric and multicultural history! Under new management, history class will approach the hard questions of the past with an open mind toward alternate theories. For example: did the Holocaust really happen, or is it just the invention of a shadowy cabal working behind the scenes of our financial and entertainment industries? You know who I'm talking about.

Physical Education: Gym class doesn't change, but students who get sick will now be told that their humours are out of balance, and will be bled by on-site leeches. Coaches also have the option of blaming vaccines when the football team loses.

English: Given the predominance of "literacy" in the early grades, students will spend the second half of their primary education learning how to communicate pre-verbally, mostly by pointing and grunting. For many teenagers, this won't be much of a change. The curriculum will culminate with a trip to a local quarry, where the students will attempt to recreate the Lascaux cave paintings, thus teaching them the valuable life lesson that art is hard so why try anyway?

Math: I tried to think of something funny about math, and then I remembered that we still teach kids about "imaginary" numbers, and to add insult to injury we do so very badly. Math is weird, y'all.

Foreign Languages: One word: Esperanto. Ironically, in Esperanto, this is actually twelve words. It's the language of the future, people. William Shatner did a whole movie in Esperanto once. I've got a good feeling about this one.

You're welcome.

March 20, 2013

Filed under: tech»education


Working on my textbook continues to be a great opportunity to write interesting little snippets of interactive JavaScript. Today I'd like to draw your attention to a couple of new modules for doing annotated source walkthroughs that I'm calling Timelapse. They're located in the repo under js/meta/TimeLapse and js/meta/TLPlayer. There's also a demo history file located here.

There are lots of tools for doing diffs between two source files, but I'm not aware of any source control system (save Perforce, which we use at ArenaNet) that does a timeline view of all revisions since a file was first checked in, and none that store the entire revision history in a single, web-friendly format. This is a shame, because my goal for several parts of the textbook is to be able to "replay" the process of writing a script, to show how it develops from a few lines of simple code into larger and more functional units like functions and prototypes. It's possible that someone else has done something like this, but a cursory Google couldn't turn it up, so I made my own.

The syntax for the files that Timelapse uses is designed to be similar to a standard diff file, but not to collide with JavaScript, for easy parsing. It's a line-by-line comparison format with two main types of line tags (there's a short example after the list):

  • @x,y@ source line: In this case, the tagged line exists from revision x to revision y. Both x and y are optional--x defaults to the first revision, and leaving out y will mark the line as included through the end of the history.
  • @@c:x; comments @@: This tag marks a multiline comment for a single revision x. Everything between the semicolon and the closing @@ will be loaded but not shown with the rest of the source.
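Based on that description, a tiny two-revision history might look something like this--a guess at the concrete spelling (especially for an omitted bound), rather than a verbatim sample from the repo:

    @@c:1; Revision 1: just a counter @@
    @1,1@ var count = 0;
    @@c:2; Revision 2: renamed, and a total added @@
    @2,@ var count = 0, total = 0;
    @1,@ count++;

Read line by line: the first declaration exists only in revision 1, its replacement exists from revision 2 onward, and the increment survives the whole history.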

You don't have to write these files by hand, which is good, because they can get pretty nightmarish. Instead, I've written an authoring tool for putting in multiple revisions (or importing them, using the HTML5 file API), commenting them, and exporting them. Using Ace means the editor is friendly and includes source-highlighting, which is great. You also don't have to worry about writing an output parser: the TLPlayer module is not quite complete, but it's done enough to wire it up to a UI and let people flip through the file, with new lines highlighted in the output.
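For the curious, decoding the line tags is mostly regex work. A rough sketch of how a player might parse a tagged line--my own guess based on the format description above, not the real TLPlayer code:

    // Rough sketch of decoding a Timelapse line tag. This is a guess
    // based on the format description above, not the real TLPlayer.
    var lineTag = /^@(\d*),?(\d*)@ (.*)$/;

    function parseLine(line, lastRevision) {
      var match = lineTag.exec(line);
      if (!match) return null;
      return {
        start: match[1] ? parseInt(match[1], 10) : 1, // x defaults to the first revision
        end: match[2] ? parseInt(match[2], 10) : lastRevision, // omitted y runs to the end
        text: match[3]
      };
    }

    // "@2,4@ var x = 1;" means the line exists in revisions 2 through 4
    console.log(parseLine("@2,4@ var x = 1;", 9));

The player can then render any given revision by filtering every stored line on whether the requested revision number falls between start and end.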

If you'd like to see a demo, I've started using it for the chapter on writing functions. My goal is to put at least one timelapse at the end of each chapter, so that readers can see the subject matter being used to build at least one real-world script. By doing these as revision histories, I'm hoping to avoid the common textbook "dump a huge source example into the chapter" syndrome. I know when I see that, my eyes glaze over--I don't see any reason it's any different for my students.

Although I don't have a license on the textbook files yet (they'll probably be MIT-licensed in the near future), you're welcome to use these two modules for your own projects, and feel free to submit patches (the serialization, in particular, could probably use some love from someone with a stronger parsing background). I'd love to see if this is useful for anyone else, and I'm hoping it will help make this textbook project much friendlier to new developers.
