In her review of Embassytown, Ursula Le Guin makes a lovely point about the genre and its critics:
Some authors fill a novel with futuristic scenery and jargon and then strenuously, even stertorously, deny that it's science fiction. No, no, they don't write that nasty stuff, never touch it. They write literature. Though curiously familiar with the tropes and conventions of the despised genre, they so blithely ignore the meaning of terms, they reinvent the wheel with such cries of self-admiration, that their endeavours seem a doomed effort to prove that one can write a novel without learning how. ... Only the trash forms of science fiction are undemanding and predictable; the good stuff, like all good fiction, is not for lazy minds. Where the complexity of realistic novels is moral and psychological, in science fiction it's moral and intellectual; individual character is seldom the key.

To put it another way, good science fiction is often sociological literature, as opposed to psychological. It is, as the argument goes, more about the concerns of the present than about any predicted future. And as Mieville himself has written, it lends itself to radical political thinking precisely because it jettisons the most basic bounds of possibility. That's a powerful tool, as Sam Thompson's review in the London Review of Books explains.
Lately I've been wondering if, after the genre gets over its twin infatuation with steampunk and the undead, we're due for another round of post-apocalyptic sci-fi. There's certainly cause for it--global warming, outbreaks of E. coli and bird flu, and of course, worries over peak oil. The last is something I find particularly intriguing. Could we call it "peak everything"?
Peak oil, of course, is the theory that A) there's a limited amount of petroleum that can be extracted from the earth within our current geological span of existence, and B) we will soon pass, or have already passed, the point where we have extracted all the easy-to-find oil. That means it'll be increasingly expensive to run a petroleum economy--possibly too expensive. We'll have to find alternatives, or suffer serious political and economic consequences.
Peak everything, by extension, would be based on the idea that oil isn't the only exhaustible resource on which our industrialized society is based. Modern electronics are manufactured using a variety of rare materials like coltan, and those aren't inexhaustible either (not to mention that they're mined in conflict areas, making their supply prone to disruption). The plastics used in packaging and components are petroleum-based. And some things that are technically renewable might, like oil, turn out to be replenished at a rate so slow as to be effectively non-renewable. In response, we'd have to drastically change our manufacturing and consumption, which would ripple through unpredictable parts of society: our labor economies, our politics, our communications, and our urban planning, just for a start.
Whether or not this is scientifically feasible, I don't know. It's largely irrelevant anyway, given the genre's loose relationship to its namesake discipline. Charlie Stross, for example, has written regularly about the impossibility of certain genre tropes--this post on the impracticality of interplanetary travel is a great example. Let's not even get started on things like telepathy, artificial intelligence, or functional libertarian governments. Eco-fiction, like other genre settings, is a tool for exploring more than just scientific theory.
But even if we don't see "peak everything" as a subgenre on the rise, I've enjoyed thinking about it this week when reflecting on my own life. After all, how much of a typical American lifestyle could be maintained if we imposed a strict renewables-only requirement? Would we see more bespoke manufacturing, more user-serviceable parts encased in renewable materials like wood and bone? Would we learn to value beautiful patterns of wear over shiny newness?
Despite what seems like obvious potential, I've only read one novel recently that really played with the idea of rampant resource depletion: Paolo Bacigalupi's The Windup Girl. Bacigalupi sets the book in Thailand after the oceans rise and the oil economy runs down: coiled springs are revived as the main method of energy storage, and biotech is used as an imperfect replacement for now-defunct technology. I wouldn't say that The Windup Girl was one of my favorite books this year--it drags a bit, and some of its characterization is patchy--but I loved its willingness to interrogate contemporary economic breakdown without simply veering into Mad Max caricature. For a genre where "anything's possible," that kind of clarity is a bit too rare--and all too necessary.
Summer is here, bringing with it 100° weather and a new series of CQ data projects--which, in turn, means working with Excel again. Here is a list of all the things I hate about Excel:
Why have I rediscovered my enthusiasm for Excel? It's kind of funny, actually. In the past couple of years, fueled by a series of bizarre experiments in Visual Basic scripting, I've often solved spreadsheet dilemmas using brute-force automation. But now that I'm working more often with a graphics reporter who uses the program on OS X, where it no longer supports scripting, I'm learning how to approach tables using the built-in cell functions (the way I probably should have done all along). The resulting journey is a series of elegant surprises as we dig deeper into Excel's capabilities.
I mean, take the consolidate operation and the LOOKUP function. If I had a dime for every time I'd written a search-and-sum macro for someone that could have been avoided by using these two features, I'd have... I don't know, three or four bucks, at least. Consolidate and LOOKUP are a one-two combo for reducing messy, unmatched datasets into organized rows, the kind of information triage that we need all too often. I've been using Excel for years, and it's only now that I've discovered these incredibly useful features (they're much more prominent in the UI of the post-Ribbon versions, but the office is still using copies of Excel 2000). It's tremendously exciting to realize that we can perform this kind of analysis on the fly, without having to load up a full-fledged database, and that we're only scratching the surface of what's possible.
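Take a hypothetical example (the sheet names and ranges here are made up): Sheet1 has agency names in column A, and Sheet2 has those same agencies next to their spending totals. One lookup formula in Sheet1, filled down the column, pulls each agency's total over without a line of VBA:

    =VLOOKUP(A2, Sheet2!$A$2:$B$500, 2, FALSE)

Run Data > Consolidate over the result to sum up any duplicate rows, and that's the whole search-and-sum macro, retired.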
I find that I don't miss the challenge of coding in Excel, because formula construction scratches the same problem-solving itch. Besides, spreadsheets are also programs, of a sort. They may not be Turing-complete, but they mix data and code in much the same way as a binary program, they have variables and "pointers" (cell references), and they offer basic input and output options. Every cell formula is like its own little command line. Assembling them into new configurations offers the same creative thrill of a programming task--at smaller doses, maybe, but in a steady drip of productivity.
But honestly, efficiency and flexibility are only part of my affection for Excel. I think on some level I just really like the idea of a spreadsheet. As a spatially-oriented thinker, the idea of laying out data in a geometric arrangement is instantly intuitive to me--which, for all that I've grown to like SQL, is not something I can say for relational database queries. "We're going to take some values from over there," says Excel, "and then turn them into new values here." A fully-functioning spreadsheet, then, is not just a series of rows and columns. It's a kind of mathematical geography, a landscape through which information flows and collects.
By extension, whenever I start up Excel and open up a new sheet, the empty grid is a field of undiscovered potential. Every cell is a question waiting to be asked and answered. I get paid to dive in and start filling them up with information, see where they'll take me, and turn the results into stories about the world around us. How could anyone not find that thrilling? And how could you not love a tool that makes it possible?
There are three consoles stacked behind our TV. They're the retro platforms: my SuperNES and Dreamcast, and Belle's PS2. I think they're hooked up, but I can't honestly remember, because I rarely turn them on or dig out any of the games that go with them. They just sit back there, collecting dust and gradually turning yellow in the sun, like little boxes of electric guilt. I'm almost starting to hate them.
Most people probably have a media backlog of some kind: books they haven't gotten around to reading, movies they haven't had time to watch, music they can't give the attention it might deserve. But I think gamers have it worst of all, for two reasons. First, the length of the average game, especially an older one, is a huge obstacle to completion. Second, there's a lot of hassle involved for anything going back more than a generation.
Belle and I are trying to reduce our physical footprint, so having to keep older consoles around "just in case" grates, but emulation's a mixed bag even when it works. Worse, I have a really difficult time tossing old games that I haven't finished: how could I get rid of Virtual On, Chu Chu Rocket, or Yoshi's Island? Those are classics! I'm also prone to imagining unlikely scenarios in which I'll finish a game or two--my favorite is probably "oh, I'll play that when I get sick one day" as if I were in grade school, a plan that ignores the fact that I'm basically a workaholic. If I'm sick enough to stay home, I'm probably too ill to do anything but lie in bed and moan incoherently.
Having realized that I have a problem, one solution is simply to attack it strategically--if I can only decide what that strategy would be. Should I work backward, from newest to oldest? Or start from the SuperNES and go forward through each platform, gradually qualifying each one for storage? Clearly, the "play at random" approach is not narrowing my collection with any great success.
There is, however, another option, and ultimately it's probably for the best: to simply accept that the backlog is not a moral duty. I don't have to play everything. I think gaming culture is very bad about this: the fact that many gamers grew up with certain titles lends them a nostalgic credibility that they probably don't entirely deserve. And frankly, if the titles I'm considering were that compelling, I wouldn't have to force myself to go back and play them.
I'm hardly the only gamer I know caught between practicality and sentiment. The one plan that would unify both would be digital distribution on a neutral platform--the current Xbox and Wii emulations fall far short of this, since they just lock my classic games to a slightly newer console. I'd love to see a kind of "recycling" program, where I could return old cartridges in exchange for discounts on legitimate ports or emulations on a service like Steam or Impulse. After all, even without the trade-in value, I sometimes buy Steam copies of games I already own just because I know they'll then be available to me, forever, without taking up any physical space.
Game publishers probably won't go for that plan. I can hardly blame them: the remake business, just as with the "high-def remaster" film business, is no doubt a profit machine for them. But I don't think it'll last forever. Just as I buy fewer movies these days, since I'd rather stream from Netflix or rent from Amazon Digital, the writing is probably on the wall for buying software in boxes. That won't eliminate the backlog--but it'll certainly clear up the space behind my TV set.
Our cat will be thrilled.
...I dislike thinking in terms of allegory--quite a lot. I've disagreed with Tolkien about many things over the years, but one of the things I agree with him about is this lovely quote where he talks about having a cordial dislike for allegory.
The reason for that is partly something that Fredric Jameson has written about, which is the notion of having a master code that you can apply to a text and which, in some way, solves that text. At least in my mind, allegory implies a specifically correct reading--a kind of one-to-one reduction of the text.
It amazes me the extent to which this is still a model by which these things are talked about, particularly when it comes to poetry. This is not an original formulation, I know, but one still hears people talking about "what does the text mean?"--and I don't think text means like that. Texts do things.
I'm always much happier talking in terms of metaphor, because it seems that metaphor is intrinsically more unstable. A metaphor fractures and kicks off more metaphors, which kick off more metaphors, and so on. In any fiction or art at all, but particularly in fantastic or imaginative work, there will inevitably be ramifications, amplifications, resonances, ideas, and riffs that throw out these other ideas. These may well be deliberate; you may well be deliberately trying to think about issues of crime and punishment, for example, or borders, or memory, or whatever it might be. Sometimes they won't be deliberate.
But the point is, those riffs don't reduce. There can be perfectly legitimate political readings and perfectly legitimate metaphoric resonances, but that doesn't end the thing. That doesn't foreclose it. The text is not in control. Certainly the writer is not in control of what the text can do--but neither, really, is the text itself.
China Mieville, talking to BLDGBLOG
Reading Embassytown, it is obvious that China Mieville has been thinking deeply about metaphors and control for a long time. His first really "science fiction" book, it's a complex meditation on language and colonialism, all filtered through Cronenberg-esque body horror. And while there are scattered threads of homage (I did a double-take at the mention of Karen Traviss' aggressively vegan aliens, the Wess'har), there's no doubt that this is Mieville still writing Weird Fiction in a way nobody else can manage.
Told from the point of view of "immerser" Avice Benner Cho, Embassytown initially jumps back and forth across time, but eventually settles down into a straightforward narrative. Cho comes from a backwater colony planet that's home to aliens named the Hosts, whose Language (capitalization in the original) has some odd characteristics: it's a double-voiced vocalization (requiring specially-raised pairs of humans to speak it), and it's a direct expression of their mental state. The Hosts can't lie, because that would require them to think something impossible, but they can create new linguistic expressions via simile. Before she leaves the planet to travel across space, Cho becomes a Simile ("the girl who sat in darkness and ate what was given to her"). Years later, Cho returns with her linguist husband to visit the colony, just in time for disaster to strike in the form of the new Ambassador to the Hosts from the human empire, and a Host who is learning how to lie.
There are elements here of Dune, Snow Crash, Videodrome, and Adam-Troy Castro's Emissaries from the Dead, although they've been combined into something very different. Mieville manages to create a kind of recursive narrative--both about and functioning as metaphor. It's got something to say about colonialism, about propaganda, and about the relationship between language and policy--although, as Mieville would no doubt point out, it's not a book solely about those things. It's not a polemic.
One of Mieville's great talents is his understanding of trope and genre, which lets him quickly sketch out a scenario, such as the political relationship between Cho's home colony and the wider human civilization, while saving room for what he does best: throwing his characters across stretches of jarring, endlessly inventive territory. In this case, the Hosts' talent for biological manipulation provides a landscape that's both familiar and yet deeply alien, from living houses that grow their own furniture to transit tubes built from peristaltic flesh. Beyond the shock value, the connection between the Hosts and their technology makes the decline of their society graphically manifest, as buildings and tools bleed and weep in desperation.
For all of its immense thoughtfulness, and despite its achingly-rendered arc of destruction, I wish Embassytown were better in a few key areas. Cho is a passive observer for much of its length, and the fractured timeline during the first half of the story seems more like a gratuitous method of disorienting the reader than a useful narrative device. I also wish, for a story that resonates so strongly with the legacy of colonialism, that the ending felt a little less like What These People Need Is a Honky.
For the Mieville fan, what stands out the most is the lack of pulp. Over his last three books, he's changed his writing style and tone significantly each time, but there's always been a lurid quality to them, as though channeling the fevered grotesqueries of an Amazing Stories cover painting. While the body-horror elements persist, along with his obvious love of language, it's only in a short sequence describing a warp-travel accident that Mieville lets his pulp roots run free--otherwise, it's a relatively restrained performance, which may be better for this particular story, but I do miss the sheer excess of previous novels.
With a post on Boing Boing this morning and a decent amount of chatter on Twitter, BitCoin appears to have hit the Internet mainstream. CNN will probably run a story in a week, although they'll be hard-pressed to make it seem any sillier than it already is, because BitCoin is the seasteading of economics: a bizarre scheme created by technolibertarians to address a problem nobody needed solving with a solution that nobody will ever find practical.
The site went down when the first flurry of links hit (a tremendously comforting state of affairs for an entirely-digital currency, I have to say), but here's the gist of BitCoin: it's a monetary system where you earn money by letting your computer "solve" large, complicated mathematical problems. The exact difficulty of these problems is adjusted by a peer-to-peer network to keep the rate of money generation at a planned level. Once created, each "coin" is a cryptographically-signed hash that can be passed from one person to another using a form of public key encryption. The theoretical advantages of this whole scheme are that:
If these seem like they hit all the standard talking points for crazy people who read Cryptonomicon and thought it was non-fiction, you're not far off. Avery Pennarun's explanation of its flaws has caught a lot of attention (as well as a lot of derision from the pro-BitCoin crowd, although I have yet to see anyone rebut it with anything more convincing than "nuh-uh!").
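If it helps to make the mining mechanic above concrete, the "solving large mathematical problems" part boils down to hash-based proof of work. Here's a toy sketch in JavaScript--nothing like BitCoin's actual protocol, just the general shape of the idea:

    // Keep trying nonces until the SHA-256 digest starts with enough zeros.
    // Raising the difficulty makes a "coin" exponentially harder to find,
    // which is the knob the network turns to pace money creation.
    var crypto = require("crypto");

    function mine(data, difficulty) {
      var target = new Array(difficulty + 1).join("0"); // e.g. "0000"
      var nonce = 0;
      var hash;
      do {
        hash = crypto.createHash("sha256").update(data + nonce).digest("hex");
        nonce++;
      } while (hash.slice(0, difficulty) !== target);
      return { nonce: nonce - 1, hash: hash };
    }

    console.log(mine("a block of transactions", 4));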
I'm not an economist, but I play one on the Internet. And from my point of view, the biggest strike against this, or any other alternative currency, is simply that it'll never get more of an audience than conspiracy theorists and utopians. Normal people won't use BitCoin because you can't buy lunch with it. Businesses won't use it because normal people don't use it. And banks won't use it because they already have a perfectly usable system of moving money around electronically, and because they have no incentives to switch unless businesses and normal people demand it, which they won't. Much of the argument for BitCoin seems to be "well, good, we didn't want banks or banking to exist anyway." At that point, the only reasonable response is to back away slowly toward the nearest exit.
Despite the fact that it's completely bonkers, I feel like it's a bit of a shame that BitCoin's so obviously useless. Not because we need a global, untraceable currency that eliminates useful governmental controls on the market, but because our current options for spending money online are so limited. If you pay for something over the Internet, chances are that your transaction went through one of the big credit card companies--they've become the de facto standard platform for remote purchases. That's a huge cash cow, built on a framework with little transparency or real competition. As we move more and more of our economy online, their dominance is basically a license for a small number of corporations to print money.
But mostly BitCoin just leaves me melancholy. It's a dream by and for people who have missed the real lesson of Internet commerce, which is that it succeeded not because it was independent of the real world economy, but precisely because the two could be linked. Sites like eBay and Amazon were revolutionary because they were built on the same basic mechanisms as regular commercial networks: reputation, reviews, consumer feedback... and of course, plain old dollars. BitCoin (much like seasteading) tries to reduce the messy, human parts of economics to a set of sterile game mechanics. That's not even really libertarianism--it's nihilism, masquerading as a political philosophy.
I'm no great fan of money for its own sake, and I think we could stand to tweak our economic systems a bit, but the concept isn't broken in and of itself. If I thought BitCoin had a chance, I'd probably be worried. But despite its extreme principles, it still has to compete in the same marketplace of ideas as everything else--and I'm pretty sure nobody's buying what they're selling.
We've been busy since Soul Society: Urban Artistry will be performing this Saturday with Coyaba Dance Theater and Capitol Tap for a show titled Origins: One Heartbeat. It's $15 for general admission, with student and senior prices available, at the Montgomery College Performing Arts Center in Silver Spring. Check out the site for more details, a video with some background information, and a link to buy tickets. Hope to see you there!
While I re-read Dune once every couple years, I realized while we were on vacation that there's another favorite sci-fi novel that I haven't read in forever: Snow Crash. Due to reprints issued when Neal Stephenson hit the Baroque Cycle lottery, you can't get a new copy of Snow Crash for less than $10 ($13 for the trade paperback), which I regard as highway robbery, but a used bookstore in Seattle had it for $7, and I quickly found myself buried in it again.
Given that I've hated everything that Stephenson's done since this book, I was frankly worried that it would turn out to be another case of memories tinted by nostalgia, but Snow Crash is actually still pretty good. In fact, I think it's probably still the best thing he's written, and one of the better books of the '90s.
What did Stephenson get right with Snow Crash that he hasn't managed since?
There's an old saying that good science fiction contains one big crazy idea--any more, and it detracts from the story, as the writer struggles to fit everything in and readers struggle to keep up. Snow Crash is the glorious exception to that rule. It's just stuffed with great throwaway ideas and scenes: the Rat Things, Raft Pirates, smart-wheeled skateboards, a kayak-riding killer wielding micron-thick glass knives... Despite being satire, and wild satire at that, a lot of the ideas in Snow Crash are remarkably prescient (especially if you give it a little Nostradamus-like leeway): most notably Google Earth, but its depiction of Internet culture and tribalism is pretty dead-on. Its prediction of network consolidation (via phone companies and cable networks) to form a globe-spanning computer network is not that far off. A gargoyle is just a smartphone user without the fancy goggles. And of course, there's that line about globalization:
When it gets down to it--talking trade balances here--once we've brain-drained all our technology into other countries, once things have evened out, they're making cars in Bolivia and microwave ovens in Tadzhikistan and selling them here--once our edge in natural resources has been made irrelevant by giant Hong Kong ships and dirigibles that can ship North Dakota all the way to New Zealand for a nickel--once the Invisible Hand has taken all those historical inequities and smeared them out into a broad global layer of what a Pakistani brickmaker would consider to be prosperity--y'know what? There's only four things we do better than anyone else:

music
movies
microcode (software)
high-speed pizza delivery

...which sometimes these days sounds about right.
And yet, most impressive of all, it doesn't feel particularly cluttered. It feels fast. Stephenson charges through the story at a tremendous clip. It is, and I mean this in the best possible sense, cyberpunk by way of Michael Bay. Yes, the ending is still terrible. Yes, it still spends too much time rehashing ancient Sumerian myths. True, the toilet paper memo is really only funny the first time. But none of that honestly matters in the end. After the final page, what you remember are the explosions.
On vacation in the Pacific Northwest. Back next week.
Here are a few challenges I've started tossing out to prospective new hires, all of which are based on common, real-world multimedia tasks:
I learned this the hard way over the last four years. When I started working with ActionScript in 2007, it was the first serious programming I'd done since college, not counting some playful Excel macros. Consequently I had a lot of bad habits: I left a lot of variables in the global scope, stored data in ad-hoc parallel arrays, and embedded a lot of "magic number" constants in my code. Some of those are easy to correct, but the shift in thinking from "write a program that does X" to "design data structure Y, then write a program to operate on it" is surprisingly profound. And yet it makes a huge difference: when we created the Economic Indicators project, the most problematic areas in our code were the ones where the underlying data structures were badly-designed (or at least, in the case of the housing statistics, organized in a completely different fashion from the other tables).
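To illustrate the shift (with invented numbers and hypothetical field names--this isn't code from the Economic Indicators project), here's the difference between the old habit and the data-first version in JavaScript:

    // The old habit: parallel arrays, where nothing but discipline keeps
    // index 1 of one list lined up with index 1 of the other.
    var years = [2008, 2009, 2010];
    var unemployment = [6.1, 9.9, 9.5];

    // Data first: one structure that describes the table, which the rest
    // of the code can walk generically.
    var indicators = [
      { year: 2008, unemployment: 6.1 },
      { year: 2009, unemployment: 9.9 },
      { year: 2010, unemployment: 9.5 }
    ];

    indicators.forEach(function (row) {
      console.log(row.year + ": " + row.unemployment + "%");
    });

Once the structure is right, adding a new indicator means adding a field, not threading yet another loose array through every function.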
Oddly enough, I think what caused the biggest change in my thinking was learning to use jQuery. Much like other query languages, the result of almost any jQuery API call is a collection of zero or more objects. You can iterate over these as if they were arrays, but the library provides a lot of functional constructs (each(), map(), filter(), etc.) that encourage users to think more in terms of generic operations over units of data (the fact that those units are expressed in JavaScript's lovely hashmap-like dynamic objects is just a bonus).
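As a quick sketch of that style (a hypothetical page whose list items carry data-* attributes, with jQuery already loaded):

    // Keep only the in-stock items, then pull their prices into a plain array.
    var prices = $("li.product")
      .filter(function () { return $(this).data("inStock"); })
      .map(function () { return Number($(this).data("price")); })
      .get();

    // each() walks the same collection for side effects.
    $("li.product").each(function () {
      if ($(this).data("onSale")) {
        $(this).addClass("sale");
      }
    });

The specific calls aren't the point; it's that every step is phrased as "do this to the whole set," which is exactly the habit that carries over to thinking about data tables.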
I suspect that data-orientation makes for better programmers in any field (and I'm not alone), but I'm particularly interested in it on my team because what we do is essentially to turn large chunks of data (governmental or otherwise) into stories. From a broad philosophical perspective, I want my team thinking about what can be extracted and explained via data, and not how to optimize their loops. Data first, code second--and if concentrating on the former improves the latter, so much the better.
It's been almost two years now since I picked up an Android phone for the first time, during which time it has gone from a generally unloved, nerdy thing to the soon-to-be dominant smartphone platform. This is a remarkable and sudden development--when people start fretting about the state of Android as an OS (fragmentation, competing app stores, etc.), they tend to forget that it is still rapidly mutating and absorbing the most successful parts of a pretty woolly ecosystem. To have kept a high level of stability and compatibility, while adding features and going through major versions so quickly, is no small feat.
Even back in v1.0, there were obvious clever touches in Android--the notification bar, for instance, or the permission system. And now that I'm more used to them, the architectural decisions in the OS seem like "of course" kind of ideas. But when it first came out, a lot of the basic patterns Google used to build Android appeared genuinely bizarre to me. It has taken a few years to prove just how foresighted (or possibly just lucky) they actually were.
Take, for example, the back button. That's a weird concept at the OS level--sure, your browser has one, as does the post-XP Explorer, but it's only used inside each program on the desktop, not to move between them. No previous mobile platform, from PalmOS to Windows Mobile to the iPhone, used a back button as part of the dominant navigation paradigm. It seemed like a case of Google, being a web company, wanting everything to resemble the web for no good reason.
And yet it turns out that being able to navigate "back" is a really good match for mobile, and it probably is important enough to make it a top-level concept. Android takes the UNIX idea of small utilities chained together, and applies it to small-screen interaction. So it's easy to link from your Twitter feed to a web page to a map to your e-mail, and then jump partway back up the chain to continue from there (this is not a crazy usage pattern even before notifications get involved--imagine discovering a new restaurant from a friend, and then sending a lunch invitation before returning to Twitter). Without the back button, you'd have to go all the way back to the homescreen and the application list, losing track of where you had been in the process.
The process of composing this kind of "attention chain" is made possible by another of Android's most underrated features: Intents. These are just ways of calling from one application to another, but with the advantage that the caller doesn't have to know what the callee is--Android applications register to handle certain MIME types or URIs on installation, and then they instantly become available to handle those actions. Far from being sandboxed off from one another, applications--or individual parts of an application--can pass all kinds of data back and forth this way. In a lot of ways, Intents resemble HTTP requests as much as anything else.
So, for example, if you take a picture and want to share it with your friends, pressing the "share" button in the Camera application will bring up a list of all installed programs that can share photos, even if they didn't exist when Camera was first written. Even better, Intents provide an extensible mechanism allowing applications to borrow functionality from other programs--if they want to get an image from the camera, instead of duplicating the capture code, they can toss out the corresponding Intent, and any camera application can respond, including user replacements for the stock Camera. This is smart enough that other platforms have adopted something similar--Windows Phone 7 will soon gain URIs for deep linking between applications, and the iPhone has the clumsy, unofficial x-callback-url protocol--but Android still does this better than any other platform I've seen.
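For what it's worth, that hand-off is only a few lines of Android code. This is a generic sketch rather than code from any real app--the class name, the request code, and photoUri are all placeholders:

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;
    import android.provider.MediaStore;

    public class ShareExample extends Activity {
        private static final int REQUEST_CODE_PHOTO = 1; // arbitrary request code

        // Hand a photo to whatever apps registered for ACTION_SEND; the
        // system builds the chooser list, not the Camera application.
        void sharePhoto(Uri photoUri) {
            Intent share = new Intent(Intent.ACTION_SEND);
            share.setType("image/jpeg");
            share.putExtra(Intent.EXTRA_STREAM, photoUri);
            startActivity(Intent.createChooser(share, "Share photo"));
        }

        // Going the other direction: borrow an existing camera app instead
        // of writing capture code yourself.
        void requestPhoto() {
            Intent capture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
            startActivityForResult(capture, REQUEST_CODE_PHOTO);
        }
    }

Any application that registered for those actions--including a third-party camera--can answer, which is the whole point.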
Finally, perhaps the choice that seemed oddest to me when Google announced Android was the Dalvik virtual machine. VMs are, after all, slow. Why saddle a mobile CPU with the extra burden of interpreting bytecode instead of running native applications? And indeed, the initial versions of Android were relatively sluggish. But two things changed: chips got much faster, and Google added just-in-time compilation in Android 2.2, turning the interpreted code into native binaries at runtime. Meanwhile, because Dalvik provides a platform independent of the underlying hardware, Android has been able to spread to all kinds of devices on different processor architectures, from assorted ARM designs (Tegra included) to x86, and third-party developers never need to recompile.
(Speaking of VMs, Android's promise--and eventual delivery--of Flash on mobile has been mocked roundly. But when I wanted to show a friend footage of Juste Debout the other week, I'd have been out of luck without it. If I want to test my CQ interactives from home, it's incredibly handy. And of course, there are the ever-present restaurant websites. 99% of the time, I have Flash turned off--but when I need it, it's there, and it works surprisingly well. Anecdotal, I know, but there it is. I'd rather have the option than be completely helpless.)
Why are these unique features of Android's design interesting? Simple: they're the result of lessons successfully being adopted from web interaction models, not the other way around. That's a real shift from the conventional wisdom, which has been (and certainly I've always thought) that the kind of user interface and application design found on even the best web applications would never be as clean or intuitive as their native counterparts. For many things, that may still be true. But clearly there are some ideas that the web got right, even if entirely by chance: a stack-based navigation model, hardware-independent program representation, and a simple method of communicating between stateless "pages" of functionality. It figures that if anyone would recognize these lessons, Google would. Over the next few years, it'll be interesting to see if these and other web-inspired technologies make their way to mainstream operating systems as well.