Most book-lovers, I think, have a shelf devoted to their favorite books. It's always half-empty, because those are also the books they lend out when someone asks for a recommendation--oh, you haven't read something by X? Here you go. I love that shelf, even if I rarely lend books: it's where the private activity of reading becomes a shared experience, either through borrowing or via representation: these are the books that have deeply affected me. Maybe they'll affect you, too.
Likewise, there is writing on the Internet that is classic: essays, articles, and fiction that get linked and re-linked over time, in defiance of the conventional wisdom that online writing is transient or short-lived. The Classics are a personal call: what goes on your mental shelf of great online writing won't be the same as mine, and that's okay. This post is a collection of the items that I consider must-reads, accumulated over years of surfing. As I dig stuff out of my memory, I'll keep adding more.
So, you're thinking about deleting your Facebook account. Good for you and your crafty sense of civil libertarianism! But where will you find a replacement for its omnipresent life-streaming functionality? It's too bad that there isn't a turnkey self-publishing solution available to you.
I kid, of course, as a Cranky Old Internet Personality. But it's been obvious to me, for about a year now, that Facebook's been heading for the same mental niche as blogging. Of course, they're doing so by way of imitating Twitter, which is itself basically blogging for people who are frightened by large text boxes. The activity stream is just an RSS aggregator--one that only works for Facebook accounts. Both services are essentially taking the foundational elements of a blog--a CMS, a feed, a simple form of trackbacks and commenting--and turning them into something that Grandma can use. And all you have to do is let them harvest and monetize your data any way they can, in increasingly invasive ways.
Now, that aspect of Facebook has never particularly bothered me, since I've got an Internet shadow the size of Wyoming anyway, and (more importantly) because I've largely kept control of it on my own terms. There's not really anything on Facebook that isn't already public on Mile Zero or my portfolio site. Facebook's sneaky descent into opt-out publicity mode didn't exactly surprise me, either: what did you expect from a site that was both free to users and simultaneously an obvious, massive infrastructure expense? You'd have to be pretty oblivious to think they weren't going to exploit their users when the time came to find an actual business model--oblivious, or Chris Anderson. But I repeat myself.
That said, I can understand why people are upset about Facebook, since most probably don't think that carefully about the service's agenda, and were mainly joining to keep in touch with their friends. The entry price also probably helped to disarm them: "free" has a way of short-circuiting a person's critical thought process. Anderson was right about that, at least, even if he didn't follow the next logical step: the first people to take advantage of a psychological exploit are the scammers and con artists. And when the exploit involves something abstract (like privacy) instead of something concrete (like money), it becomes a lot easier for the scam to justify itself, both to its victims and its perpetrators.
Researcher danah boyd has written extensively about privacy and social networking, and she's observed something interesting about privacy, something that maybe only became obvious when it was scaled up to Internet sizes: our concept of privacy is not so much about specific bits of data or territory, but our control over the situations involving it. In "Privacy and Publicity in the Context of Big Data" she writes:
It's about a collective understanding of a social situation's boundaries and knowing how to operate within them. In other words, it's about having control over a situation. It's about understanding the audience and knowing how far information will flow. It's about trusting the people, the situation, and the context. People seek privacy so that they can make themselves vulnerable in order to gain something: personal support, knowledge, friendship, etc.

This is why it's mistaken to claim that "our conception of privacy has changed" in the Internet age. Private information has always been shared out with relative indiscretion: how else would people hold their Jell-o parties or whatever else they did back in the olden days of our collective nostalgia? Those addresses and invitations weren't going to spread themselves. The difference is that those people had a reasonable expectation of the context in which their personal information would be shared: that it would be confined to their friends, that it would be used for a specific purpose, and that what was said there would confine itself--mostly--to the social circle being invited.
People feel as though their privacy has been violated when their expectations are shattered. This classically happens when a person shares something that wasn't meant to be shared. This is what makes trust an essential part of privacy. People trust each other to maintain the collectively understood sense of privacy and they feel violated when their friends share things that weren't meant to be shared.
Understanding the context is not just about understanding the audience. It's also about understanding the environment. Just as people trust each other, they also trust the physical setting. And they blame the architecture when they feel as though they were duped. Consider the phrase "these walls have ears" which dates back to at least Chaucer. The phrase highlights how people blame the architecture when it obscures their ability to properly interpret a context.
Consider this in light of grumblings about Facebook's approach to privacy. The core privacy challenge is that people believe that they understand the context in which they are operating; they get upset when they feel as though the context has been destabilized. They get upset and blame the technology.
Facebook's problem isn't just that the scale of a "slip of the tongue" has been magnified exponentially. It's also that they keep shifting the context. One day, a user might assume that the joke group they joined ("1 Million Readers Against Footnotes") will only be shared with their friends, and the next day it's been published by default to everyone's newsfeed. If you now imagine that the personal tidbit in question was something politically- or personally-sensitive, such as a discussion board for dissidents or marginalized groups, it's easy to see how discomforting that would be. People like me who started with the implicit assumption that Facebook wasn't secure (and the privilege to find alternatives) are fine, but those who looked to it as a safe space or a support network feel betrayed. And rightfully so.
So now that programmers are looking at replacing Facebook with a decentralized solution, like the Diaspora project, I think there's a real chance that they're missing the point. These projects tend to focus on the channels and the hosting: Diaspora, for example, wants to build Seeds and encrypt communication between them using PGP, as if we were all spies in a National Treasure movie or something. Not to mention that it's pretty funny when the "decentralized" alternative to Facebook ends up putting everyone on the same server-based CMS. Meanwhile, the most important part of social networks is not their foolproof security or their clean design--if it were, nobody would have ever used MySpace or Twitter. No, the key is their ability to construct context via user relationships.
Here's my not-so-radical idea: instead of trying to reinvent the Facebook wheel from scratch, why not create this as a social filter plugin (or even better, a standard service on sites like Posterous and Tumblr) for all the major publishing platforms? Base it off RSS with some form of secure authentication (OpenID would seem a natural fit), coupled with some dead-simple aggregation services and an easy migration path (OPML), and let a thousand interoperable flowers bloom. Facebook's been stealing inspiration from blogging for long enough now. Instead of creating a complicated open-source clone, let's improve the platforms we've already got--the ones that really give power back to individuals.
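To make the shape of that idea concrete, here's a minimal sketch of the aggregation half--the part Facebook's activity stream already does behind closed doors. Everything in it (the feed URLs, the sample posts) is hypothetical, and it uses only the Python standard library; a real service would layer authentication (OpenID or similar) on top of this:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# A hypothetical OPML subscription list -- the "easy migration path."
OPML = """<opml version="2.0"><body>
  <outline text="Mile Zero" type="rss" xmlUrl="http://example.com/milezero/rss.xml"/>
  <outline text="A Friend" type="rss" xmlUrl="http://example.com/friend/rss.xml"/>
</body></opml>"""

def feed_urls(opml_text):
    """Pull every subscribed feed URL out of an OPML document."""
    root = ET.fromstring(opml_text)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

def entries(rss_text, source):
    """Yield (timestamp, source, title) for each item in one RSS 2.0 feed."""
    for item in ET.fromstring(rss_text).iter("item"):
        yield (parsedate_to_datetime(item.findtext("pubDate")),
               source, item.findtext("title"))

def activity_stream(feeds):
    """Merge many feeds into one reverse-chronological stream --
    which is all an activity stream fundamentally is."""
    merged = [e for source, text in feeds for e in entries(text, source)]
    return sorted(merged, reverse=True)

# Two toy feeds standing in for fetched documents (no network involved).
FEED_A = """<rss version="2.0"><channel><item>
  <title>Quitting Facebook</title>
  <pubDate>Mon, 17 May 2010 09:00:00 +0000</pubDate>
</item></channel></rss>"""
FEED_B = """<rss version="2.0"><channel><item>
  <title>Dinner photos</title>
  <pubDate>Tue, 18 May 2010 20:30:00 +0000</pubDate>
</item></channel></rss>"""

for when, source, title in activity_stream([("Mile Zero", FEED_A),
                                            ("A Friend", FEED_B)]):
    print(when.date(), source, title)
```

The merging is the trivial part, which is rather the point: the hard, interesting problem is the social layer--deciding whose feeds a given reader is allowed to see--and that's the piece a Facebook replacement should actually be obsessing over.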
I don't remember where I was, the first time I heard the name Malcolm X. I remember that I was maybe 8 years old, growing up in Lexington, Kentucky. It was a mostly African-American neighborhood, so it could have been anywhere, really. I think I remember being confused by the 'X'--how could that be a last name? How did he sign forms or documents? And as someone who fumed at the end of every class roll call and official ceremony, I wondered: why didn't he pick a letter closer to the start of the alphabet?
At that age, of course, history is a pretty boring topic, but I don't remember learning about Malcolm X in class. I don't think I ever really discussed him with my parents, either. He was a cipher, a vaguely sinister one for some reason (maybe the name, maybe not). It wasn't until college, when I took a class on social movements and persuasion, that I learned more about the man: his militancy within the Nation of Islam, his pilgrimage to Mecca, and the change in his thinking as a result. It was a revelation, a whole part of the civil rights story that I'd never learned about--and at the same time I was ashamed that I'd never bothered to find it out on my own.
A couple of years ago, I finally got around to reading his autobiography, and was struck all over again. It's a fascinating story: told to Alex Haley during a time when Malcolm X was himself undergoing a serious self-examination, it's a chronicle of transformation on both explicit and implicit levels. He was an extraordinarily complicated person, undoubtedly flawed but capable of tremendous insight and intelligence. It makes clear that his assassination was truly one of the great tragedies of the civil rights movement.
Yesterday was the anniversary of the assassination of Malcolm X, and of course February is Black History Month, so I've found myself thinking about this a lot lately. The thing about Black History Month is that it's a misnomer: as US citizens, Black history is our history. The fallout from slavery, segregation, and the struggle for civil rights still echoes through our society in ways we stumble to articulate. Nobody, to my mind, represents that complex truth more than Malcolm X.
Part 5: Is this the right room for an argument? No, this is abuse.
I will admit this: it took some nerve for Chris Anderson to write the last few chapters of Free. Oh, parts of it are harmless enough: a comparison to old theories of competitive economics, for example, or advice to embrace digital waste. One is obvious padding, and the other is a random plate of leftovers from when this book was a magazine article. But they're nothing compared to the chronicle of chutzpah that is most of the closing section.
It's hard for me to imagine, for example, writing a section on post-scarcity economies in science fiction--much less thanking an intern in the acknowledgements for actually reading the books and regurgitating summaries/analysis to me. I would have a hard time assembling, with a straight face, a chapter on reputation and gift economies that includes a study putting reputation at the very bottom of the motives for Wikipedia contributions. And it almost seems like a practical joke to write "China and Brazil Are The Frontiers of Free. What Can We Learn from Them?"--a chapter which states (without citation, evidence, or any appreciation for irony) that Chinese students are basically Confucian-trained plagiarists, that piracy is the cause of China's growing number of millionaires, and that generic drug prices in Brazil are an endorsement of "free." Seriously?
Of his closing chapters, the only one that's probably worth engaging at length is #14, in which Anderson attempts to answer the criticisms that have been levelled against his argument since its debut. To be fair, these are real arguments, many of which I've raised here. Whether you find his responses convincing will probably depend on your reaction to previous chapters. If you found him less than rigorous before (and I certainly have, what with the Wikipedia, plagiarism, technical misconceptions, and sloppy definitions), his counter-arguments won't change your mind. What frankly surprised me was, looking back at the chapter, how few challenges he actually answered in his replies. In the interests of space and sanity, I'm summarizing them here in the form of the classic Shorter format:
...when the markets recovered and we looked back, we found to our surprise that it was practically impossible to see the effect of the crash on the growth of the Internet. It had continued to spread, just as before, with hardly a dip as the public markets cratered.

Wait a second: is he seriously claiming that the dot-com bubble was going to crash the Internet itself? As usual, it's unclear--he continues straight into a paragraph about the netbook market, where low-cost (but not free) computers are loaded primarily with the low-cost (but not free) Windows XP, and will soon be preloaded with the not-so-low-cost Windows 7. Our current financial crisis, he argues, will push people even more into the economics of free content, even though it will make it more difficult for businesses to embrace it as a model. And he closes his book, literally in the last three paragraphs, by running through a list of web startups (including Twitter, YouTube, Digg, and Facebook) that have failed to generate a profitable revenue stream. He concludes (location 3899):
...free is not enough. It also has to be matched with Paid. Just as King Gillette's free razors only made business sense paired with expensive blades, so will today's Web entrepreneurs have to invent not just products that people love but also those that they will pay for. free [sic] may be the best price, but it can't be the only one.

Translation: it is a tale told by an idiot, full of sound and fury, signifying nothing.
To quote Weezer: Why bother? Everyone knows these "airport books," as Anil Dash calls them, are terrible--why take the time to engage it in such detail? Why spend a whole week on it? Why not just ignore Anderson and his rampaging ego?
To answer, I can only say that sadly, many people do not bring nearly enough skepticism to the table, especially when faced with the mighty publicity machine behind Anderson's work. His first "revelation," the so-called long tail, has run afoul of the evidence time and time again--and that has not stopped it from being advocated as sound business strategy. I have had it quoted at me, and I expect the same to happen with Free. So at the very least, it's therapeutic to work through the book and marshal my thoughts on it.
But I have to confess to a less rational motive. For some time now, I've been bothered by the way that "Web 2.0" movements encourage us to commercialize ourselves, or at least to quantify that value: what's your traffic? Your PageRank? Your follower count? How much could you make with AdSense on your blog? How can you turn your readers into money? How much are you worth as a reader? We are all in the business now, it seems, of selling ourselves to the world--based on a set of values which I find, if not suspect, then at least highly artificial.
When we talk about a non-professional attention economy, or a "reputational" economy, what we're doing--partly, at least--is putting a price tag on ourselves, and on each other. It disturbs me to think about community this way. Call me a crazy hippie, but the people that I've met while writing here, to me, are not commodities to be traded. And I like to think that (while I attempt to minimize its harm to my career) I don't write this blog for a commercial benefit, monetary or otherwise. If nobody read it (a state of affairs blessedly close to the truth), I'd still write here, just for the pleasure of doing so.
Anderson is not to blame for the marketing of the reputational economy, but he's one of its strongest proponents. His Free is, in many ways, an attempt to lay out a blueprint for the monetization of your attention and spare time. Speaking personally, I don't appreciate the effort. He's offering Free. I say, keep the change.
Part 4: Free-conomics
Chapters nine and ten--digital media and free economies, respectively--are the strongest points of Chris Anderson's Free so far. That doesn't mean I'd put them up for a Pulitzer by any means, but I have relatively little to debunk (that or this book has finally overwhelmed my snark reserves, thus proving that even seemingly-inexhaustible resources do have limits). So we'll deal with them quickly, then take a break for some lighter fare: Anderson's motley collection of sidebars, and his reaction to the infamous New Yorker review by Malcolm Gladwell.
Anderson begins chapter nine with the heading "Free Media Is Nothing New. What Is New Is The Expansion of That Model to Everything Else Online." Indeed, if by that second "new" you mean "more than a decade old." He's like someone who waits until 1970 to declare that "the automobile is going to be a very influential technology." Good call, man! Hit us with another far-out prediction!
For the most part, though, this is a perfect example of Anderson's weak hypothesis: yes, advertising and alternate revenue streams can sometimes pay for a loss-leader free service. He spends much of this and the next chapter cataloguing (yes, again) all the different models of advertising that are possible online: from video game billboard placement to premium extras to gold farming (you may note, incidentally, that per Anderson's usual M.O. several of these are not really all that free). Anderson sees gaming in particular as a roiling pot of brand-new revenue models, even though most of them (like Second Life's virtual real estate) are just variants on very old models (in Linden Labs' case, the venerable lease). We are not, in other words, seeing the Internet charging ahead. We're seeing it catch up.
I feel compelled, since I'm familiar with it, to mention that Anderson's view of the gaming market is somewhat skewed. He concentrates primarily on massively-multiplayer titles, and although he does mention the transition from physical to digital distribution, he spends little time on it. And it's just as well he doesn't, since to do so would be to point out that this is a booming digital content market that is assuredly not free. The cost of making a game, after all, is not primarily in printing CDs and boxes. It's in paying programmers, artists, designers, and writers to churn out an astonishing amount of material in a relatively short amount of time. Moving games to something like Steam or Impulse hasn't lowered their price to zero, as Anderson seems to argue should happen, because distribution was never the bulk of the expense in the first place. And I have seen no explanation from him, so far, of how to reconcile that fact with his predictions.
Of course, no book on Internet economics would be complete without a fawning section on Radiohead's In Rainbows, which was given away for free, then made a ridiculous amount of money for the band. In my opinion, this says more about the flaws of the studio system than it does about the viability of digital distribution, but it does (for once) make the point that Anderson wants it to make. Or does it? His other examples are Nine Inch Nails and Prince--and all three are big-name acts that could A) cover the cost of recording out of their own pockets and B) draw on a large fan base built via a not-free revenue model. Of the struggling bands with free tracks on MySpace that Anderson loves to mention, what proportion have actually emerged as new superstars?
The answer, of course, is not many. But it's a shame that Anderson has insisted on sticking to either generalities (MySpace) or well-trodden examples (Radiohead), because there is innovation occurring in the free/premium music space. Take, for example, Steve Lawson and Matt Stevens, two loop-oriented instrumentalists who are using "free" tools like the live-streaming service Ustream to broadcast online concerts, then networking with fans over free social media to arrange shows. Here are people who are, as far as I know, making a decent living from hybrid "free" models, many of which are much more interesting than simply giving away tracks online. But then, finding them would require more research than Anderson seems to have invested in this book.
If he or his editors had been thinking clearly, chapter ten would have been one of the first chapters in Free, not buried more than halfway through. In it, Anderson gives a rough estimate of the size of the free economy, if that's not a contradiction in terms. By doing so, he answers the burning question that most readers should have been asking from the start: So what? But in a bizarre turn, he writes (location 2645):
Let's quickly dispense with the use of "free" as a marketing gimmick. That's pretty much the entire economy. I suspect that there isn't an industry that doesn't use this in one way or another, from free trials to free prizes inside. But most of that isn't really free--it's just a direct cross-subsidy of one sort or another.

"Let's quickly dispense" with it? It's one of Anderson's four "free" business models from the start of the book! It's behind most of his examples, including the game market on which he's so bullish! Dispense with it? Why not throw away most of the book? Good question.
As always, while totalling up the GDP of this free economic zone, Anderson can't keep his story straight. He wants to use Facebook as an example of the "attention" economy, even though he admits that "Facebook is still unable to find a way to make money faster than it is spending it." Likewise, he wants to include the open-source consulting market, such as the enormous Linux division at IBM, even though (apart from the initial software) those services are at the center of the transaction, and they are very much not free. He wants to include free music and content in the value of networks like MySpace, although he's unable to assign them a value. And then to top it off, he figures the total cost of the Internet, based on an estimate of one hour of work for each individual URL indexed by Google, to be a conservative $260 billion. What are we to do with these numbers, all of which are either wild estimates or utter flights of fancy? Absolutely nothing, as far as I can tell. Primarily, they tell us that you can use the Internet to make money, or to share your hobbies. If Anderson had written this a decade ago, it might be noteworthy. Instead it's just kind of sad.
A Sidebar About Sidebars
Throughout the text, Anderson includes a bunch of sidebars, each titled in the format "How can X be free?" Once or twice they manage to be relevant. Most of the time they are disturbingly inane. For example:
Sidebar the Second: Editorial Review
Malcolm Gladwell's New Yorker review of Free deserves some attention, not just because it's hilarious to watch one pop trend guru flame another, but because it's actually dead-on. Several tech blogs have noted that his numbers for YouTube's bandwidth costs may be based on an inaccurate report, but the point remains: like many of Anderson's pivotal examples of free revenue, YouTube is not actually profitable. Gladwell also raises valid points about research, infrastructure, intellectual property, and scale. And he shows off why he's the king of this genre, with equally-unscientific but far fresher counter-anecdotes scattered through the review. But what seems to have struck home is his comment on journalism. Gladwell writes:
...it is not entirely clear what distinction is being marked between "paying people to get other people to write" and paying people to write. If you can afford to pay someone to get other people to write, why can't you pay people to write? It would be nice to know, as well, just how a business goes about reorganizing itself around getting people to work for "non-monetary rewards." Does he mean that the New York Times should be staffed by volunteers, like Meals on Wheels?

Anderson focused primarily on this passage in his Wired.com retort, titling it (in a fit of projection) "Dear Malcolm, Why So Threatened?" He has no good answers for the ailing newspaper industry, Anderson writes, but his personal model is (I am not making this up) Wired's GeekDad blog.
About three years ago, I started a parenting blog called GeekDad, and invited a few friends to join in. We soon attracted a large enough audience that it became apparent that we couldn't post enough to satisfy the demand, so I put out an open call for contributors. Out of the scores who replied, I picked a dozen and one of them was Ken Denmead [...] Ken is, by day, a civil engineer working on the BART extension in the SF Bay Area. But by night he an amazing community manager [sic]. His leadership skills impressed me so much that I turned GeekDad over to him entirely about a year ago. Since then he's recruited a team of volunteers who grown the traffic ten-fold, to a million page views a month.

Two things: first, if you are not a parent, reading GeekDad is like being trapped in an elevator with a new father--one who expounds proudly on every single aspect of life with his progeny, as if he were the first parent in the history of the entire world, except it's ten times worse because the parent in question is a giant nerd. Second, it's a parenting blog. Of course it's free: you'd have to pay them to shut up about their kids! There's nothing wrong with that, although it's not high on my reading list. But to compare this with the act of journalism--of investigating stories, poring over data, putting in phone calls, fact-checking, etc.--is foolishness.
Good journalists are content experts. They're excellent writers who know what questions to ask, and where to dig. They put in a lot of time doing very unglamorous, tedious work in the service of small glories, like a front-page story or the feeling of a truth well told. For good journalism, you have to pay people. Now, you can certainly pay them based on ad revenue, and you can take advantage of crowdsourced labor to distribute some of the grunt work--Josh Marshall's Talking Points Memo has been a great example of new media reporting--but you don't get good, quality journalism for free. And I would argue, based on the downward spiral of quality in 24-hour TV news, that we should be extremely wary of outlets dependent on audience eyeballs for all funding. Viewers may find that they get what they pay for.
One of Anderson's defenses, as a trendspotter, is that he's not advocating for "free" but merely showing the direction that the market is headed. And it's in cases like this, where he suggests that the news should be run like a niche parenting blog, that I find his approach most reprehensible. It allows him to make arguments about the future but present them as facts, the futurist equivalent of the passive voice. It denies us agency in choosing a future--like it or not, he's saying, you'd better get used to this "free" stuff, because it's inevitable. There is, of course, nothing inevitable about it, and there's nothing neutral about Anderson's position. He's practically salivating over this new, free world, where journalism is run like one of the press release-mills that Wired calls a blog. At the end of his response, Anderson peevishly asks "Malcolm, does this answer your question?" Yes, it does--and we should find that answer terrifying.
Part 3: You keep using that word. I do not think it means what you think it means.
Before we go any further, I want to make something clear: I'm not opposed to "free" digital content, in either its monetary or political sense. Although I pay a small amount each month for hosting this site, it is served via Apache (free) on Linux (free), and is generated on the server by a set of (free) Perl CGI scripts. I log into the server using the PuTTY SSH client (free), and I view and test it using Firefox (free). Like almost everyone else, I use "free" ad-supported search engines and watch "free" ad-supported broadcast television. My current smartphone runs on a free, open-source operating system, and my previous phone OS is currently being open-sourced by its owners. I also eat the "free" samples at Price Club on Saturdays, if they're not too disgusting, which is a credit partly to my thriftiness but mainly to the strength of my stomach.
I don't have a problem with free, as long as we understand that "free" actually means "a wide range of well-known business models that shift costs to another location," also known as Chris Anderson's weak hypothesis in Free. If he'd written a book about that, I'd have little disagreement, but it would be a pointless book mostly composed of truisms. Much of Anderson's writing starts out in this mode. But inevitably, he keeps getting carried away and broadening it into a strong hypothesis that's untenable--either that the shifted costs, Heisenberg-like, cease to exist if he ignores them, or that "very cheap" is the same thing as free, or both.
That's largely the gist of chapters five through eight of Free. Anderson's wishfully-ambiguous conception of his subject from chapter four continues to shift to wherever his argument needs to be. It's incredibly frustrating--my notes are filled with repeated entries reading "so it's not really free, then." It's not that Anderson is unaware of these criticisms--he mentions them in passing at least a couple of times--but that he apparently dismisses them out of hand, or forgets about them in his rush to tell yet another overcooked, second-hand anecdote.
Chapter seven, for example, is devoted to Microsoft and the degree to which it has been threatened by free alternatives. Inarguably, Microsoft has been challenged in several markets by competing products that carry no up-front sticker price, and they've done their best to respond. The result has, in my opinion, been good for both Microsoft and for consumers--you can pry Firefox and Firebug out of my cold, dead hands, for example. But as a case study for how "free" will conquer all, you could not pick a worse company than Microsoft. In every example Anderson describes, by his own admission, they're thriving despite a paid-product revenue model. China? Heavily pirated and discounted, but still profitable. The desktop? Still controlling most of the market, and raking in money even on critical failures like Vista. The server? Incredibly, server software is one of Microsoft's biggest recent successes: IIS runs an astonishing majority of the web. Free software has challenged the software giant, but it shows no signs of killing them off anytime soon, and no cute Kübler-Ross reference on Anderson's part is going to change that.
But let's not get ahead of ourselves. Anderson opens chapter five with the story of Lewis Strauss, the man who coined his favorite phrase, "too cheap to meter." Strauss was discussing electricity, and of course you may have noticed the continued existence of power meters on buildings throughout the U.S. But, says Anderson (location 1219):
...what if Strauss had been right? What if electricity had, in fact, become virtually free?

Sure, and what if I had a pony? We can imagine all kinds of ways the world would be different if scarcity no longer applied--and Anderson does, laying out a vision of plentiful water, food, and clean fuel. But looking back, Strauss sounds like a crank. Anderson needs to show how his post-scarcity vision won't look the same way in forty years, and using weasel words like "virtually free" doesn't help.
Get used to it, though, because there's a lot of "virtually" free in Anderson's utopia, even though that's not the same thing as free at all. He seems to have an equivalence problem: make something small enough, and he'll swear it doesn't exist. For example, Anderson spends a lot of time in chapters five and six on Moore's Law and the price of transistors. He writes (location 1236):
In 1961 a single transistor cost $10. Two years later, it was $5. [...] Today, Intel's latest processor chips have about 2 billion transistors and cost around $300. So that means each transistor costs approximately 0.000015 cents. Which is to say, too cheap to meter.

Can you spot the fallacy? Yes, transistors are really cheap--which would be awesome if I bought computer hardware by the transistor. But of course single transistors are completely useless to me, or to anyone else. I need a bunch of them in a certain configuration, like the Core2 Duo in my laptop or the ARM in my phone, neither of which even remotely qualifies as "free." Anderson obviously knows this--he wrote the sticker price for an Intel chip in the previous sentence, for heaven's sake--but appears to be purposefully ignoring it.
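To be fair, Anderson's division checks out; the sleight of hand is in the unit of sale, not the arithmetic. A quick sketch of the calculation, using the round numbers from his own quoted passage:

```python
# Anderson's figures, from the quoted passage: a ~$300 Intel chip
# containing ~2 billion transistors.
chip_price_dollars = 300
transistors_per_chip = 2_000_000_000

# Per-transistor cost, converted to cents.
cost_per_transistor_cents = chip_price_dollars * 100 / transistors_per_chip
print(f"{cost_per_transistor_cents:.6f} cents per transistor")
# prints "0.000015 cents per transistor"

# The fallacy: nobody buys transistors individually. The smallest unit
# you can actually purchase is the whole chip, so the price you pay is
# still $300, no matter how you slice the denominator.
smallest_purchasable_unit_dollars = chip_price_dollars
```

Divide any fixed price by a large enough quantity and the quotient approaches zero; that tells you nothing about what the buyer hands over at the register.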
He commits this same mistake when discussing Google (chapter eight is entirely devoted to Google, and is one of the most tedious things ever written). Google keeps building enormous, multi-million-dollar datacenters, but (he chortles) their cost-per-byte just keeps dropping! Why, they're practically free! Really? Is the company doing per-byte accounting, then? A huge datacenter may be a better value than the last one, but it still cost someone enough money to keep 70's-era The Who in guitar amps for at least a couple of years. I have the utmost respect for Google and their continued efforts to make their infrastructure both ecologically-friendly and energy-efficient, but their facilities are not "free" by any stretch of the imagination.
Anderson calls the combination of increasing bandwidth, processing power, and storage space a "triple play" that's "not too cheap to meter, as Strauss foretold, but too cheap to matter." (italics in the original) The elephant in the room is, too cheap for whom? All Anderson's examples revolve around fiber broadband and state-of-the-art PC hardware, probably because that's his experience. But even in this country, there are plenty of areas without a fast pipe, and plenty of people too poor to buy a machine that could fully exploit it. Not to mention the developing world.
Indeed, we might well ask "too cheap to matter" for what? In the last few years, commodity hardware has hit the point where it's sufficiently powerful for almost any local task (excepting, of course, heavy lifting like games and media production). I could run Word (or any similar native office suite) just fine on my old 366MHz Celeron. But according to Anderson the future is in the cloud, where an equivalent word processor will be implemented in a high-level scripting language that older hardware may struggle to interpret with the same responsiveness and power. A computer in a rural area (or a developing nation) may have difficulty pulling down pages fast enough to use those AJAX applications effectively. Anderson's hypothetical world is only free--or close enough that the cost can be waved away--for people who are urban, relatively wealthy, and have already sunk money into recent hardware. If you fall outside that cohort, the future of Free, isn't.
In interviews and responses to critics who have raised similar arguments about scale and definition, Anderson and his fellow travelers have not responded gracefully. He's not claiming that everything is free, Anderson says, just the important bits. But this has always been the problem with techno-utopian schemes, ranging from seasteading initiatives to the OLPC. The parts that he and his friends consider important (or unimportant, in the case of Google's extensive data-mining, for example) aren't necessarily the parts that translate across cultures, incomes, and geography. And while Anderson's demographic may not feel the cost of his revolution directly, it doesn't mean that it doesn't exist.
To his credit, Anderson points out one group for whom life is going to suck if his prediction comes true: the people driven out of business by the pursuit of Free's ideology. Wikipedia, he notes, has killed off what was left of the encyclopedia industry after Encarta demolished most of it. Craigslist has done a number on the newspaper industry. Anderson sees this as a "Robin Hood" transaction, decentralizing the flow of money, but admits that he could be wrong. We'll get to see in more depth how he thinks journalism (and the economy as a whole) can reinvent itself in chapters nine and ten. As someone with no small amount of interest in the sector, and based on hints from Malcolm Gladwell's review, I can't help but dread it.
Part 2: The Experiment
In my first post on Chris Anderson's Free, I joked that my lack of research for these posts matched that of my target, an entirely typical pop nonfiction title. After chapters three and four, that has stopped being funny. You can look at both of these chapters, but especially chapter three, as an experiment: what happens when a writer does everything you're not supposed to do, research-wise? How little can someone work and still get published? The answer, frankly, is appalling.
You may have heard about the accusations of plagiarism in Free. Plagiarism Today has a fine overview, although I also recommend clicking through to the original post at Virginia Quarterly Review, as well as the additional examples at Ed Champion's blog. To summarize, Anderson seems to have cribbed large portions of text from Wikipedia and other sources, without adequate credit. Anderson's explanation is that his original footnotes were removed very late in the publication process, and the subsequent "write-through" missed some paragraphs. Evidence certainly supports the existence of sloppy editing--I've seen repeated capitalization errors and odd word choices consistent with automated find-and-replace (Ronald Coase is described as "the firm [?], Nobel Prize-winning economist," for example).
I assume that my copy of Free is the revision with added inline citations. I sincerely hope that's the case, as I shudder to imagine a book containing more Wikipedia references than this one. A global search (one of the virtues of e-books) finds nine paragraphs where the collaborative encyclopedia is being used, not as an example of free content, but as an actual primary source. Anderson paraphrases from Wikipedia for the history of free lunches, usury, Babbitt's Soap, and more. He even quotes from newspaper articles via the Wikipedia pages. As a writer taught that citing the encyclopedia (even one that's user-generated) is weak sauce, I find this highly troubling, as does Research Cat. Perhaps the author is trying to show the value of free content by relying on it so heavily. If so, I'd like to point out another, equally free--but far more reputable--source of information: the public library.
But set aside the question of Anderson's Wikipedia use, or whether he is a plagiarist (incidentally, I think he is). Another weak point in chapter three (and, to a lesser extent, chapter four) is his reliance on other pop history titles for research material. At various times, Anderson cites as sources (deep breath): Charles Seife's Zero: The Biography of a Dangerous Idea, Michael Pollan's The Omnivore's Dilemma, Wired Magazine, Heather Rogers' Gone Tomorrow: The Hidden Life of Garbage, Seth Godin's Unleashing the Ideavirus, Clay Shirky's Here Comes Everybody, and Dan Ariely's Predictably Irrational. All in one chapter! It's not that these are bad books--on the contrary, I'm a huge fan of Ariely, Shirky, and Pollan--but they are not really works of scholarship that should be used as primary sources, much less (as happens here) bluntly paraphrased in lieu of original research. The impression given is that of a profoundly lazy writer, as if Anderson needed some padding for this book and simply grabbed whatever marginally-relevant material was close at hand.
And it gets worse, because Anderson doesn't just crib from these books. In at least one case, he's using them at cross-purposes to their actual contents. In his summary of The Omnivore's Dilemma, Anderson writes, from location 730:
When I was a kid, hunger was one of the main problems of poverty in America. Today, it's obesity. Something dramatic has changed in the world of agriculture in the past four decades--we got much better at growing food.

...and at location 761:
One aspect of agricultural abundance that touches every one of us every day is the Corn Economy. This extraordinary grass, bred by man over millenia to have larger and larger starch-filled kernels, produces more food per acre than any other plant on the Earth.

Anderson seems amazed at the modern marvel of corn: it's used in toothpaste! Cosmetics! Linoleum! Ethanol fuel! Ah, but with the latter, he writes regretfully (location 772):
Today, we use corn for more than just food. Between synthetic fertilizer and breeding techniques that make corn the most efficient converter of sunlight and water to starch the world has ever seen, we are now swimming in a golden harvest of plenty--far more than we can eat. So corn has become an industrial feedstock for products of all sorts, from paint to packaging. Cheap corn has driven out many other foods from our diet and converted natural grass-eating animals, such as cows, into corn-processing machines.
After decades of price declines, corn has in recent years started getting more expensive along with oil prices. But innovation abhors a rising commodity, so that rising price has simply accelerated the search for a way to make ethanol out of switchgrass or other forms of cellulose, which can be grown where corn cannot. Once that magic cellulose-eating enzyme is found, corn will get cheap again, and with it, food of all sorts.

It is hard to imagine how someone could get all this more wrong.
For a start, we don't find ourselves swimming in corn because it's an awesome supercrop, as Anderson claims. We grow it in such overwhelming quantities because it is massively subsidized by the federal government, the result of years upon years of industry lobbying. The market has nothing to do with the price of corn--it has hardly anything to do with the price of any American food goods, as any regular reader of Pollan's work should know. Much of the corn we grow is, in fact, inedible by humans: as Pollan actually writes in Omnivore's Dilemma, the corn grown by the factory farms of the midwest has been bred and genetically engineered into a product that's practically indigestible on its own. It's only good for high-fructose corn syrup and other industrial chemistry.
Indeed, to link this heavily-subsidized, artificially-abundant crop with "free" is to engage in bait-and-switch tactics. There's nothing free about the market in which it exists, and there's nothing free about that market's byproduct: a production chain that is unnatural, cruel to animals, harmful to developing economies, and results in food-like substances that are at least partially responsible for our epidemic of obesity and ill-health. We pay dearly for that corn, one way or another. To read Pollan's book as support for the view that we are "better at growing food" is at best missing the point, and at worst simply dishonest.
Anderson also, by the way, credits corn with the societal energy surplus that the Aztecs used to conquer much of Latin America. "Rice and wheat societies," he writes, "tended to be agrarian, inwardly focused cultures," while "corn's abundance made the Aztecs warlike." Yes, clearly rice and wheat economies contributed to the peaceful ways of historical China, Japan, India, and pretty much all of Europe, for whom armed conflict was a foreign concept until they traveled to the New World. They seem to have been fast learners once they got here, though, as evidenced by the greatly-diminished number of Aztecs.
In chapter four, Anderson takes these anecdotes that he's been compiling and starts to (finally!) turn them into an actual argument. Continuing to paraphrase liberally from Ariely's Predictably Irrational, Anderson gives a workable explanation of behavioral economics, and how "free" triggers a different mental reaction by consumers. He notes that there's a huge gap in perceptions of value between free and very cheap products, and that this has the side effect of splitting the market into two submarkets: free, and not-free.
I have little to criticize here as far as the economics described--it certainly matches with what I learned in college and at the World Bank. But I think once again Anderson is missing the point. As he admittedly notes (and then hurriedly discounts), the things that consumers consider "free" often actually aren't: they're paid for from subsidies, from higher prices elsewhere, or as loss-leaders for other revenue channels. Sometimes they don't even meet that low bar: one sidebar describes the SampleLab store in Japan, which gives away "free" products--to members who have paid a monthly admittance fee. That's not "free" except as a marketing slogan (or as a scam), something which seems to be a trend in this book.
Indeed, "free" is a flexible concept for Anderson, here and elsewhere. Sometimes it's trade and barter. Sometimes it's charity or communal labor. In one case, it's the royalties charged by ASCAP to radio stations for recorded music--sure, they're a non-zero, non-trivial monetary sum, but they're "low enough for radio stations to prosper." So they're free to an unknown number of significant digits, I guess. In fact, as long as you don't charge the consumer a direct, per-transaction cost, no matter what else might be entailed or who else might have to pay, Anderson's happy to call it "free." For someone who started a previous chapter with the dictionary definition of the term, he takes a lot of liberties with it.
The connection between chapters three and four is to tie abundance to null pricing, which I'm guessing Anderson will parlay into a discussion of broadband data and its levelling effects. There's a strong insinuation--although I'm not sure it's actually explicitly stated--that one has a causal link to the other. There may well be a correlation: abundant things are often free, and free things will often be consumed in abundance given ample supply. But that's all there is. Correlation is not causation. Abundance does not necessarily equal free, nor vice versa. And while Anderson uses the phrase "too cheap to meter" here for the first time (and probably not the last), he doesn't seem to consider that even extremely cheap products incur costs that may not scale efficiently--bandwidth, shipping, environmental impact, etc. You can't get something for nothing, in other words, but you can value something as nothing. So far, I'm not sure that Anderson fully understands the distinction.
I hadn't intended to spend so much space on these introductory chapters. In the next (much larger) section of the book, "Digital Free," we'll hopefully be able to move a little faster as Anderson shifts onto safer ground: the Internet and new media. He's certainly shown that he knows his way around one website, at least.
A couple of weeks ago, visitors to Wired.com were greeted with one of the site's largest headlines, of the type usually reserved for breaking news, pitching editor-in-chief Chris Anderson's new book Free: The Future of a Radical Price. The magazine ran an excerpt of the book (which was, itself, based on a 2008 Wired article). It held a conference that featured Anderson as a speaker, and Wired bloggers wrote adoring posts about his comments. When Malcolm Gladwell penned a scathing review of the book in The New Yorker, Anderson got another above-the-fold headline to ask, in a peevishly defensive tone, "Dear Malcolm, Why So Threatened?" One has to wonder how well Free would be received without the benefits of a Condé Nast-owned soapbox.
Poorly, I suspect. True to his word, at least, Anderson released Free at no cost (for a limited time) in a variety of electronic formats, including Kindle. I grabbed it in the same kind of spirit that I read Harry Potter: sooner or later, someone will want to talk about it, and I'd like to be in on the joke.
I didn't expect to like the book, since I've been spectacularly unimpressed with Anderson's previous attempts at Big Thought, and so far that trend remains unbroken. That's nothing special--I read (or start to read, at least) lots of books that I disagree with--but in this case, his over-occupied bully pulpit irks me, as does the degree to which I'll have this nonsense quoted at me by "innovation" types over the next couple of years. So as I read Free, I'm taking notes on the Kindle, and I'm going to try a section-by-section commentary on it. The book is short; it shouldn't take long. Since I'm doing this as I go, I may pick out questions that are answered later on--I'll try to point that out honestly if it happens.
I don't expect that this will be hilarious (Anderson is not a particularly good writer, but he's no Tom Friedman) and I certainly wouldn't expect it to be well-researched (obligatory snark: the same is true for the inspiration), but it should be cathartic. And maybe it'll prove helpful for those who are equally suspicious of the book's vision. Because let's be clear: in reality, nothing is ever free.
Part 1: Keep Moving Those Goalposts, You'll Score Eventually
The point of the first chapters of Free, as with any of these business-lite trend books, is to convince you, the reader, that the author's argument is both A) a revolutionary new theory that's relevant to everything around us, and simultaneously B) simple enough that it can be captured in a series of easily-capitalized buzzwords. In theory, this is the easiest part of the book: keep it low on specifics, high on hype, and save the nuanced qualifications for later. And yet, only a couple of pages into the prologue, Anderson is already screwing it up.
In my Kindle copy, at least, he's actually screwing it up from the first sentence, when he apparently forgets to capitalize Monty Python, but that's just grammatical nitpicking. The real mistake comes when he trumpets the Pythons' increased sales of physical merchandise after the creation of a free, high-quality YouTube channel. Anderson writes:
And all this cost Monty Python essentially nothing, since YouTube paid all the bandwidth and storage costs, such as they were.

Techno-utopians: lowering costs by having other people pay for them since 2008. If Anderson claims that there is such a thing as a "free lunch," make sure it's not because you're footing the bill.
This kind of retort is so obvious (even setting aside the weasel words "such as they were," given YouTube's remarkable bandwidth/storage costs), and so blatantly unrefuted, that it can't help but set the tone for the following two chapters. In chapters one and two, Anderson repeatedly backs up his hypothesis that the new kind of free (no, I will not submit to his silly capitalization) is different from the old kind by showing historical examples of its use. It's so revolutionary, it's just like what some guy did 100 years ago!
Say what you like about Gladwell, who writes the same kind of fluffy anecdote-as-science trendspotting: his skills at research and writing are polished enough that you don't notice the gaps in the argument until you put the book down and take a moment to think about it. It is illustrative of how lazy Anderson is as a writer that his examples are not only ill-suited to his purpose, but they're also stunningly cliched. So we're presented in the first chapter with King Gillette (who gave away razors but sold the blades), Jell-O (which gave away recipes in order to sell the product), Wal-Mart's promotional pricing on DVDs, and a variety of other staple anecdotes. My favorite so far is in chapter one (location 280 of my e-book), where he proclaims that
Musicians from Radiohead to Nine Inch Nails now routinely give away their music online...

Really? From Radiohead all the way to Nine Inch Nails, huh? Well, those are certainly unexpected and obscure choices. A better writer might have looked up at least a couple of indie groups experimenting with new revenue models--find two more, and you could do the old "from Radiohead to xxxx, Nine Inch Nails to yyyy" construction. But I suspect he's not that interested in the actual musicians, as much as the namedropping.
The effect of all this banality, as Anderson introduces his argument (chapter one) and performs the obligatory categorization into four "types" of free (chapter two), is that you're not enchanted or distracted enough to suspend disbelief while reading. When he opens the second chapter by literally considering the dictionary definition and etymology of "free," your mind starts to wander. Or, in my case, you find yourself continually pulling apart every sentence and example for the absurdity within.
Let's take a moment, quickly, to examine Anderson's four types of free, to which he devotes the second chapter. They are, in brief: direct cross-subsidies (give one product away to sell another), the three-party market (advertisers pay so consumers don't), freemium (a free basic tier underwritten by paying power users), and nonmonetary markets (gift economies of attention and reputation).
So what's the point of Anderson's many categories? I'm not entirely sure he's got one. He demonstrates his classification system with another less-than-captivating example: a breakdown of Real Simple's guide to "36 Surprising Things You Can Get For Free" (I am not making this up). This, he says, is evidence that the categories are useful models for chapters ahead. With a build-up like that, I can hardly wait.
When Facebook recently announced that users would be getting their own human-readable usernames and corresponding URLs, Anil Dash linked back to his 2002 piece, Privacy through Identity Control:
...if you do a simple Google search on my name, what do you get? This site.

It was good advice then, and it's good advice now. It's especially good advice for people in my field, new media and online journalism. Own your name: buy the domain, set up a simple splash page or a set of redirection links, or go all out and create a rarely-updated work portfolio. But leaving your Internet shadow up to chance is simply not an option for us anymore.
I own my name. I am the first, and definitive, source of information on me.
One of the biggest benefits of that reality is that I now have control. The information I choose to reveal on my site sets the biggest boundaries for my privacy on the web. Granted, I'll never have total control. But look at most people, especially novice Internet users, who are concerned with privacy. They're fighting a losing battle, trying to prevent their personal information from being available on the web at all. If you recognize that it's going to happen, your best bet is to choose how, when, and where it shows up.
Here's an example: This week, I got an e-mail in my work inbox from someone who wants to work for us. Well, actually, he's interested in "pitching ideas for new online projects," and he has "a Logline Synopsis and a variety of treatments ready to send upon request." What he doesn't provide is links to any past work, or any hints as to what he wants to do. That's his first mistake: this isn't Hollywood, it's the Internet. We don't want your pitches, we want links and examples, and anyone who doesn't understand that probably isn't someone with whom we want to build online projects.
But it's possible, for very small values of possible, that someone who is aware of all Internet traditions would forget about the humble link, or would be wary of releasing their revolutionary ideas into the wild without keeping them under tight control. So I did what any prospective employer would have done: typed the applicant's name into Google.
The very first link--I kid you not, the first and only link for this guy's name--was a YouTube entry labeled "demo reel" by a username very similar to the applicant's e-mail address. Contained inside were five minutes of poorly-cut, VHS-quality video seemingly from a college TV station, focusing mainly on fratboy humor like asking groups of girls embarrassing sexual questions and being punched in the groin (not at the same time, unfortunately). As far as the Internet is concerned, that's Applicant X's identity. Think he'll get any response on his pitches for "new online projects?"
If you work in a fairly traditional job, or even a low-intensity information technology job, a minimal online presence--maybe even through something like a LinkedIn or Facebook URL--is probably fine. But if, like me, your job is to make digital content (of any variety) specifically for the Internet, you need to do more than that. You need to own your name.
"You're a tinkerer," the IT guy says to me.
This is not entirely a compliment. I've just been describing how I had to hard-reset my phone yesterday, after a botched process involving root access, the application caches, and the Android marketplace. It was entirely my own fault, mind you, and completely predictable. Almost a week between purchase and the first reformat? For me, that is superhuman restraint.
The IT guy would probably appreciate this more if he didn't spend his workday cleaning up other people's computer messes, to the point where it's not terribly amusing any more. But he's not having to clean up mine, so instead he just tells me that I'm a tinkerer, in the same tone of voice that most people would say "oh, you're a chemical weapons engineer" or "oh, you have rabies." That's interesting, the tone says, maybe you could tell me more about it from a little further away.
I don't mind. I'm reminded of something Lance Mannion wrote about his Uncle Merlin and the "tinker unit" a couple of years back:
Changing a light bulb, caulking a window, nailing down a loose floorboard on the deck, hanging a picture---these are all acts of puttering.

He's talking about home repair and I'm talking about a kind of generalized electronic interference, but they're the same thing. It's the "not necessarily necessary" part that links them. Tinkering is less about problems, more about projects and potential.
Tinkering is the self-directed, small but skillful, not necessarily necessary work of actual home repair and improvement. There's an experimental quality to tinkering, as well. When you sit down---or kneel down, squat down, or lie down and crawl under something---to tinker, you don't always know exactly what you're going to do. You're going to try something to see if it does the trick.
Tinkering includes the possibility of using a screwdriver, a wrench, or a pair of pliers, possibly even a voltage meter, and preferably all four. To putter, you might need a screwdriver, but usually you can get the job done with a hammer or a paintbrush.
If you go out to the garage to spray some WD-40 on the tracks of your squeaky garage door, you're puttering. If you install a new automatic garage door opener, you're tinkering.
Changing the oil on your car is a putter. Installing new belts and hoses, especially if the car doesn't really need new belts and hoses yet, is a tinker.
Pouring a new garage floor or rebuilding the car's engine are serious jobs that the words tinker and putter don't begin to describe.
I just changed the filter on our furnace. That was a putter.
But the furnace has been a bit balky the last couple of days and even refused to kick on last night until I went downstairs to tinker with it. I checked the filter, saw that I'd need to change it in the morning---Note: The label on the filter says 30 Day Filter and it means what it says---but for the moment all I could do was pluck dust off it and shake dirt out of it. I put new duct tape around the joints on the outtake pipes. Tripped the circuit breaker a few times. Heard a small, sad click and then an ominous and disheartening silence from the furnace. Went upstairs to re-read the troubleshooting guide in the manual. Heard the burners ignite at last, closed the manual, and went to bed, congratulating myself on a job well done.
That was tinkering.
Affinity for tinkering is one way to sort the population, I think. Some people get it, some people don't. Belle is one of the ones who doesn't. She has learned to dread those times when a home purchase suggestion is met with the response "oh, we could just make one of those." She also watches with amusement when I find a new project--such as, a couple of weeks ago, when I decided to make a case for my old phone, since the one I'd been using was falling apart. I wanted one of those magnetic cases, but the ones for Blackberries are too short, and the ones that aren't too short are so wide that the phone would slide back and forth and drive me batty.
No problem, I said, and I dragged her to the fabric store, where I bought some jean rivets. Then I found one of the too-short cases online for a couple of bucks (plus shipping and handling, still a deal!), snipped the leather clasp in two, and used the rivets and a part of the old case to extend it just far enough to close around the Nokia. It was my first time riveting something. I really enjoyed it, and said so. Belle rolled her eyes at me.
To some extent, I can understand where she's coming from, since I've been there myself. My family also tends to be hands-on, which makes me suspect that it may be an inherited (or at least acquired) trait, and it's certainly a lot less fun to be involved in someone else's tinkering. Which is not to say that it holds no rewards: my dad recently sold one of his kayaks, and the buyer specifically requested the one with the nose art.
My goal lately has not been to eliminate tinkering, but to make sure it's channeled in productive directions. For example, one of my regular projects has been upgrading the video drivers on my laptop--I'm always seduced by the thought of a few more frames per second, or a slightly-smoother game of Team Fortress 2. Invariably, this has become a mistake: while the early Lenovo drivers might have been a bit buggy, at this point they've pretty much caught up to the hacked releases, and all I get for my trouble is a long night of restoring backups and rebooting. Better just to leave it alone, or at least find less tedious things to disrupt.
The nice thing about digital tinkering, as opposed to the home infrastructure kind, is that there are ways nowadays to make sure that all you lose is time. That's part of the reason I love mobile platforms and virtual machines: in both cases, mess something up and all you've lost is less than an hour, most of which is just restoring from the default image. If only there were a way to say the same for our apartment, since then I wouldn't have a large packet of rivets, a Dremel tool, a box of half-disassembled guitar pedals, and several yards of unused vinyl lying around.
Or maybe I just need the right project for them. Any ideas?