Part 4: Free-conomics
Chapters nine and ten--digital media and free economies, respectively--are the strongest points of Chris Anderson's Free so far. That doesn't mean I'd put them up for a Pulitzer by any means, but I have relatively little to debunk (that or this book has finally overwhelmed my snark reserves, thus proving that even seemingly-inexhaustible resources do have limits). So we'll deal with them quickly, then take a break for some lighter fare: Anderson's motley collection of sidebars, and his reaction to the infamous New Yorker review by Malcolm Gladwell.
Anderson begins chapter nine with the heading "Free Media Is Nothing New. What Is New Is The Expansion of That Model to Everything Else Online." Indeed, if by that second "new" you mean "more than a decade old." He's like someone who waits until 1970 to declare that "the automobile is going to be a very influential technology." Good call, man! Hit us with another far-out prediction!
For the most part, though, this is a perfect example of Anderson's weak hypothesis: yes, advertising and alternate revenue streams can sometimes pay for a loss-leader free service. He spends much of this and the next chapter cataloguing (yes, again) all the different models of advertising that are possible online: from video game billboard placement to premium extras to gold farming (you may note, incidentally, that per Anderson's usual M.O. several of these are not really all that free). Anderson sees gaming in particular as a roiling pot of brand-new revenue models, even though most of them (like Second Life's virtual real estate) are just variants on very old models (in Linden Lab's case, the venerable lease). We are not, in other words, seeing the Internet charging ahead. We're seeing it catch up.
I feel compelled, since I'm familiar with it, to mention that Anderson's view of the gaming market is somewhat skewed. He concentrates primarily on massively-multiplayer titles, but he does also raise the transition from physical to digital distribution without spending much time on it. And it's just as well he doesn't, since to do so would be to point out that this is a booming digital content market that is assuredly not free. The cost of making a game, after all, is not primarily in printing CDs and boxes. It's in paying programmers, artists, designers, and writers to churn out an astonishing amount of material in a relatively short amount of time. Moving games to something like Steam or Impulse hasn't lowered their price to zero, as Anderson seems to argue should happen, because distribution was never the bulk of the expense in the first place. And I have seen no explanation from him, so far, on how to reconcile that fact with his predictions.
Of course, no book on Internet economics would be complete without a fawning section on Radiohead's In Rainbows, which was given away for free, then made a ridiculous amount of money for the band. In my opinion, this indicates more about the flaws of the studio system than it does about the viability of digital distribution, but it does (for once) make the point that Anderson wants it to make. Or does it? His other examples are Nine Inch Nails and Prince--like Radiohead, big-name acts that can afford to A) drop the money for recording out of their own pockets and B) draw on a large fan-base built via a not-free revenue model. Of the struggling bands with free tracks on MySpace that Anderson loves to mention, what proportion of them have actually emerged as new superstars?
The answer, of course, is not many. But it's a shame that Anderson has insisted on sticking to either generalities (MySpace) or well-trodden examples (Radiohead) because there is innovation occurring in the free/premium music space. Take, for example, Steve Lawson and Matt Stevens, two loop-oriented instrumentalists who are using "free" tools like the live-streaming service Ustream to broadcast online concerts, then networking with fans over free social media to arrange shows. Here are people who are, as far as I know, making a decent living using hybrid "free" models, many of which are much more interesting than simply giving away tracks online. But then, that would require more research than Anderson seems to have invested in this book.
If he or his editors had been thinking clearly, chapter ten would have been one of the first chapters in Free, not buried more than halfway through. In it, Anderson gives a rough estimate of the size of the free economy, if that's not a contradiction in terms. By doing so, he answers the burning question that most readers should have been asking from the start: So what? But in a bizarre turn, he writes (location 2645):
Let's quickly dispense with the use of "free" as a marketing gimmick. That's pretty much the entire economy. I suspect that there isn't an industry that doesn't use this in one way or another, from free trials to free prizes inside. But most of that isn't really free--it's just a direct cross-subsidy of one sort or another.

"Let's quickly dispense" with it? It's one of Anderson's four "free" business models from the start of the book! It's behind most of his examples, including the game market on which he's so bullish! Dispense with it? Why not throw away most of the book? Good question.
As always, while totalling up the GDP of this free economic zone, Anderson can't keep his story straight. He wants to use Facebook as an example of the "attention" economy, even though he admits that "Facebook is still unable to find a way to make money faster than it is spending it." Likewise, he wants to include the open-source consulting market, such as the enormous Linux division at IBM, even though (apart from the initial software) those services are at the center of the transaction, and they are very much not free. He wants to include free music and content in the value of networks like MySpace, although he's unable to assign them a value. And then to top it off, he figures the total cost of the Internet, based on an estimate of one hour of work for each individual URL indexed by Google, to be a conservative $260 billion. What are we to do with these numbers, all of which are either wild estimates or utter flights of fancy? Absolutely nothing, as far as I can tell. Primarily, they tell us that you can use the Internet to make money, or to share your hobbies. If Anderson had written this a decade ago, it might be noteworthy. Instead it's just kind of sad.
A Sidebar About Sidebars
Throughout the text, Anderson includes a bunch of sidebars, each titled in the format "How can X be free?" Once or twice they manage to be relevant. Most of the time they are disturbingly inane. For example:
Sidebar the Second: Editorial Review
Malcolm Gladwell's New Yorker review of Free deserves some attention, not just because it's hilarious to watch one pop trend guru flame another, but because it's actually dead-on. Several tech blogs have noted that his numbers for YouTube's bandwidth costs may be based on an inaccurate report, but the point remains: like many of Anderson's pivotal examples of free revenue, YouTube is not actually profitable. Gladwell also raises valid points about research, infrastructure, intellectual property, and scale. And he shows off why he's the king of this genre, with equally-unscientific but far fresher counter-anecdotes scattered through the review. But what seems to have struck home is his comment on journalism. Gladwell writes:
...it is not entirely clear what distinction is being marked between "paying people to get other people to write" and paying people to write. If you can afford to pay someone to get other people to write, why can't you pay people to write? It would be nice to know, as well, just how a business goes about reorganizing itself around getting people to work for "non-monetary rewards." Does he mean that the New York Times should be staffed by volunteers, like Meals on Wheels?

Anderson focused primarily on this passage in his Wired.com retort, titling it (in a fit of projection) "Dear Malcolm, Why So Threatened?" He has no good answers for the ailing newspaper industry, Anderson writes, but his personal model is (I am not making this up) Wired's Geekdad blog.
About three years ago, I started a parenting blog called GeekDad, and invited a few friends to join in. We soon attracted a large enough audience that it became apparent that we couldn't post enough to satisfy the demand, so I put out an open call for contributors. Out of the scores who replied, I picked a dozen and one of them was Ken Denmead [...] Ken is, by day, a civil engineer working on the BART extension in the SF Bay Area. But by night he an amazing community manager [sic]. His leadership skills impressed me so much that I turned GeekDad over to him entirely about a year ago. Since then he's recruited a team of volunteers who grown the traffic ten-fold, to a million page views a month.

Two things: first, if you are not a parent, reading Geekdad is like being trapped in an elevator with a new father--one who expounds proudly on every single aspect of life with their progeny, as if they are the first parents in the history of the entire world, except it's ten times worse because the parent in question is a giant nerd. Second, it's a parenting blog. Of course it's free: you'd have to pay them to shut up about their kids! There's nothing wrong with that, although it's not high on my reading list. But to compare this with the act of journalism--of investigating stories, poring over data, putting in phone calls, fact-checking, etc.--is foolishness.
Good journalists are content experts. They're excellent writers who know what questions to ask, and where to dig. They put in a lot of time doing very unglamorous, tedious work in the service of small glories, like a front-page story or the feeling of a truth well told. For good journalism, you have to pay people. Now, you can certainly pay them based on ad revenue, and you can take advantage of crowdsourced labor to distribute some of the grunt work--Josh Marshall's Talking Points Memo has been a great example of new media reporting--but you don't get good, quality journalism for free. And I would argue, based on the downward spiral of quality in 24-hour TV news, that we should be extremely wary of outlets dependent on audience eyeballs for all funding. Viewers may find that they get what they pay for.
One of Anderson's defenses, as a trendspotter, is that he's not advocating for "free" but merely showing the direction that the market is headed. And it's in cases like this, where he suggests that the news should be run like a niche parenting blog, that I find his approach most reprehensible. It allows him to make arguments about the future but present them as facts, the futurist equivalent of the passive voice. It denies us agency in choosing a future--like it or not, he's saying, you'd better get used to this "free" stuff, because it's inevitable. There is, of course, nothing inevitable about it, and there's nothing neutral about Anderson's position. He's practically salivating over this new, free world, where journalism is run like one of the press-release mills that Wired calls a blog. At the end of his response, Anderson peevishly asks "Malcolm, does this answer your question?" Yes, it does--and we should find that answer terrifying.
Part 3: You keep using that word. I do not think it means what you think it means.
Before we go any further, I want to make something clear: I'm not opposed to "free" digital content, in either its monetary or political sense. Although I pay a small amount each month for hosting this site, it is served via Apache (free) on Linux (free), and is generated on the server by a set of (free) Perl CGI scripts. I log into the server using the PuTTY SSH client (free), and I view and test it using Firefox (free). Like almost everyone else, I use "free" ad-supported search engines and watch "free" ad-supported broadcast television. My current smartphone runs on a free, open-source operating system, and my previous phone OS is currently being open-sourced by its owners. I also eat the "free" samples at Price Club on Saturdays, if they're not too disgusting, which is a credit partly to my thriftiness but mainly to the strength of my stomach.
I don't have a problem with free, as long as we understand that "free" actually means "a wide range of well-known business models that shift costs to another location," also known as Chris Anderson's weak hypothesis in Free. If he'd written a book about that, I'd have little disagreement, but it would be a pointless book mostly composed of truisms. Much of Anderson's writing starts out in this mode. But inevitably, he keeps getting carried away and broadening it into a strong hypothesis that's untenable--either that the shifted costs, Heisenberg-like, cease to exist if he ignores them, or that "very cheap" is the same thing as free, or both.
That's largely the gist of chapters five through eight of Free. Anderson's wishfully-ambiguous conception of his subject from chapter four continues to shift to wherever his argument needs to be. It's incredibly frustrating--my notes are filled with repeated entries reading "so it's not really free, then." It's not that Anderson is unaware of these criticisms--he mentions them in passing at least a couple of times--but that he apparently dismisses them out of hand, or forgets about them in his rush to tell yet another overcooked, second-hand anecdote.
Chapter seven, for example, is devoted to Microsoft and the degree to which it has been threatened by free alternatives. Inarguably, Microsoft has been challenged in several markets by competing products that carry no up-front sticker price, and they've done their best to respond. The result has, in my opinion, been good for both Microsoft and for consumers--you can pry Firefox and Firebug out of my cold, dead hands, for example. But as a case study for how "free" will conquer all, you could not pick a worse company than Microsoft. In every example Anderson describes, by his own admission, they're thriving despite a paid-product revenue model. China? Heavily pirated and discounted, but still profitable. The desktop? Still controlling most of the market, and raking in money even on critical failures like Vista. The server? Incredibly, server software is one of Microsoft's biggest recent successes: IIS serves an astonishing share of the web. Free software has challenged the software giant, but it shows no signs of killing them off anytime soon, and no cute Kübler-Ross reference on Anderson's part is going to change that.
But let's not get ahead of ourselves. Anderson opens chapter five with the story of Lewis Strauss, the man who coined his favorite phrase, "too cheap to meter." Strauss was discussing electricity, and of course you may have noticed the continued existence of power meters on buildings throughout the U.S. But, says Anderson (location 1219):
...what if Strauss had been right? What if electricity had, in fact, become virtually free?

Sure, and what if I had a pony? We can imagine all kinds of ways the world would be different if scarcity no longer applied--and Anderson does, laying out a vision of plentiful water, food, and clean fuel. But looking back, Strauss sounds like a crank. Anderson needs to show how his post-scarcity vision won't appear the same way in forty years, and using weasel-words like "virtually free" doesn't help.
Get used to it, though, because there's a lot of "virtually" free in Anderson's utopia, even though that's not the same thing as free at all. He seems to have an equivalence problem: make something small enough, and he'll swear it doesn't exist. For example, Anderson spends a lot of time in chapters five and six on Moore's Law and the price of transistors. He writes (location 1236):
In 1961 a single transistor cost $10. Two years later, it was $5. [...] Today, Intel's latest processor chips have about 2 billion transistors and cost around $300. So that means each transistor costs approximately 0.000015 cents. Which is to say, too cheap to meter.

Can you spot the fallacy? Yes, transistors are really cheap--which would be awesome if I bought computer hardware by the transistor. But of course single transistors are completely useless to me, or to anyone else. I need a bunch of them in a certain configuration, like the Core 2 Duo in my laptop or the ARM in my phone, neither of which even remotely qualifies as "free." Anderson obviously knows this--he wrote the sticker price for an Intel chip in the previous sentence, for heaven's sake--but appears to be purposefully ignoring it.
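For what it's worth, Anderson's arithmetic checks out; the fallacy is one of units, not of multiplication. A quick sketch, using his figures (the prices are his, not mine):

```python
# Sanity-checking Anderson's per-transistor arithmetic. The inputs are
# his figures: a ~$300 Intel chip containing ~2 billion transistors.
chip_price_dollars = 300
transistor_count = 2_000_000_000

cost_per_transistor_cents = chip_price_dollars * 100 / transistor_count
print(f"{cost_per_transistor_cents:.6f} cents per transistor")
# prints "0.000015 cents per transistor"

# The catch: that per-unit figure is a derived number, not a price anyone
# pays. Multiply it back out and you recover the full sticker price.
price_you_actually_pay = cost_per_transistor_cents / 100 * transistor_count
print(f"${price_you_actually_pay:.0f} at the register")
# prints "$300 at the register"
```

Dividing a fixed price by an enormous unit count can make anything look "too cheap to meter"; the number at the register doesn't move.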
He commits this same mistake when discussing Google (chapter eight is entirely devoted to Google, and is one of the most tedious things ever written). Google keeps building enormous, multimillion-dollar datacenters, but (he chortles) their cost-per-byte just keeps dropping! Why, they're practically free! Really? Is the company doing per-byte accounting, then? A huge datacenter may be a better value than the last one, but it still cost someone enough money to keep '70s-era The Who in guitar amps for at least a couple of years. I have the utmost respect for Google and their continued efforts to make their infrastructure both ecologically-friendly and energy-efficient, but their facilities are not "free" by any stretch of the imagination.
Anderson calls the combination of increasing bandwidth, processing power, and storage space a "triple play" that's "not too cheap to meter, as Strauss foretold, but too cheap to matter." (italics in the original) The elephant in the room is, too cheap for whom? All Anderson's examples revolve around fiber broadband and state-of-the-art PC hardware, probably because that's his experience. But even in this country, there are plenty of areas without a fast pipe, and plenty of people too poor to buy a machine that could fully exploit it. Not to mention the developing world.
Indeed, we might well ask "too cheap to matter" for what? In the last few years, commodity hardware has hit the point where it's sufficiently powerful for almost any local task (excepting, of course, heavy lifting like games and media production). I could run Word (or any similar native office suite) just fine on my old 366MHz Celeron. But according to Anderson the future is in the cloud, where an equivalent word processor will be implemented in a high-level scripting language that older hardware may struggle to interpret with the same responsiveness and power. A computer in a rural area (or a developing nation) may have difficulty pulling down pages fast enough to use those AJAX applications effectively. Anderson's hypothetical world is only free--or close enough that the cost can be waved away--for people who are urban, relatively wealthy, and have already sunk money into recent hardware. If you fall outside that cohort, the future of Free, isn't.
In interviews and responses to critics who have raised similar arguments about scale and definition, Anderson and his fellow travelers have not responded gracefully. He's not claiming that everything is free, Anderson says, just the important bits. But this has always been the problem with techno-utopian schemes, ranging from seasteading initiatives to the OLPC. The parts that he and his friends consider important (or unimportant, in the case of Google's extensive data-mining, for example) aren't necessarily the parts that translate across cultures, incomes, and geography. And while Anderson's demographic may not feel the cost of his revolution directly, it doesn't mean that it doesn't exist.
To his credit, Anderson points out one group for whom life is going to suck if his prediction comes true: the people driven out of business by the pursuit of Free's ideology. Wikipedia, he notes, has killed off what was left of the encyclopedia industry after Encarta demolished most of it. Craigslist has done a number on the newspaper industry. Anderson sees this as a "Robin Hood" transaction, decentralizing the flow of money, but admits that he could be wrong. We'll get to see in more depth how he thinks journalism (and the economy as a whole) can reinvent itself in chapters 9 and 10. As someone with no small amount of interest in the sector, and based on hints from Malcolm Gladwell's review, I can't help but dread it.
Part 2: The Experiment
In my first post on Chris Anderson's Free, I joked that my lack of research for these posts matched that of my target, an entirely typical pop nonfiction title. After chapters three and four, that has stopped being funny. You can look at both of these chapters, but especially chapter three, as an experiment: what happens when a writer does everything you're not supposed to do, research-wise? How little can someone work and still get published? The answer, frankly, is appalling.
You may have heard about the accusations of plagiarism in Free. Plagiarism Today has a fine overview, although I also recommend clicking through to the original post at Virginia Quarterly Review, as well as the additional examples at Ed Champion's blog. To summarize, Anderson seems to have cribbed large portions of text from Wikipedia and other sources, without adequate credit. Anderson's explanation is that his original footnotes were removed very late in the publication process, and the subsequent "write-through" missed some paragraphs. Evidence certainly supports the existence of sloppy editing--I've seen repeated capitalization errors and odd word choices consistent with automated find-and-replace (Ronald Coase is described as "the firm [?], Nobel Prize-winning economist," for example).
I assume that my copy of Free is the revision with added inline citations. I sincerely hope that's the case, as I shudder to imagine a book containing more Wikipedia references than this one. A global search (one of the virtues of e-books) finds nine paragraphs where the collaborative encyclopedia is being used, not as an example of free content, but as an actual primary source. Anderson paraphrases from Wikipedia for the history of free lunches, usury, Babbitt's Soap, and more. He even quotes from newspaper articles via the Wikipedia pages. As a writer taught that citing the encyclopedia (even one that's user-generated) is weak sauce, I find this highly troubling, as does Research Cat. Perhaps the author is trying to show the value of free content by relying on it so heavily. If so, I'd like to point out another, equally free--but far more reputable--source of information: the public library.
But set aside the question of Anderson's Wikipedia use, or whether he is a plagiarist (incidentally, I think he is). Another weak point in chapter three (and, to a lesser extent, chapter four) is his reliance on other pop history titles for research material. At various times, Anderson cites as sources (deep breath): Charles Seife's Zero: The Biography of a Dangerous Idea, Michael Pollan's The Omnivore's Dilemma, Wired Magazine, Heather Rogers' Gone Tomorrow: The Hidden Life of Garbage, Seth Godin's Unleashing the Ideavirus, Clay Shirky's Here Comes Everybody, and Dan Ariely's Predictably Irrational. All in one chapter! It's not that these are bad books--on the contrary, I'm a huge fan of Ariely, Shirky, and Pollan--but they are not really works of scholarship that should be used as primary sources, much less (as happens here) bluntly paraphrased in lieu of original research. The impression given is that of a profoundly lazy writer, as if Anderson needed some padding for this book and simply grabbed whatever marginally-relevant material was close at hand.
And it gets worse, because Anderson doesn't just crib from these books. In at least one case, he's using them at cross-purposes to their actual contents. In his summary of The Omnivore's Dilemma, Anderson writes, from location 730:
When I was a kid, hunger was one of the main problems of poverty in America. Today, it's obesity. Something dramatic has changed in the world of agriculture in the past four decades--we got much better at growing food.

...and at location 761:
One aspect of agricultural abundance that touches every one of us every day is the Corn Economy. This extraordinary grass, bred by man over millenia to have larger and larger starch-filled kernels, produces more food per acre than any other plant on the Earth.

Anderson seems amazed at the modern marvel of corn: it's used in toothpaste! Cosmetics! Linoleum! Ethanol fuel! Ah, but with the latter, he writes regretfully (location 772):
Today, we use corn for more than just food. Between synthetic fertilizer and breeding techniques that make corn the most efficient converter of sunlight and water to starch the world has ever seen, we are now swimming in a golden harvest of plenty--far more than we can eat. So corn has become an industrial feedstock for products of all sorts, from paint to packaging. Cheap corn has driven out many other foods from our diet and converted natural grass-eating animals, such as cows, into corn-processing machines.
After decades of price declines, corn has in recent years started getting more expensive along with oil prices. But innovation abhors a rising commodity, so that rising price has simply accelerated the search for a way to make ethanol out of switchgrass or other forms of cellulose, which can be grown where corn cannot. Once that magic cellulose-eating enzyme is found, corn will get cheap again, and with it, food of all sorts.

It is hard to imagine how someone could get all this more wrong.
For a start, we don't find ourselves swimming in corn because it's an awesome supercrop, as Anderson claims. We grow it in such overwhelming quantities because it is massively subsidized by the federal government, the result of years upon years of industry lobbying. The market has nothing to do with the price of corn--it has hardly anything to do with the price of any American food goods, as any regular reader of Pollan's work should know. Much of the corn we grow is, in fact, inedible by humans: as Pollan actually writes in Omnivore's Dilemma, the corn grown by the factory farms of the Midwest has been bred and genetically engineered into a product that's practically indigestible on its own. It's only good for high-fructose corn syrup and other industrial chemistry.
Indeed, to link this heavily-subsidized, artificially-abundant crop with "free" is to engage in bait-and-switch tactics. There's nothing free about the market in which it exists, and there's nothing free about that market's byproduct: a production chain that is unnatural, cruel to animals, harmful to developing economies, and results in food-like substances that are at least partially responsible for our epidemic of obesity and ill-health. We pay dearly for that corn, one way or another. To read Pollan's book as support for the view that we are "better at growing food" is at best missing the point, and at worst simply dishonest.
Anderson also, by the way, credits corn with the societal energy surplus that the Aztecs used to conquer much of Latin America. "Rice and wheat societies," he writes, "tended to be agrarian, inwardly focused cultures," while "corn's abundance made the Aztecs warlike." Yes, clearly rice and wheat economies contributed to the peaceful ways of historical China, Japan, India, and pretty much all of Europe, for whom armed conflict was a foreign concept until they traveled to the New World. They seem to have been fast learners once they got here, though, as evidenced by the greatly-diminished number of Aztecs.
In chapter four, Anderson takes these anecdotes that he's been compiling and starts to (finally!) turn them into an actual argument. Continuing to paraphrase liberally from Ariely's Predictably Irrational, Anderson gives a workable explanation of behavioral economics, and how "free" triggers a different mental reaction by consumers. He notes that there's a huge gap in perceptions of value between free and very cheap products, and that this has the side effect of splitting the market into two submarkets: free, and not-free.
I have little to criticize here as far as the economics described--it certainly matches with what I learned in college and at the World Bank. But I think once again Anderson is missing the point. As he himself notes (and then hurriedly discounts), the things that consumers consider "free" often actually aren't: they're paid for from subsidies, from higher prices elsewhere, or as loss-leaders for other revenue channels. Sometimes they don't even meet that low bar: one sidebar describes the SampleLab store in Japan, which gives away "free" products--to members who have paid a monthly admittance fee. That's not "free" except as a marketing slogan (or as a scam), something which seems to be a trend in this book.
Indeed, "free" is a flexible concept for Anderson, here and elsewhere. Sometimes it's trade and barter. Sometimes it's charity or communal labor. In one case, it's the royalties charged by ASCAP to radio stations for recorded music--sure, they're a non-zero, non-trivial monetary sum, but they're "low enough for radio stations to prosper." So they're free to an unknown number of significant digits, I guess. In fact, as long as you don't charge the consumer a direct, per-transaction cost, no matter what else might be entailed or who else might have to pay, Anderson's happy to call it "free." For someone who started a previous chapter with the dictionary definition of the term, he takes a lot of liberties with it.
The connection between chapters three and four is to tie abundance to null pricing, which I'm guessing Anderson will parlay into a discussion of broadband data and its levelling effects. There's a strong insinuation--although I'm not sure it's actually explicitly stated--that one has a causal link to the other. There may well be a correlation: abundant things are often free, and free things will often be consumed in abundance given ample supply. But that's all there is. Correlation is not causation. Abundance does not necessarily equal free, nor vice versa. And while Anderson uses the phrase "too cheap to meter" here for the first time (and probably not the last), he doesn't seem to consider that even extremely cheap products incur costs that may not scale efficiently--bandwidth, shipping, environmental impact, etc. You can't get something for nothing, in other words, but you can value something as nothing. So far, I'm not sure that Anderson fully understands the distinction.
I hadn't intended to spend so much space on these introductory chapters. In the next (much larger) section of the book, "Digital Free," we'll hopefully be able to move a little faster as Anderson shifts onto safer ground: the Internet and new media. He's certainly shown that he knows his way around one website, at least.
A couple of weeks ago, visitors to Wired.com were greeted with one of the site's largest headlines, of the type usually reserved for breaking news, pitching editor-in-chief Chris Anderson's new book Free: The Future of a Radical Price. The magazine ran an excerpt of the book (which was, itself, based on a 2008 Wired article). It held a conference that featured Anderson as a speaker, and Wired bloggers wrote adoring posts about his comments. When Malcolm Gladwell penned a scathing review of the book in The New Yorker, Anderson got another above-the-fold headline to ask, in a peevishly defensive tone, "Dear Malcolm, Why So Threatened?" One has to wonder how well Free would be received without the benefits of a Condé Nast-owned soapbox.
Poorly, I suspect. True to his word, at least, Anderson released Free at no cost (for a limited time) in a variety of electronic formats, including Kindle. I grabbed it in the same kind of spirit that I read Harry Potter: sooner or later, someone will want to talk about it, and I'd like to be in on the joke.
I didn't expect to like the book, since I've been spectacularly unimpressed with Anderson's previous attempts at Big Thought, and so far that trend remains unbroken. That's nothing special--I read (or start to read, at least) lots of books that I disagree with--but in this case, his over-occupied bully pulpit irks me, as does the degree to which I'll have this nonsense quoted at me by "innovation" types over the next couple of years. So as I read Free, I'm taking notes on the Kindle, and I'm going to try a section-by-section commentary on it. The book is short; it shouldn't take long. Since I'm doing this as I go, I may pick out questions that are answered later on--I'll try to point that out honestly if it happens.
I don't expect that this will be hilarious (Anderson is not a particularly good writer, but he's no Tom Friedman) and I certainly wouldn't expect it to be well-researched (obligatory snark: the same is true for the inspiration), but it should be cathartic. And maybe it'll prove helpful for those who are equally suspicious of the book's vision. Because let's be clear: in reality, nothing is ever free.
Part 1: Keep Moving Those Goalposts, You'll Score Eventually
The point of the first chapters of Free, as with any of these business-lite trend books, is to convince you, the reader, that the author's argument is both A) a revolutionary new theory that's relevant to everything around us, and simultaneously B) simple enough that it can be captured in a series of easily-capitalized buzzwords. In theory, this is the easiest part of the book: keep it low on specifics, high on hype, and save the nuanced qualifications for later. And yet, only a couple of pages into the prologue, Anderson is already screwing it up.
In my Kindle copy, at least, he's actually screwing it up from the first sentence, when he apparently forgets to capitalize Monty Python, but that's just grammatical nitpicking. The real mistake comes when he trumpets the Pythons' increased sales of physical merchandise after the creation of a free, high-quality YouTube channel. Anderson writes:
And all this cost Monty Python essentially nothing, since YouTube paid all the bandwidth and storage costs, such as they were.

Techno-utopians: lowering costs by having other people pay for them since 2008. If Anderson claims that there is such a thing as a "free lunch," make sure it's not because you're footing the bill.
This kind of retort is so obvious (even setting aside the weasel words "such as they were," given YouTube's remarkable bandwidth and storage costs), and so blatantly unrefuted, that it can't help but set the tone for the following two chapters. In chapters one and two, Anderson repeatedly backs up his hypothesis that the new kind of free (no, I will not submit to his silly capitalization) is different from the old kind by showing historical examples of its use. It's so revolutionary, it's just like what some guy did 100 years ago!
Say what you like about Gladwell, who writes the same kind of fluffy anecdote-as-science trendspotting: his skills at research and writing are polished enough that you don't notice the gaps in the argument until you put the book down and take a moment to think about it. It is illustrative of how lazy Anderson is as a writer that his examples are not only ill-suited to his purpose but also stunningly clichéd. So we're presented in the first chapter with King Gillette (who gave away razors but sold the blades), Jell-O (which gave away recipes in order to sell the product), Wal-Mart's promotional pricing on DVDs, and a variety of other staple anecdotes. My favorite so far is in chapter one (location 280 of my e-book), where he proclaims that
Musicians from Radiohead to Nine Inch Nails now routinely give away their music online...

Really? From Radiohead all the way to Nine Inch Nails, huh? Well, those are certainly unexpected and obscure choices. A better writer might have looked up at least a couple of indie groups experimenting with new revenue models--find two more, and you could do the old "from Radiohead to xxxx, Nine Inch Nails to yyyy" construction. But I suspect he's less interested in the actual musicians than in the namedropping.
The effect of all this banality, as Anderson introduces his argument (chapter one) and performs the obligatory categorization into four "types" of free (chapter two), is that you're not enchanted or distracted enough to suspend disbelief while reading. When he opens the second chapter by literally considering the dictionary definition and etymology of "free," your mind starts to wander. Or, in my case, you find yourself continually pulling apart every sentence and example for the absurdity within.
Let's take a moment, quickly, to examine Anderson's four types of free, to which he devotes the second chapter. They are, in brief: direct cross-subsidies (give away the razor, sell the blades), the three-party market (advertisers pay so that readers and viewers don't), freemium (a free basic version subsidized by a minority of paying customers), and nonmonetary markets (gift economies, where attention and reputation are the currency).
So what's the point of Anderson's many categories? I'm not entirely sure he's got one. He demonstrates his classification system with another less-than-captivating example: a breakdown of Real Simple's guide to "36 Surprising Things You Can Get For Free" (I am not making this up). This, he says, is evidence that the categories are useful models for chapters ahead. With a build-up like that, I can hardly wait.
When Facebook recently announced that users would be getting their own human-readable usernames and corresponding URLs, Anil Dash linked back to his 2002 piece, Privacy through Identity Control:
...if you do a simple Google search on my name, what do you get? This site.

It was good advice then, and it's good advice now. It's especially good advice for people in my field, new media and online journalism. Own your name: buy the domain, set up a simple splash page or a set of redirection links, or go all out and create a rarely-updated work portfolio. But leaving your Internet shadow up to chance is simply not an option for us anymore.
I own my name. I am the first, and definitive, source of information on me.
One of the biggest benefits of that reality is that I now have control. The information I choose to reveal on my site sets the biggest boundaries for my privacy on the web. Granted, I'll never have total control. But look at most people, especially novice Internet users, who are concerned with privacy. They're fighting a losing battle, trying to prevent their personal information from being available on the web at all. If you recognize that it's going to happen, your best bet is to choose how, when, and where it shows up.
Here's an example: This week, I got an e-mail in my work inbox from someone who wants to work for us. Well, actually, he's interested in "pitching ideas for new online projects," and he has "a Logline Synopsis and a variety of treatments ready to send upon request." What he doesn't provide is links to any past work, or any hints as to what he wants to do. That's his first mistake: this isn't Hollywood, it's the Internet. We don't want your pitches, we want links and examples, and anyone who doesn't understand that probably isn't someone with whom we want to build online projects.
But it's possible, for very small values of possible, that someone who is aware of all Internet traditions would forget about the humble link, or would be wary of releasing their revolutionary ideas into the wild without keeping them under tight control. So I did what any prospective employer would have done: typed the applicant's name into Google.
The very first link--I kid you not, the first and only link for this guy's name--was a YouTube entry labeled "demo reel" by a username very similar to the applicant's e-mail address. Contained inside were five minutes of poorly-cut, VHS-quality video seemingly from a college TV station, focusing mainly on fratboy humor like asking groups of girls embarrassing sexual questions and being punched in the groin (not at the same time, unfortunately). As far as the Internet is concerned, that's Applicant X's identity. Think he'll get any response on his pitches for "new online projects?"
If you work in a fairly traditional job, or even a low-intensity information technology job, a minimal online presence--maybe even through something like a LinkedIn or Facebook URL--is probably fine. But if, like me, your job is to make digital content (of any variety) specifically for the Internet, you need to do more than that. You need to own your name.
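For what it's worth, that "simple splash page or a set of redirection links" can be almost trivially small. Here's a minimal sketch in Python of the kind of name-owning hub I mean; the paths, target URLs, and page text are all hypothetical placeholders, and in practice a static HTML page on any cheap host does the same job:

```python
# Minimal sketch of an "own your name" site: a splash page plus a few
# short redirect links. All paths and URLs below are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Where each short path on your domain should send visitors.
LINKS = {
    "/portfolio": "https://example.com/your-portfolio",
    "/resume": "https://example.com/your-resume",
}

def route(path):
    """Map a request path to (status, redirect_target_or_None)."""
    if path in LINKS:
        return 302, LINKS[path]  # known short link: redirect
    return 200, None             # everything else: the splash page

class NameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, target = route(self.path)
        self.send_response(status)
        if target:
            self.send_header("Location", target)
            self.end_headers()
        else:
            body = b"<h1>Your Name</h1><p>Writer, editor, tinkerer.</p>"
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

# To actually serve it:
# HTTPServer(("", 8000), NameHandler).serve_forever()
```

The point isn't the code; it's that the first Google result for your name should be something you put there, and the barrier to doing so is this low.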
"You're a tinkerer," the IT guy says to me.
This is not entirely a compliment. I've just been describing how I had to hard-reset my phone yesterday, after a botched process involving root access, the application caches, and the Android marketplace. It was entirely my own fault, mind you, and completely predictable. Almost a week between purchase and the first reformat? For me, that is superhuman restraint.
The IT guy would probably appreciate this more if he didn't spend his workday cleaning up other people's computer messes, to the point where it's not terribly amusing any more. But he's not having to clean up mine, so instead he just tells me that I'm a tinkerer, in the same tone of voice that most people would say "oh, you're a chemical weapons engineer" or "oh, you have rabies." That's interesting, the tone says, maybe you could tell me more about it from a little further away.
I don't mind. I'm reminded of something Lance Mannion wrote about his Uncle Merlin and the "tinker unit" a couple of years back:
Changing a light bulb, caulking a window, nailing down a loose floorboard on the deck, hanging a picture---these are all acts of puttering.

Tinkering is the self-directed, small but skillful, not necessarily necessary work of actual home repair and improvement. There's an experimental quality to tinkering, as well. When you sit down---or kneel down, squat down, or lie down and crawl under something---to tinker, you don't always know exactly what you're going to do. You're going to try something to see if it does the trick.

Tinkering includes the possibility of using a screwdriver, a wrench, or a pair of pliers, possibly even a voltage meter, and preferably all four. To putter, you might need a screwdriver, but usually you can get the job done with a hammer or a paintbrush.

If you go out to the garage to spray some WD-40 on the tracks of your squeaky garage door, you're puttering. If you install a new automatic garage door opener, you're tinkering.

Changing the oil on your car is a putter. Installing new belts and hoses, especially if the car doesn't really need new belts and hoses yet, is a tinker.

Pouring a new garage floor or rebuilding the car's engine are serious jobs that the words tinker and putter don't begin to describe.

I just changed the filter on our furnace. That was a putter.

But the furnace has been a bit balky the last couple of days and even refused to kick on last night until I went downstairs to tinker with it. I checked the filter, saw that I'd need to change it in the morning---Note: The label on the filter says 30 Day Filter and it means what it says---but for the moment all I could do was pluck dust off it and shake dirt out of it. I put new duct tape around the joints on the outtake pipes. Tripped the circuit breaker a few times. Heard a small, sad click and then an ominous and disheartening silence from the furnace. Went upstairs to re-read the troubleshooting guide in the manual. Heard the burners ignite at last, closed the manual, and went to bed, congratulating myself on a job well done.

That was tinkering.

He's talking about home repair and I'm talking about a kind of generalized electronic interference, but they're the same thing. It's the "not necessarily necessary" part that links them. Tinkering is less about problems, more about projects and potential.
Affinity for tinkering is one way to sort the population, I think. Some people get it, some people don't. Belle is one of the ones who doesn't. She has learned to dread those times when a home purchase suggestion is met with the response "oh, we could just make one of those." She also watches with amusement when I find a new project--such as when, a couple of weeks ago, I decided to make a case for my old phone, since the one I'd been using was falling apart. I wanted one of those magnetic cases, but the ones for Blackberries are too short, and the ones that aren't too short are so wide that the phone would slide back and forth and drive me batty.
No problem, I said, and I dragged her to the fabric store, where I bought some jean rivets. Then I found one of the too-short cases online for a couple of bucks (plus shipping and handling, still a deal!), snipped the leather clasp in two, and used the rivets and a part of the old case to extend it just far enough to close around the Nokia. It was my first time riveting something. I really enjoyed it, and said so. Belle rolled her eyes at me.
To some extent, I can understand where she's coming from, since I've been there myself. My family also tends to be hands-on, which makes me suspect that it may be an inherited (or at least acquired) trait, and it's certainly a lot less fun to be involved in someone else's tinkering. Which is not to say that it holds no rewards: my dad recently sold one of his kayaks, and the buyer specifically requested the one with the nose art.
My goal lately has not been to eliminate tinkering, but to make sure it's channeled in productive directions. For example, one of my regular projects has been upgrading the video drivers on my laptop--I'm always seduced by the thought of a few more frames per second, or a slightly-smoother game of Team Fortress 2. Invariably, this turns out to be a mistake: while the early Lenovo drivers might have been a bit buggy, at this point they've pretty much caught up to the hacked releases, and all I get for my trouble is a long night of restoring backups and rebooting. Better just to leave it alone, or at least find less tedious things to disrupt.
The nice thing about digital tinkering, as opposed to the home infrastructure kind, is that there are ways nowadays to make sure that all you lose is time. That's part of the reason I love mobile platforms and virtual machines: in both cases, mess something up and all you've lost is less than an hour, most of which is just restoring from the default image. If only there were a way to say the same for our apartment, since then I wouldn't have a large packet of rivets, a Dremel tool, a box of half-disassembled guitar pedals, and several yards of unused vinyl lying around.
Or maybe I just need the right project for them. Any ideas?
The view from the hot tub on the roof of the Boston bed and breakfast where we're currently holed up. Not that we were actually in the hot tub. More beside it, really.
I haven't been in Boston in a very long time, and everything I know about it I learned from Good Will Hunting--it basically amounts to a bunch of shouting about "southies" or something. Suggestions, therefore, are welcomed.
I've never particularly cared for Kevin Kelly, but the man's outdone himself this time. In a post quoted at Global Guerrillas, he writes that "we are all collapsitarians these days" because progress is boring, so we all secretly hope that civilization will break down.
Yeah. Wait, what?
There are two kinds of really stupid reasoning going on here. The first is that he opens the post with a chart of Google Trends for "collapse" and "depression," both of which have spiked since mid-2007. Friedman-like, Kelly reads a lot into the word "collapse," a trend which could be more simply explained by the financial markets, you know, collapsing, and the fact that there are only so many ways journalists can describe a market breakdown before they start to hit the more obscure parts of the thesaurus. It doesn't mean that the world's population suddenly became infatuated with dystopia. But then, you don't get a reputation as a tech visionary by using common sense.
Hence Kelly's second mistake, in which he decides that these brand-new "collapsitarians" come in six varieties, including luddites, anti-globalists, and conservationists. I say that these are brand-new because Kelly writes that their existence is "surprising." Why it's surprising, I have no idea. None of the ideologies named began in mid-2007. None of them have been particularly altered by the financial crash, although I imagine the anti-globalization crowd is feeling pretty smug. Why is it surprising? Particularly for Kelly, a person who has been (according to Paulina Borsook's Cyberselfish) a pretty hard-core Christian, the existence of apocalyptic or end-times movements should be familiar, historically if not personally. Does the Great Awakening ring any bells?
Now, you may ask why we need to worry about Kelly, who to the outside observer just looks like another geek with odd-looking facial hair (seriously: his headshot seems to have been taken right before he went out to churn some butter, raise a barn, and perhaps sell some fine quilts to Pennsylvania tourists). But of course, as a former editor of Wired and a figure of some standing online (albeit much diminished), Kelly acts as a kind of weathervane for the flakier parts of Internet culture. While those with a healthier viewpoint have begun to think multi-generationally, Kelly represents the people for whom a future without shiny jetpacks and nanotech is unbearably boring. This outlook is a kind of dangerous extremism that we can't afford.
In many ways, we've already moved beyond our previously-imagined futures. I remember reading William Gibson's Virtual Light in high school, which includes a passage about trucks running on cooking oil that smell like fried chicken, and thinking "Huh. That'd be pretty wild." This weekend I went back to my university for a forensics reunion and ate at a brand-new cafeteria, where all the cooking oil is recycled into bio-diesel. That may not be jetpacks, but how can you say it's not fascinating? What kind of person can look at the dilemmas we face, as well as the solutions we're creating, and not be excited--indeed, who would look forward to destruction instead of inspiration?
"Collapsitarianism" is, at its most basic, a kind of tantrum: you didn't get exactly what you wanted, so you'd rather tear it all down. I'm sorry that you picked the wrong future, guys. But the sign of an actual adult is that they recognize when circumstances have changed, and adapt to them. The process of solving ecological and social problems is going to be very exciting. There's going to be plenty of wizardry to go around without crying that the world looks more like Herbert than Heinlein.
Perhaps the root problem is that we continue to make a distinction between present and future, as if there were a solid break between the two. There's not, of course. The future is just an extension of where we are now. Ironically, this is part of the point of the Long Now Foundation, on whose board Kelly sits. But where the Long Now decries a culture in which "people in 1996 actually refer to the year 2000 as 'the future'", I think we should close the gap even further. We need to get used to the idea of the future as connected and intertwined with modern times--we already live in the future, in other words. By placing ourselves on the arc of history, instead of imagining it vaguely in front of us, it's easier to spur ourselves to action. It certainly beats waiting for the collapse.
Don't look now, but higher education just got higher:
Singularity University derives its name from Dr. Ray Kurzweil's book "The Singularity is Near." The term Singularity refers to a theoretical future point of unprecedented advancement caused by the accelerating development of various technologies including biotechnology, nanotechnology, artificial intelligence, robotics and genetics.

The other diploma mills are kicking themselves for not thinking of this sooner. Being able to charge $25K to rehash Moravec and talk about how robots will eradicate world hunger? Sign me up!
Singularity University makes no predictions about the exact effects of these technologies on humanity; rather, our mission is to facilitate the understanding, prediction and development of these technologies and how they can best be harnessed to address humanity's grand challenges. The University has uniquely chosen to focus on such a group of exponentially growing technologies, in an interdisciplinary course structure. These accelerating technologies, both alone and when applied together, have the potential to address many of the world's grand challenges.
In all seriousness, though, the real disappointment is that there's an actual niche to be filled, and Singularity University misses it by a mile. After all, we live in a time when technology has had incredible consequences for the way we live, and the future we create together: climate change, I suspect, is going to radically alter the tone of innovation going forward (if it hasn't already--see the recent emphasis on green datacenters and the carbon cost of Google searches). But SU doesn't devote even a single course to this one area: climate gets a minor part of the "Energy and Ecological Systems" section, about equal to the space devoted to space travel and (tellingly) venture capitalism.
Indeed, the entire curriculum is comical. A path for futurism, in a university named for the event after which technological change becomes impossible to predict? And more importantly, an interdisciplinary program that breaks its studies down into technological disciplines like "Nanotechnology" and "AI & Robotics?" That's a total conceptual failure. Worldchanging's Jamais Cascio gets it right in the comments for his reaction post when he writes:
I proposed the social-centered structure for a few reasons, but they all come down to moving away from the unidirectional technological change -> social change model that seems so commonplace in emerging tech/Singularity discussions.
Implicit in a structure that focuses on particular technology categories is a "here's why this tech is nifty/dangerous" approach. By focusing instead on areas of impact, I'm pushing a "here's an important issue, let's see how the different techs get woven through" model. Both may talk about technologies and society, but the tech-centered structure looks for cause and effect, while the social-centered structure looks for interactions.
Singularity University is distinctly oriented toward a method of thinking where technology leads to social change--unsurprising, since that's much of the appeal of singularitarianism itself. But technology isn't created or used in a vacuum. Look at development, for example: fifty years of the IBRD trying to build open markets via top-down structural adjustment loans, completely blindsided by microfinance and the ability to transfer cellular minutes. The terrorists in Mumbai used BlackBerrys and consumer information technology to coordinate their attacks. Not to mention the rise of the netroots as a political force that has not only shaped the electoral process, but altered the policies (open government, net neutrality, creative commons) that it demands.
These innovations are not stories of emerging technology with direct, predictable outcomes. They're all rooted deeply in the social and cultural context in which they evolved, and they trade ideas across non-contiguous domains--who would have thought that Daily Kos would borrow community management methods from Slashdot, for example? And yet Singularity University seems to have put together its mission without considering these kinds of Black Swans: invent X technology, they seem to be saying, and Y or Z social impact will follow (or can be guided by visionaries) in a linear fashion. It's a predictive viewpoint straight out of Heinlein-era science fiction, and frankly it's irresponsible. Even assuming that the institution really does foster "the development of exponentially advancing technologies" (if such a thing is at all desirable), it's an act of phenomenal hubris to think that those same leaders will be the ones to "apply, focus and guide these tools" (quotes directly from the SU mission statement).
We could spend all day picking out the inconsistencies and missteps in the SU plan, like the fact that their "international" university has a faculty that's so very white and American. But the wider point remains: at a time when the cost of intellectual overconfidence has been driven home economically and ecologically, Singularity University wants to charge CEOs and government leaders $25,000 to tell them that they're in control of the future. For an academic institution, that's a pretty big lesson they seem to have missed.
If there is one food item that anyone can absolutely make better and cheaper themselves, compared to the store-bought version, it's tomato sauce. This is partly because fresh ingredients will almost always beat a sauce bottled and shipped across the country--but it's just as much because most store-bought tomato sauces are pretty awful.
Belle and I eat a fair amount of pasta, since it's a quick, vegetarian-friendly food. So we've tried, at one point or another, a lot of different store-bought sauces. At best, they're inoffensive, and at worst they're flavored in exactly the wrong ways: sour, chemical-tasting, and mealy. One day, after having a particularly bad batch, I decided to try making my own again. It turns out that it's much easier than I expected, although it does take a bit of time.
I'm sure you can find a detailed recipe elsewhere (I typically just buy a bunch of produce and wing it), but the basic process (the Joy of Cooking's version is typical) boils down to three steps: sauté your aromatics, add the tomatoes, and simmer until it thickens.
The difference between this and sauce-in-a-jar is striking. It tastes like it's made of real vegetables, for one thing. It's no doubt healthier. And if you do it right, there's no excuse for it to be bland. Give it a shot, and I think you'll agree: life's too short to eat store-bought sauce.