Last week Gina Trapani wrote an insightful post on Smarterware about designers, women, and hostility in open source, and about how she has applied those lessons to ThinkUp, her social-networking scraper application, by welcoming non-coders and contributors from a diverse range of backgrounds. And Trapani's been working around those kinds of problems for a while now: as the founding editor of the Lifehacker blog, she created a space for tech-minded people that was a breath of fresh air. From the post:
At Lifehacker, my original vision was to create a new kind of tech blog, one that wasn't yet another "boys worshipping tech toys" site, one that was helpful, friendly, and welcoming versus snarky, sensational, and cutting. (That was no small task in the Gawker-verse, and I learned much in the process.) ...I learned something important about creating a productive online community: leaders set the tone by example. It's simple, really. When someone you don't know shows up on the mailing list or in IRC, you break out the welcome wagon, let them know you're happy they're here, show them around the place, help them with their question or problem, and let them know how they can give back to the community. Once you and your community leaders do that a few times, something magical happens: the newbie who you welcomed just a few weeks ago starts welcoming new folks, and the virtuous cycle continues.

One of the things that has struck me, as I've paid more attention to the tech news ecosystem online, is how rare that attitude really is. Trapani alludes to the general tone of the Gawker properties, which start at "snark" and work down, but (thanks to imitation and diffusion of former Gawker writers) most of the other big tech blogs sound pretty much the same, which is one of the major reasons that I dropped them from my usual reading habits and blocked them in my browser.
The typical industry blogger persona is aggressive, awestruck (both as irony and as an expression of genuine, uncritical neophilia), and uncompromising toward other viewpoints. When they break their pose of cynicism, they usually rave in extreme superlatives, as though it simply wouldn't do to be quietly amused by something. The tone is, in other words, exactly what you'd expect from a group of young men who arrested their development at a precocious seventeen years old--and I say this as someone who is not entirely innocent of similar sins, as a trip through the archives here would show.
For as long as I can remember, Lifehacker has done something different. Along with Make and Hackaday, its writers were less interested in tossing off snarky burns at the expense of the day's news, and more focused on appreciating the efforts of their communities. Even though Trapani has moved on, it remains an easy-going, welcoming read. I think they deserve kudos for that. It's certainly a style I'm trying to imitate more--to highlight the positive as much as I call out the negative.
When it comes down to it, I think each of us has to ask ourselves whose future we'd prefer to realize. If I were forced to live in a world represented by TechCrunch, or one built by Lady Ada, I know which one would feel more welcoming. The latter values knowledge, in my opinion, while the former values commercial product--like the difference between learning to cook and reviewing frozen dinners, the result is probably less polished, but ultimately more nutritious.
Tone has consequences beyond self-realization: Lifehacker and Make, in particular, have really encouraged me to take a closer look at open-source hardware and software in a positive way, just because their bloggers are relentlessly cheerful, low-key advocates for those communities. As Trapani noted, that kind of appeal can be a virtuous cycle. A diverse, positive tech community should be more likely to apply its energy to projects that reflect its membership, instead of an endless supply of hipster social networks and expensive new hardware. I'm glad there are writers like her out there, trying to make that a reality.
About five years ago, I designed my own business cards. On the front, they had my contact info and a stamp of my name in Mandarin Chinese that I'd gotten in Xi'an. On the back, there was a QR code containing a vCard of everything on the front, which was supposed to show people that I was way ahead of my time (this was, in fact, so far ahead of its time that nobody was ever able to scan one of the stupid things until last year).
Anyway, of the 500 or so cards I had printed, I probably still have 450 of them sitting on a shelf at home. Partly this is because I wasn't nearly as big on actual networking as I was on having a cool business card, but it's also a function of where the world is going: nobody keeps a rolodex full of cards anymore, and our address "books" live behind a touchscreen or on an Exchange server. And while I may have been a bit hasty in adopting them, these days QR codes and digital tags are everywhere. Machine-readable data has invaded the everyday world.
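(For the curious: the data in that QR code was nothing exotic. A vCard is just a line-oriented text format, so the payload can be sketched in a few lines of Python. The contact details below are placeholders, not my actual card.)

```python
# A minimal vCard 3.0 payload, the kind of text a business-card QR code
# encodes. All contact details here are made-up placeholder values.
def make_vcard(name, org, email, phone):
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",
        f"ORG:{org}",
        f"EMAIL:{email}",
        f"TEL:{phone}",
        "END:VCARD",
    ]
    # The vCard spec calls for CRLF line endings.
    return "\r\n".join(lines) + "\r\n"

payload = make_vcard("Jane Doe", "Example Co", "jane@example.com", "+1-555-0100")
```

Feed a string like that to any QR encoder and you've reproduced the back of my old card.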
So this year, I'm taking an admittedly small risk and calling it: now is the time to ditch physical business cards. It'll save paper and money, reduce clutter and littering at conferences, and best of all, it'll genuinely put you on the cutting edge of digital networking.
Now you may be saying to yourself: sure, Thomas can do this--he's a bona fide misanthrope--but how can regular people get away with it? Good question. Here are a few easy strategies for going cardless:
In a few years, this'll all probably seem like old hat. That's why it's important to jump on the card-less trend now, so we can look down our noses at the luddites handing out paper slips (and manually copying them into their computers) while we still can.
I kid, of course. Seriously, though: set aside the snobbery, the savings in money and paper, the confetti of unwanted cards after a professional meetup, and the chance to demonstrate your new media credentials. Won't it feel good just to not have to carry around a stack of disposable business cards wherever you go? I feel lighter already.
I tell everyone they should have Firebug or its equivalent installed, and know how to use it. They'll find it invaluable when designing a page and testing something, doing some in-page scripting, or examining a site's source for ideas and hidden items. But most importantly, they can use it to fix your stupid, unreadable, over-styled web page.
The development of HTML5 means that browsers have gotten more powerful, more elaborate, and more interactive. It also means that they can be annoying in new and subtle ways. Back in the day, page authors used <blink> and <marquee> to create eye-catching elements on their flat gray canvas. Nowadays, thanks to pre-made CMS templates, the web superficially looks better, but it's not necessarily easier to read. Take three examples:
Even worse are the people who have realized you can give the shadow an offset of zero pixels. If the shadow is dark, this ends up looking like the page got wet and all the ink has run. If it's a lighter shadow, you've got a poor man's text glow. Remember how classy text glow was when you used it on everything in Photoshop? Nobody else does either.
I'm not an expert in typesetting or anything, but the effect of these changes--besides sometimes giving Comic Sans a run for its ugly font money--is to throw me out of my browsing groove, and force me to re-acquire a grip on the text with every link to a custom page. If I'm not expecting it, and the font is almost the same as a system font, it looks like a display error. Either way, it's jarring, and it breaks the feeling that the Internet is a common space. Eventually, we'll all get used to it, but for now I hate your custom fonts.
It's no wonder, in an environment like this, that style-stripping bookmarklets like Readability caused such a sensation. There's a fine line between interactive design and overdesign, and designers are crossing it as fast as they can. All I ask, people, is that you think before getting clever with your CSS and your scripts. Ask yourself: "if someone else simulated this effect using, say, a static image, would I still think it looked good? Or would I ask them what Geocities neighborhood they're from?" Take a deep breath. And then put down the stylesheet, and let us read in peace.
Tim Ferriss was a real-world griefer before real-world griefing was cool. Before Anonymous was putting epileptic kids into seizures, DDOSing the Church of Scientology, and harassing teenage girls for no good reason whatsoever, Ferriss (through sheer force of narcissism) had already begun gaming whatever system he could get his hands on. And now he writes books about it. The question you should be asking yourself, as you read this tongue-in-cheek New York Times review of Ferriss's "four-hour workout" book is, did he write this to actually teach people his idiosyncratic health plan? Or (more likely) is it just the newest way Ferriss has decided to grief the world, via the NYT bestseller list?
Griefing, of course, is the process of exploiting the rules of an online community to make its members miserable. Griefers are the people who join your team in online games, and then do everything possible to sabotage your efforts. It's a malevolent version of the "munchkin" from old-school RPGs, where a player tries to find loopholes in the rules, except that griefers aren't playing to win--they're playing to get a reaction, which is much easier. The key is in the balance--a griefer or munchkin is looking to maximize impact while minimizing effort. That's basically what Ferriss is doing: he power-games various external achievements, like kickboxing or tango, not for their own sake, but to boost his own self-promotional profile.
The problem with writing about reputation griefers like this guy is, for them, there really is no such thing as bad publicity. They want you to hate them, as long as it boosts their search ranking. And there are an awful lot of people out there following similar career plans--maybe not as aggressively, almost certainly not as successfully, but they're certainly trying. They may not realize that they're griefing, but they are. Affiliate marketers? Griefing. Social networking 'gurus' who primarily seem to be networking themselves? Griefing. SEO consultants? Totally griefing.
Like a zen student being hit with a stick, I achieved enlightenment once I looked at the situation this way: it's the Internet equivalent of being a celebrity for celebrity's sake. Or, perhaps more accurately, griefing provides a useful framework for understanding and responding to pointless celebrities elsewhere. Maybe this is one way that the Internet, for all its frustrations and backwardness and self-inflicted suffering, can make us better people.
The one thing I've learned, from years of "Something Is Wrong On The Internet," is that the key to dealing with griefers--whether it's a game of Counterstrike, Tim Ferriss, or the vast array of pundits and shock jocks--is right there in the name. They benefit from getting under your skin, when you treat them as serious business instead of something to be laughed off. As Belle and I often say to each other, you can always recognize people who are new to the dark side of the Internet's ever-flowing river of commentary by the gravity they assign to J. Random Poster. We laugh a little, because we remember when we felt that way (sometimes we still do), before we learned: it takes two people to get trolled. Don't let them give you grief.
Most book-lovers, I think, have a shelf devoted to their favorite books. It's always half-empty, because those are also the books they lend out when someone asks for a recommendation--oh, you haven't read something by X? Here you go. I love that shelf, even if I rarely lend books: it's where the private activity of reading becomes a shared experience, either through borrowing or via representation: these are the books that have deeply affected me. Maybe they'll affect you, too.
Likewise, there is writing on the Internet that is classic: essays, articles, and fiction that get linked and re-linked over time, in defiance of the conventional wisdom that online writing is transient or short-lived. The Classics are a personal call: what goes on your mental shelf of great online writing won't be the same as mine, and that's okay. This post is a collection of the items that I consider must-reads, accumulated over years of surfing. As I dig stuff out of my memory, I'll keep adding more.
So, you're thinking about deleting your Facebook account. Good for you and your crafty sense of civil libertarianism! But where will you find a replacement for its omnipresent life-streaming functionality? It's too bad that there isn't a turnkey self-publishing solution available to you.
I kid, of course, as a Cranky Old Internet Personality. But it's been obvious to me, for about a year now, that Facebook's been heading for the same mental niche as blogging. Of course, they're doing so by way of imitating Twitter, which is itself basically blogging for people who are frightened by large text boxes. The activity stream is just an RSS aggregator--one that only works for Facebook accounts. Both services are essentially taking the foundational elements of a blog--a CMS, a feed, a simple form of trackbacks and commenting--and turning them into something that Grandma can use. And all you have to do is let them harvest and monetize your data any way they can, in increasingly invasive ways.
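To make that analogy concrete, here's a toy sketch of what an activity stream does under the hood: pull the entries out of a few RSS documents and merge them into one list. (The feeds and names below are invented for the example; a real aggregator would also fetch, deduplicate, and sort by date.)

```python
import xml.etree.ElementTree as ET

# Toy illustration of the "activity stream is just an RSS aggregator" point:
# merge <item> entries from several RSS 2.0 documents into a single stream.
def aggregate(feeds):
    items = []
    for xml_text in feeds:
        channel = ET.fromstring(xml_text).find("channel")
        source = channel.findtext("title")
        for item in channel.findall("item"):
            items.append((source, item.findtext("title")))
    return items

feed_a = """<rss version="2.0"><channel><title>Alice</title>
<item><title>Posted a photo</title></item></channel></rss>"""
feed_b = """<rss version="2.0"><channel><title>Bob</title>
<item><title>Shared a link</title></item></channel></rss>"""

stream = aggregate([feed_a, feed_b])
# stream -> [("Alice", "Posted a photo"), ("Bob", "Shared a link")]
```

That's the whole trick--everything else Facebook adds is access control and monetization.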
Now, that aspect of Facebook has never particularly bothered me, since I've got an Internet shadow the size of Wyoming anyway, and (more importantly) because I've largely kept control of it on my own terms. There's not really anything on Facebook that isn't already public on Mile Zero or my portfolio site. Facebook's sneaky descent into opt-out publicity mode didn't exactly surprise me, either: what did you expect from a site that was both free to users and simultaneously an obvious, massive infrastructure expense? You'd have to be pretty oblivious to think they weren't going to exploit their users when the time came to find an actual business model--oblivious, or Chris Anderson. But I repeat myself.
That said, I can understand why people are upset about Facebook, since most probably don't think that carefully about the service's agenda, and were mainly joining to keep in touch with their friends. The entry price also probably helped to disarm them: "free" has a way of short-circuiting a person's critical thought process. Anderson was right about that, at least, even if he didn't follow the next logical step: the first people to take advantage of a psychological exploit are the scammers and con artists. And when the exploit involves something abstract (like privacy) instead of something concrete (like money), it becomes a lot easier for the scam to justify itself, both to its victims and its perpetrators.
Researcher danah boyd has written extensively about privacy and social networking, and she's observed something interesting about privacy, something that maybe only became obvious when it was scaled up to Internet sizes: our concept of privacy is not so much about specific bits of data or territory, but our control over the situations involving it. In "Privacy and Publicity in the Context of Big Data" she writes:
It's about a collective understanding of a social situation's boundaries and knowing how to operate within them. In other words, it's about having control over a situation. It's about understanding the audience and knowing how far information will flow. It's about trusting the people, the situation, and the context. People seek privacy so that they can make themselves vulnerable in order to gain something: personal support, knowledge, friendship, etc.

This is why it's mistaken to claim that "our conception of privacy has changed" in the Internet age. Private information has always been shared out with relative indiscretion: how else would people hold their Jell-o parties or whatever else they did back in the olden days of our collective nostalgia? Those addresses and invitations weren't going to spread themselves. The difference is that those people had a reasonable expectation of the context in which their personal information would be shared: that it would be confined to their friends, that it would be used for a specific purpose, and that what was said there would confine itself--mostly--to the social circle being invited.
People feel as though their privacy has been violated when their expectations are shattered. This classically happens when a person shares something that wasn't meant to be shared. This is what makes trust an essential part of privacy. People trust each other to maintain the collectively understood sense of privacy and they feel violated when their friends share things that weren't meant to be shared.
Understanding the context is not just about understanding the audience. It's also about understanding the environment. Just as people trust each other, they also trust the physical setting. And they blame the architecture when they feel as though they were duped. Consider the phrase "these walls have ears" which dates back to at least Chaucer. The phrase highlights how people blame the architecture when it obscures their ability to properly interpret a context.
Consider this in light of grumblings about Facebook's approach to privacy. The core privacy challenge is that people believe that they understand the context in which they are operating; they get upset when they feel as though the context has been destabilized. They get upset and blame the technology.
Facebook's problem isn't just that the scale of a "slip of the tongue" has been magnified exponentially. It's also that they keep shifting the context. One day, a user might assume that the joke group they joined ("1 Million Readers Against Footnotes") will only be shared with their friends, and the next day it's been published by default to everyone's newsfeed. If you now imagine that the personal tidbit in question was something politically- or personally-sensitive, such as a discussion board for dissidents or marginalized groups, it's easy to see how discomforting that would be. People like me who started with the implicit assumption that Facebook wasn't secure (and the privilege to find alternatives) are fine, but those who looked to it as a safe space or a support network feel betrayed. And rightfully so.
So now that programmers are looking at replacing Facebook with a decentralized solution, like the Diaspora project, I think there's a real chance that they're missing the point. These projects tend to focus on the channels and the hosting: Diaspora, for example, wants to build Seeds and encrypt communication between them using PGP, as if we were all spies in a National Treasure movie or something. Not to mention that it's pretty funny when the "decentralized" alternative to Facebook ends up putting everyone on the same server-based CMS. Meanwhile, the most important part of social networks is not their foolproof security or their clean design--if it were, nobody would have ever used MySpace or Twitter. No, the key is their ability to construct context via user relationships.
Here's my not-so-radical idea: instead of trying to reinvent the Facebook wheel from scratch, why not create this as a social filter plugin (or even better, a standard service on sites like Posterous and Tumblr) for all the major publishing platforms? Base it off RSS with some form of secure authentication (OpenID would seem a natural fit), coupled with some dead-simple aggregation services and an easy migration path (OPML), and let a thousand interoperable flowers bloom. Facebook's been stealing inspiration from blogging for long enough now. Instead of creating a complicated open-source clone, let's improve the platforms we've already got--the ones that really give power back to individuals.
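To gesture at how little machinery the "easy migration path" actually needs: an OPML subscription list is just a small XML file, which the Python standard library can emit in a dozen lines. (A sketch with hypothetical feed URLs, not a spec.)

```python
import xml.etree.ElementTree as ET

# Sketch of the OPML migration path mentioned above: a subscription list
# (your "social graph") as a tiny, portable XML document. The names and
# feed URLs are hypothetical examples.
def make_opml(title, feeds):
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feeds:
        ET.SubElement(body, "outline", type="rss", text=name, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

doc = make_opml("My people", [
    ("Alice", "https://alice.example.com/feed"),
    ("Bob", "https://bob.example.com/feed"),
])
```

Any aggregator that reads a file like this can import your whole network in one step--which is exactly the kind of portability Facebook has no incentive to offer.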
I don't remember where I was, the first time I heard the name Malcolm X. I remember that I was maybe 8 years old, growing up in Lexington, Kentucky. It was a mostly African-American neighborhood, so it could have been anywhere, really. I think I remember being confused by the 'X'--how could that be a last name? How did he sign forms or documents? And as someone who fumed at the end of every class roll call and official ceremony, I wondered: why didn't he pick a letter closer to the start of the alphabet?
At that age, of course, history is a pretty boring topic, but I don't remember learning about Malcolm X in class. I don't think I ever really discussed him with my parents, either. He was a cipher, a vaguely sinister one for some reason (maybe the name, maybe not). It wasn't until college, when I took a class on social movements and persuasion, that I learned more about the man: his militance within the Nation of Islam, his pilgrimage to Mecca, and the change in his thinking as a result. It was a revelation, a whole part of the civil rights story that I'd never learned about--and I was simultaneously shamed that I'd never bothered to find out about it on my own.
A couple of years ago, I finally got around to reading his autobiography, and was struck all over again. It's a fascinating story: told to Alex Haley during a time when Malcolm X was himself undergoing a serious self-examination, it's a chronicle of transformation on both explicit and implicit levels. He was an extraordinarily complicated person, undoubtedly flawed but capable of tremendous insight and intelligence. It makes clear that his assassination was truly one of the great tragedies of the civil rights movement.
Yesterday was the anniversary of the assassination of Malcolm X, and of course February is Black History Month, so I've found myself thinking about this a lot lately. The thing about Black History Month is that it's a misnomer: as US citizens, Black history is our history. The fallout from slavery, segregation, and the struggle for civil rights still echoes through our society in ways we stumble to articulate. Nobody, to my mind, represents that complex truth more than Malcolm X.
Part 5: Is this the right room for an argument? No, this is abuse.
I will admit this: it took some nerve for Chris Anderson to write the last few chapters of Free. Oh, parts of it are harmless enough: a comparison to old theories of competitive economics, for example, or advice to embrace digital waste. One is obvious padding, and the other is a random plate of leftovers from when this book was a magazine article. But they're nothing compared to the chronicle of chutzpah that is most of the closing section.
It's hard for me to imagine, for example, writing a section on post-scarcity economies in science fiction--much less thanking an intern in the acknowledgements for actually reading the books and regurgitating summaries/analysis to me. I would have a hard time assembling, with a straight face, a chapter on reputation and gift economies that includes a study putting reputation at the very bottom of the motives for Wikipedia contributions. And it almost seems like a practical joke to write "China and Brazil Are The Frontiers of Free. What Can We Learn from Them?"--a chapter which states (without citation, evidence, or any appreciation for irony) that Chinese students are basically Confucian-trained plagiarists, that piracy is the cause of China's growing number of millionaires, and that generic drug prices in Brazil are an endorsement of "free." Seriously?
Of his closing chapters, the only one that's probably worth engaging at length is #14, in which Anderson attempts to answer the criticisms that have been levelled against his argument since its debut. To be fair, these are real arguments, many of which I've raised here. Whether you find his responses convincing will probably depend on your reaction to previous chapters. If you found him less than rigorous before (and I certainly have, what with the Wikipedia, plagiarism, technical misconceptions, and sloppy definitions), his counter-arguments won't change your mind. What frankly surprised me was, looking back at the chapter, how few challenges he actually answered in his replies. In the interests of space and sanity, I'm summarizing them here in the form of the classic Shorter format:
...when the markets recovered and we looked back, we found to our surprise that it was practically impossible to see the effect of the crash on the growth of the Internet. It had continued to spread, just as before, with hardly a dip as the public markets cratered.

Wait a second: is he seriously claiming that the dot-com bubble was going to crash the Internet itself? As usual, it's unclear--he continues straight into a paragraph about the netbook market, where low-cost (but not free) computers are loaded primarily with the low-cost (but not free) Windows XP, and will soon be preloaded with the not-so-low-cost Windows 7. Our current financial crisis, he argues, will push people even more into the economics of free content, even though it will make it more difficult for businesses to embrace it as a model. And he closes his book, literally in the last three paragraphs, by running through a list of web startups (including Twitter, YouTube, Digg, and Facebook) that have failed to generate a profitable revenue stream. He concludes (location 3899):
...free is not enough. It also has to be matched with Paid. Just as King Gillette's free razors only made business sense paired with expensive blades, so will today's Web entrepreneurs have to invent not just products that people love but also those that they will pay for. free [sic] may be the best price, but it can't be the only one.

Translation: it is a tale told by an idiot, full of sound and fury, signifying nothing.
To quote Weezer: Why bother? Everyone knows these "airport books," as Anil Dash calls them, are terrible--why take the time to engage it in such detail? Why spend a whole week on it? Why not just ignore Anderson and his rampaging ego?
To answer, I can only say that sadly, many people do not bring nearly enough skepticism to the table, especially faced with the mighty publicity machine behind Anderson's work. His first "revelation," the so-called long tail, has run afoul of evidence time and time again--and this has not stopped it from being advocated as sound business strategy. I have had it quoted at me, and I expect the same to happen with Free. So at the very least, it's therapeutic to work through the book and marshal my thoughts on it.
But I have to confess to a less rational motive. For some time now, I've been bothered by the way that "Web 2.0" movements encourage us to commercialize ourselves, or at least to quantify that value: what's your traffic? Your PageRank? Your follower count? How much could you make with AdSense on your blog? How can you turn your readers into money? How much are you worth as a reader? We are all in the business now, it seems, of selling ourselves to the world--based on a set of values which I find, if not suspect, then at least highly artificial.
When we talk about a non-professional attention economy, or a "reputational" economy, what we're doing--partly, at least--is putting a price tag on ourselves, and on each other. It disturbs me to think about community this way. Call me a crazy hippie, but the people that I've met while writing here, to me, are not commodities to be traded. And I like to think that (while I attempt to minimize its harm to my career) I don't write this blog for a commercial benefit, monetary or otherwise. If nobody read it (a state of affairs blessedly close to the truth), I'd still write here, just for the pleasure of doing so.
Anderson is not to blame for the marketing of the reputational economy, but he's one of its strongest proponents. His Free is, in many ways, an attempt to lay out a blueprint for the monetization of your attention and spare time. Speaking personally, I don't appreciate the effort. He's offering Free. I say, keep the change.
Part 4: Free-conomics
Chapters nine and ten--digital media and free economies, respectively--are the strongest points of Chris Anderson's Free so far. That doesn't mean I'd put them up for a Pulitzer by any means, but I have relatively little to debunk (that or this book has finally overwhelmed my snark reserves, thus proving that even seemingly-inexhaustible resources do have limits). So we'll deal with them quickly, then take a break for some lighter fare: Anderson's motley collection of sidebars, and his reaction to the infamous New Yorker review by Malcolm Gladwell.
Anderson begins chapter nine with the heading "Free Media Is Nothing New. What Is New Is The Expansion of That Model to Everything Else Online." Indeed, if by that second "new" you mean "more than a decade old." He's like someone who waits until 1970 to declare that "the automobile is going to be a very influential technology." Good call, man! Hit us with another far-out prediction!
For the most part, though, this is a perfect example of Anderson's weak hypothesis: yes, advertising and alternate revenue streams can sometimes pay for a loss-leader free service. He spends much of this and the next chapter cataloguing (yes, again) all the different models of advertising that are possible online: from video game billboard placement to premium extras to gold farming (you may note, incidentally, that per Anderson's usual M.O. several of these are not really all that free). Anderson sees gaming in particular as a roiling pot of brand-new revenue models, even though most of them (like Second Life's virtual real estate) are just variants on very old models (in Linden Labs' case, the venerable lease). We are not, in other words, seeing the Internet charging ahead. We're seeing it catch up.
I feel compelled, since I'm familiar with it, to mention that Anderson's view of the gaming market is somewhat skewed. He concentrates primarily on massively-multiplayer titles, but he does also raise the transition from physical to digital distribution without spending much time on it. And it's just as well he doesn't, since to do so would be to point out that this is a booming digital content market that is assuredly not free. The cost of making a game, after all, is not primarily in printing CDs and boxes. It's in paying programmers, artists, designers, and writers to churn out an astonishing amount of material in a relatively short amount of time. Moving games to something like Steam or Impulse hasn't lowered their price to zero, as Anderson seems to argue should happen, because distribution was never the bulk of the expense in the first place. And I have seen no explanation from him, so far, on how to reconcile that fact with his predictions.
Of course, no book on Internet economics would be complete without a fawning section on Radiohead's In Rainbows, which was given away for free, then made a ridiculous amount of money for the band. In my opinion, this indicates more about the flaws of the studio system than it does about the viability of digital distribution, but it does (for once) make the point that Anderson wants it to make. Or does it? His other examples are Nine Inch Nails and Prince--all of which are big-name brands that can afford to A) drop the money for recording out of their own pockets and B) have a large fan-base built via a not-free revenue model. Of the struggling bands with free tracks on MySpace that Anderson loves to mention, what proportion of them have actually emerged as new superstars?
The answer, of course, is not many. But it's a shame that Anderson has insisted on sticking to either generalities (MySpace) or well-trodden examples (Radiohead) because there is innovation occurring in the free/premium music space. Take, for example, Steve Lawson and Matt Stevens, two loop-oriented instrumentalists who are using "free" tools like video-sharing service Ustream to broadcast online concerts, then networking with fans over free social media to arrange shows. Here are people who are, as far as I know, making a decent living using hybrid "free" models, many of which are much more interesting than simply giving away tracks online. But then, that would require more research than Anderson seems to have invested in this book.
If he or his editors had been thinking clearly, chapter ten would have been one of the first chapters in Free, not buried more than halfway through. In it, Anderson gives a rough estimate of the size of the free economy, if that's not a contradiction in terms. By doing so, he answers the burning question that most readers should have been asking from the start: So what? But in a bizarre turn, he writes (location 2645):
Let's quickly dispense with the use of "free" as a marketing gimmick. That's pretty much the entire economy. I suspect that there isn't an industry that doesn't use this in one way or another, from free trials to free prizes inside. But most of that isn't really free--it's just a direct cross-subsidy of one sort or another.

"Let's quickly dispense" with it? It's one of Anderson's four "free" business models from the start of the book! It's behind most of his examples, including the game market on which he's so bullish! Dispense with it? Why not throw away most of the book? Good question.
As always, while totalling up the GDP of this free economic zone, Anderson can't keep his story straight. He wants to use Facebook as an example of the "attention" economy, even though he admits that "Facebook is still unable to find a way to make money faster than it is spending it." Likewise, he wants to include the open-source consulting market, such as the enormous Linux division at IBM, even though (apart from the initial software) those services are at the center of the transaction, and they are very much not free. He wants to include free music and content in the value of networks like MySpace, although he's unable to assign them a value. And then to top it off, he figures the total cost of the Internet, based on an estimate of one hour of work for each individual URL indexed by Google, to be a conservative $260 billion. What are we to do with these numbers, all of which are either wild estimates or utter flights of fancy? Absolutely nothing, as far as I can tell. Primarily, they tell us that you can use the Internet to make money, or to share your hobbies. If Anderson had written this a decade ago, it might be noteworthy. Instead it's just kind of sad.
A Sidebar About Sidebars
Throughout the text, Anderson includes a bunch of sidebars, each titled in the format "How can X be free?" Once or twice they manage to be relevant. Most of the time they are disturbingly inane. For example:
Sidebar the Second: Editorial Review
Malcolm Gladwell's New Yorker review of Free deserves some attention, not just because it's hilarious to watch one pop trend guru flame another, but because it's actually dead-on. Several tech blogs have noted that his numbers for YouTube's bandwidth costs may be based on an inaccurate report, but the point remains: like many of Anderson's pivotal examples of free revenue, YouTube is not actually profitable. Gladwell also raises valid points about research, infrastructure, intellectual property, and scale. And he shows off why he's the king of this genre, with equally unscientific but far fresher counter-anecdotes scattered through the review. But what seems to have struck home is his comment on journalism. Gladwell writes:
...it is not entirely clear what distinction is being marked between "paying people to get other people to write" and paying people to write. If you can afford to pay someone to get other people to write, why can't you pay people to write? It would be nice to know, as well, just how a business goes about reorganizing itself around getting people to work for "non-monetary rewards." Does he mean that the New York Times should be staffed by volunteers, like Meals on Wheels?

Anderson focused primarily on this passage in his Wired.com retort, titling it (in a fit of projection) "Dear Malcolm, Why So Threatened?" He has no good answers for the ailing newspaper industry, Anderson writes, but his personal model is (I am not making this up) Wired's Geekdad blog.
About three years ago, I started a parenting blog called GeekDad, and invited a few friends to join in. We soon attracted a large enough audience that it became apparent that we couldn't post enough to satisfy the demand, so I put out an open call for contributors. Out of the scores who replied, I picked a dozen and one of them was Ken Denmead [...] Ken is, by day, a civil engineer working on the BART extension in the SF Bay Area. But by night he an amazing community manager [sic]. His leadership skills impressed me so much that I turned GeekDad over to him entirely about a year ago. Since then he's recruited a team of volunteers who grown [sic] the traffic ten-fold, to a million page views a month.

Two things: first, if you are not a parent, reading Geekdad is like being trapped in an elevator with a new father--one who expounds proudly on every single aspect of life with his progeny, as if he were the first parent in the history of the entire world, except it's ten times worse because the parent in question is a giant nerd. Second, it's a parenting blog. Of course it's free: you'd have to pay them to shut up about their kids! There's nothing wrong with that, although it's not high on my reading list. But to compare this with the act of journalism--of investigating stories, poring over data, putting in phone calls, fact-checking, etc.--is foolishness.
Good journalists are content experts. They're excellent writers who know what questions to ask, and where to dig. They put in a lot of time doing very unglamorous, tedious work in the service of small glories, like a front-page story or the feeling of a truth well told. For good journalism, you have to pay people. Now, you can certainly pay them based on ad revenue, and you can take advantage of crowdsourced labor to distribute some of the grunt work--Josh Marshall's Talking Points Memo has been a great example of new media reporting--but you don't get good, quality journalism for free. And I would argue, based on the downward spiral of quality in 24-hour TV news, that we should be extremely wary of outlets dependent on audience eyeballs for all funding. Viewers may find that they get what they pay for.
One of Anderson's defenses, as a trendspotter, is that he's not advocating for "free" but merely showing the direction that the market is headed. And it's in cases like this, where he suggests that the news should be run like a niche parenting blog, that I find his approach most reprehensible. It allows him to make arguments about the future but present them as facts, the futurist equivalent of the passive voice. It denies us agency in choosing a future--like it or not, he's saying, you'd better get used to this "free" stuff, because it's inevitable. There is, of course, nothing inevitable about it, and there's nothing neutral about Anderson's position. He's practically salivating over this new, free world, where journalism is run like one of the press-release mills that Wired calls a blog. At the end of his response, Anderson peevishly asks "Malcolm, does this answer your question?" Yes, it does--and we should find that answer terrifying.
Part 3: You keep using that word. I do not think it means what you think it means.
Before we go any further, I want to make something clear: I'm not opposed to "free" digital content, in either its monetary or political sense. Although I pay a small amount each month for hosting this site, it is served via Apache (free) on Linux (free), and is generated on the server by a set of (free) Perl CGI scripts. I log into the server using the PuTTY SSH client (free), and I view and test it using Firefox (free). Like almost everyone else, I use "free" ad-supported search engines and watch "free" ad-supported broadcast television. My current smartphone runs on a free, open-source operating system, and my previous phone OS is currently being open-sourced by its owners. I also eat the "free" samples at Price Club on Saturdays, if they're not too disgusting, which is a credit partly to my thriftiness but mainly to the strength of my stomach.
I don't have a problem with free, as long as we understand that "free" actually means "a wide range of well-known business models that shift costs to another location," also known as Chris Anderson's weak hypothesis in Free. If he'd written a book about that, I'd have little disagreement, but it would be a pointless book mostly composed of truisms. Much of Anderson's writing starts out in this mode. But inevitably, he keeps getting carried away and broadening it into a strong hypothesis that's untenable--either that the shifted costs, Heisenberg-like, cease to exist if he ignores them, or that "very cheap" is the same thing as free, or both.
That's largely the gist of chapters five through eight of Free. Anderson's wishfully-ambiguous conception of his subject from chapter four continues to shift to wherever his argument needs to be. It's incredibly frustrating--my notes are filled with repeated entries reading "so it's not really free, then." It's not that Anderson is unaware of these criticisms--he mentions them in passing at least a couple of times--but that he apparently dismisses them out of hand, or forgets about them in his rush to tell yet another overcooked, second-hand anecdote.
Chapter seven, for example, is devoted to Microsoft and the degree to which it has been threatened by free alternatives. Inarguably, Microsoft has been challenged in several markets by competing products that carry no up-front sticker price, and they've done their best to respond. The result has, in my opinion, been good for both Microsoft and for consumers--you can pry Firefox and Firebug out of my cold, dead hands, for example. But as a case study for how "free" will conquer all, you could not pick a worse company than Microsoft. In every example Anderson describes, by his own admission, they're thriving despite a paid-product revenue model. China? Heavily pirated and discounted, but still profitable. The desktop? Still controlling most of the market, and raking in money even on critical failures like Vista. The server? Incredibly, server software is one of Microsoft's biggest recent successes: IIS runs an astonishing majority of the web. Free software has challenged the software giant, but it shows no signs of killing them off anytime soon, and no cute Kübler-Ross reference on Anderson's part is going to change that.
But let's not get ahead of ourselves. Anderson opens chapter five with the story of Lewis Strauss, the man who coined his favorite phrase, "too cheap to meter." Strauss was discussing electricity, and of course you may have noticed the continued existence of power meters on buildings throughout the U.S. But, says Anderson (location 1219):
...what if Strauss had been right? What if electricity had, in fact, become virtually free?

Sure, and what if I had a pony? We can imagine all kinds of ways the world would be different if scarcity no longer applied--and Anderson does, laying out a vision of plentiful water, food, and clean fuel. But looking back, Strauss sounds like a crank. Anderson needs to show how his post-scarcity vision won't appear the same way in forty years, and using weasel-words like "virtually free" doesn't help.
Get used to it, though, because there's a lot of "virtually" free in Anderson's utopia, even though that's not the same thing as free at all. He seems to have an equivalence problem: make something small enough, and he'll swear it doesn't exist. For example, Anderson spends a lot of time in chapters five and six on Moore's Law and the price of transistors. He writes (location 1236):
In 1961 a single transistor cost $10. Two years later, it was $5. [...] Today, Intel's latest processor chips have about 2 billion transistors and cost around $300. So that means each transistor costs approximately 0.000015 cents. Which is to say, too cheap to meter.

Can you spot the fallacy? Yes, transistors are really cheap--which would be awesome if I bought computer hardware by the transistor. But of course single transistors are completely useless to me, or to anyone else. I need a bunch of them in a certain configuration, like the Core2 Duo in my laptop or the ARM in my phone, neither of which even remotely qualifies as "free." Anderson obviously knows this--he wrote the sticker price for an Intel chip in the previous sentence, for heaven's sake--but appears to be purposefully ignoring it.
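To be fair, Anderson's per-transistor arithmetic does check out; it's the inference that fails. A quick back-of-the-envelope sketch, using his own figures (a roughly $300 chip containing roughly 2 billion transistors), makes the sleight of hand obvious:

```python
# Anderson's figures: a ~$300 Intel chip with ~2 billion transistors.
chip_price_dollars = 300
transistors_per_chip = 2_000_000_000

# Divide finely enough and any price looks like zero...
cost_per_transistor_cents = chip_price_dollars * 100 / transistors_per_chip
print(f"{cost_per_transistor_cents:.6f} cents per transistor")  # 0.000015

# ...but the unit of purchase is the chip, and multiplying back out
# recovers the same very-much-not-free sticker price.
total = cost_per_transistor_cents / 100 * transistors_per_chip
print(f"${total:.2f} per chip")  # $300.00
```

In other words, "too cheap to meter" is an artifact of choosing a denominator nobody actually buys.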
He commits this same mistake when discussing Google (chapter eight is entirely devoted to Google, and is one of the most tedious things ever written). Google keeps building enormous, multi-million-dollar datacenters, but (he chortles) their cost-per-byte just keeps dropping! Why, they're practically free! Really? Is the company doing per-byte accounting, then? A huge datacenter may be a better value than the last one, but it still cost someone enough money to keep '70s-era The Who in guitar amps for at least a couple of years. I have the utmost respect for Google and their continued efforts to make their infrastructure both ecologically-friendly and energy-efficient, but their facilities are not "free" by any stretch of the imagination.
Anderson calls the combination of increasing bandwidth, processing power, and storage space a "triple play" that's "not too cheap to meter, as Strauss foretold, but too cheap to matter." (italics in the original) The elephant in the room is, too cheap for whom? All Anderson's examples revolve around fiber broadband and state-of-the-art PC hardware, probably because that's his experience. But even in this country, there are plenty of areas without a fast pipe, and plenty of people too poor to buy a machine that could fully exploit it. Not to mention the developing world.
Indeed, we might well ask "too cheap to matter" for what? In the last few years, commodity hardware has hit the point where it's sufficiently powerful for almost any local task (excepting, of course, heavy lifting like games and media production). I could run Word (or any similar native office suite) just fine on my old 366MHz Celeron. But according to Anderson the future is in the cloud, where an equivalent word processor will be implemented in a high-level scripting language that older hardware may struggle to interpret with the same responsiveness and power. A computer in a rural area (or a developing nation) may have difficulty pulling down pages fast enough to use those AJAX applications effectively. Anderson's hypothetical world is only free--or close enough that the cost can be waved away--for people who are urban, relatively wealthy, and have already sunk money into recent hardware. If you fall outside that cohort, the future of Free, isn't.
In interviews and responses to critics who have raised similar arguments about scale and definition, Anderson and his fellow travelers have not responded gracefully. He's not claiming that everything is free, Anderson says, just the important bits. But this has always been the problem with techno-utopian schemes, ranging from seasteading initiatives to the OLPC. The parts that he and his friends consider important (or unimportant, in the case of Google's extensive data-mining, for example) aren't necessarily the parts that translate across cultures, incomes, and geography. And while Anderson's demographic may not feel the cost of his revolution directly, it doesn't mean that it doesn't exist.
To his credit, Anderson points out one group for whom life is going to suck if his prediction comes true: the people driven out of business by the pursuit of Free's ideology. Wikipedia, he notes, has killed off what was left of the encyclopedia industry after Encarta demolished most of it. Craigslist has done a number on the newspaper industry. Anderson sees this as a "Robin Hood" transaction, decentralizing the flow of money, but admits that he could be wrong. We'll get to see in more depth how he thinks journalism (and the economy as a whole) can reinvent itself in chapters nine and ten. As someone with no small amount of interest in the sector, and based on hints from Malcolm Gladwell's review, I can't help but dread it.