Perhaps it's ironic, coming on the heels of the previous post, but I'd like to announce that the first version of the NPR client for Android incorporating my patches has gone live. You can find it in the Market, or see it on App Brain. I get a credit and everything, as the second coder on the project. I'm pretty thrilled.
I got involved because, in keeping with the open-source spirit behind Android itself, NPR has released the source for the client at a Google Code repository under the Apache license. You can download it for yourself, if you'd like (you'll need an API key to compile, though). The NPR team would love to have contributions from other coders, designers, or even just interested listeners. You can hit them up via @nprandroid on Twitter, or send an e-mail to the app's feedback address.
This version mainly splits playback off into a background service with a notification, which is a better user experience and means the stream won't be killed if you leave the application with the Back button. We've got another version in the works that improves this functionality, incorporates some little UI tweaks, and lays the groundwork for home screen widgets. I'd like to thank Corvus for his help in spotting areas where the Android client needs improvement. The NPR design team is also finishing up an overhaul of the look-and-feel of the application, and hopefully we can get that out soon. Along with taking care of bug fixes and project cleanup, that's my priority as soon as existing revisions are cleared.
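For anyone curious what "splitting playback off into a background service" actually looks like, here's a minimal sketch of the service-plus-notification pattern on 2010-era Android. This is a generic illustration, not the NPR client's actual code: PlaybackService, PlayerActivity, and the "stream_url" extra are placeholder names, and error handling is pared down to the bare minimum.

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.media.MediaPlayer;
import android.os.IBinder;

// Hypothetical class and extra names -- the real NPR client is organized differently.
public class PlaybackService extends Service {
    private static final int NOTIFICATION_ID = 1;
    private MediaPlayer player;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        String streamUrl = intent.getStringExtra("stream_url"); // hypothetical extra
        try {
            player = new MediaPlayer();
            player.setDataSource(streamUrl);
            player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                public void onPrepared(MediaPlayer mp) { mp.start(); }
            });
            player.prepareAsync(); // buffer the stream off the UI thread
        } catch (Exception e) {
            stopSelf();
            return START_NOT_STICKY;
        }

        // The ongoing notification gives the user a way back to the player, and
        // startForeground() is what keeps Android from killing the stream when
        // the activity is backed out of.
        Notification note = new Notification(android.R.drawable.ic_media_play,
                "Playing NPR", System.currentTimeMillis());
        Intent reopen = new Intent(this, PlayerActivity.class); // hypothetical activity
        note.setLatestEventInfo(this, "NPR", "Streaming audio",
                PendingIntent.getActivity(this, 0, reopen, 0));
        startForeground(NOTIFICATION_ID, note);
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        stopForeground(true);
        if (player != null) {
            player.release();
        }
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // in this sketch, activities talk to it via startService() intents
    }
}
```

An activity would kick it off with something like `startService(new Intent(this, PlaybackService.class).putExtra("stream_url", url))`, and from that point on the audio survives the Back button.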
Last week, at the Gov 2.0 conference in Washington, D.C., I sat through a session on mobile application design by an aspiring middleware provider. Like most "cross-platform" mobile SDKs these days, it consisted of a thin native wrapper around an on-device web page, scripted in some language (Ruby, in this case), with hooks into some of the native services. As usual with this kind of approach, performance was awful and the look-and-feel was crude at best, but those can be fixed. What struck me about the presentation, as always, was the simple question: if I already have to code my application in Ruby/HTML/JavaScript, with all their attendant headaches, why don't I just write a web service? Why bother with a "native" application, except for the buzzword compliance?
This is not just snark, but an honest question. Frankly, the fervor around "apps" is wearing me out--in no small part because it's been the new Product X panacea for journalists for a while now, and I'm tired of hearing about it. More importantly, it drives me crazy, as someone who works hard to present journalism in the most appropriate format (whatever that may be), that we've taken the rich array of documents and media available to us and reduced it to "there's an app for that." This is not the way you build a solid, future-proof media system, people.
For one thing, it's a giant kludge that misses the point of general-purpose computing in the first place, which is that we can separate code from its data. Imagine if you were sent text wrapped in individual .exe files (or their platform equivalent). You'd think the author was insane--why on earth didn't they send it as a standard document that you could open in your favorite editor/reader? And yet that's exactly what the "app" fad has companies doing. Sure, this was originally due to sandboxing restrictions on some mobile platforms, but that's no excuse for solving the problem the wrong way in the first place--the Web didn't vanish overnight.
Worse, people have the nerve to applaud this proliferation of single-purpose app clutter! Wired predictably oversells a "digital magazine" that's essentially a collection of loosely-exported JPG files, and Boing Boing talks about 'a dazzling, living book' for something that's a glorified periodic table with some pretty movies added. It's a ridiculous level of hyperbole for something that sets interactive content presentation back by a good decade, both in terms of how we consume it and the time required to create it. Indeed, it's a good way to spend a fortune every few years rewriting your presentation framework from scratch when a new hardware iteration rolls around.
The content app is the spiritual child of Encarta. Plenty of people have noticed that creating native, proprietary applications to present basic hypertext is a lot like the bad old days of multimedia CD-ROMs. Remember that? My family got a copy of Encarta with our 486-era Gateway, and like most people I spent fifteen minutes listening to sound clips and watching some grainy film clips, and then never touched it again. Cue these new publication apps: to my eye, they have the same dull sheen of presentation--one that's rigid, hard to update, and doesn't interoperate with anything else--and possibly the same usage pattern. I'm not a real Web 2.0 partisan, and I generally dislike HTML/CSS, but you have to admit that it got one thing right: a flexible, extensible document format for combining text with images, audio, and video on a range of platforms (not to mention a diverse range of users). And the connectivity of a browser also means that it has the potential to surprise: where does that link go? What's new with this story? You can, given time, run out of encyclopedia, but you never run out of Internet.
That's perhaps the part that grated most about the middleware presentation at Gov 2.0. A substantial chunk of it was devoted to a synchronization framework, allowing developers to update their application from the server. Seriously? I have to write a web page and then update it manually? Thing is, if I write an actual web application, I can update it for everyone automatically. I can even cache information locally, using HTML5, for times when there's no connectivity. Building "native" applications from HTML is making life more complicated than it needs to be, by using the worst possible tools for UI and then taking away the platform's one advantage.
I'm not arguing that there's no place for native applications--far from it. There are lots of reasons to write something in native code: access to platform-specific APIs, speed, or certain UI paradigms, maybe. But it all comes back to choosing appropriate technology and appropriate tools. For a great many content providers, and particularly many news organizations, the right tool is HTML/CSS: it's cheaper, easier, and widely supported. It's easily translated into AJAX, sent in response to thin client requests, or parsed into other formats when a new platform emerges in the market. Most importantly, it leaves you at the mercy of no-one but yourself. No, it doesn't get you a clever advertising tagline or a spot at a device manufacturer keynote, and you won't feel that keen neo-hipster glow at industry events. But as a sustainable, future-proof business approach? Ditch the apps. Go back to the browser, where your content truly belongs.
Look, I'm not saying George Clooney's character from Up in the Air is right about wanting to unload all personal relationships. I don't have that many to spare, after all. But getting my worldly possessions down to a backpack (and then ditching the backpack)? Reducing my carbon footprint, my level of mindless consumerism, and my reliance on cheap, over-designed crap created by underpaid factory labor? Great. Let's do it.
...in theory, at least. In practice, it is tough to get rid of stuff. Learning to live frugally is a multi-step process.
Belle started with a simple rule for our apartment: if you bring something in, something else of equivalent size has to go out. This is a great rule, if for no other reason than that the apartment is very, very small and we can't stuff anything else into it without learning to stack the pets like Tetris blocks. And it incentivizes sustainability by making it easier to use trading/swap services than to buy new books/games/movies.
The second step has been learning to embrace digital media. I still buy a few CDs and paper books, but not nearly as many as I used to, and usually only if they're something I'll want to loan out, or if they're not available online. And we almost never buy DVDs--Netflix has that covered. While it has taken some time to get used to not 'owning' my music or movies, maybe that's the point--'ownership' shouldn't be the defining characteristic of cultural engagement.
Next up is learning to be happy with last year's model. This is not easy to do, especially given the constant deluge of electronic follow-up that companies can leverage these days. Most recently, for example, TiVo sent out messages offering new versions of their DVR box to subscribers at a discount. That's tempting: we've still got the old Series 2 box, the one that came out in 2006, and it doesn't do HD, or Netflix streaming, or... well, lots of neat features. But do we need that? I mean, we don't have HD cable anyway, and it doesn't really bother us. We've got the XBox for streaming, and we'd have plenty of space on the current TiVo if we'd stop using it to store whole seasons of Damages. There's nothing wrong with it to justify a replacement, so we'll stick with what we've got.
At some point, I want to start simplifying--giving away, selling, or (as a last resort) trashing the objects that I only keep out of habit. You know what I mean: old purchases that you don't use anymore, but you keep just in case they come in handy somewhere down the road. Ruthlessness is the key--you're never going to turn that old Super NES on again, and you know it--but I probably lack the outright willpower. So instead I think I'll get a roll of those little green dot stickers, the ones they use to mark prices at flea markets, and put them on anything I haven't touched in a while. If it actually gets used, I'll take the sticker off. Anything with a sticker still on it at the end of the year has got to go.
Which brings us to the toughest part: our book collection. Already, heavy boxes of books are the part of moving we dread most. But paper texts have another type of inertia, a weight derived more from their intellectual and emotional impact than their actual mass. Especially if you love books--and we do--it's hard to discard them. It's like throwing away knowledge! And yet we'll never read many of them again, and some of them we bought and may never read in the first place. Everyone would be better off if they were donated to the library or recycled. Of all the steps for reducing our material footprint, cutting the number of books sitting around on our shelves will no doubt be the most painful, but it may have the biggest impact.
Belle and I will probably never get our lives down to the point that they can fit in a backpack, or even an overhead luggage compartment. In reality, we probably don't actually want to get there--we're not monks or masochists, after all. Yet just as the best essay can benefit from judicious editing, I think it's appropriate to take a critical scalpel to our lifestyles from time to time. There's a lot of pressure out there to accumulate, to the point that "consumer" has too often become synonymous with "citizen" or "person." That pressure has consequences, in the labor system, in the environment, and in our financial stability. It may be true, as Slate's Farhad Manjoo insists, that we can't actually opt out from American materialism, but maybe we owe it to ourselves to try.
Most book-lovers, I think, have a shelf devoted to their favorite books. It's always half-empty, because those are also the books they lend out when someone asks for a recommendation--oh, you haven't read something by X? Here you go. I love that shelf, even if I rarely lend books: it's where the private activity of reading becomes a shared experience, either through borrowing or via representation: these are the books that have deeply affected me. Maybe they'll affect you, too.
Likewise, there is writing on the Internet that is classic: essays, articles, and fiction that get linked and re-linked over time, in defiance of the conventional wisdom that online writing is transient or short-lived. The Classics are a personal call: what goes on your mental shelf of great online writing won't be the same as mine, and that's okay. This post is a collection of the items that I consider must-reads, accumulated over years of surfing. As I dig stuff out of my memory, I'll keep adding more.
There are some games that you really ought to play under emulation only, and Shadow of the Colossus is going to be one of those. It's a beautiful, interesting game held back by the terrible, terrible PS2 rendering chip. Depending on your hardware, if you haven't played it already, you might even be best off emulating it now.
It was kind of surprising to me how bad the texture handling actually was. I skipped the PS2 when it was current, and only really got to sit down with one when I started using Belle's for Guitar Hero. I had bought a second-hand Dreamcast instead, and played a lot of older PC titles on my low-budget tower (calling it 'hand-built' implies, I think, a level of craftsmanship that wasn't present). Both of those had their issues, but they were capable of handling basic texture filtering, and character models didn't shake like a pair of cheap maracas, neither of which seems to have been a priority for the designers of Sony's Graphics Synthesizer, the PS2's rendering chip.
Normally, I'm not much of a graphics snob. I enjoy Wii games for what they are, and I've never owned a computer capable of running new games at their top detail levels. I think Link's Awakening was one of the top two Zelda games, even in four shades of Game Boy Green. But my first reaction to SotC when I finally got around to firing it up this week was "wait, is there a way to turn off the Awful, Shimmery Moiré Filter?" Under the PlayStation's dubious rendering context, anything more than five feet away from the camera becomes a shifting, grainy distraction. The development team has clearly tried to integrate this into the art style--I think the elaborate hair and stone textures, not to mention the blown-out bloom and grain filters, are a direct result of accepting the platform's limitations--but it doesn't really work. Not right away, at least, and not without interruption. And these ambitious effects come at a cost--even on native hardware, the game's framerate is notoriously unstable.
Unfortunately, the elaborate tricks used to push the PS2 as far as it can go mean that Shadow of the Colossus is a punishing feat for emulators. While recent PC hardware is easily capable of handling titles like the Final Fantasy games, SotC barely manages more than 10 frames a second on my 2007-era laptop. But it's a tantalizing slideshow: even at its native resolution, without the shaky landscape textures and shifty light bloom, you can really see just how beautifully-designed this game was. If I had a little more CPU to throw at it, I'd love to play it there instead of on Sony's temperamental black box.
As a long-time PC gamer, I've been using emulation for years, and this isn't the first time that the experience has been better on a virtual machine. If nothing else, it means freedom from the idiotic "save point" systems, particularly in console RPGs. I've always preferred the ergonomics of a keyboard or my favorite PC gamepad to whatever weirdness the original manufacturer has invented for their input device (Dreamcast, I'm specifically looking at you and your RSI-triggering monstrosity of a controller).
And more importantly, emulation has historically allowed the technical limitations of the day to be lifted behind the scenes--from removing the flicker of NES sprite rendering (then restoring it, for the diehards) to the addition of mip-mapping and texture filtering on the PS2. My favorite, of course, is the gorgeous pixel-art enhancement of the Super 2xSaI algorithm. If you ever forget how well-crafted the peak of 16-bit gaming could be, play the first few rainy minutes of A Link to the Past in high resolution through a modern emulator. I think if you look at something like Pixeljunk Shooter, it's an unmistakable tribute not just to 2D gaming, but to the advances that were first made in emulation, now brought back into the fold.
Which brings us back to Shadow of the Colossus and the poor, palsied PS2. As one of those games that'll get name-checked for years to come, and with the PS3 dropping backwards compatibility, emulation may end up a real blessing in disguise for SotC--new players will get the benefit of its stunning art and sound design, but without the crappy rendering. It's just too bad it takes such a monster of a system--a colossus, if you will--to do it, but that problem will solve itself over time. To be honest, I'm almost a little envious.
So, you're thinking about deleting your Facebook account. Good for you and your crafty sense of civil libertarianism! But where will you find a replacement for its omnipresent life-streaming functionality? It's too bad that there isn't a turnkey self-publishing solution available to you.
I kid, of course, as a Cranky Old Internet Personality. But it's been obvious to me, for about a year now, that Facebook's been heading for the same mental niche as blogging. Of course, they're doing so by way of imitating Twitter, which is itself basically blogging for people who are frightened by large text boxes. The activity stream is just an RSS aggregator--one that only works for Facebook accounts. Both services are essentially taking the foundational elements of a blog--a CMS, a feed, a simple form of trackbacks and commenting--and turning them into something that Grandma can use. And all you have to do is let them harvest and monetize your data any way they can, in increasingly invasive ways.
Now, that aspect of Facebook has never particularly bothered me, since I've got an Internet shadow the size of Wyoming anyway, and (more importantly) because I've largely kept control of it on my own terms. There's not really anything on Facebook that isn't already public on Mile Zero or my portfolio site. Facebook's sneaky descent into opt-out publicity mode didn't exactly surprise me, either: what did you expect from a site that was both free to users and simultaneously an obvious, massive infrastructure expense? You'd have to be pretty oblivious to think they weren't going to exploit their users when the time came to find an actual business model--oblivious, or Chris Anderson. But I repeat myself.
That said, I can understand why people are upset about Facebook, since most probably don't think that carefully about the service's agenda, and were mainly joining to keep in touch with their friends. The entry price also probably helped to disarm them: "free" has a way of short-circuiting a person's critical thought process. Anderson was right about that, at least, even if he didn't follow the next logical step: the first people to take advantage of a psychological exploit are the scammers and con artists. And when the exploit involves something abstract (like privacy) instead of something concrete (like money), it becomes a lot easier for the scam to justify itself, both to its victims and its perpetrators.
Researcher danah boyd has written extensively about privacy and social networking, and she's observed something interesting about privacy, something that maybe only became obvious when it was scaled up to Internet sizes: our concept of privacy is not so much about specific bits of data or territory, but our control over the situations involving it. In "Privacy and Publicity in the Context of Big Data" she writes:
It's about a collective understanding of a social situation's boundaries and knowing how to operate within them. In other words, it's about having control over a situation. It's about understanding the audience and knowing how far information will flow. It's about trusting the people, the situation, and the context. People seek privacy so that they can make themselves vulnerable in order to gain something: personal support, knowledge, friendship, etc.

This is why it's mistaken to claim that "our conception of privacy has changed" in the Internet age. Private information has always been shared out with relative indiscretion: how else would people hold their Jell-o parties or whatever else they did back in the olden days of our collective nostalgia? Those addresses and invitations weren't going to spread themselves. The difference is that those people had a reasonable expectation of the context in which their personal information would be shared: that it would be confined to their friends, that it would be used for a specific purpose, and that what was said there would confine itself--mostly--to the social circle being invited.

People feel as though their privacy has been violated when their expectations are shattered. This classically happens when a person shares something that wasn't meant to be shared. This is what makes trust an essential part of privacy. People trust each other to maintain the collectively understood sense of privacy and they feel violated when their friends share things that weren't meant to be shared.
Understanding the context is not just about understanding the audience. It's also about understanding the environment. Just as people trust each other, they also trust the physical setting. And they blame the architecture when they feel as though they were duped. Consider the phrase "these walls have ears" which dates back to at least Chaucer. The phrase highlights how people blame the architecture when it obscures their ability to properly interpret a context.
Consider this in light of grumblings about Facebook's approach to privacy. The core privacy challenge is that people believe that they understand the context in which they are operating; they get upset when they feel as though the context has been destabilized. They get upset and blame the technology.
Facebook's problem isn't just that the scale of a "slip of the tongue" has been magnified exponentially. It's also that they keep shifting the context. One day, a user might assume that the joke group they joined ("1 Million Readers Against Footnotes") will only be shared with their friends, and the next day it's been published by default to everyone's newsfeed. If you now imagine that the personal tidbit in question was something politically- or personally-sensitive, such as a discussion board for dissidents or marginalized groups, it's easy to see how discomforting that would be. People like me who started with the implicit assumption that Facebook wasn't secure (and the privilege to find alternatives) are fine, but those who looked to it as a safe space or a support network feel betrayed. And rightfully so.
So now that programmers are looking at replacing Facebook with a decentralized solution, like the Diaspora project, I think there's a real chance that they're missing the point. These projects tend to focus on the channels and the hosting: Diaspora, for example, wants to build Seeds and encrypt communication between them using PGP, as if we were all spies in a National Treasure movie or something. Not to mention that it's pretty funny when the "decentralized" alternative to Facebook ends up putting everyone on the same server-based CMS. Meanwhile, the most important part of social networks is not their foolproof security or their clean design--if it were, nobody would have ever used MySpace or Twitter. No, the key is their ability to construct context via user relationships.
Here's my not-so-radical idea: instead of trying to reinvent the Facebook wheel from scratch, why not create this as a social filter plugin (or even better, a standard service on sites like Posterous and Tumblr) for all the major publishing platforms? Base it off RSS with some form of secure authentication (OpenID would seem a natural fit), coupled with some dead-simple aggregation services and an easy migration path (OPML), and let a thousand interoperable flowers bloom. Facebook's been stealing inspiration from blogging for long enough now. Instead of creating a complicated open-source clone, let's improve the platforms we've already got--the ones that really give power back to individuals.
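To show how little magic is involved, here's a rough sketch of the aggregation half of that idea in plain Java: pull a handful of friends' feeds, merge the items, and sort them into a single reverse-chronological stream. The feed URLs are made up, there's no authentication layer here (that's the piece OpenID or signed feed URLs would have to supply), and it assumes well-formed RSS 2.0 with a title, link, and pubDate on every item.

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import java.net.URL;
import java.text.SimpleDateFormat;
import java.util.*;

public class FriendStream {
    // One entry in the merged timeline.
    static class Entry {
        final String feed, title, link;
        final Date published;
        Entry(String feed, String title, String link, Date published) {
            this.feed = feed; this.title = title; this.link = link; this.published = published;
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical feed URLs; in the scheme sketched above, these would be
        // per-friend feeds gated behind some authentication step.
        String[] feeds = {
            "https://example.com/alice/rss",
            "https://example.com/bob/rss",
        };

        // RSS <pubDate> uses RFC 822 dates with numeric time zones.
        SimpleDateFormat rfc822 =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z", Locale.US);

        List<Entry> stream = new ArrayList<>();
        for (String feedUrl : feeds) {
            // Assumes well-formed RSS 2.0; a real aggregator would cope with Atom,
            // missing elements, and flaky servers.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new URL(feedUrl).openStream());
            NodeList items = doc.getElementsByTagName("item");
            for (int i = 0; i < items.getLength(); i++) {
                Element item = (Element) items.item(i);
                stream.add(new Entry(
                        feedUrl,
                        item.getElementsByTagName("title").item(0).getTextContent(),
                        item.getElementsByTagName("link").item(0).getTextContent(),
                        rfc822.parse(item.getElementsByTagName("pubDate")
                                .item(0).getTextContent())));
            }
        }

        // Newest first: that's the whole "news feed," minus the walled garden.
        stream.sort(Comparator.comparing((Entry e) -> e.published).reversed());
        for (Entry e : stream) {
            System.out.printf("%tF  %s  (%s)%n", e.published, e.title, e.link);
        }
    }
}
```

Everything else--the follow lists, the OPML import and export, the authentication handshake--is plumbing that existing blogging platforms already know how to do.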
Clearly shot on a total shoestring, but no less adorable for it.
Dear Valued Customer,

We hope you are enjoying your Smartphone! We appreciate and value your business and want to be sure you are aware of a change we've made to your account to ensure you have the best possible experience with unlimited data usage in the United States.
Smartphones are made for data consumption--surfing the web, social networking, email and more. That's why we require a Smartphone data plan in conjunction with our Smartphones. This ensures that customers with data intensive devices are not unpleasantly surprised with high data pay-per-use charges--just one low, predictable, flat rate for unlimited use each month.
For whatever reason, our records indicate your Smartphone does not have the correct data plan. As a courtesy, we've added the minimum Smartphone data plan for you.
Thank you for being an AT&T customer. We look forward to continuing to provide you with a great Smartphone experience.
Sincerely,
AT&T
Dear AT&T,
Thank you for your charming explanation of "Smartphones" and their associated data usage (I don't think the capital S is AP style, though--mind if I drop it?). Despite your carefully-worded letter, I must admit to some confusion: after all, use of my current smartphone has not resulted in any substantial data charges (that would be odd, considering I was on an "unlimited" data plan). Nor has the change from a Nokia phone to a touchscreen Android device resulted in a noticeable increase in data use--your own web site consistently placed my bandwidth consumption at around 100MB/month.
Which is why it surprised me to see that you had "upgraded" me from said "Unlimited" plan to a new "Smartphone" plan, which does not seem to offer any actual advantages to me over the old plan, unless you count the ability to pay you an additional $15 per month (perhaps you do). As a courtesy, I have moved myself to another carrier. I hope you are enjoying the carefree sensation of having one fewer customer!
Can we speak frankly, AT&T? I've been meaning to do this for a while anyway. After you cooperated in the warrantless wiretapping of American citizens ("As a courtesy, we are secretly recording your phone calls, traitor..."), it was difficult to justify doing business with you. But the organization of the American wireless industry, even after number porting legislation, is powerfully aligned with keeping customers right where they are, both technologically and contractually.
Consider: in this country, we have two incompatible radio standards (CDMA and GSM) split between four major carriers, each using a largely incompatible portion of the radio spectrum. Even on the GSM carriers, where the technology allows people to separate their number from a specific phone without your "help," the frequency differences mean they'll lose 3G service if they switch. The result is that moving carriers, for most people, also means buying a completely new phone for no good reason. Why, it's almost as though you all have conspired to limit our choices on purpose! ("As a courtesy, we have created an elaborate and wasteful system of hidden surcharges for switching service...")
And your industry's business models--well, I don't think you're even pretending those are customer-friendly, do you? Charging customers with unlocked phones the same premium as people with subsidized hardware? Long contracts and costly early termination fees? Text-messaging plans? This business with your capital-S-Smartphone plans is simply the latest effort from a wireless industry fighting desperately to be more than just a data-pipe provider, just like the ISPs. It's AOL all over again, AT&T, and it's inevitable. I can see why you're trying to squeeze your customers while you can, but it doesn't mean I have to be a part of it.
I mean, I'm not endorsing anyone, but there is at least one carrier who's starting to get it. They're offering month-to-month plans with no contract, and discounts for people who bring their own phones (or, more accurately, they're not charging for unsubsidized hardware). They're GSM, so subscribers can buy phones from anywhere--you know, like the rest of the world. And hey, they sold me an unlimited data plan (with unlimited text messages included, no less!) for the same price I was paying you before you "corrected" my data plan. It's still not perfect--it's the cell industry, after all, and frankly I'd socialize the lot of you in a heartbeat--but it's a damn sight closer to sanity.
In any case, I don't want to sound bitter. Thanks for letting me know about the change you've made to my ex-account. Good luck with that.
Sincerely,
Thomas
"Frank is a funkasaurus rex. Frank has a profile on eharmony.com if any of you single ladies out there are into puppet dinosaurs with sweet dance moves."
So the publishers are moving the e-book industry to the so-called agency model. I resent having to care about this, but since I own a Kindle and the new model will boost the price of digital "hardcovers" by at least a couple of bucks, I feel like I've been dragged into it.
The argument for agency pricing (in which the publishers, and not the retailers, set the prices for their books) primarily concerns preserving the premium for new releases. This premium, and not production costs, is why hardcovers are traditionally priced so much higher than paperbacks (they don't actually cost much more to produce). It's a kind of early adopter tax on avid readers, and the extra profits on bestselling hardbacks go to subsidize all the other books, most of which lose money. The irony is that under agency pricing, publishers are actually making less money in the short term, because they're not getting as much as they were when Amazon was selling titles at a loss.
Customers have been promised that this will all work out better for them in the end, because agency pricing also means that publishers can drop the price of older e-books, similar to the way paperbacks work, instead of keeping them all at a uniform $10. Amazon was also supposed to work this way, and I believe it sometimes did, but I'm not really sure when the change was supposed to happen, and often e-books didn't drop in price when the paperback version hit shelves. That said, I think a lot of people are distrustful that the older book discount is actually going to happen, and with good reason: publishers have historically looked at e-book markets as an opportunity to gouge readers, and it's not at all clear that they won't continue to do so.
Take, for example, Karin Lowachee's Warchild. I started looking at Lowachee's books after her newest, The Gaslight Dogs, got a favorable review on Tor.com last week. Warchild is the first of a three-part series, and was originally written in April of 2002, making it more than eight years old now. Hachette, the publisher, wants readers to pay $11--more than many "brand-new" e-books!--to download it. No doubt they consider this fair: Hachette offers Lowachee's books in a $22(!) mass market paperback format, making the e-book a 50% discount. From their perspective, maybe that seems like a good deal. But to the average reader, that's insane. There's no way I'm going to pay $11 for a pulp sci-fi book that's almost a decade old, any more than I'd pay more than $20 for it on cheap paper. Those are the kinds of prices that send me running to the used bookstore or the library--and then Hachette (and more importantly, Lowachee) gets nothing.
Which pains me, because I like giving money to authors. That's one of my favorite parts of the e-book market: I can give my money to authors without feeling guilty about destroying countless trees for books that I'll read once and never touch again. I buy a lot of books--many more, in fact, now that eco-guilt is out of the equation. And given time to adjust, as I've said, I don't see a real problem with paying a bit more for a just-published e-book. To give one example: although Ian Tregillis's Bitter Seeds will set me back $13, it sounds like a neat book and I don't want to wait for a paperback, so going over the $10 Amazon price point isn't the worst thing in the world. On the other hand, if it costs that much 8 years from now, I'll be considerably less sanguine about it.
I don't know that much about publishing models, so I'm not going to lecture about the costs of production and all that--I'm told those are minimal anyway--but nothing's going to kill consumer interest in e-books faster than a pricing scheme that regularly makes them more expensive than their paper equivalents. And if publishers wonder why people tend to eye them with distrust when they insist that they'll price older titles fairly, they should probably take another glance through their own back catalogs.
At some level this is a conversation about what kind of book market we want to have, and what it's possible to preserve. The existing publishing system is not set up to maximize profit on any given title. It leverages blockbuster titles to pay for the writing and editing of a diverse range of smaller, less popular books. I may find Dan Brown and JK Rowling personally repugnant, but their omnipresence makes possible the publication of obscure personal favorites like, say, Joseph Schloss's Foundation: B-Boys, B-Girls, and Hip-Hop Culture in New York.
Assume that we agree, as a society, that diversity of publication is a good thing. Does the agency model preserve it, or does it simply allow an inefficient system to perpetuate itself digitally? More to the point, does the move to cheaper digital pricing necessarily mean margins too thin for niche books to exist? Or is it possible for independent writers and publishers to leverage the new platform? Is the desired model that of the music industry, the movie studios, or something else entirely (Netflix and subscription services, maybe)?
I don't have pat answers for those questions, but I hope they figure it out soon. I'm getting exhausted by the rollercoaster of this debate, and I can't be the only one. It'd be nice if they got this sorted out before they alienate the Americans who still read, but given the print industry's general track record, we probably shouldn't get our hopes up.