
April 14, 2009

Filed under: tech»web

All Relative

I have a solution for all the things that drive me crazy about HTML. At least, I think it's a decent solution. The bad news is that there's no chance whatsoever that anything like it would be implemented.

To restate the problem: HTML is a bad way of building an interface. Even discounting incompatibilities and rendering differences between browsers--and even well-behaved browsers can render differently--it's incredibly inefficient. I had another horror-show experience with it while working on another budget package for CQ. Normally, at the bottom of that page, there's a footer with CQ's information. Unfortunately, for simplicity's sake, I positioned the central content pane using "position: absolute," which is apparently a big mistake. It made the footer float up underneath the navigation menu and behind the content.

There is, no doubt, a solution for this, probably involving some combination of float, clear, and JavaScript. But it's missing the point. The fact that a footer even needs a clever solution--that there is, in fact, a 'sticky footer that just works'--is insane. Apparently they plan to solve this problem in HTML version 5 with new tags for headers and footers, which is well-intentioned but strikes me as bolting new and horrible combinations onto an already terrifying tag soup.

As is, I'm inclined to rely on either tables or Javascript for placing elements on the page. The former is crufty, and the latter is practically unmaintainable, but they do have the advantage of letting me lay out a page spatially in relation to its component elements. Javascript is particularly tempting, in fact: using tools like jQuery, I can easily find the various parts of a page and then reposition them in relation to each other. Sure, it'll probably break when the page resizes, on small screens, and when confronted with non-dynamic HTML--but it's so much easier to move elements around that I almost don't care.

What we really need is a new model, one that combines the flexibility of CSS-based layout with the relational/visual model of tables and dynamic layout. Basically, when I'm putting a page together, I don't want to think about it in terms of elements floating around. I want to be able to say that this is 3em to the right of that (which is 1em below the other), containing a column of these. I want to be able to say that the footer is simply always below my content, instead of tricking the browser into setting that up for me. I want to be able to line elements up with other elements, or simply say that one should be next to another. Something like:

#container {
    max-width: 50em;
    margin: auto;
}

#header {
    top: 2em;
}

#content {
    position: relational(#header) below;
    top: 1em;
    left: 0em;
}

#sidebar {
    position: relational(#content) right;
    top: 0em;
    left: 3em;
}

#footer {
    position: relational(#content) below;
    top: 3em;
}
There's even a model that I think we could use for this: Swing. For all the criticism heaped on Java's APIs (deservedly so, in many cases), Swing makes a lot of sense to me for building simple, reflowable layouts. My understanding is that XUL and XAML also implement similar container/layout strategies, which is also fine by me: anything's better than what we've got now, right?

I don't have a problem with new tags for semantic purposes. Being able to specify that something is an aside/nav/article element makes a lot of sense from an accessibility perspective, and that's important. But a single-minded focus on semantics at the expense of design and thin-client programming ignores a great deal of where the web has been going--and it imposes a top-down model on accessibility efforts that probably won't be able to keep up with innovation. I can't help thinking that we'd be better off with a global accessibility attribute that can be extended easily, while paying more attention to the glaring deficiencies in HTML as a presentation language.

February 13, 2009

Filed under: tech»web

The Browser Is The New X11

Since I'm not a GMail user, I didn't know about the new button design that Google implemented until I saw a reference to this post by their designer. As an exercise in HTML and CSS trickery, it's pretty impressive. As a look into the sausage-making for serious web application design, it fills me with abject horror.

To summarize: Google apparently wanted a button that would A) use no images for its appearance, and B) allow new kinds of interaction to go with their labeling functionality. In order to do this, they ended up creating a set of custom CSS classes applied to six nested DIV tags, as well as a large chunk of JavaScript, I'm sure. It's very clever, but it also makes me wonder just how far we're pushing HTML beyond where it was meant to go--and how much it's holding us back.

I've been writing HTML code, off and on, for more than ten years now, and I have hated every minute of it. I don't claim to be very good at it, of course, so maybe that's the problem. But it's always struck me as a technology stuck awkwardly between two worlds. On the one hand, a platform- and display-agnostic representation of textual content. On the other, the desire to build attractive, well-designed presentation layouts, including application interfaces. These are not, I think, entirely compatible with each other. Google's new buttons illustrate that tension, as they torturously beat text layout elements into paintbrushes (ones that will display across all the various browser quirks, no less).

There are new options to alleviate that, of course: the Canvas element gives Flash-like drawing tools to JavaScript and HTML developers (once it's more widely available--figure another couple years, at the rate things are going). And toolkits--jQuery UI, GWT, Dojo--have proliferated. But at some point, I think we have to stop, look at this situation, and realize that we're bludgeoning these tools into doing things that they fundamentally were not meant to do. As interface design languages go, HTML is terrible. I have a hard enough time getting a decent text layout to work reliably, much less trying to build interactive custom controls for anything more complicated than a basic form.

The interesting thing about computing trends, though, is that they're cyclical. Displaying information through a clumsy "semantic" thin client/fat server relationship? We've been there, and then most of us ran away to something better as fast as we possibly could. It's only a matter of time, I suspect, before something replaces HTML for doing UI (while maintaining the lessons we've learned about REST and open communication standards), and at that point we will look back in horror, and wonder how we managed for so long.

If you want to see how this is going to go, consider Twitter. All the basic functionality is available from the website, but almost no one I know uses it if they can help it. Invariably, you get a better experience from one of the clients (I use Twhirl and TinyTwitter for desktop and mobile, respectively). They're more responsive, they take advantage of the native platform, and they don't require a browser to be open all day long. The communication with the server is still standard Web 2.0, but Twitter developers have largely abandoned the idea that they need to muck around with HTML, and everyone's better for it.

For the last few years, tech pundits have repeatedly predicted that the browser will take over the space currently occupied by the operating system: via solutions like Google Gears or Prism, or a custom shell like gOS, you'll run everything over the network via HTML/JS/CSS. It's failed to happen so far, and it'll continue to fail. The closer the browser comes to "real" applications, as with GMail, the more its shortcomings become apparent, and the more developers have to rebuild basic functionality in a system that's just not meant to handle it.

If anything, the future is in the middle ground: a heavyweight, native or bytecode platform that can be run in a distributed fashion over the web, splitting application programming back out from the browser and providing a real API for developers to leverage. Such a format addresses the weaknesses of modern binary applications, such as heavy installation and slow startup, without abandoning the advantages that a real operating system provides. AIR, Silverlight, and XUL (as well as Java, to a much lesser extent) are possibilities that could achieve this. HTML, if we're lucky, is not.

February 5, 2009

Filed under: tech»mobile

make -widget

When people talk about scripting on S60 (and you know what a hot topic that is), they usually mean Python. But there are actually two scripting engines on the platform: in addition to PyS60, there's also a Webkit-based widget engine for building apps in Javascript with some hooks into device functionality. Widgets have to be compiled into a .zip-based format, though, which means that (for me, at least) the whole appeal of a scripting language--the ability to cobble together quick solutions on the fly--is basically lost.

That said, I do like Javascript, particularly as a "scratch pad" kind of language. And the .wgz files used to install these apps can be created on a phone, using the Zip Manager program and a decent file manager. It's just a pain, during each "recompile," to delete the old .wgz, create a new .zip, add the project files, close it, change the extension, and run it again. Exactly the kind of thing, actually, that Python could easily automate.

In that spirit, wgzPackager is a kind of compiler script for generating Web Runtime widgets on S60. It takes care of archiving (hopefully with the right structure), renaming, and even runs the installer file for you. This should take a lot of the pain out of creating widgets on the phone itself. It's not flashy, but it's fast and can be run repeatedly with minimum hassle--I thought about adding a GUI, but for the intended audience, streamlined execution is probably a lot more important.

To set it up, you'll need to create a directory containing the widget project as detailed in the WRT development guide, including an info.plist XML manifest for the application. Next, edit the wgzPackager script to point it toward this directory: the relevant variables are dirString and wgzString, with the former being the directory above your project, and the latter being the project name itself. So, for example, I keep my projects in 'e:\\documents\\wrtdev\\' (the backslashes are doubled because they have to be escaped), with a subfolder of 'AppName' for each application.
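For reference, here's a minimal sketch of what a packager along those lines might look like. This is not the actual wgzPackager source: the dirString and wgzString names just mirror the description above, the archive layout (an AppName folder at the root of the zip) is my reading of the WRT guide, and handing the finished .wgz to the installer through appuifw.Content_handler is my assumption about how that last step works on S60.

import os
import zipfile

dirString = 'e:\\documents\\wrtdev\\'  # directory above the widget project
wgzString = 'AppName'                  # project folder (and widget) name

def add_tree(archive, top, base):
    # Add every file under 'top', keeping paths relative to 'base' so the
    # archive contains an AppName/ folder at its root, as WRT expects.
    for name in os.listdir(top):
        path = os.path.join(top, name)
        if os.path.isdir(path):
            add_tree(archive, path, base)
        else:
            archive.write(path, path[len(base):].replace('\\', '/'))

def package():
    target = os.path.join(dirString, wgzString + '.wgz')
    if os.path.exists(target):
        os.remove(target)  # throw away the previous "build"
    archive = zipfile.ZipFile(target, 'w')
    add_tree(archive, os.path.join(dirString, wgzString), dirString)
    archive.close()
    return target

if __name__ == '__main__':
    wgz = package()
    try:
        import appuifw  # only present on S60
        appuifw.Content_handler().open(wgz)  # assumption: launches the installer
    except ImportError:
        print 'wrote', wgz

Compression and error handling are left out for brevity; the point is just that the delete/zip/rename/launch cycle collapses into a single script call.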

I hope somebody finds this useful, but to be honest, I'm ambivalent about widget programming on mobile. It still strikes me as too limited, too slow, and too resource-hungry to be enjoyable. On the other hand, I recently started looking through Symbian C++, and that's a nightmare: oddities like two-phase constructors and the weird cleanup stack mean that it's hardly in a good position to foster developer interest in the platform. And since, as far as I can tell, S60 is the smartphone platform with the greatest amount of flexibility and software freedom, having good developer support is important. I'm hoping that Nokia's upcoming port of Qt for S60 development will change things, by eliminating a lot of the weirdness and giving it a strong foundation for moving forward. In the meantime, if Python's not your thing, I guess widgets are the way to go.

January 26, 2009

Filed under: tech»mobile

Mobile Site Machinations

Nokia's Mobile Web Server (a port of Apache paired with a dynamic DNS service) is one of those "because it's there" kind of things. Not only does that seem to be largely why they built it, simply to prove that it could be done, but that's the spirit in which I've installed it. Do I have any good reason to run a website from my phone?

In fact, this is probably the wrong question. The point of the MWS isn't just to drive a website, although that's pretty cool. It's also to open the phone up to the lingua franca of Web 2.0: HTTP requests. There's a whole world of information services out there standardized on exchanging GET and POST over the Internet, but it can't usually talk to a phone without going through a proxy of some kind. The Mobile Web Server changes that--take a look at the services exposed through a REST API to see what I mean. In this scenario, your mobile presence isn't a limited client to the cloud--it's a full-fledged node of its own, capable of being directly queried and extended with new capabilities.

In developing countries, it should be said, we're seeing applications that work the other way: computers are learning to communicate via SMS with phone users, or to transfer data over the SMS channel instead of using data packets. Mobile applications in the developed world have been forced to either adopt this method of communication (like the SMS functionality on Twitter, or Google's short code) or distribute special code as discrete programs for communicating--the mobile Facebook and Google Maps applications share a common method of talking to servers, but they install as completely separate binaries and can't talk to each other. This is, frankly, a pointless duplication of code, as well as going completely against the spirit of mashups that makes mobile access so intriguing.

That said, there are no doubt interesting pages that could be served from a phone, and good reasons to do so. It would certainly be a workable option for people with my paranoid fear of centralized coordination, if you could find a way to create lots of routes to the dynamic endpoint (or to let people know the IP address when the phone goes online). For notification purposes or short announcements, RSS is more than sufficient: it's versatile, lightweight, likely to be available on a lot of phones in the near future, and (as Daniel O'Brien notes in one of the essays I linked yesterday) reliable enough. People don't need 99.999% reliability, O'Brien correctly points out, and RSS readers will just keep polling until a site comes back up. Frankly, for dissidents, always-on hosting might even be a disadvantage.

Another trend lately has been using phones for data collection, taking advantage of the cameras, accelerometers, and GPS/location sensors built into the devices. Nokia's version of Apache gets access to all the APIs normally available to Python, including SQL databases and most of the hardware. The idea of making that data accessible via a web interface--or indeed, being able to interact with it and control it live--is pretty powerful. But more importantly, building that interface through the web server means that it's capable of being a self-contained presentation tool for the data it gathers. The phone may not be able to run the latest version of Flash or compile Javascript at full speed, but it could certainly host viewers in those languages for clients connecting over the network for a portable, no-install data-crunching session.

And finally, of course, there's the possibility of simply hosting a standard website on the phone. I can't imagine running Mile Zero this way, but there are some kinds of content that would be tempting just because of the accessibility of the platform--voice/audio posts, perhaps, or a constantly-updated picture blog. If nothing else, there's the "because it's there." Why host a web page from my phone? Well, why not?

January 16, 2009

Filed under: tech»mobile

Palmed Off

This week, Palm announced a flashy new cell phone called the Pre. It's based on Linux, and runs its applications as HTML/Javascript with hooks into deeper APIs (similar to Nokia's Web Runtime). Unspoken, this is Palm's announcement that they're finally driving a stake through PalmOS, the operating system that made their name (but which, for licensing reasons, they don't actually own anymore). And it's about time. For years, PalmOS has been the Mobile Thing that Would Not Die, and its ab-life hasn't done anyone any favors.

I should qualify that statement by saying that there are a lot of points in PalmOS's favor: as someone who got started programming C for the platform, I have fond memories of versions 3 and 4. Indeed, at the time, PalmOS was well-designed for the challenges it faced. Unlike Windows CE (which ports a subset of the Win32 API to embedded platforms), it was meant specifically to run on very low-end (read: cheap) hardware while still responding quickly. It did this through an event-driven application model that put the processor to sleep as often as possible, as well as a GUI that faked multitasking by having programs remember their state and relaunch in the same place. API calls were, as far as I remember, pretty much just bare pointers to functions in ROM, and you could patch the system traps with your own code to extend OS functionality, like fixing the anemic clipboard or adding pull-down menus.

Much like the Newton, PalmOS didn't have "files" in the traditional sense. All data on the device, including applications, was stored as records inside databases. Inside applications, GUI layouts and components were stored as entries alongside records containing binary code, making it an interesting platform to hack. On both platforms, it soon became obvious that while this is an interesting idea that provides some advantages, people have a lot of files that don't naturally fit into database records, and flat file support was bolted on. Unlike the Newton, Palm didn't try to implement full handwriting recognition, which is troublesome on mobile platforms for several reasons: the device is too small to write comfortably, parsing handwriting requires a lot of processing power, and it invariably fails in the face of chicken scrawl. Instead, they went with Graffiti, a set of simplified letter shapes users had to learn--this sounds cumbersome, but it worked surprisingly well. To this day, I can still write in Graffiti almost as fast as I can scribble longhand.

Palm made choices, in other words, aimed directly at creating a device for quick data lookup and entry. It was never meant as a multi-use mobile computer. And as time moved on, and users began to expect more out of mobile, that became increasingly obvious. The problem Palm faced was two-fold: it didn't want to leave behind the tens of thousands of useful programs that were the platform's greatest strength, but the underlying framework had been stretched to the point of breaking.

The solution they created, comically, was to write a modern OS devoted almost entirely to emulating the old operating system. The PalmOS devices that you buy in a store nowadays do just this: programs are compiled for the Dragonball processor (with possible chunks of the binary in native ARM code), then run in a virtual machine with holes poked into it for networking and graphics. It's actually kind of brilliant, in a twisted way--like Java, if Java had been designed by old Apple hardware nuts instead of computer science professors. They've even made a VM for Nokia's Internet tablets, and another company has ported emulators to Windows Mobile, Symbian, and OS X Mobile, raising the very real possibility that classic PalmOS could actually become a de facto interplatform standard (a task for which, honestly, it's not badly suited--or, at least, could hardly be worse than Java's MIDP/CLDC mess).

That said, it's not a great basis for moving into richer mobile software. Even speaking as someone with great affection for PalmOS who finds its emulated incarnation ingenious (or at least highly amusing), the cracks in its foundation have gotten pretty blatant, so that tearing it up and starting again was probably the only way to go. Maybe someone could write an emulator in Javascript--it's probably possible, at this point, and for sentimental reasons I'd love to see it. It's a little sad to think that the PalmOS where I learned a lot of skills will finally vanish. But sentiment's not a good enough reason to halt progress. You'll be fondly remembered, PalmOS, but you won't be missed.

January 11, 2009

Filed under: tech»lenovo

Not So Turbo

My Thinkpad laptop has a component in it that's called Intel Turbo Memory. It's a chunk of non-volatile flash RAM that's supposed to act as a cache for the hard drive, speeding up access and lowering energy use. And it is the single biggest cause of instability on the entire machine.

When I first got the laptop, it had conflicts with the Thinkpad's active protection system, which parks the hard drive when it detects movement. So moving the T61--not something unheard of when using a mobile system--would make everything stall out as the hard drive simply refused to unprotect itself. After a few revisions, Intel and Lenovo said they had a better version. Now it didn't hang during movement--but it didn't always sleep properly, and for some reason it would wake up at 1am to reboot itself. I'm pretty sure that this was the Turbo Memory's fault, since everything ran fine after disabling it. I could be wrong, though.

Every couple of months, Intel would drop a new driver, and I'd install it to see if I could get the benefits they promised me. It's never worked. The most recent version behaves fine unless I plug the laptop back into the dock, at which point it bluescreens.

For most people, this would probably be acceptable, since most people probably don't buy docking stations for their personal laptops. But I got used to multiple screens at the Bank, and I love the dock. It's the best part of owning a business-class laptop--being able to just slot it in and instantly get multiple displays and all of my USB gear enabled, that's a real timesaver. So anything that even thinks about interfering with docking doesn't last long on my system. I didn't just uninstall the drivers this time, I also disabled the hardware in the Device Manager, and I've got no plans to re-enable it.

Now, I've got no solutions, and really no grand observations to make here, although Anandtech's findings of almost no battery life increase do make me feel both better and worse simultaneously (because I've clearly been ripped off, but at least I'm not missing out on anything). I am amazed that something so blatantly snake oil made its way into a very expensive, high-end laptop, but that's capitalism for you. It's not like I'm really that upset, though--the problem's been easy to solve, and while I'm technically out $50 for the RAM, that's pretty insignificant compared to the rest of the sticker price. I'm just griping.

But while I don't have a lot of daily readers here, I do get a fair amount of very targeted traffic on niche issues like this. So I just thought I'd put this out there, in case anyone else is looking for a Thinkpad and wondering if the outlay's worth it for the Turbo Memory: it's not. Unless you really like turning things off. In that case, knock yourself out.

December 10, 2008

Filed under: tech»coding

DOM and Gloom

Against my better judgement, I've started working with JavaScript at work. It's a huge mistake, because learning new technical skills in a department that's weak in them is suspiciously like forming a debilitating drug habit: first you're just creating some tabs, then you work out how to stripe alternating rows in financial tables, and before you know it you've been assigned to build a searchable front-end for the video database. I specifically wanted to not be a programmer when I went into higher education, and look where that got me. But there doesn't seem to be any way around it, particularly after learning Flash--and Flash was itself unavoidable, since I strongly believe in using the right medium for the message, and CQ's messages happen to be very data-heavy and in need of visualization.

In any case, I'm conflicted. I actually really like JavaScript solely as a language. When I go to write something quickly in Visual Basic or Python now, I find myself missing features like dynamic objects (where objects can add properties and methods during runtime), long lambda functions (which are created in the middle of other code, for things like callbacks), and first-class functions (not everything has to be wrapped in a class, and functions can be passed around as objects). I feel like I finally get why people are so enthusiastic about LISP and Scheme, where a lot of these features originated: it makes for a very expressive, flexible language.

But using JavaScript means writing for the web browser, which means writing for the Document Object Model (DOM), which is a dreadful affair. I apologize for writing that, since it's rehashing something that everyone knows, but it's absolutely true. Even using a library like JQuery, thus eliminating some of the frustration of cross-browser incompatibility, cannot make this enjoyable. It's all the pain of writing a web page, compounded by rewriting it dynamically through a set of clumsy, half-blind interfaces, hopefully without breaking anything beyond recognition. JQuery doesn't solve that problem. It makes it slightly less annoying, at the cost of forcing coders to type "$()" about a million times, at which point I might as well be using LISP anyway.

In order to see how nice JavaScript could be, I think you have to write something in ActionScript, where an actual API library is provided for things like UI elements, file access, events, and graphics. Freed from having to use the HTML DOM as a layer for presentation and interaction (something for which it was arguably never designed), it's actually possible to get into a rhythm and discover the good parts of JavaScript/ActionScript/ECMAScript. It's a real wake-up call--with the downside that it makes going back to the browser even more depressing.

November 11, 2008

Filed under: tech»mobile

Given Latitude

Here are two Python scripts for getting latitude and longitude position from Google based on GSM cell tower ID numbers:

The first script is a slightly altered version of the script written by Google Maps Internals, with the struct.unpack() code updated (it originally cast the longitude into the wrong type). It hooks into the low-level data format used by the Google Mobile Maps application on S60 and other platforms instead of using JSON. This is very cool from a hack position, but it's obviously also very fragile, since the GMM API is unpublished and changes on Google's end can break it easily (and may have already done so, hence the improper cast).

The proper way to do this, then, is using the second bit of code, which accesses Google's JSON-based services. The primary function in that file, gGSMLocation(), is actually not specific to S60. It takes as its only argument the four components of a GSM cell tower ID in a tuple: the mobile country code, mobile network code, local area code, and cell ID, in that order. It returns the Google JSON object as a string. I left the return format serialized because working with JSON on PyS60 (or, as far as I can tell, any Python) can be a real hassle, and you may have a favorite approach. In Python 2.6 or later (S60 uses 2.2), you should just be able to use json.load() from the standard library.

The second function in the file, gGSM(), is useful only on Nokia phones. It's just a simple wrapper that feeds location.gsm_location() directly into gGSMLocation(), no arguments needed.
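Since the scripts themselves aren't reproduced here, here's a rough sketch of what that JSON-based pair might look like. The endpoint and request format are my assumptions, reconstructed from the Gears-era Google location service (long since retired), and whether urllib2 is present on a given PyS60 install is another assumption--treat this as an outline of the approach, not the actual code.

import urllib2

LOC_URL = 'http://www.google.com/loc/json'  # assumed endpoint; no longer live

def gGSMLocation(gsm):
    # gsm is a tuple of (mobile country code, mobile network code,
    # local area code, cell ID), in that order.
    mcc, mnc, lac, cell_id = gsm
    # Build the request body by hand, since PyS60's Python 2.2 has no
    # json module to serialize it for us.
    body = ('{"version":"1.1.0","cell_towers":[{'
            '"mobile_country_code":%d,"mobile_network_code":%d,'
            '"location_area_code":%d,"cell_id":%d}]}'
            % (mcc, mnc, lac, cell_id))
    request = urllib2.Request(LOC_URL, body,
                              {'Content-Type': 'application/json'})
    # Return the response still serialized, and let the caller pick a parser.
    return urllib2.urlopen(request).read()

def gGSM():
    # Nokia-only convenience wrapper: location.gsm_location() returns the
    # four-part tuple in the order gGSMLocation() expects.
    import location
    return gGSMLocation(location.gsm_location())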

Working on this kind of code is frustrating in equal parts due to Nokia's design decisions and the limitations of Python itself. On Nokia's part, its idiotic code-signing initiative means that loading new libraries (such as JSON or XML parsing) into Python without cracking the signature check is way more trouble than it's worth. If you've broken the signature, you can just copy .py modules to c:\\resource\\, and .pyd modules to c:\\sys\\bin\\, but you can still run into platform security errors.

None of this is helped by the fact that Python is a pretty weird place to work. If you're used to ECMAScript, it's frustrating to work with a scripting language that doesn't allow dynamic object creation or support an alternate dot syntax for dictionary properties the way Javascript does. The implicit declaration of variables, likewise, is kind of jarring--they just pop into existence, compared to a language with a dedicated "var" or type keyword. On the other hand, Python's significant whitespace is almost as annoying as the forest of brackets and parentheses used in Lisp or Objective-C, particularly on a mobile platform where screen space is at a premium.

Python actually has a quirk, I think, in that it's often pitched as a good learning language due (in part) to its lack of punctuation for blocks and line endings. But the flip side to that approach is that advanced code contains underscores (which are used to signal metaprogramming and low-level functions) and weird punctuation in direct proportion to its complexity. I feel like the learning curve goes from flat to nearly vertical the moment those underscores make their appearance, which seems like a serious flaw in a teaching language. Perhaps it's different when you're not teaching yourself.
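To make the underscore point concrete, here's a small, purely hypothetical example: grafting JavaScript-style dot access onto a dictionary, which is about the simplest useful trick that immediately drags in the double-underscore machinery.

class DotDict(dict):
    # A dict that also answers to attribute access, roughly like a
    # JavaScript object. Both hooks below are the "metaprogramming"
    # underscores in question.
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails; fall back to keys.
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Route attribute assignment into the dictionary instead.
        self[name] = value

point = DotDict()
point['lat'] = 38.89  # the usual bracket syntax
point.lon = -77.03    # ...and now the dot syntax works too
print point.lat, point['lon']

Nothing about that code is hard once you know the hooks exist, but it's exactly the kind of thing the beginner-friendly tutorials never prepare you for.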

In any case, I don't know what you might want to do with your latitude and longitude, but I hope people might find it useful. I have a few ideas I'd like to try, like creating a lost-phone-locator over SMS and a geotagged link generator, that could be useful for some work and hobby projects coming up. But these kinds of scripts are useful for all kinds of problems--Ethan Zuckerman recently wrote about farmers in Kenya who are using phones and GSM IDs to track elephant movement and protect their crops from pachyderm predation. You never know what you could do, given a little latitude (and longitude).

October 29, 2008

Filed under: tech»i_slash_o

Touch of Evil

I'm sick of touchscreens.

I'm sick of having to wipe finger grease off the screen, or trying to fend off scratches. When a physical component like a button, switch, or key wears, it lends character. When a display component falls prey to entropy, it becomes a single point of failure that degrades the whole user experience. Not to mention that I'm tired of having to look "around" my own hand to see what's going on.

I'm sick of watching other people use finger-based touchscreens. Everyone adopts the same posture: back hunched, one hand cradling the device, the other hovering over it with fingers splayed out, a single digit prodding the display like an inept fingerpainter. It looks beyond stupid. The only competition in the technology-as-humiliation contest comes from those bluetooth earpieces that make cell phone conversations look like schizophrenia.

But I'm also sick of stylus-based screens. The advantage of a stylus is that it lets you communicate with the machine using the time-honored method of actual writing. Unfortunately, it's invariably used with devices that are way too small to write actual words. They inevitably have to resort to glyph-based alphabet systems (I still have Palm's Graffiti stuck in my head), ask you to cram your writing into a tiny space, or resort to the horror of software keyboards.

What else am I sick of? Oh, right, the loss of tactile feedback. The reason that hardware controls are still highly regarded in the music industry is that there's just something nice about pressing an actual button. I like being able to feel my way over the keypad on my phone, or play a game without having to watch my hands out of the corner of my eye. It's ironic that the sense of touch is completely neglected on interfaces of the same name. And haptic feedback--vibration when using on-screen controls--is no kind of compensation. It's a clever idea that completely fails to feel natural.

I'm sick of the hype around multitouch. Congratulations, you've made the device unusable one-handed, and you've created a solution without a problem. There was a Surface table at the RNC that was showing off a set of multi-touch features. It was one of the most blatantly useless things I'd ever seen, with cues like "put one hand on the table, then move the other to tilt." Likewise, while I understand the general theory of pinching and pulling for zoom on, say, web pages, it'd be a lot less necessary if the browser were smart enough to reflow the page for the screen in the first place.

And most of all, I'm sick of the now-common requests for every new product to be touch-capable, without which capability the chattering classes on the Internet's gadget sites will roundly pan it. The crowning jewel of these is the insistence that the Kindle should have had a touchscreen. Even were it technically feasible, why? Why would you want a touch mechanism on a screen with a .5sec refresh rate? Why would that make it a better experience than the existing controls? It's supposed to simulate the paper reading experience--do you run your filthy paws over every inch of your actual books? What are you, five years old? Get a grip, people.

The real problem, of course, is not with touchscreens. It's that the tech community tends to lock onto these kinds of features, which are hailed as the solution to every problem, and then demand them even when they make no sense at all. Remember when we were all going to be using thin clients? A decade later, Google Docs is still as good as that gets--and it's not very good. Or when Second Life was the future of the digital economy? Haven't heard much about them lately. Or hey, how about municipal WiFi? Java on the desktop? Bluetooth replacing USB?

As bad as general-interest reporting is nowadays--and it is often very, very bad--tech reporting is probably worse, because it is driven so heavily by these kinds of fads. Everyone wants to be a personality, and everyone wants to be the person who's on top of the new, hot thing. It's the hipsterization of tech. But there is reason to think that some ideas that have worked for a long time--like the separation of physical controls from display surfaces--should continue to be respected.

October 7, 2008

Filed under: tech»coding

In Defense of Flash

Flash has a bad rep in the tech press. It resurfaces whenever Adobe updates the player plugin, which is the only part they actually control anymore, or when the discussion turns to Flash content on mobile platforms. And the criticism isn't wholly inaccurate, but it's way out of proportion to what Flash deserves, and to what it has achieved.

You don't have to tell me that Flash has flaws. It's a pain to deep-link, particularly since it has no access to the browser's address bar--doing it requires adding extra Javascript to the page. The plugins on anything other than Win32 can be cumbersome, slow system-hogs. And yes, there are a lot of annoying Flash ads out there (although that's certainly not Adobe's fault).

But all this pales in comparison to what they've accomplished with Flash. If nothing else, look at it pragmatically: in only a few years, using a platform that started out with far less functionality, Flash basically took over the rich Internet application space. Java applets have been left to languish in backwaters like the Facebook photo uploader (the Flash plugin's comparatively quick startup and tiny download size no doubt have a lot to do with it). Javascript/HTML support for video and canvas drawing is still limited--without Flash, the Internet video sea change probably wouldn't have happened.

And in the meantime, the technical aspects really can't be set aside. Flash started out as a tool for doing frame-based animation, and a lot of its features still exist on that basis. But ActionScript, the Javascript variant that runs Flash's interactive components, has gotten more and more sophisticated, especially after it was revamped into a fully object-oriented language with Flash 9. At this point, despite accusations of sluggishness, it's still fast enough that the Tamarin VM Adobe created will be folded into Firefox for executing Javascript, and it competes favorably with the new engines powering Chrome and Safari.

There's a common criticism that the kinds of things done in Flash could also be done in Javascript without the proprietary plugin. But speaking as a web multimedia producer, the problem with that approach is that it would require me to use Javascript, which means diving into the vast sea of browser incompatibilities (or, more likely, relying on a browser framework like jQuery). And even then, support for custom drawing or multimedia is strictly limited. What Flash brings to the table is a reasonably well-written API for performing those kinds of functions, backed up with a reliable level of cross-browser performance. Eventually, browsers may catch up to that. But right now, if I want to do rich interaction with no headaches, my best option is probably Flash (or Silverlight, but it's not widespread enough yet).

Even so: let's pretend that YouTube, decent APIs, and speed aren't important to us. What's Flash got going for it? As XKCD pointed out, it spawned a tiny renaissance in simple, fun, 2D gaming. As a tool for making everything from the sadly-departed Scrabulous to Alien Hominid, Flash provided a slick platform allowing artists and novice designers to experiment and get exposure. It's done for 2D what Duke3D and Doom did for the mod scene. Say what you like about it, if you love retrogaming, Flash deserves some credit for its recent success.
