From The Verge:
Windows 11 is also getting a variety of new AI features, including an AI agent baked into the Windows settings menu; more Click to Do text and image actions; AI editing features for Paint, Photos, and the Snipping Tool; Copilot Vision visual search; improved Windows Search; rich image descriptions for Narrator; AI writing functions in Notepad; and AI actions from within File Explorer. In its detailed blog post Microsoft says the AI features are designed to “make our experiences more intuitive, more accessible, and ultimately more useful.”

I have been using Windows (or MS-DOS, even) for my entire life. It hasn't always been a pleasant experience, but it has generally worked and (as someone who does a fair amount of gaming) ran the specific software that I wanted to run, with a high level of backward compatibility. But over the last few years, it has become clear that Microsoft and I are no longer seeing eye to eye on what my computer should be doing.
I think it should be doing the tasks that I ask for predictably and reliably, and Microsoft thinks that it should be inserting semi-randomized chatbots into every nook and cranny of the system, when it's not taking screenshots of everything I do and running them through OCR. This is in addition to a series of UI tweaks that have made Windows increasingly unusable, like the weirdly centered taskbar or the ads jammed into the Start menu.
So during my holiday break this January, I started taking steps to make 2025 my own personal Year of Linux on the Desktop. I'd already been using Xubuntu to keep an old 2009-era Thinkpad viable, so I knew it could work for my professional tools and workflows, but I'd resisted making the shift on my tower PC until the end of Windows 10 support gave me a deadline to meet.
To make the switch easier, I bought a second hard drive solely for a Fedora installation, keeping the original drive in the machine unchanged. With this setup, I could switch between the two operating systems at boot, gradually moving over to Linux for longer and longer periods, and pulling files off the old drive as necessary. As of this week, I haven't booted back into Windows for a couple of months, and I thought it might be useful to write about what the experience has been like, for anyone considering the same migration.
I picked Fedora since it was often recommended as the "no-nonsense" distribution. I actually tried a few distros, and went through a few reinstalls, before everything was functional at the basic hardware level. In particular, the Nvidia drivers for my GPU (a well-loved GTX 1070) were obnoxious to install and upgrade reliably. Also, partway through the process, my motherboard blew out (I'm assuming for unrelated reasons, probably due to being carted across the Atlantic in a badly-padded suitcase) and had to be replaced, including a new CPU (AMD this time around).
Finally, I needed to manually disable the USB wake functionality for my mouse, which is apparently chattier than Linux likes when it's trying to sleep. This fits my general expectations from prior experience, which was that 95% of my hardware would be fine and 5% would have some screwy but generally surmountable problems (it was certainly miles easier than debugging sleep issues on Windows has been for me).
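For anyone chasing the same symptom, the usual mechanism is the kernel's ACPI wakeup table. A minimal sketch, with XHC0 standing in for whatever USB controller your board actually reports:

    # List the devices currently allowed to wake the machine; names vary by board.
    cat /proc/acpi/wakeup

    # Toggle wake permission for one USB controller (XHC0 is a placeholder).
    # The setting resets at reboot, so a udev rule or boot script makes it stick.
    echo XHC0 | sudo tee /proc/acpi/wakeup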
At the software level, Fedora generally does feel more cohesive, in ways both big and small, compared to the Ubuntu systems that I've used in the past. For example, the logo and progress graphics shown during initial boot or upgrade are more polished, which seems minor but contributes to confidence that corners are not being cut on larger issues either. I prefer Flatpak to Snap, which was another factor in its favor. And of course, they're not trying to sell me a "Pro" service subscription, which I appreciate.
It does have some quirks, mostly around its software sources: by default, Fedora ships with only the "free" (read: non-patent-encumbered) repositories enabled. You need to turn on the "non-free" repos to install Steam or decent video drivers, and in some cases you'll want to reinstall applications like FFmpeg to get the non-free build, unless you really like having choppy, broken video playback. You also need the non-free repos for Blu-ray support, which is important to me.
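For reference, enabling those sources is a two-step job; the commands below are roughly what the RPM Fusion instructions boil down to (check rpmfusion.org for the current form before pasting):

    # Enable the RPM Fusion "free" and "nonfree" repositories.
    sudo dnf install \
      https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

    # Swap the patent-stripped FFmpeg build for the full one.
    sudo dnf swap ffmpeg-free ffmpeg --allowerasing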
With the system in a solid working state, I disabled automatic updates. I'll still run upgrades, of course, but I can do it on my own schedule. This is part of my general philosophy with computing going forward: I'm through with software that doesn't respect the user's right to informed consent. Almost everything I do is either on the web platform (which can handle a little lag) or offline; I don't need to update every time a UI gets revamped so that a product manager can get a raise.
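Which switch to flip depends on what's actually doing the updating, and that varies by spin; a sensible first step is to look at the scheduled timers. The dnf-automatic unit named below is an example and isn't present on every install:

    # See whether any update-related timers are scheduled.
    systemctl list-timers | grep -Ei 'dnf|packagekit'

    # If dnf-automatic turns out to be the culprit, stop and disable it.
    sudo systemctl disable --now dnf-automatic.timer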
My personal opinion is that user interface design pretty much peaked with Windows 7 and it's all been downhill from there. I want to be able to snap windows to the screen edge and tile them, search for and run applications from the OS menu, and see the names of the programs I'm running in the taskbar. I do not want big media popups whenever I change the volume. I do not want a "notification center" that serves as the junk drawer for old chat messages. I do not want recommendations or ads anywhere on a computer that I paid for with my own money.
I do not want "AI" anywhere on the machine, at all.
Keeping all that in mind, I went with KDE Plasma as the desktop environment, since it seemed like the best modern "Windows-ish" option (I like XFCE, but it's always felt clunky in terms of keyboard shortcuts and settings, and Gnome has a real case of MacOS envy that I've never cared for). A few tweaks have put everything pretty much the way I like it — mostly.
The primary catch, which will be unsurprising to any Linux user, has been multi-monitor support. KDE handles rendering to my second screen just fine, especially once I got the Nvidia driver running to support DisplayPort daisy-chaining. But it seems clear that testing on multi-monitor setups is not something Linux devs do very much. For example, the taskbar on each monitor is a separate "panel" with its own configuration and application order — if I drag Firefox to the leftmost position on the first screen, I have to repeat this on the second if I want them to be consistent. As a result, I've largely stopped re-ordering items in the taskbar so that I don't obsess over it, which is not ideal but has no real impact on my workflow.
Window positioning also sometimes requires intervention. For example, I typically keep the picture-in-picture video window for Firefox on my second monitor, so that it's basically a "watch in background" button. But KDE initially insisted on automatically placing the pop-up player directly over the Firefox tab, until I specifically told it to remember the last position and size of a browser window with a specific title. I don't know why that's not the default. Of course, some applications do remember where they were last located, which I think they're doing for themselves instead of letting the OS handle it, because Linux UI has a legendary "no gods, no masters" approach to window management that I think only got worse with Wayland.
My favorite thing about the GUI is actually not graphical at all — it's the ability to run an SSH daemon on our local network. Every now and then something (usually a game running full-screen) crashes in a way that captures all input and prevents closing the misbehaving application. I used to fix these crashes by using some weird kernel-level keyboard shortcuts that bypass KDE entirely, causing their own oddities along the way. But then I realized I can just open a terminal on my phone and kill the process from there. This is funny, and stupid, and incredibly useful, all at the same time.
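The whole trick, from any SSH client on a phone (host and process names below are stand-ins):

    # Reach the desktop from the phone.
    ssh me@desktop.local

    # Find the hung full-screen process, then force-kill it.
    pgrep -af 'game-binary'
    pkill -9 -f 'game-binary'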
Multi-monitor gripes aside, window management has pretty much been a non-issue. It stays out of my way and most of my GUI muscle memory still works. I suspect that in part this is because I've always been a person who didn't really customize the defaults very much, whether on Windows, Linux, or Chrome OS (and on MacOS I mostly only installed tweaks to get it to the standards of the others). So I've never developed any really esoteric habits that I needed to unlearn.
At the end of the day, software is what actually matters. As long as I can actually run the programs I need — and I am not, in this regard, a person with particularly esoteric tastes — my experience will probably be fine.
I spend roughly 90% of my time in Firefox. Unsurprisingly, it works exactly as I would expect, with the exception of an annoying keyboard shortcut change that I wrote an add-on to fix. Both Firefox and Chrome have been able to see the camera and microphone for video chats without any issues, although there were issues with WebUSB, so I've been running Via from its older AppImage package.
Sublime also worked out of the box. For backups, I'm using Deja Dup instead of Acronis. Bitwarden came from Flatpak. Mozilla VPN is only officially supported on Ubuntu, but you can compile it or (and this is what I did) you can grab the RPM file from the releases on the GitHub repo. I will have to update this manually, but it hasn't been an issue so far.
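Installing from the downloaded file keeps dnf aware of the package, which makes the eventual manual upgrade a one-liner too (the file name is whatever the current release happens to be called):

    # Install the RPM grabbed from the mozilla-vpn-client releases page on GitHub.
    sudo dnf install ./mozillavpn-*.x86_64.rpm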
For e-mail, I had been using a copy of Outlook 2007 for nearly two decades. Obviously, it wouldn't be directly compatible, and this was a good time to upgrade anyway. It took a little while to figure out the tools needed to convert my old .PST files into something that Thunderbird could import, but I only needed to do that once, and since then it's been pretty smooth sailing. For the rest of my office suite, LibreOffice works, but at this point I'm much more comfortable in Google Sheets, so there wasn't much migration cost there.
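There's more than one way to do that conversion; a common route is readpst from the libpst project, which unpacks a .pst into mbox folders that Thunderbird can then import (via the ImportExportTools NG add-on). A sketch, with package and file names as assumptions:

    # readpst ships in the libpst package on Fedora (pst-utils on Debian-family systems).
    sudo dnf install libpst

    # Convert the archive into mbox files, preserving the folder structure.
    mkdir -p ~/mail-export
    readpst -r -o ~/mail-export Outlook.pst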
The truly impressive thing has been running Steam. Of course I knew that Wine existed as a compatibility layer for running Windows software, and that Valve had put in a lot of effort to make games run on the Steam Deck, their Linux-based handheld. But it's one thing to know that in theory, and another to see pretty much everything in my library run pretty much flawlessly under Proton. The one exception — literally the one I've found so far — is Street Fighter 6, which starts out in good shape and then at some point the shaders lose coherence and turn the screen into one giant chaotic polygon soup. As a result, I've been playing less SF6, which is probably not a bad thing for my sleep habits, and does mean that I'm finally getting around to games I've neglected, like UFO 50 and the just-released Skin Deep.
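For what it's worth, when a single title goes sideways like this, Proton has a documented logging switch that goes in the game's launch options in Steam; the resulting log is handy for a ProtonDB or GitHub report:

    # Steam: right-click the game, Properties, Launch Options:
    PROTON_LOG=1 %command%
    # Writes $HOME/steam-<appid>.log on the next run.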
Sadly, my ancient copy of Photoshop 6.0 has issues under the current versions of Wine. Since I refuse to use either a newer Adobe product or the badly-named open-source image editor, this may become a longer-term project.
Of course, as a web developer, the truly nice thing has been getting access to Linux's tooling support without having to run WSL or see what would function under Git's MSYS shell. Being able to run Poppler, or FFmpeg, or Python, without jumping through any of those hoops is not a revolution, since working on Windows for such a long time has made me pretty good at hoop-jumping. But it's very much appreciated.
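A contrived but representative example of the kind of one-off that used to mean reaching for WSL (file names are placeholders):

    # Pull the text out of a PDF with Poppler's pdftotext and count the matches.
    pdftotext report.pdf - | grep -c 'TODO'

    # Inspect a clip's streams with ffprobe before transcoding it with FFmpeg.
    ffprobe -v error -show_streams clip.mp4
    ffmpeg -i clip.mp4 -c:v libx264 clip-x264.mp4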
Would I recommend this to an ordinary person, like my dad? Probably not. Once the system is running, it's been largely stable, but getting it there was still not frictionless. If you have closed-source devices that you're plugging in, or you need a specific proprietary application, I wouldn't want to take it on faith that those things will work (e.g., my much-loved Zune HD can be viewed in the file explorer but I can't add music to it). And when things break I'm still sometimes digging into a text file from the terminal to fix them.
On the other hand, that kind of transparency — being able to deeply configure the system from a text file — is exactly what I want from my computer these days. Linux has gotten good enough that day-to-day I'm not spending a lot of time recompiling or manually tweaking (i.e., I'm not doing sysadmin work as a hobby), but if I need to change something, I have that option.
Meanwhile, nothing is being installed without my permission. Copilot is not lurking on the horizon, and I don't have to cringe whenever Windows Update pops up a notification or pesters me to update to Windows 11. People complain about systemd or Wayland, but they feel like things I can conceptualize by comparison, and that I can access on my own terms. It's not a perfect system, but for the first time in a long time, it feels like mine, and that's well worth the occasional inconvenience.
Over the last six months, I've consistently given three pieces of advice to my students at SCCC: get comfortable with the Firebug debugger, contribute to open source, and learn to use Linux. The first is because all developers should learn how to debug properly. The second, because open source is a great way to start building a resume. And the last, in part, because Linux is what powers a large chunk of the web--not to mention a dizzying array of consumer devices. Someone who knows how to use an SSH shell and a few basic commands (including one of the stock editors, like vi or emacs) is never without tools--it may not be comfortable, but they can work anywhere, like MacGyver. It all comes back to MacGyver, eventually.
At Big Fish, in an effort to take this to its logical extreme, I've been working in Linux full-time (previously, I've used it either on the server or through a short-lived VM). It's been an interesting trip, especially after years of using mostly Windows desktops (with a smattering of Mac here and there). Using Linux exclusively for development plays to its strengths, which helps: no fighting with Wine to run games or audio software. Overall, I like it--but I'll also admit to being a little underwhelmed.
To get the bad parts out of the way: there are something like seven different install locations for programs, apparently chosen at random; making changes in the graphical configuration still involves arcane shell tricks, all of which will be undone in hilariously awful ways when you upgrade the OS; and Canonical seems intent on removing all the things that made Ubuntu familiar, like "menus" and "settings." I ended up switching to the XFCE window manager, which still makes me angry because A) I don't want to know anything about window managers, and B) it's still impossible to theme a Linux installation so that everything looks reasonably good. Want to consistently change the color of the window decorations for all of your programs? Good luck with that. XFCE is usable, and that's about all you can honestly say for it.
The best part of Linux by far is having a native web stack right out of the box, combined with a decent package manager for anything extra you might need. Installing scripting languages has always been a little bit of a hassle on Windows: even if the base package is easily installed, invariably you run into some essential library that's not made for the platform. Because these languages are first-class citizens on Linux, and because they're just an apt-get away, it opens up a whole new world of utility scripts and web tools.
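For instance (package names drift between releases, so treat these as illustrative):

    # One command stands in for an afternoon of hunting down Windows installers.
    sudo apt-get install ruby nodejs

    # Native extensions that fight you on Windows build cleanly here
    # (given build-essential and the relevant library headers).
    sudo gem install nokogiri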
I love combining a local server with a set of rich command-line utilities. Finally, I can easily use tools like the RequireJS optimizer, or throw together scripts to solve problems in my source, without having to switch between contexts. I can use all of my standard visual tools, like Eclipse or Sublime Text, without going through a download/upload cycle or figuring out how to fool them into working over SFTP. Native source control is another big deal: I've never bothered installing git on Windows, but on Linux it's almost too easy.
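The optimizer run, for example, collapses to a single local command, and git is one package away (r.js and build.js stand for whatever your project actually uses):

    # Run the RequireJS optimizer against a local build profile.
    node r.js -o build.js

    # Native source control, no msysgit required.
    sudo apt-get install git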
So there is one axis along which the development experience is markedly superior. It's not that Linux is better built (it has its fair share of potholes) so much as it's where people currently go to build neat tools, and then if we're lucky they bring them over to Windows. Microsoft is trying to fix this (see: their efforts to make Node.js a first-class Windows platform), but it'll probably always be an uphill battle. The open-source developer culture just isn't there.
On the other hand, I was surprised by the cases where web development is actually worse on Linux compared to Windows. I can't find a visual FTP client that's anywhere near as good as WinSCP. The file managers are definitely clumsier than Explorer. Application launching, of all things, can be byzantine--there's no option to create a shortcut to a program; you have to manually assemble a .desktop file instead, and then XFCE will invariably position its window someplace utterly unhelpful. Don't even get me started on the configuration mess: say what you like about the registry, at least it's centralized.
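For the record, the hand-assembled "shortcut" looks something like this; every path and name below is a placeholder:

    # A minimal launcher entry, dropped where the menu system scans for them.
    cat > ~/.local/share/applications/sublime.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Sublime Text
    Exec=/opt/sublime/sublime_text %F
    Icon=/opt/sublime/icon.png
    Terminal=false
    EOF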
None of these things are dealbreakers, the same way that it's not a dealbreaker for me to need GOW for a decent Windows command line. But I had been considering dual-booting or switching to Linux as my main work environment, instead of just keeping a headless VM around for when I need Ruby, and I've given that up now. When all is said and done, I spend much of my time in either Eclipse or Firefox anyway, and they're the same no matter where you run them. I still believe strongly that developers should learn a little Linux--it's everywhere these days!--but you can be perfectly productive without living there full time. Ultimately, it's not how you build something, but what you build that matters.
It's been almost two years now since I picked up an Android phone for the first time, during which time it has gone from a generally unloved, nerdy thing to the soon-to-be dominant smartphone platform. This is a remarkable and sudden development--when people start fretting about the state of Android as an OS (fragmentation, competing app stores, etc.), they tend to forget that it is still rapidly mutating and absorbing the most successful parts of a pretty woolly ecosystem. To have kept a high level of stability and compatibility, while adding features and going through major versions so quickly, is no small feat.
Even back in v1.0, there were obvious clever touches in Android--the notification bar, for instance, or the permission system. And now that I'm more used to them, the architectural decisions in the OS seem like "of course" kind of ideas. But when it first came out, a lot of the basic patterns Google used to build Android appeared genuinely bizarre to me. It has taken a few years to prove just how foresighted (or possibly just lucky) they actually were.
Take, for example, the back button. That's a weird concept at the OS level--sure, your browser has one, as does the post-XP Explorer, but on the desktop it's only used inside each program, not to move between them. No previous mobile platform, from PalmOS to Windows Mobile to the iPhone, used a back button as part of the dominant navigation paradigm. It seemed like a case of Google, being a web company, wanting everything to resemble the web for no good reason.
And yet it turns out that being able to navigate "back" is a really good match for mobile, and it probably is important enough to make it a top-level concept. Android takes the UNIX idea of small utilities chained together and applies it to small-screen interaction. So it's easy to link from your Twitter feed to a web page to a map to your e-mail, and then jump partway back up the chain to continue from there (this is not a crazy usage pattern even before notifications get involved--imagine discovering a new restaurant from a friend, and then sending a lunch invitation before returning to Twitter). Without the back button, you'd have to go all the way back to the homescreen and the application list, losing track of where you had been in the process.
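You can even watch the stack from a desktop using adb, the Android debug bridge (the URL is arbitrary):

    # Jump from whatever is in the foreground into the browser...
    adb shell am start -a android.intent.action.VIEW -d "https://example.com"

    # ...then pop the stack and land exactly where you were.
    adb shell input keyevent 4   # 4 == KEYCODE_BACK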
The process of composing this kind of "attention chain" is made possible by another one of Android's most underrated features: Intents. These are just ways of calling between one application and another, but with the advantage that the caller doesn't have to know what the callee is--Android applications register to handle certain MIME types or URIs on installation, and then they instantly become available to handle those actions. Far from being sandboxed, it's possible to pass all kinds of data around between different applications--or individual parts of an application. In a lot of ways, they resemble HTTP requests as much as anything else.
So, for example, if you take a picture and want to share it with your friends, pressing the "share" button in the Camera application will bring up a list of all installed programs that can share photos, even if they didn't exist when Camera was first written. Even better, Intents provide an extensible mechanism allowing applications to borrow functionality from other programs--if they want to get an image via the camera, instead of duplicating the capture code, they can toss out the corresponding Intent, and any camera application can respond, including user replacements for the stock Camera. This is smart enough that other platforms have adopted something similar--Windows Phone 7 will soon gain URIs for deep linking between applications, and the iPhone has the clumsy, unofficial x-callback-url protocol--but Android still does this better than any other platform I've seen.
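Because resolution happens at the OS level, the same machinery is scriptable, which makes it easy to watch happen; here the activity manager fires an implicit Intent, and whatever application has registered for that action and URI scheme gets to answer (the geo URI is an arbitrary example):

    # Ask for "something that can display this map point"; the user's
    # preferred maps application claims it, whatever that happens to be.
    adb shell am start -a android.intent.action.VIEW -d "geo:47.606,-122.332"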
Finally, perhaps the choice that seemed oddest to me when Google announced Android was the Dalvik virtual machine. VMs are, after all, slow. Why saddle a mobile CPU with the extra burden of interpreting bytecode instead of running native applications? And indeed, the initial versions of Android were relatively sluggish. But two things changed: chips got much faster, and Google added just-in-time compilation in Android 2.2, turning the interpreted bytecode into native code at runtime. Meanwhile, because Dalvik provides a platform independent of the hardware, Android has been able to spread to all kinds of devices on different processors, from the many ARM variants (Tegra among them) to x86, and third-party developers never need to recompile.
(Speaking of VMs, Android's promise--and eventual delivery--of Flash on mobile has been mocked roundly. But when I wanted to show a friend footage of Juste Debout the other week, I'd have been out of luck without it. If I want to test my CQ interactives from home, it's incredibly handy. And of course, there are the ever-present restaurant websites. 99% of the time, I have Flash turned off--but when I need it, it's there, and it works surprisingly well. Anecdotal, I know, but there it is. I'd rather have the option than be completely helpless.)
Why are these unique features of Android's design interesting? Simple: they're the result of lessons successfully being adopted from web interaction models, not the other way around. That's a real shift from the conventional wisdom, which has been (and certainly I've always thought) that the kind of user interface and application design found on even the best web applications would never be as clean or intuitive as their native counterparts. For many things, that may still be true. But clearly there are some ideas that the web got right, even if entirely by chance: a stack-based navigation model, hardware-independent program representation, and a simple method of communicating between stateless "pages" of functionality. It figures that if anyone would recognize these lessons, Google would. Over the next few years, it'll be interesting to see if these and other web-inspired technologies make their way to mainstream operating systems as well.