Why do the Great Operating Systems Fail?
August 19, 2011 · Posted by Peter Varhol in Software platforms.
Let me list them. Unix (yes, Virginia, Unix is dead), OS/2, BeOS, NeXTSTEP (although you would get an argument there, as it more or less morphed into OS X and its successors), Plan 9 (bet you never even heard of that one), OS-9 (that one too). Now webOS. Thanks to its association with the flailing Research in Motion, QNX, arguably the finest of the bunch, looks to be on the ropes.
In the meantime, the mediocre thrive (I mean you, Windows).
First, let’s talk a little about the concept of an operating system. Most of us think GUI when we consider an operating system, but the GUI is arguably not a part of the OS at all. While it’s the visible manifestation of the OS, it can change fairly easily. Unix showed this in the 1990s, running several different GUIs and windowing systems atop the same underlying system.
What an OS does is manage applications and system resources – memory, file system, disk, graphics, and things like that. It does so by providing a finely tuned interface between hardware and software. Because resources are finite and have to be shared, those that do the fastest and most seamless job at that sharing, especially under increasing workloads, tend to be the best operating systems.
It’s relatively easy to write an OS; most grad students in CS implement Xinu, Minix, or a similar POSIX-like OS in one or more of their courses. It’s very hard to write a good OS. Most are optimized for particular types of work, to the detriment of others. I could go on for a long time on the technical characteristics that make a good OS, but suffice it to say that an OS is like any piece of software, in that there is no one best approach; there are instead hundreds of tradeoffs. To some extent, those tradeoffs depend on the purpose and intended market of the OS, but what makes a great OS is the ability to make the right tradeoffs.
It’s hard for an operating system to grow and develop. QNX possibly does the best job in that regard. It has an extremely small kernel (about 70 kbytes, last I checked) that cannot be swapped out of memory. Everything else, including device drivers, runs in user space, and can be swapped out of memory. They run more slowly than if they were in kernel space, but QNX makes up for it with small executable images, which tend to run more quickly. It’s a design tradeoff, and QNX made it wisely.
Perhaps most important, Windows isn’t all that bad. It is hampered by Microsoft’s refusal to sunset third-party applications and dubious hardware configurations. You can still run 1980s-era DOS applications on Windows, and up until recently my own homebuilt domain controller ran (well, crawled) Windows Server 2003 on a system with 64MB of memory. I simply shouldn’t have been able to do that.
Users love this near-universal compatibility, and perhaps that is what has made Windows the desktop standard for so long. But the fact of the matter is that maintaining and enhancing an OS while preserving backward compatibility makes it steadily more complex and less reliable. Even QNX had to do a rewrite years back, to its Neutrino kernel. If Microsoft wants to have a great OS, it should rewrite, and kill off old apps.
As to why great operating systems seem to fail, it has little to do with the technology itself, and everything to do with the intended market, and the resources of the company. Building a great OS takes astute design, great programming, and a lot of luck in making the right tradeoffs. Building a successful OS requires an entirely different skill set. Those skills are almost never found in the same organization.