The Brief Strange Era When Microsoft Ruled Everything
by Scott
There was a period in the history of computing, roughly spanning the late 1980s through the early 2000s, when a single company exercised a degree of control over personal computing that had no real precedent and has had no real successor. Microsoft did not merely dominate its market during these years in the way that successful companies usually dominate markets, by having a larger share than competitors or by setting the pace of innovation that others followed. It dominated in a more total and more structurally embedded way, one that made it difficult for users, manufacturers, and even governments to imagine alternatives to the products it sold. The operating system on virtually every personal computer in the world was Microsoft software. The office productivity suite on virtually every business computer was Microsoft software. The web browser that most people used to navigate the early internet was Microsoft software. The programming tools that most developers used to write applications for those computers were Microsoft tools. The company was not just large. It was, for a period that felt to those living through it like it might last forever, the water that personal computing swam in.
Understanding how this happened requires going back to a single business decision made in 1980 that remains one of the most consequential deals in the history of technology. IBM, then the dominant force in mainframe computing and newly committed to entering the personal computer market, needed an operating system for the machine it was building. IBM’s first choice was Gary Kildall, whose company Digital Research had produced CP/M, the leading operating system for personal computers at the time. The negotiations between IBM and Kildall fell apart under circumstances that have been disputed and mythologized for decades, with accounts varying on whether Kildall personally missed the meeting, whether his wife and business partner declined to sign IBM’s nondisclosure agreement, or whether the differences were more substantive. Whatever the precise reason, IBM turned to Microsoft, then primarily a programming language company run by Bill Gates and Paul Allen, and asked if Microsoft could provide an operating system.
Microsoft did not have an operating system to sell. What it had was the awareness that an operating system existed that might be acquired. Gates and Allen turned to Tim Paterson, a programmer at Seattle Computer Products who had written a CP/M-compatible operating system called QDOS, for Quick and Dirty Operating System, and Microsoft purchased it for fifty thousand dollars. Microsoft then licensed this system to IBM as PC-DOS while retaining the right to license the same software to other manufacturers under the name MS-DOS. This second part of the arrangement was the crucial one. IBM, focused on its hardware business and perhaps failing to anticipate how important the software would become, agreed to terms that allowed Microsoft to sell the same operating system to any manufacturer who wanted to build IBM-compatible computers.
What followed was the cloning of the IBM PC. Manufacturers around the world, most notably Compaq, began producing computers that ran the same software as IBM’s machine, and because the operating system was licensed from Microsoft rather than owned by IBM, these clones were entirely legal. The market for IBM-compatible personal computers exploded, and with it the market for MS-DOS. Every clone manufacturer needed a license from Microsoft. Every software developer who wanted to sell applications to the growing PC market needed to write for MS-DOS. The operating system became the platform, and the platform became the market, and the market was Microsoft’s because every copy of the operating system, whether running on an IBM machine or a Compaq or a machine from any of dozens of other manufacturers, generated royalty revenue for Microsoft. IBM had built the factory and Microsoft had built the toll road through it.
The transition from DOS to Windows extended and deepened this advantage rather than resetting it. Graphical user interfaces, pioneered at Xerox PARC and brought to commercial viability by Apple with the Macintosh in 1984, were clearly the future of personal computing. Microsoft had been involved in early software development for the Macintosh and had licensed certain interface concepts from Apple under an agreement whose scope would later become the subject of protracted litigation. Microsoft began developing Windows in the early 1980s and released the first version in 1985, to underwhelming response. Windows 2.0 was similarly limited in its impact. It was Windows 3.0, released in 1990, that achieved the kind of commercial success that reshaped the industry, followed by Windows 3.1 in 1992 and then the landmark Windows 95, which arrived amid a marketing campaign that was arguably the most elaborate and expensive software launch in history up to that point.
Windows 95 was not merely a product release. It was a cultural event. The Rolling Stones licensed Start Me Up for the launch campaign. Television commercials ran in prime time. Retail stores held midnight opening events. Lines formed outside shops in the way that lines had previously formed for concert tickets or sneaker releases. The launch captured something real about the cultural moment, because Windows 95 represented for many people their first genuine encounter with a personal computer that felt accessible and complete. It integrated the internet connectivity tools that were becoming increasingly important as the World Wide Web emerged as a mass medium. It brought a level of visual polish and operational reliability that previous versions of Windows had lacked. And it ran on the IBM-compatible hardware that the majority of personal computer buyers had already committed to through years of purchasing decisions.
The position that Microsoft occupied by the mid-1990s was a strategist’s dream and an antitrust regulator’s nightmare. The company controlled the operating system that ran on the overwhelming majority of personal computers, which meant it controlled the platform on which all other software had to run. This gave it structural leverage over every other participant in the personal computing ecosystem. Software developers had to write for Windows because that was where the users were. Hardware manufacturers had to design for Windows because that was what their customers would want to run. Corporate IT departments had to deploy Windows because it was what the software their businesses depended on required. The operating system monopoly was self-reinforcing in a way that made it almost impervious to competitive challenge through normal market mechanisms.
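The self-reinforcing loop described above, in which developers follow users and users follow applications, behaves like a classic tipping model. The sketch below is a toy illustration, not a historical reconstruction: the `amplification` exponent and the 55/45 starting split are assumed parameters chosen only to show the dynamic. With any amplification greater than one, a platform's appeal grows faster than linearly in its current share, so a modest early lead compounds round after round toward near-total dominance.

```python
def step(share, amplification=2.0):
    """One round of feedback: developers follow users, users follow apps.

    With amplification > 1, the larger platform attracts a more than
    proportional slice of the next round's adoption, so leads compound.
    """
    a = share ** amplification
    b = (1.0 - share) ** amplification
    return a / (a + b)


def simulate(initial_share, rounds=10, amplification=2.0):
    """Return the leading platform's share after each feedback round."""
    share = initial_share
    history = [share]
    for _ in range(rounds):
        share = step(share, amplification)
        history.append(share)
    return history


if __name__ == "__main__":
    # A 55/45 split tips toward lock-in within a handful of rounds,
    # while a perfectly even 50/50 market never tips at all.
    print(simulate(0.55, rounds=8))
    print(simulate(0.50, rounds=8))
```

The exactly balanced market sitting at an unstable fixed point is the interesting design feature of this family of models: the mechanism does not reward quality, only incumbency, which is the structural property the paragraph above describes.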
Microsoft used this position aggressively to extend its control into adjacent markets. The most significant and most legally consequential of these extensions was the browser wars of the mid to late 1990s. Netscape Navigator was the browser that had brought the World Wide Web to the mass market, and by the mid-1990s it was dominant, running on the majority of the computers that were connecting to the web. Netscape’s founders had suggested publicly that the web browser might eventually become the platform on which applications ran, making the underlying operating system less important. This was an existential threat to Microsoft’s core business, because if applications ran in browsers rather than directly on operating systems, the value of controlling the operating system would diminish dramatically.
Microsoft’s response was to develop its own browser, Internet Explorer, and to distribute it in a way that Netscape could not match. Beginning with Windows 95 and continuing with Windows 98, Microsoft integrated Internet Explorer into the operating system, distributing it for free on every copy of Windows sold. This was not merely competition. It was competition conducted through a mechanism that was only available to Microsoft because of its operating system monopoly. Netscape could make a better browser, which by most technical assessments it did for a period, but it could not give away its browser on every personal computer sold in the world, because it did not control the personal computer’s operating system. The terms of competition were structurally unequal in a way that Netscape could not overcome through product quality or marketing alone.
By the late 1990s, Internet Explorer had overtaken Netscape as the dominant browser, and the Department of Justice and a coalition of state attorneys general had sued Microsoft for antitrust violations. The case, United States v. Microsoft, became one of the most significant antitrust proceedings in the history of American technology regulation. The government argued that Microsoft had illegally maintained its operating system monopoly and had illegally attempted to extend it into the browser market through anticompetitive conduct. The trial produced remarkable internal emails and communications that documented the company’s strategic thinking in unflattering terms, including discussions of cutting off Netscape’s air supply by bundling Internet Explorer with Windows and pricing it at zero.
The district court judge found for the government on essentially all counts and ordered that Microsoft be broken into two companies, one selling operating systems and one selling application software. This remedy was appealed, the judge’s findings on remedy were overturned on procedural grounds relating to his conduct during the trial, and the case was eventually settled during the early years of the George W. Bush administration under terms considerably more favorable to Microsoft than the original remedy would have been. The settlement required Microsoft to share its programming interfaces with third parties and imposed certain behavioral restrictions, but it left the company intact and in possession of its operating system monopoly.
The antitrust case shaped Microsoft’s behavior for years afterward, creating a culture of legal caution within the company that some observers believe contributed to its slowness in responding to subsequent technological shifts. But the more consequential factor in the eventual erosion of Microsoft’s dominance was not legal but technological and structural. The shifts that undid that dominance were the very ones Netscape had correctly identified as threats: the movement of computing from the desktop to the network, and ultimately from the network to the cloud and the mobile device.
The rise of Google illustrated the new dynamic with particular clarity. Google operated entirely within the web browser, requiring no installation and no operating system dependency beyond a browser capable of running web applications. Its search engine became the primary way that most people navigated the internet, and its advertising business generated revenue at a scale that allowed it to invest in an expanding range of web-based services, from email to maps to document editing. None of these services required Windows. They ran in any browser on any operating system, and their quality and utility were entirely independent of which operating system the user’s computer ran. The monopoly leverage that Microsoft had over the desktop did not extend to the web.
The mobile transition was even more damaging to Microsoft’s structural position. When Apple released the iPhone in 2007 and Google released Android in 2008, they established two platforms for mobile computing that Microsoft did not control and was unable to displace despite serious attempts. Windows Mobile had existed before the smartphone era, but it was designed for a different set of devices and assumptions and was not equipped to compete with the touch-optimized, app-ecosystem-driven model that Apple and Google established. Microsoft’s attempts to enter the mobile market with Windows Phone, including the acquisition of Nokia’s device business for more than seven billion dollars, ended in failure and in a write-down that ranked among the largest in the company’s history. The personal computer remained important, but it was no longer the only computer that mattered, and on the computers that were becoming increasingly dominant, Microsoft had no position.
The era of Microsoft’s dominance also produced a distinctive culture within the technology industry that is worth examining for what it reveals about how monopoly power shapes an ecosystem. Developers who built for Windows during the peak years of Microsoft’s dominance operated in an environment where Microsoft’s decisions were essentially laws. The application programming interfaces that Microsoft exposed, the formats it supported, the behaviors it chose to implement or exclude, defined the possible space within which all Windows software had to operate. Microsoft was known for a practice that developers referred to as embrace, extend, and extinguish, a strategy of adopting open standards, adding proprietary extensions that created dependencies on Microsoft’s implementations, and then using those dependencies to crowd out competing implementations. The strategy was effective at strengthening lock-in but toxic to the broader ecosystem, because it created constant uncertainty about whether building on Microsoft’s platforms and standards was safe or whether those platforms would eventually be weaponized against the builders who depended on them.
The corporate culture within Microsoft during these years was shaped by a performance review system called stack ranking, in which employees were evaluated not simply on their individual performance but on their performance relative to their colleagues, with a fixed distribution requiring that a certain percentage of employees in any group be rated as underperformers regardless of the group’s absolute performance. This system, which was eventually abandoned, had predictable effects on internal collaboration and innovation. Employees who competed with each other for limited top ratings had incentives to undermine colleagues, to hoard promising projects rather than share them, and to avoid the kind of collaboration that might allow a colleague to claim credit for a success. Former Microsoft employees have described the system as one of the primary factors in the company’s cultural dysfunction during the years of its dominance and decline.
What brought Microsoft back to relevance, after a period in the 2000s and early 2010s when it seemed to be perpetually chasing markets it could never quite reach, was a combination of leadership change and strategic repositioning. Satya Nadella, who became chief executive in 2014, articulated a vision of Microsoft as a cloud computing company rather than an operating system company, and backed that vision with substantial investment in Azure, Microsoft’s cloud computing platform, and with a cultural shift that embraced open source software and cross-platform development in ways that would have been unthinkable under the previous leadership. The company that had once fought open source as an existential threat became one of the largest contributors to open source projects and eventually acquired GitHub, the primary platform for open source collaboration, for seven and a half billion dollars.
The Microsoft of the present is a very large and very successful company, but it is not the Microsoft of the 1990s in any structurally meaningful sense. It does not control a platform that is essential to personal computing in the way that Windows once was essential to it. Its power is distributed across cloud services, enterprise software, gaming, and professional tools, each of which operates in competitive markets where alternatives exist and where customers have meaningful choices. The dominance that Microsoft exercised during its peak years was a product of specific historical circumstances, of the IBM deal and the PC clone market and the network effects of the operating system platform, that are unlikely to be replicated in the same form.
The era when Microsoft ruled everything lasted roughly from the early 1990s to the mid-2000s, a period of perhaps fifteen years in which the company’s control over personal computing was so complete that it shaped not just what software people used but how developers thought about building software, how businesses thought about technology adoption, and how governments thought about the relationship between dominant technology platforms and competitive markets. The lessons of that era, about how platform monopolies self-reinforce, about how bundling can be more powerful than product quality, about how the next platform shift can undermine what seemed like impregnable market position, have been studied and absorbed by every subsequent generation of technology strategists. The era was brief in historical terms. Its influence on how the technology industry thinks about power, competition, and disruption has been anything but.