The Undersea Cable Network That Actually Runs the Internet
by Scott
When most people think about how the internet works, they imagine something wireless, something that floats. They think of satellites drifting in orbit, of signals bouncing between towers, of data moving invisibly through the air in the way that a phone call or a radio broadcast moves. This image is not entirely wrong. A meaningful portion of internet traffic does travel wirelessly, especially for the last mile between a local network and an individual device. But the overwhelming bulk of international internet traffic, the data that crosses oceans and connects continents, travels through cables. Specifically, it travels through a network of fiber optic cables laid on the floor of the world’s oceans, a physical infrastructure so fundamental and so invisible that most of the people who depend on it every hour of every day have no idea it exists.
There are roughly five hundred active submarine cable systems in operation around the world today, stretching across approximately 1.3 million kilometers of seafloor. These cables carry somewhere between ninety-five and ninety-nine percent of all international internet traffic, depending on how the measurement is made and which routes are counted. The remainder travels by satellite, but satellite connections have historically been slower, higher-latency, and far more expensive than undersea fiber, which is why the internet routes the overwhelming majority of its cross-ocean data through cables rather than through orbit. When you send an email from New York to London, watch a video hosted on a server in California from your apartment in Tokyo, or make a video call to a colleague in Singapore, the data is almost certainly traveling through a cable on the ocean floor. The cloud, as the technology industry likes to call it, is not in the sky. It is at the bottom of the sea.
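The latency gap between fiber and satellite mentioned above is mostly plain physics. A rough sketch, using illustrative round-number distances (the 6,000 km path and refractive index are assumptions, not figures from any specific cable):

```python
# Back-of-the-envelope latency comparison: transatlantic fiber vs.
# geostationary satellite. Distances and the fiber refractive index
# below are illustrative round numbers, not specs of a real system.

C = 299_792  # speed of light in vacuum, km/s

def one_way_latency_ms(path_km: float, refractive_index: float = 1.0) -> float:
    """One-way propagation delay in milliseconds."""
    return path_km / (C / refractive_index) * 1000

# Fiber: light travels at roughly c/1.47 inside glass; assume ~6,000 km
# of cable between landing stations on either side of the Atlantic.
fiber_ms = one_way_latency_ms(6_000, refractive_index=1.47)

# Geostationary satellite: the signal climbs ~35,786 km and comes back
# down, so the one-way path is at least ~71,572 km before any routing.
geo_ms = one_way_latency_ms(2 * 35_786)

print(f"fiber one-way:   {fiber_ms:.0f} ms")  # ~29 ms
print(f"GEO sat one-way: {geo_ms:.0f} ms")    # ~239 ms
```

The order-of-magnitude difference, roughly 30 milliseconds against roughly 240, is why latency-sensitive traffic overwhelmingly prefers the seafloor to orbit.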
The technology that makes this possible is fiber optics, a method of transmitting data as pulses of light through thin strands of extremely pure glass. A modern submarine cable contains multiple fiber pairs, each pair capable of carrying an enormous amount of data simultaneously through a technique called wavelength division multiplexing, which essentially sends many different signals through the same fiber at different wavelengths of light, as if running multiple radio stations through the same cable by broadcasting on different frequencies. The capacity of modern submarine cables has grown at a pace that tracks closely with the broader exponential growth of internet traffic. Cables laid in the early 1990s could carry a few gigabits per second. Current state-of-the-art cables are designed to carry hundreds of terabits per second, an increase of roughly five orders of magnitude over three decades.
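The arithmetic behind those headline capacity figures is simple multiplication. The channel counts and per-channel rates below are hypothetical round numbers in the range of modern systems, not the specifications of any particular cable:

```python
# Illustrative wavelength-division-multiplexing arithmetic: total
# capacity is just channels x rate x fiber pairs. All three numbers
# are assumed round figures, not specs of a real cable.

fiber_pairs = 16            # pairs of fibers inside the cable
channels_per_fiber = 100    # distinct wavelengths per fiber
gbps_per_channel = 200      # data rate carried on each wavelength

per_fiber_tbps = channels_per_fiber * gbps_per_channel / 1000
total_tbps = fiber_pairs * per_fiber_tbps

print(f"per fiber:   {per_fiber_tbps:.0f} Tbit/s")  # 20 Tbit/s
print(f"whole cable: {total_tbps:.0f} Tbit/s")      # 320 Tbit/s
```

Note that much of the capacity growth in recent cables has come from adding fiber pairs and wavelengths rather than from pushing a single channel faster.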
The physical construction of a submarine cable is considerably more complex than a casual description suggests. At its core is the optical fiber, surrounded by layers of protective material that must withstand conditions that would destroy most engineered structures. The deep ocean is an environment of crushing pressure, near-freezing temperatures, complete darkness, and slow but powerful currents. A cable resting on the abyssal plain, three or four kilometers below the surface, must be designed to last for twenty-five years or more without maintenance. The fiber itself is surrounded by a protective tube, which is surrounded by steel wire armoring, which is surrounded by waterproofing layers, which are surrounded by a final outer jacket. In shallower waters near shore, where cables are more vulnerable to anchors, fishing equipment, and human activity, additional armoring layers are added and the cables are often buried beneath the seabed using specialized plowing equipment.
The light signals that carry data through fiber optic cables weaken as they travel, through a phenomenon called attenuation. For terrestrial fiber, this is managed by placing repeaters at regular intervals, devices that restore the weakening signal to full strength. Submarine cables face the same challenge but with the added complication that the repeaters must function reliably at the bottom of the ocean for decades without any possibility of easy maintenance. Modern submarine cable repeaters are optical amplifiers, typically erbium-doped fiber amplifiers that boost the light directly without ever converting it back into an electrical signal, and they are engineering marvels of reliability, designed with redundancy and tested to extraordinarily high standards before deployment. They are powered by a direct electrical current that runs through a conductor in the cable from shore, supplied by power feed equipment at each cable landing station. A long transoceanic cable might have a hundred or more repeaters spaced roughly sixty to eighty kilometers apart, each one sitting on the seabed doing its quiet amplification work without interruption for the life of the cable.
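The sixty-to-eighty-kilometer spacing falls out of a simple loss budget. Fiber attenuates the signal by a roughly constant number of decibels per kilometer, so loss grows linearly with distance; the 0.2 dB/km figure below is typical for low-loss silica fiber, while the amplifier gain budget is an illustrative assumption:

```python
# Sketch of the repeater-spacing loss budget. The attenuation figure
# is typical for modern silica fiber; the gain budget is an assumption.

attenuation_db_per_km = 0.2   # typical loss for low-loss silica fiber
amplifier_gain_db = 15        # assumed gain one repeater can restore

# Maximum span before loss exceeds what one amplifier can make up:
max_span_km = amplifier_gain_db / attenuation_db_per_km
print(f"max span between repeaters: {max_span_km:.0f} km")  # 75 km

# Converting the span loss from decibels to a linear power ratio shows
# how little light actually survives each hop before amplification.
loss_db = max_span_km * attenuation_db_per_km
fraction_remaining = 10 ** (-loss_db / 10)
print(f"power remaining after one span: {fraction_remaining:.1%}")  # ~3.2%
```

By the end of each span only a few percent of the launched optical power remains, which is why the repeaters cannot simply be spaced further apart without compromising the signal.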
The process of laying a submarine cable is itself a remarkable logistical and engineering undertaking. Specialized cable ships, of which there are only a few dozen in the world, carry thousands of kilometers of cable wound on enormous drums in their holds. The ship moves at a few knots while the cable is fed out over the stern through a series of guides and tensioners that control how much cable is deployed relative to the ship’s speed and the depth of the water below. In deep water the cable simply sinks to the bottom under its own weight after leaving the ship, settling into a gentle curve on the seafloor. In shallower water near the shore, the process is more involved, with divers or remotely operated vehicles helping to guide the cable into burial trenches to protect it from the various hazards that operate in those depths.
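The tensioning problem can be reduced to one number: the payout rate must exceed the ship's ground speed by a "slack" factor, so the cable has enough length to follow the contours of the seabed rather than hang in suspended spans. The figures below are illustrative assumptions, not from any real lay operation:

```python
# Toy slack calculation for cable laying. Ship speed and slack fraction
# are assumed round numbers, not figures from an actual operation.

knots_to_kmh = 1.852

ship_speed_knots = 5        # a typical slow laying speed
slack_fraction = 0.03       # pay out 3% more cable than ground distance

ship_speed_kmh = ship_speed_knots * knots_to_kmh
payout_rate_kmh = ship_speed_kmh * (1 + slack_fraction)

hours = 24
print(f"ground covered per day: {ship_speed_kmh * hours:.0f} km")   # 222 km
print(f"cable paid out per day: {payout_rate_kmh * hours:.0f} km")  # 229 km
```

Even a few percent of slack adds up to many extra kilometers of cable over a transoceanic route, which is part of why cables are longer than the great-circle distances between their landing points.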
Each cable terminates at a landing station on shore, a typically anonymous-looking building near a beach or coastal area that contains the equipment that connects the undersea cable to the terrestrial fiber network on land. These buildings are critical nodes in the global internet infrastructure, but they are rarely marked or identified in any way that would attract attention. The precise location of cable landing stations is considered sensitive information in many countries, for the obvious reason that a physical attack on a landing station would disrupt internet connectivity for potentially millions of people. The cables themselves approach the shore through conduits buried beneath beaches, emerging inland at the landing station where they connect to the racks of equipment that manage the signal handoff between the oceanic portion of the cable and the land-based network.
The history of undersea cables is considerably older than the internet, and understanding that history illuminates both the remarkable continuity of the technology and the ways in which the current era represents a genuine departure from what came before. The first successful transatlantic telegraph cable was laid in 1858 by a consortium organized in part by the American entrepreneur Cyrus Field, an effort that had consumed years of failed attempts and enormous financial resources before it succeeded. The cable allowed messages to be transmitted between North America and Europe in minutes rather than the weeks required by the fastest sailing ships, and the public response to its success was as ecstatic as reactions to major technological milestones tend to be. Queen Victoria sent a congratulatory message to President James Buchanan. Cities held celebrations. Newspapers declared the world transformed.
The first cable failed after a few weeks of operation, damaged by the application of excessive voltage in an attempt to speed up the transmission rate. The project was revived after the American Civil War, and by 1866 a more robust transatlantic cable was in continuous operation. The telegraph cable network expanded over the following decades to cover most of the globe, laid primarily by Britain and used to knit together the far-flung territories of the British Empire in ways that had significant military, commercial, and political implications. The control of the chokepoints through which telegraph cables passed became a matter of strategic importance, and Britain’s advantage in the geography of the early cable network gave it an intelligence and communications advantage that other powers recognized and resented.
The transition from copper telegraph cables to fiber optic data cables happened in the 1980s and 1990s, and the first transatlantic fiber optic cable, TAT-8, was laid in 1988. The shift from copper to fiber was not merely an upgrade in speed and capacity, though the capacity increase was extraordinary. It also changed the economics and the competitive landscape of the undersea cable industry. Telegraph cables had been operated primarily by national telecommunications monopolies and large consortia controlled by governments. The fiber optic era coincided with the deregulation and liberalization of telecommunications markets in many countries, and with the rise of private internet companies that had their own compelling reasons to invest in undersea capacity.

For most of the 1990s and the first decade of the 2000s, the undersea cable industry was dominated by telecommunications companies investing collaboratively in shared cable systems. This model made sense when the primary users of international bandwidth were telephone companies completing calls on behalf of their customers. The economics were relatively stable and the ownership structures reflected the bilateral relationships between national carriers. The explosive growth of the internet and then of cloud computing and streaming video changed these economics dramatically. Internet traffic grew faster than anyone had predicted, capacity constraints became acute, and the nature of who actually needed international bandwidth began to shift.
The most significant and consequential change in the contemporary undersea cable landscape has been the entry of the major technology companies as direct investors and owners of submarine cable infrastructure. Google began investing directly in submarine cables around 2008, and has since become one of the largest investors in undersea cable capacity in the world, with ownership stakes or direct investments in dozens of cable systems crossing every major ocean. Meta, Microsoft, and Amazon have followed similar trajectories. These companies have collectively shifted a large portion of the global submarine cable investment landscape from a consortium model driven by telecommunications carriers to one driven by the capacity needs of cloud computing and content delivery.
The reasons for this shift are straightforward from a business perspective. A company like Google, which must transfer enormous volumes of data between its data centers distributed around the world, and which must deliver video, search results, and application data to users everywhere, has an enormous and predictably growing need for international bandwidth. Buying capacity on cables operated by other parties introduces costs, dependencies, and latency that owning the infrastructure directly can avoid. When you own the cable, you control the capacity allocation, the upgrade timeline, and the routing decisions. The economics of owning versus leasing change significantly when you are consuming bandwidth at the scale that these companies operate.
The concentration of submarine cable ownership among a small number of large technology companies has attracted attention from governments and security researchers who worry about what it means for the resilience and independence of global internet infrastructure. A cable network in which a handful of American technology companies own or effectively control a large fraction of international capacity is a different thing, in geopolitical terms, from a cable network owned by a diverse collection of national telecommunications carriers. Governments in various countries have begun scrutinizing the involvement of Chinese companies in submarine cable projects, concerned about the possibility that ownership or equipment supply relationships could create vulnerabilities in cables carrying sensitive traffic. The United States government has in several cases intervened to block or impose conditions on cable landing license applications involving Chinese-owned or Chinese-funded cable systems.
The vulnerability of submarine cables to physical disruption is a topic that generates considerable discussion in national security and infrastructure resilience circles, and the concern is not hypothetical. Submarine cables break regularly, though the overwhelming majority of breaks are caused by accidental human activity rather than deliberate sabotage. Fishing trawlers, whose gear can reach depths of several hundred meters, are the most common cause of cable breaks in shallow water. Ships dragging their anchors in bad weather account for many others. In deeper water, cable breaks are less frequent but do occur, caused by submarine landslides, turbidity currents, the bites of deep-sea fish possibly attracted to the electromagnetic field around the power conductor, and occasionally by causes that remain genuinely mysterious even after investigation.
The repair of a broken submarine cable is an elaborate and expensive process. A cable ship must be dispatched to the approximate location of the break, which is determined by measuring the electrical characteristics of the cable from each end and calculating where the discontinuity occurs. The ship uses grappling equipment to hook the cable and bring it to the surface, cuts out the damaged section, splices in a new section, and lowers the repaired cable back to the bottom. In deep water this process can take several days, during which traffic that would normally have traveled through the broken cable must be rerouted through other cables or satellite connections, often at reduced speeds and increased latency. The repair of a single deep-water cable break can cost several million dollars and require specialized ships that may need to travel long distances to reach the repair site.
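The fault-localization step works on the same principle as time-domain reflectometry: a pulse sent from shore reflects off the discontinuity, and the round-trip time gives the distance. The velocity factor and timing below are illustrative assumptions, not measurements from a real repair:

```python
# Sketch of locating a cable fault from the round-trip time of a
# reflected pulse. Velocity factor and timing are assumed values.

C = 299_792  # speed of light in vacuum, km/s

velocity_factor = 0.67        # assumed signal speed as a fraction of c
round_trip_seconds = 0.021    # assumed time until the echo returns

v = C * velocity_factor                     # signal speed along the cable
distance_km = v * round_trip_seconds / 2    # one-way distance to the fault
print(f"estimated distance to fault: {distance_km:.0f} km")  # ~2109 km
```

In practice the estimate is cross-checked from both ends of the cable, which is why the ship is dispatched to an approximate rather than an exact position.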
The deliberate cutting of submarine cables as an act of sabotage or warfare has occupied military and intelligence planners for as long as cable networks have existed. In the first hours of the First World War, a British cable ship cut the German transatlantic telegraph cables, forcing German international communications to route through cables that Britain could intercept and monitor, a decision that proved consequential for the course of the war. More recently, there have been several incidents of cable damage near conflict zones or in circumstances that have suggested deliberate interference, though attribution of such incidents is difficult and the threshold of proof for official accusations is high. The concentration of many cable routes through relatively narrow geographic chokepoints, such as the waters around the Strait of Malacca, the Red Sea approaches, and the waters surrounding Taiwan, means that a small number of physical intervention points could disrupt a disproportionate share of international internet traffic.
Taiwan is perhaps the most discussed example of this vulnerability, because the island sits at the center of a web of submarine cables connecting East Asia to the rest of the world, and because the geopolitical tensions between Taiwan and mainland China mean that any military confrontation in the region would likely involve the submarine cable infrastructure in ways that would have consequences for internet connectivity far beyond the immediate conflict zone.
Despite all of these vulnerabilities and the concentrated, fragile nature of the physical infrastructure, the global submarine cable network has proven remarkably resilient in practice. The redundancy built into the system, with multiple cables connecting most major regions and sophisticated routing systems that can shift traffic between cables in response to outages, has meant that most cable breaks cause localized disruptions rather than total connectivity failures. The internet was designed, at a protocol level, to route around damage, and the physical redundancy of the cable network generally supports this design intent.
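The rerouting behavior described above can be caricatured as a shortest-path problem over a graph of cables. The landing regions and latencies below are entirely hypothetical; the point is only that removing one edge leaves traffic a longer but functioning path:

```python
# Toy illustration of routing around a cable break: model cables as
# weighted edges between landing regions (names and latencies are
# hypothetical) and recompute the shortest path after removing one.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over an adjacency dict {node: {neighbor: cost_ms}}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None

cables = {
    "Tokyo":     {"LA": 100, "Singapore": 70},
    "Singapore": {"Tokyo": 70, "Marseille": 150},
    "Marseille": {"Singapore": 150, "NY": 80},
    "LA":        {"Tokyo": 100, "NY": 60},
    "NY":        {"LA": 60, "Marseille": 80},
}

print(shortest_path(cables, "Tokyo", "NY"))
# -> (160, ['Tokyo', 'LA', 'NY'])

# A break on the Tokyo-LA cable: drop the edge and reroute.
del cables["Tokyo"]["LA"]
del cables["LA"]["Tokyo"]
print(shortest_path(cables, "Tokyo", "NY"))
# -> (300, ['Tokyo', 'Singapore', 'Marseille', 'NY'])
```

Real internet routing is vastly more complicated than this sketch, but the essential property is the same: connectivity survives the break, at the cost of higher latency on the detour.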
What the submarine cable network reveals, more than anything else, is that the internet is not the ethereal, distributed, physics-defying thing it sometimes appears to be. It is a physical infrastructure, built by human hands and maintained by human effort, embedded in the material world with all of the vulnerabilities and dependencies that implies. The data that feels instantaneous and weightless when it travels from your device to a server on another continent is, for most of its journey, moving as pulses of light through a glass fiber inside a cable resting on mud at the bottom of the ocean. The extraordinary thing is not that this sometimes goes wrong. The extraordinary thing is that it almost always goes right, invisibly and reliably, carrying the communications of billions of people across distances that would have seemed magical to the engineers who laid the first transatlantic telegraph cable a hundred and sixty years ago.