It’s not just you: The internet is breaking

Quartz | Shannon Carroll

Sat, November 22, 2025 at 5:30 AM GMT+11

In the span of a few months this year, the internet has managed to knock itself sideways four different ways. And the official explanations have landed with all the romance of a maintenance log. A Cloudflare file exceeded its expected size. A DNS entry inside AWS pointed nowhere. An Azure configuration change went sideways. A Google service-control rule looped into failure and sent itself into repeated crash cycles.

Each failure began as a routine maintenance task — the digital equivalent of leaving a door ajar. Each one expanded into a global interruption.

These events slid into place quietly and revealed the same uncomfortable truth: The internet is a tightly bound structure, not a sprawling, distributed network, as many people may imagine. A small change in one corner sets off a chain reaction in another because so many digital services rely on the same gateways, the same load balancers, the same identity checkpoints, and the same routing layers. The fragility sits inside those shared pathways, not inside the individual apps that blinked out of view.

So, no, you’re not wrong: The internet feels like it’s breaking — because we’ve made it too big to fail and too concentrated at the top to stay upright.

Tiny fixes become global problems

When a file grew beyond its expected size inside Cloudflare earlier this week, the fallout traveled far beyond the sites that actually run on Cloudflare. Banks saw degraded performance. Retail checkouts lagged. Messaging platforms stalled. Even the supposedly “smart” gear people trust to run the morning — the coffee maker that depends on a cloud handshake, the thermostat that insists on verifying itself, the app that decides whether the commute is survivable — stuttered as the edge layer fell out of step.

Cloudflare’s leadership didn’t bother with spin. The company’s chief technology officer tweeted an apology that acknowledged “failing the broader internet” and pinned the blame on a latent bug triggered by a routine configuration change. No breach, no sinister actor — just an everyday tweak that managed to trip a network the size of a continent.

The company fronts roughly a fifth of global web traffic, which means a permissions shift inside one database brushed against millions of sessions with a single deployment. Businesses treat that edge network as plumbing. Insurers treat it as systemic exposure. The Global 2000 now loses an estimated $400 billion a year to cloud and edge downtime, and the largest enterprises regularly peg interruption costs in the $1 million to $5 million per hour range. A file buried deep inside a system most people have never heard of still managed to bend the digital world to its will.

A missing DNS field inside AWS’s busiest region produced another kind of tilt late last month. Traffic slid into fallback modes. Some services froze altogether. Insurers modeled up to $581 million in potential claims, a figure that doesn’t even capture abandoned carts, payroll delays, or stalled shipments that never reach the paperwork stage.
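
For readers who want the mechanics, here is a minimal Python sketch of how a missing DNS record surfaces to an application. The hostnames are hypothetical stand-ins (the affected AWS endpoints aren't named here); the point is that a lookup returning nothing shoves the caller onto a fallback path or leaves it stalled.

```python
import socket

# Hypothetical hostnames for illustration only; the actual AWS endpoints
# whose DNS records went missing are not named in this article.
PRIMARY = "dynamodb.us-east-1.example.invalid"   # record assumed to be absent
FALLBACK = "dynamodb.us-west-2.example.invalid"  # assumed secondary region

def resolve(host: str):
    """Return resolved addresses, or None when the name does not resolve."""
    try:
        infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return None

addrs = resolve(PRIMARY)
if addrs is None:
    # One empty record pushes every caller onto a slower fallback path,
    # or into outright failure if no fallback exists.
    print("primary endpoint unresolved, trying fallback region")
    addrs = resolve(FALLBACK)

print("resolved:", addrs or "nothing (requests stall here)")
```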

More than 17 million user-reported failures stacked up in the first hours. That number was large enough to show how dependent companies remain on AWS's core regions — even when architects insist they have spread their risk. Region redundancy offered little insulation because identity checks, data calls, and background tasks still funnel through the most popular region by habit. The failure didn't last long, but it still reached sectors that thought they stood outside the impact zone. Welcome to the modern cloud.
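
That habit is easy to sketch. The toy deployment below is hypothetical, but it shows the shape of the problem: compute spread across three regions while the identity and metadata endpoints still point at one, so every request depends on that region no matter where it is served.

```python
# A purely illustrative deployment, not a real architecture audit. Compute is
# spread across three regions, yet the identity and metadata endpoints still
# point at a single region, so every request depends on it.
deployment = {
    "compute_regions": ["us-east-1", "us-west-2", "eu-west-1"],
    "identity_endpoint": "https://auth.us-east-1.example.invalid",
    "metadata_store": "https://config.us-east-1.example.invalid",
}

def regions_every_request_touches(d: dict) -> set:
    """Regions on the path of every request, regardless of where compute runs."""
    shared = [d["identity_endpoint"], d["metadata_store"]]
    return {url.split(".")[1] for url in shared}

# Even a request served entirely from eu-west-1 still leans on us-east-1.
print(regions_every_request_touches(deployment))  # {'us-east-1'}
```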

Azure’s turn arrived the following week when a traffic-management update in a Microsoft edge layer slowed down workplace logins, airline check-ins, retail portals, and gaming platforms. The surface symptoms looked disconnected. The underlying problem sat in a routing system tied to Microsoft’s identity stack. Many organizations that don’t run their applications on Azure still rely on Microsoft to verify credentials, authorize sessions, or route user data. A shift in that layer appears small on paper. But in practice, it affects travel, commerce, communication, and office workflows — all at the same time.
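
A rough sketch of why that dependency bites, with a made-up endpoint standing in for a real tenant's identity metadata URL: before any login token can be validated, the application has to reach the identity provider for its signing keys, so a wobble in that layer stalls logins for software hosted nowhere near Azure.

```python
import urllib.error
import urllib.request

# Hypothetical identity endpoint; a real tenant would point at its provider's
# published metadata URL, which this article does not name.
SIGNING_KEYS_URL = "https://login.example.invalid/common/discovery/keys"

def can_verify_logins(timeout: float = 2.0) -> bool:
    """Token validation needs the provider's signing keys first; if that layer
    is slow or unreachable, every dependent login stalls, no matter where the
    application itself is hosted."""
    try:
        with urllib.request.urlopen(SIGNING_KEYS_URL, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

if not can_verify_logins():
    print("identity layer unreachable: logins, check-ins, and portals queue up")
```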

A service-control rule slipped into the wrong layer inside Google Cloud over the summer and knocked the platform off balance. The code that signs off on routine API calls kept crashing and restarting, and requests that usually clear in a blink began to stall or fall away. The stutter showed up across regions as authentication failures, halted builds, and applications blinking in and out of view — hitting streaming platforms, collaboration tools, and Google’s own systems before the platform managed to steady itself. It didn’t last long, but it made plain that Google’s control plane behaves like a single surface, and a small shift in that layer follows every path that depends on it.
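
A toy model of that failure mode, not a reconstruction of Google's control plane: when the check that approves each API call keeps crashing, callers retry with growing delays, and requests that normally clear in milliseconds start to stall or give up.

```python
import time

# A toy model only; it does not reproduce Google's actual control plane.
# The gate that approves each API call fails a few times before recovering,
# and callers retry with growing delays, so normally instant requests stall.
crashes_remaining = 3

def admission_check() -> bool:
    global crashes_remaining
    if crashes_remaining > 0:
        crashes_remaining -= 1
        raise RuntimeError("policy check crashed, restarting")
    return True

def call_api(request_id: int) -> str:
    delay = 0.1
    for attempt in range(1, 6):
        try:
            admission_check()
            return f"request {request_id} ok after {attempt} attempt(s)"
        except RuntimeError:
            time.sleep(delay)  # latency the caller was never built to absorb
            delay *= 2
    return f"request {request_id} gave up"

print(call_api(1))  # succeeds only on the fourth attempt
```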

One web, one spine

These failures didn’t come from the same flaw. But they pointed to the same structure.

The internet grew around a handful of infrastructure providers that now operate as load-bearing beams for the global economy. Amazon, Microsoft, and Google control roughly 62% of the world's cloud-infrastructure spending. Cloudflare sits in front of 20% of the web, and more than 80% of sites that use reverse proxies depend on it as their sole provider. Identity platforms from Microsoft, Amazon, and Okta sit behind hundreds of millions of logins a day.

The internet used to look like a mycelium network: messy, redundant, and distributed. Increasingly, it looks like a handful of glass-and-steel server farms and security gateways, where a mid-sized file in San Francisco or an empty DNS record in Virginia can briefly tilt the entire digital economy off its axis.

Companies still talk about diversity of infrastructure. They reference multicloud setups and region failover strategies. These outages showed how thin those strategies become once shared dependency chains come into view. A retailer that spreads its compute across clouds still stumbles when its checkout flow depends on a CDN that has gone dark. A hospital that keeps its patient records in on-premise systems still deals with delays if its messaging or imaging integrations run through a cloud service tied to the wrong routing layer. An airline that invests heavily in its own data centers still sees a slowdown when its identity checks pass through an authentication provider experiencing trouble.
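
A small, purely illustrative sketch makes the pattern concrete: list each organization's critical path, including the shared layers it quietly leans on, and the same few providers show up under all of them.

```python
# Illustrative stand-ins, not a real audit: three organizations with what look
# like independent architectures, and the shared layers each one quietly uses.
critical_paths = {
    "retailer_checkout": ["aws_compute", "gcp_compute", "shared_cdn", "shared_identity"],
    "hospital_imaging": ["on_prem_records", "shared_cdn"],
    "airline_checkin": ["own_datacenter", "shared_identity"],
}

def common_dependencies(paths: dict) -> set:
    """Providers that appear in more than one supposedly independent path."""
    counts = {}
    for deps in paths.values():
        for dep in set(deps):
            counts[dep] = counts.get(dep, 0) + 1
    return {dep for dep, count in counts.items() if count > 1}

# Multicloud compute, yet the same CDN and identity layers sit under everything.
print(common_dependencies(critical_paths))  # {'shared_cdn', 'shared_identity'}
```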

None of these organizations planned particularly poorly. The issue sits in the modern stack itself. Too many critical functions rely on layers that live outside a company’s control.

Analysts who study outages pay less attention to duration and more attention to blast radius. The AWS incident spread to more than 3,500 companies across 60-plus countries. Cloudflare’s failure generated more than 11,000 user-incident reports and tripped up workflows inside banks, retailers, logistics systems, media platforms, and government agencies — all of which assumed their “edge” layer lived far enough from the edge of anything. Azure’s slowdown drew more than 30,000 outage reports in the first hour and produced disruptions across travel, entertainment, and half the digital ways people procrastinate. Google’s stumble drew more than 10,000 cloud-level reports and sent glitches through streaming platforms, collaboration tools, and the services that lean on its cloud. Each incident revealed how concentrated the foundations of the internet have become. A setback inside one provider moves across sectors because the same networks, the same content-delivery systems, and the same identity services show up beneath most digital products.

The internet’s fabric is fragile

The scale of the outages had less to do with time and everything to do with what set them off. Small, almost forgettable changes — a configuration file growing past its limit, a DNS pointer vanishing, a routing rule drifting, a service-control check spinning into failure — ended up pulling whole systems sideways. Small cause, large effect. None of those moves reads like a trigger for multimillion-dollar losses or frozen global workflows, but in a system this consolidated, that’s where the impact landed. The real risk no longer lives inside individual services or data centers. It lives inside the connective tissue that everyone leans on without thinking.

Cloud providers and traffic networks still promote redundancy, and the engineering behind those claims is real. The issue sits in the gaps those strategies can't reach. Redundancy inside one provider protects the workloads that stay inside that provider's walls. It offers no shield against shared DNS layers, shared edge networks, or shared identity stacks. As long as those layers remain concentrated around a small number of companies, a routine adjustment can push companies across different industries into a parallel slowdown.

This fall’s disruptions didn’t suggest a failing internet; they offered a better picture of the one that exists.

The web behaves more like a single, interconnected engine than most people realize. Businesses and public-sector institutions now operate inside that engine, whether they intend to or not. The next failure may come from a setting change, a shift in a routing table, or a file that crosses a threshold. The internet hasn’t fallen apart (yet). But it has just shown how easily it could.
