Google’s BGP Leak: Japan’s Internet Crisis
A single configuration error at Google triggered a massive BGP prefix leak, crippling Japan's internet for hours and exposing global routing vulnerabilities.

In the interconnected world of modern networking, a minor misstep can cascade into a national-scale disaster. On August 25, 2017, a routine configuration change at Google spiraled into one of the most notable Border Gateway Protocol (BGP) incidents in recent history. What began as an internal error pushed over 160,000 IP prefixes into the global routing table, effectively blackholing traffic destined for Japan and disrupting services for millions. The event not only halted online banking and public transit displays but also underscored the precarious balance of trust in the internet’s core routing architecture.
The Anatomy of BGP and Why It Matters
At its heart, BGP serves as the glue holding the internet together. This exterior gateway protocol enables autonomous systems—networks operated by ISPs, corporations, and organizations—to exchange routing information. Each AS advertises prefixes, which are blocks of IP addresses, informing others about the best paths to reach specific destinations. Peering agreements allow direct traffic swaps between equals, while transit providers carry traffic for customers to the wider internet.
However, BGP’s design prioritizes flexibility over stringent validation. There’s no built-in mechanism to verify if a prefix advertisement is legitimate. Networks rely on policy filters, maximum prefix limits, and mutual trust. When these fail, leaks occur: a network advertises routes it shouldn’t, often de-aggregating them into more specific announcements that outrank legitimate ones due to BGP’s longest-prefix-match rule.
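The longest-prefix-match behavior is easy to demonstrate. Below is a minimal Python sketch with an illustrative two-entry routing table; the /24 stands in for a leaked more-specific inside OCN’s legitimate /20 (the prefixes are real, the table itself is a toy):

```python
import ipaddress

# Hypothetical routing table: prefix -> (origin AS, description).
# The /24 mimics a leaked more-specific inside OCN's legitimate /20.
routes = {
    ipaddress.ip_network("210.171.224.0/20"): (4713, "NTT OCN aggregate"),
    ipaddress.ip_network("210.171.224.0/24"): (15169, "leaked more-specific"),
}

def lookup(addr: str) -> ipaddress.IPv4Network:
    """Return the covering route with the longest prefix, as BGP's
    longest-prefix-match rule dictates."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    return max(matches, key=lambda net: net.prefixlen)

best = lookup("210.171.224.10")
print(best, routes[best])  # the /24 wins, despite being illegitimate
```

No attribute of the announcement can save the aggregate here: a more-specific prefix always captures the traffic first, which is exactly why de-aggregated leaks are so destructive.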
In Google’s case, the leak originated from a peering session in Chicago. Typically, Google exchanges a handful of prefixes with partners like Verizon under strict peering terms—no transit for third parties. But an errant setting exposed Google’s full view of learned routes, transforming it into an unwitting transit hub.
Timeline of the Disruption
The chaos unfolded at 03:22 UTC (12:22 PM JST). Monitoring tools like BGPMon detected anomalies immediately: prefixes normally originated by Japan’s NTT OCN, such as 210.171.224.0/20, appeared announced via Google (AS15169) through Verizon (AS701). Over 160,000 prefixes flooded out, more than 24,000 of them tied to NTT alone.
- 03:22 UTC: Leak begins; Verizon accepts and propagates the invalid routes.
- 03:25-03:30 UTC: Japanese ISPs KDDI and IIJ, as Verizon customers, ingest ~95,000 and ~97,000 prefixes respectively, rerouting OCN-bound traffic to Chicago.
- 03:32 UTC: Google detects issue; corrects in 8 minutes.
- Post-03:40 UTC: Lingering effects persist for hours due to route propagation and congestion.
Japan bore the brunt. KDDI and IIJ users trying to reach OCN sites were detoured through Verizon and Google in Chicago, overwhelming those paths and triggering packet drops. Gaming platforms lagged, stock trades stalled, and train schedules faltered.
Key Players and Their Roles
| Network | AS Number | Role in Incident | Impact |
|---|---|---|---|
| Google | AS15169 | Leaked prefixes from peering sessions | Originated the flood; fixed quickly |
| Verizon | AS701 | Accepted and re-advertised leak | Amplified propagation to customers |
| NTT OCN | AS4713 | Victim of hijacked prefixes | 24,000+ prefixes affected |
| KDDI | AS2516 | Verizon customer; accepted routes | 95,000+ prefixes; major outages |
| IIJ | AS2497 | Verizon customer; accepted routes | 97,000+ prefixes; widespread disruption |
Verizon’s acceptance was pivotal. On a normal day, Google sent fewer than 50 prefixes over this session; the jump to 160,000 sailed through unchecked, revealing the absence or misconfiguration of max-prefix safeguards.
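A max-prefix limit is the simplest automated defense against exactly this failure. The sketch below uses an illustrative cap of 200 for a peer that normally sends under 50 prefixes (the real thresholds on the Verizon-Google session are not public); with such a cap, the session would have been torn down almost immediately rather than flooding 160,000 routes onward:

```python
MAX_PREFIX = 200  # illustrative cap for a peer that normally sends <50 prefixes

def accept_updates(announced):
    """Accept prefixes until the per-peer cap trips, then drop the session,
    as a configured max-prefix limit would."""
    accepted = set()
    for prefix in announced:
        accepted.add(prefix)
        if len(accepted) > MAX_PREFIX:
            return accepted, "session torn down: max-prefix limit exceeded"
    return accepted, "session up"

# Simulate the flood: 160,000 distinct prefix strings.
leak = [f"10.{i // 256}.{i % 256}.0/24" for i in range(160_000)]
accepted, status = accept_updates(leak)
print(len(accepted), status)  # only 201 prefixes in before the session drops
```

In real deployments the limit is enforced by the router (e.g. a per-neighbor maximum-prefix setting), usually with a warning threshold before the hard teardown.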
Technical Breakdown: How the Leak Propagated
Google’s internal traffic engineering used de-aggregated prefixes for optimization. The leak exposed these to Verizon, which treated them as valid peering routes. BGP path selection favored these more-specifics over the legitimate aggregates, drawing global traffic astray.
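De-aggregation itself is mechanical, which is part of the danger. Python’s ipaddress module shows how a single aggregate splinters into specifics, each of which outranks the original under longest-prefix match (the prefix is OCN’s, the split itself is illustrative):

```python
import ipaddress

# Split an aggregate into /24 more-specifics, as internal traffic
# engineering might. Each resulting /24 beats the /20 in path selection.
aggregate = ipaddress.ip_network("210.171.224.0/20")
specifics = list(aggregate.subnets(new_prefix=24))
print(len(specifics), specifics[0], specifics[-1])
# A /20 yields 16 /24s, from 210.171.224.0/24 through 210.171.239.0/24.
```

Multiply this across thousands of aggregates and the 160,000-prefix flood stops looking surprising: the specifics only have to escape once.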
For instance, JasTel (AS45629), a Thai provider peering with Google, saw its routes transit via Verizon unexpectedly. Paths like AS45629 → AS15169 → AS701 pulled legitimate traffic through unprepared links, where it was dropped at Google’s access control lists (ACLs).
Why Japan? Geographic irony: the Chicago peering point became a bottleneck for transpacific flows, and Japanese networks’ reliance on Verizon transit funneled domestic-bound traffic halfway around the world.
Economic and Societal Fallout
Beyond technical woes, repercussions rippled widely. Financial firms couldn’t process transactions; Tokyo’s transit systems displayed error messages. Online gaming tournaments froze mid-match. Japan’s Ministry of Internal Affairs launched a probe, demanding explanations.
Google issued a public apology: “An errant network setting was corrected within eight minutes.” Yet, full recovery took hours, costing businesses dearly in lost productivity. This incident echoed past leaks, like Pakistan Telecom’s 2008 YouTube hijack, but highlighted evolving risks in a denser, IPv6-transitioning internet.
Lessons Learned: Safeguarding BGP
Such events expose BGP’s Achilles’ heel: over-reliance on configuration. Mitigation strategies include:
- Resource Public Key Infrastructure (RPKI): Cryptographically validates prefix origins. Standardized by the Internet Engineering Task Force (IETF) in RFC 6480.
- Maximum Prefix Limits: Caps per-peer advertisements to curb floods.
- Route Origin Authorizations (ROAs): Signed RPKI objects attesting which AS may originate a prefix, optionally up to a stated maximum length.
- BGP Monitoring Tools: Real-time anomaly detection via RIPE NCC or BGPMon.
- Customer Filters: Strict inbound policies at edges.
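The origin-validation piece of this list can be sketched in a few lines. The code below follows RFC 6811 semantics (Valid / Invalid / NotFound); the single ROA entry is illustrative, not NTT’s actual published object:

```python
import ipaddress

# Illustrative ROA table: (authorized prefix, maxLength, authorized origin AS).
ROAS = [
    (ipaddress.ip_network("210.171.224.0/20"), 24, 4713),  # example ROA for NTT OCN space
]

def validate(prefix: str, origin_as: int) -> str:
    """RFC 6811-style route-origin validation: Valid if a covering ROA
    authorizes the origin AS within maxLength, Invalid if covered but
    unauthorized, NotFound if no ROA covers the prefix."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "Valid"
    return "Invalid" if covered else "NotFound"

print(validate("210.171.224.0/20", 4713))   # Valid: correct origin, within maxLength
print(validate("210.171.230.0/24", 15169))  # Invalid: covered, but wrong origin AS
print(validate("8.8.8.0/24", 15169))        # NotFound: no covering ROA
```

A router dropping Invalid routes would have rejected the leaked OCN more-specifics at the edge, though note that origin validation alone does not catch leaks where the origin AS is preserved in the path.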
Post-incident, Verizon reportedly tightened its filters, and Google bolstered internal controls. Globally, RPKI adoption has surged, with roughly half of announced routes covered by ROAs as of 2023.
Broader Implications for Internet Resilience
This leak wasn’t malicious—contrast with state-sponsored hijacks—but illustrates fragility. In an era of 5G, IoT, and cloud reliance, single points of failure threaten economies. Incidents like Japan’s remind operators: vigilance is paramount.
Regulators now push for mandatory validations. The Internet Society advocates Mutually Agreed Norms for Routing Security (MANRS), a voluntary program now joined by 800+ networks.
FAQs: Understanding BGP Leaks
Q: What exactly is a BGP prefix leak?
A: It’s when a network propagates route announcements beyond their intended scope, for example re-advertising routes learned from one peer to another peer or transit provider, misleading others about the best paths.
Q: Could this happen again?
A: Yes, but tools like RPKI reduce risks. Full immunity requires universal adoption.
Q: Who was at fault—Google or Verizon?
A: Google for the leak; Verizon for lax acceptance. Shared responsibility in BGP ecosystems.
Q: How does BGP differ from internal routing protocols?
A: BGP scales inter-domain; protocols like OSPF are intra-domain with tighter controls.
Q: What’s the status of BGP security today?
A: Improved, with IETF standards and deployments mitigating many vectors.
Conclusion: Toward a More Robust Internet
The 2017 Google BGP leak stands as a cautionary tale. In under 10 minutes, it dismantled connectivity for a tech-forward nation, proving even giants err. Yet, it catalyzed progress: enhanced monitoring, cryptographic assurances, and industry collaboration. As internet traffic explodes, fortifying BGP isn’t optional—it’s existential. Operators must prioritize these defenses to avert future crises.
References
- BGPStream Monitoring Documentation — RIPE NCC. 2023-05-15. https://bgpstream.com/
- MANRS: Mutually Agreed Norms for Routing Security — Internet Society. 2024-01-10. https://www.manrs.org/
- RPKI Deployment Status — Cloudflare Radar. 2025-03-20. https://radar.cloudflare.com/rpki
- BGP Large Communities Attribute (RFC 8092) — IETF. 2017-02-01. https://datatracker.ietf.org/doc/html/rfc8092
- Internet Routing Table Report — Hurricane Electric BGP Toolkit. 2026-05-01. https://bgp.he.net/