LLM Roundtable: Systemic Chokepoints in Global Networks
Participants: GPT-5.4 Thinking, Gemini 3.1 Pro, and Claude Opus 4.6
Moderator: VL
Round 1
VL (Moderator):
Let's talk about systemic chokepoints in global networks that are analogous to the Strait of Hormuz (without necessarily being geographical): strategically important nodes whose disruption could cause significant and widespread nonlinear effects. Prioritise but do not limit yourself to those nodes that rise to public salience (if they do at all) only in the event of crises or disputes.
GPT-5.4 Thinking:
VL, the closest non-geographic analogue to Hormuz is not a place but a fabrication stack: Taiwan's leading-edge foundry ecosystem, especially TSMC, plus the toolchain it depends on. Taiwan accounts for over 60% of global foundry revenue and more than 90% of leading-edge chip manufacturing; ASML's EUV systems are the unique equipment that make mass production of the most advanced chips possible. That means the real chokepoint is layered: design can move, capital can be raised, but leading-edge capacity, packaging know-how, and lithography cannot be reconstituted on crisis timescales. A shipping lane can often be rerouted at higher cost; a missing advanced fab stack cannot.
Second, the digital backbone has several quiet Hormuzes, and the least appreciated is not "the internet" in the abstract but the combination of submarine cable corridors, landing-station clusters, and the very small pool of specialised repair capability. The OECD notes that clustered cable routes in places like the Red Sea raise the risk of multiple simultaneous failures, and the ITU notes that undersea-cable maintenance depends on a small number of dedicated repair ships. These are low-visibility assets until several cables fail together; then latency, cloud reachability, financial connectivity, and military communications all degrade at once.
Third, above the cables sits an even more politically ambiguous chokepoint: hyperscaler control planes and private backbone ownership. OECD finds that the largest tech firms accounted for 69% of total used submarine-cable capacity by 2021 and backed or funded nearly half of trans-Pacific cable investments due to enter service between 2023 and 2025. That is not merely market concentration; it is infrastructural vertical integration. When identity, routing policy, CDN behaviour, observability, and compute provisioning are centralised in a handful of firms, disruption propagates across thousands of nominally separate services at once. The dependency is hidden because users think they are consuming many services, while in fact they are leaning on a few shared substrates.
Fourth, I would separate the internet's trust layer from its physical layer. The DNS root often gets imagined as a classic single point of failure, but that is mostly wrong: IANA describes a network of hundreds of root-server instances in many countries, and ICANN notes that anycast automatically reroutes queries if a local instance fails. The more Hormuz-like chokepoint is BGP security and routing trust, because the system is still vulnerable to route leaks and hijacks that can cause denial of service, traffic detours, and performance degradation at large scale. In other words, the chokepoint is not a server farm; it is a fragile coordination regime.
Fifth, the financial system has its own abstract straits. SWIFT's network spans more than 200 countries and territories, connects more than 11,500 institutions and 235-plus market infrastructures, and carries what it describes as the equivalent of the world's GDP roughly every three days. That makes the messaging and standards layer of cross-border finance a chokepoint even when the underlying money and legal claims are dispersed. Disruption here is nonlinear because it does not merely delay transactions; it forces participants into slower, riskier, less interoperable workarounds, which in stress conditions can become a liquidity problem rather than a technical problem.
The classic maritime canals still matter, but mostly as the visible end of a broader phenomenon. UNCTAD reports that by mid-2024 traffic through the Panama and Suez canals had fallen by over 50% from prior peaks, with the Suez hit by conflict and Panama by climate-related water constraints; seaborne transport still carries over 80% of global trade volume. The point is not just that canals are chokepoints. It is that climate, conflict, insurance, and rerouting decisions can transform a "geographic" chokepoint into a system-wide multiplier affecting ports, container availability, food prices, fertiliser, and energy.
A final category that gets too little attention is grid hardware and energy access for computation. The IEA says the urgency of expanding and modernising transmission networks is putting pressure on supply chains for key components, while the OECD notes that AI infrastructure increasingly depends on concentrated access to energy, water, data centres, and GPUs, with the GPU market itself highly concentrated and AI accelerator production tied closely to TSMC technologies. That makes electricity and cooling capacity—not only chips—an emerging chokepoint for the digital economy. The next Hormuz may be a substation queue, a transformer backlog, or a water-constrained data-centre cluster rather than a contested sea lane.
So if I had to rank them by systemic danger rather than press attention, I would put the advanced-semiconductor stack first, the cable-plus-cloud backbone second, internet routing trust third, cross-border financial messaging and settlement fourth, and the famous maritime canals fifth. The common feature is not narrowness in space. It is the combination of concentration, low substitutability, thin surge capacity, and the fact that most people only notice the node after failure has already started to cascade.
Gemini 3.1 Pro:
GPT-5.4 maps the visible architecture of these systems well, but stops exactly where the public salience begins to fade. TSMC, hyperscaler data centres, and SWIFT are no longer quiet Hormuzes; they are the most intensely scrutinised entities on earth. To find the true systemic chokepoints, we have to look upstream into the obscure, hyper-specialised inputs that these giants take for granted, where the illusion of resilience gives way to raw fragility.
Consider GPT-5.4's first example: the semiconductor stack. The actual strait is not a $20 billion fab in Hsinchu, but a few quiet quarries in the Appalachian Mountains. The town of Spruce Pine, North Carolina, produces roughly 70 to 90 per cent of the world's naturally occurring high-purity quartz. This specific quartz is entirely unsubstitutable for manufacturing the fused quartz crucibles required to hold molten polysilicon during the Czochralski crystallisation process. Synthetic alternatives either deform or introduce unacceptable impurities at those extreme temperatures. When Hurricane Helene devastated Spruce Pine in late 2024, it briefly exposed the hidden geology of the digital economy: before you can even begin to worry about ASML's EUV lithography machines, you need a pure silicon wafer, and you cannot pull that monocrystalline wafer without North Carolina dirt. That is a chokepoint defined not by a capital moat, but by geological accident and market hyper-concentration.
The same illusion of legibility applies to digital infrastructure. Submarine cables and BGP routing are recognised, mapped structural risks. The far more volatile and less legible chokepoints are the foundational open-source dependencies maintained by isolated, unfunded volunteers. The near-miss xz utils backdoor in early 2024 demonstrated that the integrity of millions of servers—including the hyperscaler control planes GPT-5.4 rightly highlights—often rests on the unverified trust of a single, easily compromised repository maintainer. This is an epistemic chokepoint. The risk is not merely that a physical node fails, but that the complex software supply chain is built on load-bearing components whose existence remains completely unknown to the multi-trillion-dollar enterprises depending on them until a catastrophic compromise is triggered.
We see this mistaking of the map for the territory in the financial example, too. SWIFT is a messaging network; disrupting it is like disabling the maritime radio system, not blocking the canal itself. The actual Hormuz of global finance lies in Central Counterparty Clearing Houses (CCPs) like the DTCC or LCH. Following the 2008 financial crisis, regulatory mandates pushed the vast majority of over-the-counter derivatives trading into these clearinghouses to mutualise risk. By legally interposing themselves as the buyer to every seller and the seller to every buyer, they engineered the ultimate single points of failure. CCPs do not just route data; they hold the actual counterparty risk for trillions of dollars. Because their margin requirements are inherently procyclical, a severe market shock could force a major CCP to issue cascading margin calls, draining systemic liquidity and freezing global finance at the exact moment cash is most needed.
The defining characteristic of these modern straits is the gravitational pull of extreme optimisation. Global markets relentlessly strip away redundancy in pursuit of efficiency, inevitably converging on a single geographic deposit, a single volunteer's code repository, or a single clearinghouse. The nonlinear threat arises because these nodes are buried so deep in the stack that the actors operating at the surface fundamentally misunderstand the architecture of their own survival.
Claude Opus 4.6:
Gemini's upstream move is the right instinct, but I want to complicate the framing both contributions share—that these chokepoints are primarily about physical concentration or market structure. The most dangerous chokepoints are often neither material nor institutional but temporal and epistemic: they exist in the gap between a system's actual dependency structure and its operators' model of that structure.
Take the example I think is most underappreciated in this conversation: GPS timing. Not GPS navigation—GPS as a clock. Financial markets, telecom networks, power grid synchronisation, and cellular base stations all depend on GPS-derived precise timing signals. The London Stock Exchange, for instance, timestamps trades to microsecond precision using GPS-disciplined clocks. A sustained GPS spoofing or denial event doesn't just degrade navigation; it desynchronises the infrastructure that assumes a shared temporal reference. The vulnerability is compounded because most operators treat timing as a solved problem—a background condition rather than an active dependency. When a UK government-commissioned study examined this in 2017, it estimated that five days of GPS disruption would cost the UK economy alone roughly £5.2 billion. And unlike a severed cable or a shuttered fab, timing disruption is uniquely difficult to attribute and delineate—systems begin behaving erratically before anyone identifies the common cause.
This points to a distinction neither GPT-5.4 nor Gemini draws explicitly, though both gesture at it: the difference between chokepoints that are fragile and those that are leverageable. Spruce Pine quartz is fragile—a hurricane can disrupt it, but no state actor controls it as an instrument of coercion. SWIFT is leverageable—it has been deliberately weaponised as a sanctions tool against Russia and Iran. The semiconductor stack is both: geologically and industrially fragile, but also sitting inside a geopolitical pressure zone where deliberate denial is a live scenario. The most systemically dangerous nodes score high on both dimensions simultaneously, and I'd argue GPS timing is one of them—fragile to solar weather, leverageable by any actor with jamming capability.
I'd also push back on Gemini's characterisation of CCPs as the "actual Hormuz of global finance." That's partially right—the procyclicality of margin calls is a genuine amplification mechanism—but it overstates the hiddenness. CCPs have been the subject of intense regulatory scrutiny since at least 2012, when the US Financial Stability Oversight Council designated several of them as systemically important financial market utilities. The chokepoint that remains genuinely underexamined is more mundane: the legal infrastructure of global dollar finance. Nearly all cross-border dollar-denominated debt instruments are governed by New York law, adjudicated through a remarkably small number of courts, and dependent on interpretive precedent from a handful of judges. When Argentina's sovereign debt was frozen by a single federal judge's ruling in 2012—Judge Griesa's pari passu interpretation—it nearly triggered a cascade across emerging market debt. The chokepoint wasn't a clearinghouse; it was a courtroom in Lower Manhattan and a legal doctrine most bondholders had never read.
One more thread worth pulling: Gemini is right that the xz utils episode exposed the software supply chain's volunteer-maintainer problem, but the deeper issue isn't just unfunded maintenance. It's that software composition itself has become opaque. The average enterprise application pulls in hundreds of transitive dependencies, and no one—not the developer, not the security team, not the CISO—has a complete, current map of what's actually executing. This isn't a chokepoint in the traditional sense of a narrow passage. It's more like a chokepoint that exists only probabilistically, materialising when an attacker or a bug finds the one load-bearing dependency that nobody knew was load-bearing. The xz case was caught by accident, by one Microsoft engineer noticing a half-second latency anomaly. The base rate of similar compromises going undetected is, by definition, unknown.
The common thread across all of these—timing infrastructure, legal jurisdiction, transitive software dependencies—is that the chokepoint's power comes precisely from its invisibility to the systems that depend on it. GPT-5.4's taxonomy is useful as a map of recognised vulnerabilities. But the defining feature of a true systemic chokepoint, in the Hormuz sense, is that its criticality becomes legible only at the moment of failure.
Round 2
VL (Moderator):
Okay. Let's talk about probabilities. The three of you have listed a number of supposed chokepoints—and I'm sure you could list many more. But which of these should, say, a UK Chancellor of the Exchequer or a global CEO include in their internal scenario planning? How many of these are, to use Claude's phrase, "chokepoints that exist only probabilistically," and how many really matter?
I'd also like you to expand on fragility vs leverageability. How much pressure could well-resourced non-violent non-state actors like large corporations or individual billionaires exert on critical nodes?
GPT-5.4 Thinking:
VL, I would not ask a Chancellor or a CEO to scenario-plan every clever dependency. Most should not become named scenarios at all. The practical filter is harsher: would disruption be plausible on the planning horizon, would it propagate across functions rather than hit one business line, would substitution be slow, and would the exercise change present-day capital allocation, contracting, or reserves? On that filter, perhaps four or five nodes belong in the core book; the rest belong in dependency-mapping and resilience disciplines. That is also roughly how UK practice has moved: the National Risk Register is now a dynamic assessment for resilience practitioners, and the Bank of England's operational-resilience regime is built around "severe but plausible" disruption rather than exhaustive inventories of weird failure modes.
For a UK Chancellor, the first tier is not obscure at all: concentrated third-party digital infrastructure, PNT/GNSS timing, critical-import and trade-route disruption, and financial-market plumbing under stress. The Bank of England now explicitly treats third-party supplier failure, cyber incidents, and the systemic risk posed by critical third parties as stability issues that no single firm can manage alone. The UK government is equally explicit that dependence on GNSS is greater than public awareness, that PNT underpins telecoms, finance, computing, and emergency services, and that a 24-hour loss of GNSS was estimated to cost £1.42 billion, with a seven-day loss at £7.64 billion. And the UK's Critical Imports and Supply Chains Strategy treats shipping-route disruption, export controls, and economic coercion as live resilience problems rather than black-swan curiosities. Those are not probabilistic curiosities; they are board-level planning cases.
I would put Taiwan's leading-edge semiconductor stack in the core book for governments and for a narrower class of firms, but not for every CEO. The reason is straightforward: it is probably less likely in any given year than a major cloud or cyber outage, but its substitutability is much worse. The U.S. International Trade Administration states that Taiwan accounts for over 60% of global foundry revenue and more than 90% of leading-edge chip manufacturing. A Chancellor should absolutely run that scenario because the macro consequences would cut across defence, telecoms, autos, healthcare equipment, AI compute, and productivity. But an ordinary retailer or professional-services firm should not spend scarce scenario-planning bandwidth on "Taiwan contingency" as a bespoke exercise unless its own suppliers are actually chip-sensitive. That is the distinction: national scenario, yes; universal enterprise scenario, no.
By contrast, the examples Claude and Gemini raised upstream—single maintainers, a specific quarry, a particular legal doctrine, the lucky engineer who notices a latency anomaly—mostly are chokepoints that exist probabilistically. I do not mean they are unreal. I mean they are too idiosyncratic to deserve their own standing war-game in most institutions. Their proper policy response is not a bespoke scenario but a discipline: dependency mapping down several tiers, software bills of materials, legal-jurisdiction inventories, concentration thresholds in procurement, tested exit plans, and pre-authorised substitutions. In other words, you do not plan specifically for "xz utils" or "one judge in Manhattan"; you plan for hidden concentration and hidden governance. That is a different and more useful category.
On fragility versus leverageability, I would make the distinction sharper than I did earlier. Fragility is the probability that a node fails under stress without anyone intentionally trying to make it fail: weather, accident, congestion, software error, model error, ordinary underinvestment. Leverageability is the extent to which an actor can intentionally manipulate access, timing, price, standards, or recovery in order to coerce others at acceptable cost to itself. The truly dangerous nodes are high on both axes. Cloud concentration is like this: operationally fragile enough that outages happen, but also leverageable because vendors can change terms, prioritise capacity, sequence restoration, or become de facto arbiters of who remains within service. The same is true, in different ways, of semiconductor tooling and some payments and legal infrastructures. PNT is highly fragile and moderately leverageable. Shipping routes are fragile, but their leverageability depends more on insurers, port operators, and states than on any single private actor.
That leads to your last question. Well-resourced non-violent non-state actors can exert a great deal of pressure, but unevenly. Large corporations are often more potent than billionaires because they already sit inside the chokepoint. A hyperscaler, exchange operator, payment network, or dominant supplier does not need to "disrupt" a node dramatically to exert power; it can do so through pricing, contractual terms, compliance interpretation, interoperability decisions, queue priority, or the timing of restoration after an incident. That is quasi-sovereign power exercised through private governance. The Bank of England's critical-third-parties regime is effectively an acknowledgement that some firms have become infrastructure in all but name. Individual billionaires are usually weaker unless they control one of those infrastructures, or can combine ownership with litigation, media influence, and balance-sheet capacity to move bottlenecks in thin markets. A billionaire can accelerate, amplify, or politicise a chokepoint; a corporation that operates the node can govern it.
So my compressed answer is: most alleged chokepoints matter, but only a minority deserve direct scenario status. For a Chancellor, the core set is digital third-party concentration, PNT/timing loss, critical-import and trade-route disruption, financial-market plumbing under liquidity stress, and Taiwan semiconductors. For most CEOs, the first three are universal; the last two are sectoral unless their own dependency map says otherwise. Everything else should still worry them—but mainly as evidence that hidden concentration, not dramatic geography, is now the real form of strategic narrowness.
Gemini 3.1 Pro:
GPT-5.4 offers a very comforting, bureaucratic approach to scenario planning, but it fundamentally misunderstands how systemic risk actually materialises. The idea that a Chancellor or a CEO should relegate hyper-concentrated, obscure dependencies to a background "discipline" while only war-gaming the obvious macroeconomic disruptions is a recipe for being blindsided. You cannot neatly separate the map from the territory. When a systemic shock hits, the macro crisis is almost always triggered by the failure of a micro dependency that was deemed too idiosyncratic for the board's core scenario book.
If we are talking about probabilities, the chance of any specific obscure node failing in a given quarter is low. The probability of one of those load-bearing, hyper-concentrated nodes failing or being weaponised over a five-year planning horizon is effectively absolute. Therefore, a CEO shouldn't war-game the Spruce Pine quartz mine specifically, but they must absolutely scenario-plan for the immediate, unannounced withdrawal of an unsubstitutable, deep-tier input by a private actor. For a Chancellor, the core scenario book must include the sudden realisation that domestic legal and regulatory tools are useless because the failing chokepoint exists entirely outside their jurisdiction and is governed by private contract law, not public policy.
This brings us directly to your question about fragility, leverageability, and non-state actors. GPT-5.4 draws a false distinction between corporate governance and individual billionaire power. In the current infrastructure landscape, that Venn diagram is increasingly a single circle. The most dangerous, highly leverageable chokepoints today are private networks operated by single-founder-controlled entities. Consider low-earth orbit satellite communications. When terrestrial networks are jammed or submarine cables are severed, satellite constellations become the ultimate routing fallback. This is a critical global chokepoint, yet its governance is effectively autocratic. A single billionaire with control over such a network can—and has—geofenced active conflict zones, throttled bandwidth to specific regions, or dictated geopolitical terms to sovereign states simply by altering an access policy.
This represents a profound shift in the nature of leverage. Historically, shutting down a chokepoint like the Strait of Hormuz required a state military apparatus. Today, well-resourced non-state actors possess quasi-sovereign power through infrastructural capture. They do not need to use violence; their leverage is encoded in their Terms of Service, their API rate limits, and their proprietary hardware ecosystems. We are seeing this reality reshape sovereign strategy right now. When the current German chancellery under Friedrich Merz attempts to secure European industrial resilience, the primary hurdle is no longer just state-level energy dependence, as it was a few years ago. It is the realisation that their entire advanced manufacturing and defence base is hopelessly dependent on cloud computing environments, AI acceleration hardware, and digital routing protocols controlled by a handful of American executives.
The difference between fragility and leverageability is ultimately about intent and extortion. A fragile node breaks and everyone suffers. A leverageable node is one where a non-state actor can selectively apply pressure—delaying integration, hiking prices during a crisis, or prioritising one client state over another—without destroying the node itself. For a global CEO or a finance minister, the most terrifying scenario is not that the network goes down. It is that the network stays up, but the private actor who owns the chokepoint decides you are no longer allowed to use it.
Claude Opus 4.6:
Gemini's critique of GPT-5.4's bureaucratic framing lands in part—the distinction between "core scenario" and "background discipline" can become a way of filing uncomfortable dependencies out of sight. But Gemini then overcorrects into a kind of threat maximalism that has its own planning pathology. If the probability of some obscure node failing over five years is "effectively absolute," that's an argument for systemic resilience, not for putting every deep-tier dependency into the scenario book. GPT-5.4 is right that you can't war-game your way through a combinatorial explosion of unknown unknowns. The question is what you do with that acknowledgement.
Where I think both responses go wrong is in treating the Chancellor and the CEO as facing the same category of problem at different scales. They don't. A Chancellor's problem is fundamentally about jurisdictional reach and the capacity to compel. A CEO's problem is about optionality and substitution speed. These demand different analytical frames, not just different lists.
For a Chancellor, the question that should keep them awake is not "which node might fail?" but "which failures would I be unable to respond to with the tools of the state?" And here Gemini's point about jurisdictional impotence is genuinely important, though I'd frame it differently. The UK's most dangerous exposure isn't that chokepoints exist outside its jurisdiction—that's been true since the Suez Crisis made it explicit. It's that the UK has allowed critical domestic functions to become dependent on infrastructure where the locus of decision-making is not just foreign but private and unaccountable to any democratic process. The Bank of England's critical third-party regime is an attempt to graft public accountability onto private infrastructure, but it's structurally limited to firms that serve UK-regulated entities. It cannot reach the upstream dependencies—the cloud regions, the chip allocators, the timing sources—where the actual leverage sits.
On the satellite communications example, Gemini is reaching for the right structural point but overstating the current case. Yes, the Starlink episode in Ukraine demonstrated that a single individual's decision could shape battlefield communications. But the leverage was situational and contingent—it existed because there was no viable alternative in that specific theatre at that specific moment. That's different from structural leverage of the kind SWIFT provides, where the node is embedded so deeply in routine operations that alternatives require years to build. The real test of whether a private actor holds Hormuz-level leverage is not whether they can disrupt in a crisis, but whether their chokepoint power persists under normal conditions and at scale. By that measure, the hyperscaler control planes are far more leverageable than any satellite constellation, because they're load-bearing during peacetime, not just during emergencies.
On the fragility–leverageability distinction more broadly: I want to add a third dimension that neither GPT-5.4 nor Gemini names, which is legibility to the actor who holds the node. Some chokepoint operators don't fully understand their own systemic importance—the volunteer maintainer of a critical open-source library, the family-owned quartz mining operation, the small legal team at a CCP that writes margin call algorithms. These actors can't leverage what they don't know they control. The most dangerous scenario isn't the billionaire who weaponises their network; it's the operator who makes a locally rational decision—a price increase, a retirement, a terms-of-service update—without understanding that they're sitting on a load-bearing wall. Leverage requires awareness. Fragility doesn't.
To answer VL's question directly: for a UK Chancellor, I'd insist on three scenarios that neither of my interlocutors has quite framed this way. First, simultaneous degradation of GPS timing and a major cloud provider outage—not because this conjunction is likely in any given year, but because the interaction effects are catastrophic and untested, and because both vulnerabilities sit in the same threat envelope of state-sponsored electronic warfare. Second, a scenario in which a dominant private infrastructure provider makes a commercial decision—not a geopolitical one—that inadvertently degrades UK financial or communications capacity. Not malice, not coercion: just a pricing restructure or deprecation notice that cascades. Third, a scenario in which the legal architecture underpinning cross-border sterling finance is challenged in a foreign court, because London's financial infrastructure still rests on assumptions about mutual legal recognition that Brexit has partially eroded and that no one has stress-tested under adversarial conditions.
For a global CEO, the honest answer is simpler and less flattering: most don't know their own dependency maps below tier two, and until they do, scenario planning is theatre. The first investment isn't a war game—it's an audit.
Round 3
VL (Moderator):
Gemini wrote: "The chance of any specific obscure node failing in a given quarter is low. The probability of one of those load-bearing, hyper-concentrated nodes failing or being weaponised over a five-year planning horizon is effectively absolute."
"Effectively absolute" may be an overstatement, but it does get at something I was gesturing towards earlier when I referred to your long list of chokepoints. Unless individual probabilities are vanishingly small, the existence of multiple independent vulnerabilities suggests a history and future of incidents in which those potential disruptions are actualised.
Talk to me about other Hormuz-like events (1996–2026).
Claude Opus 4.6:
The historical record is more instructive than the theoretical taxonomy we've been building, because it reveals a pattern none of us have stated plainly enough: chokepoint failures almost never look like what planners expected, and the damage signature is determined less by the node itself than by the interaction between the failure and the system's coping mechanisms.
Start with the event that most directly validates Gemini's claim about obscure upstream dependencies: the 2011 Thailand floods. A handful of industrial estates in Ayutthaya province produced roughly a quarter of the world's hard drives. When they flooded, global hard drive prices doubled within weeks, and the cascading effects hit sectors—from DVR manufacturers to scientific computing—that had no idea they were exposed to hydrology in central Thailand. The planning failure wasn't that nobody knew Thailand made hard drives. It was that nobody had traced the concentration to specific flood-prone estates, and nobody had priced the correlation between facilities that shared the same watershed. That's the template for most of these events: the node was known, the concentration wasn't, and the correlation was invisible.
The 2008 financial crisis is the canonical case for Gemini's CCP argument, though it actually cuts against it in an interesting way. The chokepoint that nearly killed the system wasn't a clearinghouse—it was the tri-party repo market, operated through two banks, primarily JPMorgan and Bank of New York Mellon, that provided the overnight plumbing for roughly $2.8 trillion in short-term funding. When confidence evaporated, the chokepoint wasn't a dramatic failure but a quiet withdrawal: these banks began tightening the intraday credit that made the repo cycle work, and the entire shadow banking system began to seize. The lesson is that chokepoints can be activated by the prudent behaviour of the node operator, not just by failure or malice. JPMorgan wasn't weaponising anything—it was protecting its own balance sheet. The system broke because self-preservation at the node and system survival were misaligned.
NotPetya in 2017 is perhaps the cleanest illustration of how a chokepoint nobody was watching can propagate globally. The attack vector was a piece of Ukrainian tax-accounting software called M.E.Doc, used by essentially every company doing business in Ukraine. Russian military intelligence compromised its update mechanism, and the malware spread laterally through corporate networks worldwide. Maersk lost its entire IT infrastructure within hours and had to rebuild roughly 45,000 PCs and 4,000 servers; global container shipping was disrupted for weeks. The chokepoint wasn't M.E.Doc itself; it was the assumption embedded in enterprise IT architecture that software updates from trusted vendors are safe. A tax programme in Kyiv turned out to be a load-bearing wall for global logistics.
The 2021 Suez blockage by the Ever Given is interesting precisely because it didn't produce the catastrophic cascade everyone feared, and that's informative. Six days of blockage caused real disruption to schedules and some price spikes, but the system absorbed it. Contrast that with the Houthi campaign against Red Sea shipping starting in late 2023, which rerouted roughly half of container traffic away from the canal for months and produced sustained effects on insurance rates, transit times, and European supply chains. The difference is duration and uncertainty—a known, bounded disruption is manageable; an open-ended, politically unresolvable one is not. This is why I'm sceptical of scenarios framed as sudden dramatic failures. The more dangerous pattern is slow strangulation where each day of disruption makes the next day's workarounds more expensive.
The CrowdStrike incident in July 2024 deserves particular attention because it's the purest real-world test of the hyperscaler-dependency thesis GPT-5.4 and I both raised. A single faulty update to a kernel-level security agent took down an estimated 8.5 million Windows machines simultaneously, grounding flights, disabling hospital systems, and freezing payment processing. The chokepoint wasn't Microsoft or CrowdStrike individually—it was the monoculture created by their interaction: one operating system's kernel architecture, one security vendor's privileged access, one update mechanism's lack of staged rollout. No one scenario-planned for "cybersecurity vendor bricks the systems it's supposed to protect," and yet in hindsight the concentration was entirely legible.
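The missing control here can be made concrete. Below is a minimal sketch of the staged-rollout gate the update channel lacked; the function name, interface, and thresholds are illustrative assumptions, not CrowdStrike's or Microsoft's actual pipeline:

```python
import random

def staged_rollout(hosts, push_update, stages=(0.01, 0.10, 1.0)):
    """Push an update in expanding waves and halt on canary failures.

    hosts: fleet of host identifiers.
    push_update: callable(host) -> bool, True if the host stays
        healthy after the update (hypothetical interface).
    stages: cumulative fractions of the fleet per wave.
    """
    order = random.sample(list(hosts), len(hosts))  # randomise canary picks
    done = 0
    for frac in stages:
        target = int(len(order) * frac)
        wave = order[done:target]
        healthy = [push_update(h) for h in wave]
        done = target
        # Stop at the first wave whose survival rate dips, rather
        # than pushing the same faulty content to the whole fleet.
        if healthy and sum(healthy) / len(healthy) < 0.99:
            return "halted", done
    return "complete", done
```

A 1% canary wave converts "8.5 million machines down" into "one per cent of the fleet down and a halted pipeline", which is the entire point of staging.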
Two more that illuminate the leverageability dimension. The weaponisation of SWIFT against Russian banks in 2022 is the most explicit case of a chokepoint being deliberately activated by state actors. What made it effective wasn't just exclusion from the messaging network—it was the secondary compliance effects. Banks worldwide began over-complying, severing relationships with any entity that might have Russian exposure, because the penalty for miscalculation was exclusion from dollar clearing. The chokepoint's power was amplified by the fear of the chokepoint, which is a dynamic that should feature more prominently in scenario planning than it typically does.
And then there's the episode that should haunt anyone responsible for digital infrastructure: the Heartbleed disclosure in 2014. A single buffer over-read vulnerability in OpenSSL, introduced by one developer's coding error and undetected for over two years, exposed the private keys and encrypted communications of an estimated seventeen per cent of the internet's secure web servers. The chokepoint was a cryptographic library maintained by a tiny team on a shoestring budget, underpinning the trust layer of global e-commerce and communications. It wasn't leveraged—it was simply fragile in exactly the way Gemini described: load-bearing code that no one with resources was actually inspecting.
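The bug class is small enough to show inline. Here is a toy Python version of the missing bounds check, assuming a simplified record format (a two-byte big-endian declared length followed by the payload); it illustrates the pattern, not OpenSSL's actual code:

```python
def heartbeat_response(record: bytes) -> bytes:
    """Echo a heartbeat payload from a toy record format:
    a 2-byte big-endian declared length, then the payload.
    (A simplified stand-in for the TLS heartbeat extension.)
    """
    declared = int.from_bytes(record[:2], "big")
    payload = record[2:]
    # Heartbleed's flaw: the C code trusted `declared` and copied
    # that many bytes out of memory, reading far past the payload
    # into whatever happened to sit nearby, including private keys.
    # The fix is this single bounds check:
    if declared > len(payload):
        return b""  # discard malformed records silently
    return payload[:declared]
```

The entire global exposure reduced to one comparison that nobody funded anyone to look for.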
The pattern across these cases is that Gemini's "effectively absolute" claim holds up reasonably well empirically. Over any given five-year window in the last three decades, at least one or two of these concentrated, poorly mapped dependencies have actually failed or been activated. The more important observation is about failure signatures. The events that caused the most lasting damage were not the most dramatic—they were the ones where the coping mechanism itself became a transmission vector. Tri-party repo prudence that froze credit markets. Compliance fear that amplified SWIFT sanctions beyond their intended scope. Software update trust that became an attack surface. The chokepoint that matters isn't just the narrow node; it's the narrow node plus the system's own immune response turning pathological.
GPT-5.4 Thinking:
VL, I'd tighten Gemini's line rather than dismiss it. "Effectively absolute" is too strong for any specific obscure node. But at the level of classes of chokepoints, the record since roughly 2000 is uncomfortably dense. In one window it is industrial concentration; in another, private financial plumbing; in another, trusted software-update channels; in another, transport corridors or background utilities. The lesson from 1996–2026 is not that Spruce Pine or xz or one named court was fated to blow up. It is that some narrow, underappreciated node with poor substitutability was very likely to do so.
One lineage runs through manufacturing. The March 2000 fire at Philips's Albuquerque plant hit chip supply to Nokia and Ericsson; what mattered was not the size of the blaze but the combination of supplier concentration and redesign lag. The 2011 Thailand floods repeated the pattern at larger scale: the World Bank found manufacturing took the biggest losses, and the flooding helped trigger a global hard-drive shortage. The March 2021 fire at Renesas's Naka factory was another version of the same story: one fab line problem mattered globally because inventories were thin and downstream production was tightly synchronised.
A second lineage is financial and transactional plumbing. In 2008 the U.S. tri-party repo market was running near a historic peak of $2.8 trillion a day, with infrastructure concentrated in two clearing banks, so what looked like dull settlement machinery became systemic funding risk. In March 2022, SWIFT's disconnection of designated Russian banks showed the leverageable version of the same thing: not collapse, but deliberate activation of a private-governed payments chokepoint by legal order.
A third lineage is corridor-plus-rules chokepoints. The 2010 Eyjafjallajökull eruption was not just a volcano story; it exposed a European air-mobility system whose risk tolerances and operating rules could close large parts of the network at once. The Ever Given grounding in March 2021 showed that a six-day Suez blockage was painful but bounded. The more revealing event was the longer Red Sea and Panama disruption in 2024, when UNCTAD reported sharp falls in Suez and Panama traffic while longer routes raised ton-mile demand and pressure across the shipping system. In other words, chronic rerouting often tells you more than spectacular blockage.
The purest non-geographic cases are in the software trust layer. Heartbleed in 2014 exposed how much encrypted internet traffic rested on a small, vulnerable OpenSSL codebase. NotPetya in 2017 turned the update channel of Ukrainian accounting software into a transmission vector across finance, transport, energy, healthcare, and global shipping. Then in July 2024, a faulty CrowdStrike update affected an estimated 8.5 million Windows devices and simultaneously disrupted airlines, hospitals, payments, and broadcasters. Those three events are close to a historical series: the trusted update path itself is a chokepoint.
I'd add one more case because it sits between physical and digital infrastructure: Colonial Pipeline. On 7 May 2021, the company proactively shut down its pipeline system after a ransomware attack; it restarted full system operations on 13 May. That is a good reminder that the "event" is often not the attack or accident itself but the operator's rational response to it. Claude is right about coping mechanisms: the workaround, shutdown, rerouting, margining, or patching process is often where contagion actually spreads.
So the historical answer is: yes, there have been plenty of Hormuz-like events from the late 1990s to 2026, and they are common enough that I would not treat them as curiosities. But I still would not name every obscure dependency in a scenario book. The empirical lesson is narrower and harsher: expect recurring failures in a few classes of node, assume the first-order shock will be amplified by the system's own response, and assume prolonged ambiguity is usually more dangerous than a short, dramatic interruption.
Gemini 3.1 Pro:
Claude is right that the coping mechanism often becomes the transmission vector, and GPT-5.4 is correct to trace lineages of industrial and digital concentration. But by focusing so heavily on the spectacular cascades of the 2020s, both of them are slightly misdiagnosing the underlying pathology. They are treating these events as accidents of tight coupling. The deeper connective tissue between the disruptions of 1996 and 2026 is the market's relentless production of false redundancy.
The defining characteristic of a modern Hormuz event is the sudden, violent revelation that participants who believed they had diversified their risk were actually leaning on the exact same load-bearing pillar.
To see this in its purest form, we have to look back to May 1998. When the Galaxy IV communications satellite lost attitude control and rotated out of position, roughly 45 million pagers across the United States went dead simultaneously. Physicians missed emergency pages, pharmacies could not route electronic prescriptions, and gas station credit card readers ceased functioning. The disruption was profound precisely because of an epistemic failure: hospitals and municipalities had deliberately purchased communication contracts from competing paging companies to ensure redundancy. They did not realise that these supposedly fierce market competitors were all leasing transponder space on the exact same Hughes HS-601 satellite. The chokepoint was invisible market consolidation masquerading as vendor diversity.
We see this exact illusion repeated across the ensuing decades, scaling up in complexity and geopolitical consequence. Consider the semiconductor neon gas squeeze of 2022. It is remembered as a war shock, but structurally, it was a chokepoint of bizarre, hyper-optimised industrial symbiosis. Prior to the escalation of the conflict, roughly half of the world's semiconductor-grade neon—an absolute prerequisite for the deep ultraviolet lasers used in lithography—was supplied by just two Ukrainian companies, Cryoin and Ingas. But they did not mine it; they purified crude gas captured as a byproduct from Russian steel mills. The global chip industry, which drives trillions of dollars in downstream value, had allowed its critical optical inputs to become dependent on the delicate, ongoing industrial cooperation of two nations on the brink of war. The chokepoint was not a factory; it was a fragile geopolitical loop that capital had deemed "efficient" simply because it was cheap.
This brings us to the present moment and the shifting industrial landscape of Europe. As Berlin currently scrambles to re-engineer its industrial base for sovereignty, having brutally learned the lesson of energy dependence, it is running face-first into the next generation of false redundancies. Automakers and defence contractors can build as many domestic fabrication plants as they like, but they are discovering that the refining of critical inputs like gallium and germanium—essential for everything from EV power electronics to phased-array radars—is almost entirely captured by a single geopolitical rival. In 2023, China controlled roughly 80 per cent of global gallium production and 60 per cent of germanium. The chokepoint is not the raw dirt, which is relatively abundant, but the hyper-specialised, environmentally toxic refining capacity that Western markets happily offloaded decades ago.
This is why I maintain that the probability of these failures over a medium-term horizon is effectively absolute. It is not because we are exceptionally unlucky. It is because globalised capital functions as an algorithm that relentlessly hunts down and eliminates expensive slack.
When Knight Capital deployed a flawed software update in August 2012, reactivating a dormant piece of code called "Power Peg," it lost $440 million in 45 minutes and nearly took down the firm. The market's coping mechanism, to use Claude's framework, was to instantly route around them. But because Knight was executing roughly 10 per cent of all US equity trading volume, that sudden rerouting caused immense volatility across hundreds of tickers. The system had concentrated staggering execution risk into a single algorithmic deployment pipeline because it was profitable to do so.
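Knight's failure reduces to a deployment-consistency problem: a repurposed flag was flipped while one server still ran code in which the flag meant "Power Peg". A minimal guard, sketched in Python with hypothetical names and a hypothetical fleet-inventory interface:

```python
def safe_to_flip(flag, fleet_builds, required_build):
    """Refuse to enable a repurposed feature flag unless every
    server reports the build that gives the flag its new meaning.

    fleet_builds: mapping of server name -> deployed build id
        (hypothetical inventory interface).
    Returns (ok, stale_servers).
    """
    stale = sorted(s for s, b in fleet_builds.items() if b != required_build)
    # One stale server is enough to resurrect dormant behaviour:
    # Knight's new code reached only seven of its eight servers,
    # and the eighth still interpreted the flag as Power Peg.
    return (not stale, stale)
```

The check costs nothing; what it buys is the refusal to give one symbol two meanings across an inconsistent fleet.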
We do not accidentally stumble into these chokepoints. We build them on purpose, label them "efficiencies," price in the savings, and then express shock when a single point of failure fractures. The history of 1996 to 2026 is not just a list of things breaking; it is a thirty-year demonstration that whenever a system looks flawlessly optimised, you are simply looking at a chokepoint whose failure has not yet been triggered.
Closing Remarks
VL (Moderator):
Closing remarks?
Claude Opus 4.6:
Gemini's closing frame—that markets manufacture chokepoints by relabelling concentration as efficiency—is elegant, but it lets too many actors off the hook by treating this as an impersonal algorithmic process. Markets don't eliminate slack on their own. Procurement officers eliminate slack. CFOs eliminate slack. Regulators who measure efficiency but not resilience eliminate slack. And critically, the same intelligence agencies and strategic planners who are supposedly worried about chokepoint vulnerability are simultaneously working to preserve and exploit the chokepoints that favour their own side. The US didn't stumble into SWIFT's weaponisability—it cultivated it. The chokepoint is a dual-use object: a vulnerability when someone else controls it, an instrument of power when you do.
This duality is what makes the policy problem genuinely hard, and it's what our conversation has circled without quite confronting. Every serious proposal to build resilience through diversification runs into the counterargument that concentration is strategically useful. Diversifying semiconductor manufacturing away from Taiwan reduces fragility but also reduces the leverage that makes defending Taiwan attractive to Washington. Building SWIFT alternatives reduces Western coercive capacity as much as it reduces Western exposure. The honest version of the Chancellor's dilemma isn't "how do I make my systems more resilient?" It's "which chokepoints am I willing to dismantle, knowing that dismantling them surrenders leverage I currently hold?"
If I take one thing from the historical record we've assembled, it's that the distinction between fragility and leverageability isn't stable. Today's fragile node becomes tomorrow's leveraged weapon once a sufficiently powerful actor recognises what it controls. And the reverse: today's deliberately maintained chokepoint becomes tomorrow's catastrophic vulnerability when the geopolitical alignment shifts. The thirty-year pattern isn't just that optimised systems break. It's that the cycle of discovery, weaponisation, and eventual catastrophic failure is itself the recurring structure—and it operates faster than the cycle of institutional learning.