Essay 6 of 7

Counterforces

Regulation, network effects, physical constraints, and institutional inertia — the forces pushing back against hypercompetition, and how strong they actually are.

18 min read

If the thesis so far is right — if intelligence on tap is producing hypercompetition as a structural condition, not a one-time event — then the most intuitive counterforce is government. Surely regulators will step in. They always do.

The question is when. And the historical record on that question is consistent.

The lag

Railroad monopoly abuses emerged in the late 1860s and 1870s — discriminatory pricing, rebate schemes, rate manipulation that ruined farmers and small shippers. The Interstate Commerce Act arrived in 1887, roughly fifteen to twenty years later, and only because the Supreme Court forced Congress’s hand. In Wabash, St. Louis & Pacific Railway v. Illinois (1886), the Court struck down state railroad regulation in a 6-3 decision, ruling that only the federal government could regulate interstate commerce. The ruling created a vacuum that Congress had to fill. Even then, the resulting Interstate Commerce Commission was largely toothless for its first decade. Richard Olney — a railroad attorney who later served as Attorney General under Cleveland — described the Commission in an 1892 letter as “of great use to the railroads,” a body that “satisfies the popular clamor for government supervision” while that supervision remained “almost entirely nominal.”

The pattern repeats. Standard Oil signed its trust agreement on January 2, 1882 — forty-one investors pooling securities of forty companies into a single holding agency managed by nine trustees. The breakup came in 1911. Twenty-nine years.

Malcom McLean put the first container ship to sea in 1956. The Shipping Act of 1984 was the first major regulatory update to accommodate containerized shipping — twenty-eight years later.

The internet commercialized in the mid-1990s. Comprehensive federal regulation of online platforms remains unfinished more than thirty years later.

High-frequency trading’s systemic risks became publicly visible on May 6, 2010, when the Flash Crash erased nearly a trillion dollars in market value in thirty-six minutes. Eric Budish and colleagues proposed a structural fix in 2015 — frequent batch auctions that would eliminate the arms race. Never implemented. The SEC adopted tick-size reforms in September 2024, fourteen years after the problems were visible. The compliance deadline was November 2025. It was then delayed to November 2026. The structural reform Budish proposed — the one that would actually address the underlying dynamic — remains academic.

The fastest example in the current cycle is the EU AI Act: proposed April 2021, with high-risk system requirements taking effect August 2026. Roughly five years from proposal to enforcement — a pace that European regulators consider fast. In AI development time, five years spans the entirety of the large language model revolution. GPT-3 to GPT-4 to open-source parity to multimodal agents — all of it happened inside the EU’s legislative timeline.

| Technology | Problem Visible | Regulation Enacted | Gap |
| --- | --- | --- | --- |
| Railroads | Late 1860s-1870s | 1887 (ICC Act) | ~15-20 years |
| Standard Oil | 1882 (trust formed) | 1911 (breakup) | 29 years |
| Containerized shipping | 1956 | 1984 (Shipping Act) | 28 years |
| Internet | Mid-1990s | Incomplete | 30+ years |
| HFT | 2010 (Flash Crash) | 2024 (tick-size rules, delayed) | 14+ years |
| AI (EU) | ~2021 | 2026 (high-risk enforcement) | ~5 years |
| AI (US federal) | ~2022 | None | Ongoing |

The pattern is not ambiguous. Regulation arrives a decade or more after the competitive dynamics are already entrenched. By the time rules take effect, the market has already restructured around the new reality.

The complication

The story is more complicated than “regulation is slow.” The deeper problem is what regulation does when it arrives.

George Stigler’s “Theory of Economic Regulation,” published in 1971, argued that regulation is not a public-spirited correction of market failure. It is a product — one that industries actively seek. “As a rule,” Stigler wrote, “regulation is acquired by the industry and is designed and operated primarily for its benefit.” The logic is structural: compact producer groups with high stakes per firm can organize and lobby efficiently. Consumers, bearing small individual costs spread across millions of people, cannot.

Thomas Philippon, in The Great Reversal (2019), documented how this dynamic plays out at scale. Across time, states, and industries, he found that corporate lobbying and campaign contributions lead to barriers to entry and regulations that protect large incumbents — not the competitive pressure that regulation is ostensibly designed to address.

The evidence in AI follows the pattern exactly. In May 2023, Sam Altman testified before the Senate Judiciary Committee and called for a new federal agency with the power to grant and revoke licenses for AI models above a certain capability threshold. The framing was responsible: safety standards, independent audits, government oversight. The structural effect, if implemented, would be a licensing regime that only a handful of well-funded organizations could afford to navigate — exactly the kind of entry barrier that intelligence on tap is eroding elsewhere. Critics called it a “power grab” designed to lock in OpenAI’s lead.

OpenAI’s lobbying spend tells its own story. $260,000 in 2023. $1.76 million in 2024. $2.1 million in the first nine months of 2025. Meta spent a record $19.7 million on federal lobbying in the same period. In total, over 500 organizations lobbied Congress on AI in the first half of 2025 — double the number from two years earlier.

The GDPR provides a case study of what this produces in practice. Johnson, Shriver, and Goldberg, in a 2023 Management Science paper, found that GDPR enforcement increased the concentration of the web technology vendor market by 17%. Websites dropped smaller vendors and consolidated around Facebook- and Google-owned services — the largest players, the ones best equipped to absorb compliance costs. In a separate study published in the American Economic Journal: Economic Policy (2024), the same authors estimated that GDPR’s distortionary effects functioned like a 25% tax on the smallest firms, with monotonically decreasing impact as firm size increased. The regulation designed to protect consumers from large data companies made those companies more dominant.

The EU AI Act shows early signs of the same dynamic. Estimated high-risk AI system compliance costs range from $2-5 million for mid-size firms to $8-15 million for large enterprises, with small and medium businesses needing to dedicate up to 30-40% of their technical capacity to compliance documentation. An analysis published in the Harvard Kennedy School Student Policy Review found that a 200% increase in fixed compliance costs transforms a startup’s operating margin from positive 13% to negative 7%. The regulation constrains everyone. It constrains new entrants more.
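The cited margin flip is plain fixed-cost arithmetic, and it is worth making explicit. A minimal sketch with illustrative figures assumed here — a $10 million-revenue startup whose fixed compliance costs start at 10% of revenue, numbers chosen to be consistent with the +13% to −7% swing, not taken from the Kennedy School analysis itself:

```python
def operating_margin(revenue, other_costs, compliance_fixed):
    """Operating margin as a fraction of revenue."""
    return (revenue - other_costs - compliance_fixed) / revenue

# Illustrative P&L in $ millions (assumed figures, not from the cited study):
revenue, other_costs, compliance = 10.0, 7.7, 1.0

before = operating_margin(revenue, other_costs, compliance)
after = operating_margin(revenue, other_costs, compliance * 3)  # a 200% increase

print(f"before: {before:+.0%}, after: {after:+.0%}")  # before: +13%, after: -7%
```

The general point survives any particular choice of numbers: a firm whose fixed compliance bill is a meaningful fraction of revenue sits one small multiplier away from unprofitability, and that fraction shrinks as firms grow — which is why the burden lands on the smallest entrants.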

The honest assessment

Regulation is real. It will come — in fact, it is already arriving, unevenly and imperfectly. Over a thousand AI-related bills were introduced across US states in 2025 alone. California, Texas, and Illinois enacted significant AI legislation effective January 2026. The EU’s high-risk provisions take effect in August.

But the historical record is consistent on three points. First, regulation arrives after the competitive dynamics are already entrenched — a decade or more, every time. Second, when it arrives, compliance costs exhibit economies of scale that function as barriers to entry, constraining new entrants more than incumbents. Third, the industries being regulated actively shape the rules to serve their interests, not to restore the competitive conditions that existed before.

None of this means regulation is irrelevant. In healthcare, finance, and safety-critical systems, regulatory barriers are real and will slow the pace of AI-driven competition meaningfully. The FDA’s 10-15 year approval process, with its 90% clinical failure rate, is not going to compress because AI can generate molecular candidates faster. Financial services regulation, for all its capture dynamics, still requires licensing, capital requirements, and compliance infrastructure that cannot be skipped.

But regulation as a general counterforce to hypercompetition — the idea that governments will step in and restore equilibrium — does not survive contact with the historical evidence. Regulation reshapes hypercompetition. It channels it, constrains it in specific sectors, raises the floor in others. It does not prevent it.

The regulators are coming. By the time they arrive, the market will have already moved.


Regulation is one counterforce. But the more common defense is structural: network effects, data moats, physical constraints, brand. These don’t depend on government action. They exist in the market itself — embedded in infrastructure, in user behavior, in the stubborn physics of the real world.

The question is whether they hold under sustained pressure.

Network effects under pressure

The standard framework for evaluating network effects comes from Eisenmann, Parker, and Van Alstyne, whose 2006 Harvard Business Review analysis identified three conditions for winner-take-all dynamics: the effects must be strong and positive, multi-homing costs must be high, and users must have relatively homogeneous needs. When any condition breaks, the market fragments rather than tips.

Apply those conditions to the most visible AI market. ChatGPT held 87.2% of global AI chatbot traffic in January 2025. By January 2026, that share had fallen to 68% — a 19-point decline in twelve months — as Google Gemini surged to 18.2%, with Grok, Perplexity, and Claude splitting the remainder.

The decline is worth studying not because ChatGPT is failing, but because it shows where the framework cracks. AI chatbots have no cross-side network effects: one user’s presence does not make the product more valuable for another. Multi-homing costs are zero — switching between ChatGPT and Claude takes seconds. And users have sharply differentiated needs — coding, creative writing, research, analysis — that no single model optimally serves. By the Eisenmann framework, the AI chatbot market was never a candidate for winner-take-all. The market share OpenAI built was a first-mover artifact, not a structural moat.

BlackBerry’s collapse provides the historical parallel. At its peak market share in 2009, BlackBerry held roughly 56% of the US smartphone market and 20% globally. The BlackBerry Enterprise Server gave IT departments centralized device management, corporate email integration, and security controls — institutional lock-in that looked unassailable.

What broke it was not a competing network. It was a competing product. The iPhone created consumer demand that bypassed IT purchasing departments entirely — the bring-your-own-device shift of 2010-2011. By 2012, iPhone and Android had surpassed BlackBerry in enterprise adoption. By the time BBM launched on iOS and Android in October 2013, WhatsApp already had 350 million users and growing. BlackBerry’s stock, which had peaked at $147.55 in 2008, fell to single digits.

The pattern: network effects hold when they are embedded in physical infrastructure or deep workflow integration. They fail when the value proposition is the quality of the product itself — because a better product can overcome switching costs.

Amazon’s marketplace is the durable version. Third-party sellers accounted for 62% of units sold in Q4 2024. Seller services revenue reached $172 billion in 2025. The flywheel — more sellers attract more buyers, more buyers attract more sellers — is reinforced by physical logistics infrastructure: fulfillment centers, delivery networks, inventory systems that take years and billions of dollars to replicate. AI has not eroded this moat because the network effect is tied to atoms, not information.

Apple’s ecosystem is similar. The iPhone retains 89-92% of its customers, depending on carrier conditions. But the moat is not a network effect in the classical sense — it is ecosystem switching cost. iMessage, AirDrop, Apple Watch, AirPods, iCloud, and accumulated app purchases create multi-product lock-in that no single competing product can dissolve.

The distinction matters. Network effects built on content, traffic, or user counts are vulnerable to AI substitution. Network effects built on physical infrastructure or ecosystem integration remain intact — for now.

The data question

Data moats follow the same bifurcation.

Hal Varian, Google’s chief economist, called scale arguments about data advantages “pretty bogus,” arguing that after a certain sample size, competitive advantage comes from smarter algorithms, not more data. Andreessen Horowitz’s Martin Casado and Peter Lauten made the case formally in 2019: the cost of adding unique data goes up while the value of incremental data goes down — inverted economies of scale. And the Google “We Have No Moat” memo, leaked in May 2023, conceded the point from inside the company: “The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

BloombergGPT is the cautionary tale. Bloomberg spent an estimated $2.7-10 million training a 50.6 billion parameter model on 363 billion proprietary financial tokens. When GPT-4 launched shortly after, it outperformed BloombergGPT on most financial tasks — including the FinQA benchmark — with zero specialized financial training. Scale and generality beat specialization and proprietary data.

But Tempus AI tells the opposite story. Its library of 40 million de-identified clinical records, 4 million sequenced samples, 7 million digitized pathology slides, and 350 petabytes of connected clinical and molecular data creates a flywheel that general-purpose models cannot approximate. More physicians use Tempus, generating more patient data, which improves the AI models, which attracts more physicians. Revenue grew 82% year-over-year in 2025, reaching roughly $1.26 billion.

The difference is not the volume of data. It is the irreproducibility of the collection process. Bloomberg’s financial data exists in structured form across dozens of terminals and databases — a larger model can approximate the same information from public sources. Tempus’s clinical data was assembled through partnerships with over 4,500 hospitals, each requiring regulatory compliance, institutional trust, and years of relationship-building. That cannot be accelerated with compute.

Where atoms rule

Physical constraints are the strongest counterforce, and it is not a close call.

US construction labor productivity fell more than 30% from 1970 to 2020, according to the Richmond Fed. By 2023, despite every wave of computerization, automation, and software tools, construction productivity was essentially where it had been in 1948. Over the same period, overall US labor productivity tripled. The Richmond Fed estimates the cumulative economic cost at roughly a trillion dollars every five years.

Manufacturing is harder than design. Elon Musk has stated this with unusual precision: “It’s 1,000% to 10,000% harder than making a few prototypes. The machine that makes the machine is vastly harder than the machine itself.” This is not rhetoric. It is the lived experience of Tesla’s “production hell,” where the gap between a working prototype and a reliable production line consumed years and billions.

Robotics faces the same asymmetry. Oliver Hsu’s analysis for Andreessen Horowitz in January 2026 identified six specific deployment gaps between laboratory demonstrations and production reliability. A picking robot achieving 95% success in the lab would fail approximately 50 times per day in production — far below the 99.9% reliability that operations require. Andrej Karpathy’s “march of nines” captures why this is so hard: each additional nine of reliability — from 90% to 99%, then 99% to 99.9% — requires roughly as much engineering effort as the previous nine. Each successive nine costs the same effort but eliminates ten times fewer failures than the one before. Budgets rarely account for that. Bain’s assessment of humanoid robotics in 2025: “While demonstrations dazzle, most deployments remain early-stage, with heavy reliance on human supervision.” Most humanoids operate for about two hours per charge.
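The lab-to-production gap is easy to quantify. A minimal sketch, assuming a line doing 1,000 picks per day — a throughput chosen here because it matches the ~50 failures/day figure above, not a number from Hsu’s analysis:

```python
# Expected daily failures at each reliability level.
# picks_per_day is an assumption chosen to reproduce the essay's
# "~50 failures/day at 95% success" figure.
picks_per_day = 1_000

for reliability in (0.90, 0.95, 0.99, 0.999):
    failures = picks_per_day * (1 - reliability)
    print(f"{reliability:.1%} reliable -> ~{failures:.0f} failed picks/day")
```

Going from 95% to 99.9% removes forty-nine of the fifty daily failures — but by the march-of-nines logic, getting there costs roughly two more full increments of engineering effort, each as expensive as everything that came before.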

In pharmaceuticals, biology imposes its own timeline. The FDA approval process from concept to market takes 10-15 years. The overall clinical failure rate — from Phase I entry to final approval — is approximately 90%. AI has compressed early discovery: Insilico Medicine moved rentosertib from target identification to Phase I in 18 months rather than the industry’s typical 42. But clinical development remains bound by biology — patient enrollment, dosing intervals, disease progression monitoring, regulatory review — and no AI-discovered drug has received FDA approval.

Brand as irreducible scarcity

In a world of commodity outputs, does brand become more or less valuable?

Hermès provides the clearest test. In its 2024 full-year results, the company reported EUR 15.2 billion in revenue — up 15% at constant exchange rates — with a 40.5% operating margin. More than seven thousand craftspeople, trained through multi-year apprenticeship programs across a network of in-house schools, produce leather goods that are 100% made in France. The company is opening its 23rd leather goods workshop. Not automating its 22nd.

The brand premium is built on something AI cannot replicate: irreducible human scarcity. A Birkin bag is expensive not despite taking hours of hand-stitching by a single artisan, but because of it. The constraint is the product.

Consumer data suggests Hermès is not an outlier in the broader logic, even if it is an outlier in execution. Only 32% of Americans trust AI, according to the 2025 Edelman Trust Barometer. A KPMG study of 48,000 people across 47 countries found that just 46% are willing to trust AI systems. NielsenIQ ran EEG studies on consumers viewing AI-generated advertisements and found weaker memory activation compared to traditional ads — people respond to perceived authenticity at a neurological level, whether they know it or not.

Trust, like physical constraints, is slow to change. But the real question is whether trust is a counterforce to hypercompetition or merely a lag. The 79% of Americans who prefer human interaction over AI today will not all hold that preference in five years. Trust erodes from the edges: younger consumers are more accepting, repeated positive experiences normalize AI interaction, and competitive pressure rewards firms that adopt it regardless of what surveys say.

Brand is real. Trust is real. But they constrain the pace of hypercompetition more than they constrain its direction.


The counterforces are real. But real is not the same as sufficient.

The honest exercise is to sort them — not into “important” and “unimportant,” but into three categories based on how they interact with the underlying dynamic. Some will slow hypercompetition in ways that matter. Some are real but eroding under your feet. And some are stories people tell themselves because the alternative is uncomfortable.

Likely real

Physical constraints are the strongest barrier, and nothing in the current trajectory of AI changes that.

US construction labor productivity has declined more than 30% since 1970, falling through the personal computer revolution, the internet, mobile, cloud computing, and now AI. The FDA’s approval process remains 10-15 years from concept to market, with a clinical failure rate around 90% — and AI-accelerated drug discovery has not yet demonstrated it can change that number. No AI-discovered compound has received FDA approval. Robotics faces what Andrej Karpathy calls the “march of nines” — each additional nine of reliability demands as much engineering effort as the last, and production environments need 99.9% reliability that lab demonstrations routinely miss. These are not barriers that intelligence on tap erodes. They are barriers imposed by physics, biology, and the irreducible difficulty of manipulating the physical world.

Regulatory barriers in safety-critical domains are similarly durable — not because regulation is fast (the first page of this essay established it is not) but because in healthcare, finance, aviation, and nuclear power, the existing regulatory infrastructure runs deep enough that compliance itself is a structural barrier. Neural network outputs remain infeasible to formally verify under the deterministic safety standards that govern these industries. The regulatory apparatus is not blocking AI. It is channeling AI into incremental applications within existing frameworks — a fundamentally different dynamic than the greenfield competition playing out in software.

Craftsmanship-scarcity moats hold where the constraint is the product itself. Hermès’s 40.5% operating margin in a contracting luxury market is not threatened by AI because the value proposition is irreducible human labor — thousands of craftspeople trained through multi-year apprenticeships, producing goods that are expensive because they are slow. The scarcity is the point. This logic extends beyond luxury goods to any domain where consumers pay a premium specifically for human involvement: certain professional services, artisanal production, live performance, therapeutic relationships. The category is real but narrow.

Trust deficits in high-stakes domains impose a different kind of friction. The Edelman and KPMG numbers from the previous pages tell a consistent story — low trust, and getting lower with exposure rather than higher. In healthcare and financial advice, where errors carry existential consequences, that trust deficit functions as a genuine adoption barrier. People will not hand their diagnoses or retirement portfolios to systems they do not trust, regardless of what the systems can do.

Partially real, eroding

Network effects are the most important category to get right, because the answer is genuinely conditional.

The durable version: Amazon’s marketplace flywheel, reinforced by $172 billion in seller services revenue and physical logistics infrastructure that takes years and billions to replicate. Apple’s 89-92% customer retention, built on ecosystem switching costs that no single competing product can dissolve. These hold because the network effect is embedded in physical infrastructure or deep multi-product integration.

The fragile version: ChatGPT losing 19 points of market share in twelve months despite massive brand recognition and first-mover advantage. The Eisenmann framework explains the collapse — zero multi-homing costs, no cross-side network effects, and users with sharply differentiated needs all undermine winner-take-all dynamics. When the moat is product quality alone, a better product punches through it.

In markets where AI is the primary value proposition, network effects are unreliable. In markets where AI augments an existing physical or ecosystem moat, they remain intact. The problem for incumbents is that an increasing share of economic activity falls into the first category.

Brand in professional services is eroding faster than the firms acknowledge. McKinsey has cut roughly 5,000 jobs since 2023. The firm deploys thousands of AI agents internally, and two- to three-person teams are replacing fourteen-person engagement teams for certain classes of work. “Nobody gets fired for hiring McKinsey” still carries weight in high-stakes strategic decisions. It carries less weight in the analytical work that has historically comprised the majority of billable hours — the research, synthesis, and analysis that AI enables internal teams to do themselves.

The erosion of coordination costs is the subtlest shift. Brooks’s Law and Conway’s Law still constrain complex physical organizations — adding people to a late project still makes it later when the work involves atoms. But in software, the evidence is hard to ignore. Cursor reached $1.2 billion in annual recurring revenue with 300 employees. Lovable hit unicorn status in eight months with 45. Bolt reached $20 million in annual revenue in two months with 15 people. These are not outliers in a normal distribution. They are the leading edge of a structural shift in which AI-augmented small teams compete with organizations ten times their size — in software, in digital products, in any domain where the output is bits rather than atoms.

Likely cope

Any defense premised solely on the complexity of a digital task is cope.

“Our product is too complex to replicate.” “Our codebase is too large to rebuild.” “Our data pipeline is too sophisticated to reproduce.” Each of these was a reasonable defense when the cost of cognitive work was high. When that cost approaches zero, complexity of digital work becomes a feature for AI, not a barrier against it. A large, complex codebase is just more context for a system that processes context for a living. The open-source model performance gap — which has narrowed from roughly 8% behind closed models in early 2024 to under 2% by early 2025, according to LMSYS benchmarks — demonstrates that even scale in AI development itself is not a durable advantage.

Data moats without switching costs, network effects, or regulatory protection are cope. The previous pages laid out the evidence: Varian calling scale data advantages “pretty bogus,” BloombergGPT outperformed by GPT-4 with zero specialized training, Google’s own leaked memo conceding the barrier to entry had collapsed. Data is defensible when the collection process is irreproducible — Tempus AI’s 40 million clinical records assembled through 4,500 hospital partnerships. Data is not defensible when the information exists in structured form across accessible sources.

Scale as defense without other moats is cope. AI unicorns in 2025 reached billion-dollar valuations in roughly two years with an average of 200 employees. Cursor went from $100 million to $1.2 billion ARR in under twelve months. The minimum viable competitive effort has collapsed in any market where the primary output is software.

The honest verdict

The counterforces do not refute the thesis. They bound it.

Hypercompetition hits hardest where the work is cognitive and the output is digital. Services account for roughly 77% of US GDP. The sectors where physical constraints, entrenched regulation, and earned trust genuinely slow the dynamic are real — but they are shrinking relative to the economy as a whole.

The counterforces tell you where hypercompetition will be slower and where it will be faster. They do not tell you it won’t happen. The walls that remain standing are made of atoms, regulation, and trust. The walls made of information, complexity, and scale are already coming down.