Essay 1 argued that intelligence becoming cheap changes the conditions of competition itself. That claim needs to be made precise. What, exactly, does cheap intelligence do to market structure? Which competitive positions are weakened? Which hold? And how fast?
Answering those questions requires starting with something the technology industry uses constantly but rarely defines: barriers to entry.
What a barrier actually is
The phrase gets used loosely — as shorthand for “why it’s hard to compete with us” or “our moat,” or the even vaguer “defensibility.” In economic theory, the concept is more specific and more useful.
Joe Bain, in a 1956 study of twenty manufacturing industries, identified three structural sources of barriers to entry: economies of scale, absolute cost advantages held by incumbents, and product differentiation. His question was straightforward: what allows existing firms to earn above-normal profits without attracting new competitors? The answer was whatever made it expensive, difficult, or slow for a new firm to enter the market and compete effectively.
Michael Porter expanded Bain’s taxonomy in 1980 and again in a 2008 update. His list is the one most strategists carry around, whether they’ve read the original or not: supply-side economies of scale, demand-side benefits of scale (network effects), switching costs, capital requirements, incumbency advantages independent of scale (proprietary technology, favorable access to raw materials, accumulated experience), access to distribution channels, and government policy.
Seven categories. Each one a structural reason why a new entrant can’t simply show up and compete.
George Stigler offered a narrower definition in 1968, arguing that a barrier to entry is any cost of production that a new entrant must bear but an incumbent does not. Under Stigler’s definition, economies of scale are not a barrier — both the entrant and the incumbent face them. Only costs that fall asymmetrically on new entrants qualify. The distinction matters. Stigler’s version focuses attention on what is fundamentally different about being new versus being established — which turns out to be precisely the question AI forces us to reexamine.
These frameworks were built for a world where barriers were relatively stable. A pharmaceutical company’s patent portfolio didn’t evaporate overnight. A bank’s regulatory licenses weren’t suddenly available to a startup. A manufacturer’s economies of scale didn’t stop conferring an advantage because a new tool appeared. The barriers shifted over time, of course. New regulations, new technologies, new market entrants eroding what once seemed permanent. But the shifts were measured in years, sometimes decades.
The question this essay addresses is what happens when a large category of those barriers begins to erode over months instead of decades.
Not all barriers are made of the same material
Look at Porter’s list again and notice something.
Some barriers are made of knowledge and capital. Proprietary technology. Engineering talent. Design capability. The ability to build a working product, operate it at scale, and iterate on it faster than competitors. These are barriers that depend on the cost and availability of cognitive work — the thinking, designing, coding, analyzing, and problem-solving that turns capital into competitive products.
Other barriers are made of different stuff entirely. Government policy requires compliance with specific laws; you can’t think your way around an FDA approval process. Network effects depend on how many people already use a product; no amount of intelligence makes a social network useful if nobody is on it. Physical infrastructure requires atoms to be arranged in specific configurations: chip fabrication plants, data centers, distribution warehouses, power generation facilities. Trust is accumulated over time through consistent behavior and cannot be manufactured on demand.
This distinction is not standard in the economics literature. The usual taxonomies — Bain’s three categories, Porter’s seven, the structural-versus-strategic split common in industrial organization — slice the concept along different axes. But it is the distinction that matters most for understanding what AI does to competitive dynamics.
Intelligence on tap is a selective solvent. It dissolves barriers made of knowledge and capital. It leaves barriers made of regulation, physical infrastructure, and accumulated trust largely intact.
When the OECD analyzed AI and competitive dynamics in downstream markets in late 2025, they documented exactly this selective erosion. AI “lowers capital and labour thresholds for market entry,” they found, “enabling micro-vendors to contest markets traditionally dominated by larger incumbents.” GenAI “can automate cognitive tasks, particularly routine or repetitive tasks, lowering the skills threshold for market participation.” In sector after sector — creative industries, software development, professional services — smaller firms were accessing tools and capabilities “previously reserved for larger incumbents.”
But the same report noted that in sectors like finance, health, and retail, proprietary data held by incumbents still conferred advantages that smaller firms struggled to match. Physical infrastructure requirements hadn’t changed. Regulatory compliance costs hadn’t fallen. In some cases, new regulations — the EU AI Act, for example — were creating additional barriers that disproportionately burdened entrants.
The pattern is not “AI lowers all barriers.” The pattern is “AI lowers barriers built on knowledge and capital while leaving barriers built on regulation, physical infrastructure, and trust intact — and in some cases raising them.”
The sunk cost question
This selective erosion connects to an insight from William Baumol that has been underappreciated for more than forty years.
In 1982, Baumol introduced the concept of contestable markets in his presidential address to the American Economic Association. His definition was deceptively simple: “A contestable market is one into which entry is absolutely free, and exit is absolutely costless.”
The key insight was not about entry costs in general. It was about a specific category of entry cost: sunk costs — expenditures that cannot be recovered if the entrant exits the market.
Fixed costs, even large ones, are not the problem. An airline can buy a fleet of aircraft and, if the route doesn’t work, redeploy or sell the planes. The investment is large but recoverable. A pharmaceutical company that spends a billion dollars or more developing a drug that fails Phase III trials cannot recover that investment. The knowledge generated has no resale value outside the specific regulatory pathway. That spending is sunk.
Baumol’s argument was that markets with high fixed costs can still be perfectly contestable, so long as those costs are not sunk. What makes markets resistant to competition isn’t the size of the upfront investment — it’s the recoverability of that investment. When sunk costs are low, even the threat of a new entrant disciplines incumbents. Baumol called this “hit-and-run” entry: a firm enters, competes, and if conditions turn unfavorable, exits without catastrophic loss. Incumbents, knowing this is possible, cannot afford to price above competitive levels.
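Baumol’s logic can be made concrete with a toy entry-decision model. This is a sketch under purely illustrative numbers, not drawn from any real market: a firm enters when the expected payoff on success outweighs the sunk portion of the investment lost on failure.

```python
def enter_market(expected_profit, upfront_cost, recoverable_fraction, p_success):
    """Toy hit-and-run entry decision: enter when the expected profit on
    success outweighs the unrecoverable (sunk) loss on failure."""
    sunk = upfront_cost * (1 - recoverable_fraction)
    return p_success * expected_profit - (1 - p_success) * sunk > 0

# Airline-style entry: large but mostly recoverable investment (planes resell).
airline = enter_market(expected_profit=30e6, upfront_cost=100e6,
                       recoverable_fraction=0.9, p_success=0.3)

# Pharma-style entry: a failed trial recovers almost nothing.
pharma = enter_market(expected_profit=2000e6, upfront_cost=1000e6,
                      recoverable_fraction=0.0, p_success=0.3)
```

Under these illustrative numbers the airline enters (only a tenth of the investment is at risk) while the pharma entrant stays out despite a far larger prize, because nearly the full billion is sunk. Shrink the sunk fraction and entry becomes rational at ever smaller expected payoffs, which is Baumol’s point.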
This is the framework that makes the current moment legible.
What AI is doing to software-centric markets, quietly and steadily, is driving sunk costs toward zero. The engineering team that once required eighteen months and millions of dollars to build a minimum viable product? A team of four can now ship a comparable product in weeks, at a cost that barely registers as sunk. If the product fails, the team has lost weeks of effort and a few thousand dollars in API credits — not the kind of irrecoverable investment that deters entry.
The OECD recognized this directly: an early-stage SaaS company can now release a minimum viable product with a handful of engineers, where historically it would have taken a much larger team. When the capital required to enter a market drops from millions to thousands, and the time required from years to months, and the knowledge required from deep domain expertise to competence with general-purpose tools, the sunk costs of entry approach the conditions Baumol described.
The market doesn’t need to be flooded with competitors to change behavior. It needs to be credibly flood-able.
That is the mechanism. The next three sections examine it in detail: what falls, what doesn’t, and what happens when the erosion compounds.
The previous section drew the distinction: barriers made of knowledge and capital versus barriers made of regulation, physical infrastructure, and trust. Intelligence on tap dissolves the first category. Here is that erosion in four dimensions, with evidence that is already measurable.
Knowledge barriers
The most direct effect of cheap intelligence is on the cost of knowing things.
For decades, the ability to write production-quality software was a genuine barrier to entry. Learning to code well enough to ship a product took years. Hiring people who could do it cost six figures per head. The knowledge itself, how to architect systems, manage databases, handle edge cases, deploy reliably, was hard-won and expensive to acquire.
That barrier is eroding fast.
In March 2025, Y Combinator president Garry Tan reported that 25% of the Winter 2025 batch had codebases that were 95% AI-generated. YC managing partner Jared Friedman clarified the detail: these were “highly technical founders who are completely capable of building their own products from scratch,” but a year earlier they would have done exactly that. Now the AI writes the code while the founders focus on product decisions and market strategy. The result, according to YC, was the “fastest growing, most profitable” batch in the fund’s history.
And this isn’t limited to startup founders who already know how to program. Andrej Karpathy coined the term “vibe coding” in February 2025 to describe a practice already widespread: expressing intent in natural language and accepting AI-generated code without fully understanding every line. Non-programmers are building functional software products. The skill that once took years to acquire can now be approximated, imperfectly but sufficiently for a minimum viable product, in hours.
A study by Brynjolfsson, Li, and Raymond, published in The Quarterly Journal of Economics in 2025, examined 5,172 customer support agents using AI tools. Overall productivity rose 14%. But the distribution was sharply asymmetric: the least experienced and lowest-skilled workers saw productivity gains of 34-35%, while the most experienced workers saw only marginal improvement. The OECD’s synthesis of experimental studies found the same pattern across software development, consulting, and customer support: productivity gains of 5% to over 25%, concentrated among less-experienced workers.
The implication is direct. When AI compresses the gap between novice and expert performance, the knowledge advantage that incumbents spent years accumulating becomes less decisive. The entrant doesn’t need to match the incumbent’s expertise. They need to be good enough to ship, iterate, and learn. AI makes “good enough” dramatically cheaper to reach. The gains may even follow a U-shaped curve, with experts at the very top benefiting as well: Terence Tao, a Fields medalist, has made significant progress on problems from the Erdős problem set with the aid of AI models.
Capital barriers
Knowledge barriers and capital barriers are deeply intertwined. When building a product requires deep expertise, you have to pay for that expertise. When AI provides it at the cost of an API call, the capital requirements shift accordingly.
Stanford’s Human-Centered AI Institute reported in 2025 that the cost of querying an AI model performing at GPT-3.5 level, 64.8% accuracy on the MMLU benchmark, dropped from $20 per million tokens in November 2022 to $0.07 per million tokens by October 2024. A 280-fold reduction in roughly two years. And the trend is accelerating: Epoch AI found LLM inference prices falling at a median rate of roughly 50x per year, and closer to 200x per year since January 2024.
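As a sanity check on those figures, the total fold reduction and the implied annual rate can be computed directly. The prices and dates below are the Stanford HAI figures cited above; the annualization is a back-of-envelope calculation, not a number from the report.

```python
# Price per million tokens for GPT-3.5-level performance (Stanford HAI figures).
p_start, p_end = 20.00, 0.07       # November 2022 vs October 2024
years = 2.0                        # roughly two years between the data points

fold = p_start / p_end             # total fold reduction, ~286x
annual_fold = fold ** (1 / years)  # implied annual rate, ~17x per year
```

An implied rate of roughly 17x per year sits between Altman’s “10x every 12 months” and Epoch AI’s 50x median for the broader trend, which is plausible for a capability level that was commoditized early.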
This cost collapse shows up in startup economics in a straightforward way. What once required a six-figure infrastructure setup can now run on modest monthly API subscriptions. Carta’s 2025 Solo Founders Report found that the share of new startups with a solo founder rose from 23.7% in 2019 to 36.3% in the first half of 2025. More than one in three new companies. A founder who can access intelligence on tap doesn’t need a co-founder with complementary technical skills. They don’t need a seed round to hire an engineering team. The capital barrier that once required outside financing to clear can now be cleared with a credit card.
In Baumol’s terms, this is what it looks like when sunk costs approach zero. The founder who spends a few thousand dollars on API credits and a few weeks building an MVP has risked almost nothing irrecoverable. If the product fails, they walk away. If it works, they iterate. Hit-and-run entry, exactly as Baumol described.
Scale barriers
Capital and knowledge determine whether you can enter a market. Scale determines whether you can compete once you’re in. Historically, incumbents with thousands of employees and years of accumulated infrastructure could simply outproduce any small entrant. Intelligence on tap is compressing that advantage, and the numbers are hard to dismiss.
Midjourney generates an estimated $500 million in annual revenue with somewhere between 107 and 173 employees, zero external funding, and zero marketing spend. It competes directly with Adobe, which employs over 26,000 people.
But the more striking cases are the ones that moved fastest. Cursor, built by four MIT founders who incorporated in 2022, crossed $100 million in annual recurring revenue within twenty months of launch and reached $1 billion ARR by November 2025 with roughly 150 to 180 employees. Base44, built by a single founder using AI-assisted development, hit $1 million in ARR within three weeks of launch and was acquired by Wix for $80 million cash in June 2025. The company was six months old. Total external funding: zero. Lovable went from $1 million to $100 million in ARR in eight months, the fastest any software company has reached that milestone, and did so with fifteen employees during its period of explosive growth. What’s interesting is that most of these companies already appear to face steep competition of their own, and may soon be the disrupted rather than the disruptors.
Jeremiah Owyang of Blitzscaling Ventures quantified the broader pattern in May 2025. Analyzing the top ten AI-native startups against the top ten traditional SaaS companies, he found average revenue per employee of $3.48 million for AI-native companies versus $611,000 for SaaS. A 5.7x gap. The average AI startup team had 24 employees. The average SaaS company had 21,000.
When intelligence on tap allows a team of four to produce output that once required four hundred, the incumbent’s scale advantage becomes a scale liability: overhead without proportional output advantage. The selective solvent doesn’t just dissolve the barrier to entry. It dissolves the barrier to competition at scale.
Speed barriers
Even if knowledge, capital, and scale barriers all fell, a sufficiently slow market entry would give incumbents time to respond. But the barriers aren’t just falling. They’re falling fast.
Bolt.new, launched in October 2024, hit $4 million in ARR within its first thirty days and $20 million ARR in roughly sixty days. McKinsey reports that over 90% of software teams now use AI for coding activities, with development timelines shortening by up to 30%. Traditional MVP cycles of four to six months are compressing to two to six weeks. This cuts both ways, though: if you can build your MVP in two weeks, so can anyone else.
In July 2025, METR published the results of a randomized controlled trial in which sixteen experienced open-source developers completed 246 tasks on their own repositories, mature codebases averaging 22,000 GitHub stars and over a million lines of code. The developers using AI tools took 19% longer to complete their tasks. Before the study, they predicted AI would speed them up by 24%. After the study, they still believed it had sped them up by 20%. The gap between perception and measurement was one of the study’s most interesting findings.
But look at what the study actually tested: expert developers working on complex, mature codebases they already knew intimately. This is the scenario where AI adds the least value, where tacit knowledge, institutional memory, and deep familiarity with undocumented behavior matter most. One participant noted that “large open-ended greenfield stuff felt harder to legibilize for the study,” even though “AI speed up might have been larger.” METR itself acknowledged the limitation, noting in a February 2026 follow-up that results from “more diverse repositories, including smaller, more greenfield, and less mature repositories” were showing different patterns.
New entrants don’t work on mature codebases. They build greenfield. They scaffold from scratch, generate boilerplate, iterate on MVPs. The METR study doesn’t contradict the barrier-erosion thesis. It clarifies where the speed advantage concentrates: on exactly the kind of work that new market entrants do.
The compounding
These four dimensions don’t just add up. They compound. Cheaper knowledge means less capital required. Less capital means smaller teams, which means faster decisions, which means faster market entry. And faster market entry means less time for incumbents to respond before the next entrant arrives.
Each reinforcing loop tightens the cycle. But before concluding that all barriers are dissolving, the next section examines the ones that aren’t, and why the pattern of selective erosion matters as much as the erosion itself.
The selective solvent metaphor from the first section implies something important: selectivity means some things are left behind. If cheap intelligence dissolved all barriers to entry, the analysis would be simpler and the conclusion would be different. Every market would converge toward perfect competition and margins would approach zero everywhere. That is not what the evidence shows. Several categories of barrier remain stubbornly intact, and understanding which ones hold is as important as understanding which ones erode.
Regulation
Governments move slower than markets, and their rules don’t dissolve in intelligence.
The EU AI Act, which began phased enforcement in 2025, is actively creating new barriers to entry. Compliance costs for high-risk AI systems run $8 to $15 million for large enterprises and $2 to $5 million for mid-size companies, with ongoing costs of roughly EUR 52,000 per high-risk system per year. For small firms, these costs can consume up to 40% of profit margins. The Code of Practice was published only in July 2025, weeks before obligations became applicable, giving startups almost no lead time to plan hiring, data acquisition, and technical roadmaps. Penalties for non-compliance reach up to EUR 35 million or 7% of global annual turnover.
This is not a temporary friction. Regulatory barriers are structural. They do not erode as the technology improves; they often grow in proportion to it.
The pharmaceutical industry illustrates the pattern with particular clarity. Insilico Medicine used generative AI to discover both a novel biological target and a therapeutic compound for idiopathic pulmonary fibrosis, compressing the process from a typical three to four years to twelve to eighteen months. Phase 2a results, published in Nature Medicine in June 2025, were promising: patients receiving 60 mg daily showed a mean lung function improvement of +98.4 mL versus a -20.3 mL decline in the placebo group. But as of early 2026, no AI-discovered drug has received FDA approval. The agency still requires full Phase 2 and Phase 3 clinical trials, manufacturing validation, and regulatory review. AI compressed the discovery stage dramatically. It has not compressed the biology of patient enrollment, the timescale of adverse event monitoring, or the regulatory requirements that ensure drugs are safe. The average drug takes ten to fifteen years from concept to approval. AI cut the front end. The back end is intact.
The FTC’s action against DoNotPay tells a similar story at a smaller scale. The company marketed itself as “the world’s first robot lawyer.” It had not hired or retained any attorneys. It had not tested whether its AI output was equivalent to a human lawyer’s. The FTC settled in February 2025 for $193,000 and prohibited the company from making AI-lawyer claims without evidence. Regulatory bodies are not passive observers. They create new barriers in response to new capabilities.
Physical infrastructure
Atoms are harder than bits, and no amount of intelligence makes them easier.
Gartner projects that 40% of existing AI data centers will be operationally constrained by power availability by 2027. Worldwide data center electricity consumption is on track to double from 448 TWh in 2025 to 980 TWh by 2030, with AI-optimized servers accounting for nearly half that total. Building a state-of-the-art 3nm chip fabrication facility costs $15 to $20 billion. TSMC’s Arizona project represents $165 billion in total investment across six fabs, two packaging facilities, and a research center.
These are not barriers made of knowledge or capital in the sense that matters here. They are barriers made of copper wire, silicon, and concrete. A startup with access to intelligence on tap can build a better software product in weeks. It cannot build a data center, a chip fab, or a power generation facility. The physical layer of the economy operates on different timescales and with different cost structures than the software layer.
This creates an asymmetry that matters for the thesis. Hypercompetition in software-centric markets can intensify rapidly because the barriers are cognitive and can be dissolved. Competition in physical-infrastructure markets is constrained by atoms, and atoms don’t get cheaper at 50x per year.
Trust
Trust is accumulated, not manufactured.
The 2025 Edelman Trust Barometer found that only 32% of Americans trust AI. Three times as many Americans reject the growing use of AI as embrace it. A KPMG study of 48,000 people across 47 countries found that only 46% are willing to trust AI systems, even though 66% already use AI regularly. Nearly 70% of respondents in Deloitte’s Connected Consumer Survey expressed concern that AI-generated content would be used to deceive them.
This is not a marketing problem that better AI can solve. A study published in the Journal of Business Research found that when consumers believe marketing content is AI-generated rather than human-created, they judge it as less authentic, feel what the researchers described as “moral disgust,” and show weaker engagement and purchase intentions — even when the content is otherwise identical. NielsenIQ measured this with EEG in December 2024: AI-generated advertisements triggered weaker memory activation in the brain than traditional ads, even when the AI output was rated as high quality. The Nuremberg Institute for Market Decisions found that simply labeling an ad as AI-generated made people perceive it as less natural and less useful.
Trust operates as a barrier precisely because it takes time to build and is lost quickly. A new entrant can now build a product in weeks using intelligence on tap. Building the trust required for consumers to adopt it — especially in domains where trust matters most, like healthcare, finance, legal, and education — still takes years. The barrier hasn’t moved.
Network effects: the contested category
Here the analysis gets genuinely complicated, because the evidence pulls in two directions.
On one side, AI appears to be weakening traditional network effects. General-purpose data moats, the kind built on broad datasets that feed recommendation engines or advertising platforms, are eroding. Foundation models trained on internet-scale data make proprietary data less defensible. Synthetic data generation and fine-tuning techniques allow smaller firms to approximate what once required years of data accumulation. Andreessen Horowitz has argued that data is “rarely a strong enough moat,” noting that the cost of adding unique data may rise while its incremental value falls — the opposite of traditional economies of scale.
On the other side, domain-specific data advantages are holding and in some cases strengthening. Tempus AI has built a dataset of approximately 38 million research records, over 7 billion clinical notes, more than a million cancer patients with molecular profiling, and 7 million digitized pathology slides. That data creates a flywheel: more physicians use Tempus, generating more patient data, which improves the AI models, which attracts more physicians. Replicating this collection would take a competitor decades. General-purpose AI cannot substitute for it because the value lies in the specificity, the clinical context, the longitudinal follow-up, the connection between molecular data and patient outcomes.
The pattern is bifurcation. General-purpose data moats are collapsing. Domain-specific data moats, built on decades of proprietary collection in regulated or physically constrained industries, are holding. The data that anyone can gather has less defensive value. The data that took twenty years of clinical partnerships to assemble still functions as a wall.
The pattern of selective erosion
What emerges is not a story of universal barrier collapse. It is a story of selective erosion with a consistent pattern.
Barriers built on knowledge and capital erode because AI directly substitutes for cognitive work and reduces the sunk costs of market entry. Barriers built on regulation, physical infrastructure, and trust remain because they are not made of knowledge. Network effects are splitting: data moats that depend on general information are dissolving, while those built on domain-specific, physically constrained, or regulated data are intact.
This selective pattern is what creates the conditions for hypercompetition in some markets and not others — and what makes the next section’s analysis of compounding dynamics especially important. Where barriers are falling, they are falling on multiple dimensions simultaneously. And those dimensions reinforce each other.
The previous three sections mapped the terrain: which barriers dissolve, which hold, how fast the erosion moves. But there is something the analysis has not yet captured. These dynamics do not simply stack. They feed each other.
The cascade
Start with the models themselves. In 2022, hitting 60% accuracy on the MMLU benchmark, a rough proxy for broadly useful language understanding, required a model with 540 billion parameters. Google’s PaLM was that model. Two years later, Microsoft’s Phi-3-mini hit the same threshold with 3.8 billion parameters. A 142-fold reduction in the computational footprint required for equivalent capability.
This is not a single efficiency gain. It is the first stage of a four-part cascade, where each stage creates the conditions for the next.
Algorithmic efficiency improves, which means the same capability runs on cheaper hardware. Cost collapses accordingly. The previous section documented the 280-fold decline in GPT-3.5-level inference costs over two years. Epoch AI found the broader trend falling at a median of 50x per year, accelerating to roughly 200x per year since January 2024. Sam Altman, in Fortune in July 2025: “The cost to use a given level of AI capability falls by about 10x every 12 months.”
Cheaper models invite replication. When OpenAI released Deep Research in February 2025, Hugging Face published an open-source clone within twenty-four hours. It scored 55% on the GAIA benchmark against OpenAI’s roughly 67% — worse, but functional, and free. Five more open-source alternatives appeared in the same week. The broader convergence is harder to wave away. On the Chatbot Arena benchmark, the performance gap between the top closed-source model and the top open-source model narrowed from 8.04% to 1.70% in a single year.
And as open models proliferate and costs drop, the time it takes to build and ship a product compresses. MVPs that took six months are built in weeks. The market floods with entrants.
Each stage of the cascade accelerates the others. Smaller models are cheaper to run, which makes them easier to copy, which makes more models available, which drives further optimization. The loop does not have a natural stopping point.
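The compounding is easy to illustrate. Suppose, purely hypothetically, that model efficiency, compute pricing, and replication speed each improve independently by some factor per year; the entrant’s effective cost of a fixed capability then falls by their product. All factors below are invented for illustration, not measurements.

```python
# Purely illustrative annual improvement factors for each cascade stage.
efficiency_gain = 5.0   # same capability on one-fifth the compute
price_decline   = 10.0  # cost per unit of compute falls 10x
replication     = 2.0   # open clones halve the build effort

# Independent multiplicative improvements compound.
combined = efficiency_gain * price_decline * replication  # 100x per year

# Cost of entering with a fixed capability, starting at $1M, over three years.
costs = [1_000_000 / combined ** year for year in range(4)]
```

The specific factors don’t matter; what matters is that three modest, independent improvements multiply into a two-order-of-magnitude annual decline, which is why the cascade has no natural stopping point.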
The intelligence layer is commoditizing
A cascade that only made models cheaper would be interesting but contained. The reason it matters for competitive dynamics is that the intelligence layer itself is becoming a commodity. The AI capability that companies build on top of is converging, cheapening, and losing its power to differentiate.
The numbers tell a clear story. The performance gap between the number one and number ten model on major benchmarks fell from 11.9% to 5.4% in one year. Sixteen different labs had produced models exceeding GPT-4’s capability by November 2024. Nvidia H100 cloud pricing fell 64 to 75 percent between late 2024 and early 2026.
The market share data is more dramatic. OpenAI held 50% of the enterprise AI market in 2023, according to Menlo Ventures. By mid-2025, that share had halved to 25%. Anthropic rose from 12% to 40% over the same period. Not because OpenAI’s product got worse. The gap between providers narrowed until switching became low-cost and low-risk.
This is what commodity markets look like. When the difference between products is small and the cost of switching is low, customers move freely and no provider holds pricing power for long. Nvidia’s H100 pricing collapse is the hardware version of the same dynamic. The intelligence layer is heading where compute always heads: toward a utility priced at marginal cost.
For the competitive dynamics described in this essay, the commoditization of the intelligence layer acts as an amplifier. Intelligence on tap is not controlled by any single provider. It is available to everyone, getting cheaper by the month, and converging in capability.
Contestable by default
This brings us back to Baumol.
His insight about contestable markets was that actual competition is not required to change market behavior. The credible threat of entry is sufficient. When sunk costs are low enough that a new firm can enter, compete, and exit without catastrophic loss, incumbents are disciplined by the mere possibility of a challenger, even if no challenger has yet appeared.
Apply this framework to what the cascade produces.
The sunk costs of entering a software-centric market have been falling for a decade, but the cascade described above is driving them toward something qualitatively different. Algorithmic efficiency means you need a fraction of the compute. Inference costs are falling 50 to 200x per year. Open-source models approximate proprietary ones within days of release. A solo founder can ship an MVP in weeks with a few thousand dollars in API credits. Stack those together and the sunk cost of market entry approaches the conditions Baumol described as “absolutely free.”
The market does not need to be flooded with competitors to feel this. It needs to be credibly flood-able.
And the evidence says it is. CB Insights counted 78% of AI startups launched in 2024 as API wrappers, over 12,000 companies built on the same foundation models. Around 400 new ones appeared each month. AlixPartners documented the downstream effect: the window between being unique and being replicated is “shorter than ever.” Net dollar retention across public SaaS companies fell from 120% in 2021 to 108% by Q3 2024. The share of high-growth SaaS companies fell from 57% to 39% in one year, with AlixPartners projecting a further decline to 27% in 2025.
Klarna terminated its contracts with Salesforce and Workday, replacing both with AI built in-house. This was not a startup eating an incumbent’s lunch. This was a customer deciding the incumbent’s product was no longer worth the price, because the tools to replace it had become cheap enough.
The Barney et al. study from MIT Sloan in May 2025 captured the mechanism precisely: AI is “a source of homogenization, not differentiation.” When everyone has access to the same intelligence layer, the competitive advantages built on that layer erode. What was once proprietary capability becomes table stakes.
The connection
Each essay in this series builds one piece of the argument. Essay 1 reframed the question: not “what does AI disrupt?” but “what happens to competition when intelligence is cheap?” This essay answered by identifying the mechanism: selective barrier erosion that compounds across four dimensions, commoditizing the intelligence layer and driving sunk costs toward zero in software-centric markets.
The pattern has a specific shape. It is not uniform, not universal, not hitting all industries equally. Markets built on cognitive barriers are becoming radically more contestable. Markets built on regulatory, physical, or trust barriers are not. And where the erosion is happening, it is accelerating.
What this essay has not yet established is whether the pattern is new. It is not. The next essay examines three historical precedents: the telegraph, containerized shipping, and the internet. Each triggered exactly this kind of competitive intensification when a fundamental economic input became cheap. The pattern is identifiable and it has happened before. The difference this time is the input that got cheap.