Essay 1 of 7

AI Isn't the Disruption

Why the 'AI disrupts X' framing misses the point. When intelligence itself becomes cheap, the disruption isn't technological — it's competitive.


Every January, the consulting firms publish their disruption rankings. BCG has one. Bain has one. McKinsey, Deloitte, Goldman — they all have some version of the same product: a matrix that slots industries into quadrants based on their vulnerability to AI. Travel and retail in the danger zone. Financial services and streaming behind regulatory moats. Education somewhere in between, depending on who’s drawing the lines.

The framing is always the same. Which industries will AI disrupt first, which will fall next, and where on the curve does your sector sit?

In January 2026, BCG and Moloco published their “Consumer AI Disruption Index,” ranking seventeen verticals from “Breached” to “Secured.” Travel and retail sit in the top-left quadrant — maximum exposure, weakest defenses. Fintech and streaming sit in the bottom-right — insulated, for now, by regulatory barriers and switching costs. The message, barely disguised beneath the methodology: hire us to figure out where you stand.

Bain’s 2025 Technology Report asks the question directly in its section headers: “Will AI Disrupt Tech’s Most Valuable Companies?” and “Will Agentic AI Disrupt SaaS?” The answer, delivered with the confidence of a firm billing by the hour: “Disruption is mandatory. Obsolescence is optional.”

Bloomberg runs the headline: “Stock Pickers Spot Opportunity in the AI Disruption.” CNBC reports that software stocks are getting hammered by “AI fears.” Fortune relays Microsoft’s AI chief predicting that all white-collar work will be automated within eighteen months — and names the industries in sequence: accounting, legal, marketing, project management, software development.

The ranking changes depending on the firm. The industries shuffle depending on the methodology. But the underlying structure never varies. AI disrupts X. Then AI disrupts Y. Then, eventually, AI comes for Z. Your job is to figure out how much time you have.

The Comfort of Sequence

This framing is familiar because it follows the template of every major technology shift in recent memory.

The internet disrupted media first. Newspaper classified revenue peaked near $49 billion in 2000. Craigslist launched nationally between 2000 and 2004, and a study of a thousand newspapers found it caused an average 20% drop in classified ad rates in each market it entered. By 2012, classified revenue had fallen to $4.6 billion — a 77% decline.

Retail executives watched this happen and believed they had time. Amazon launched in 1995, but the “retail apocalypse” didn’t peak until 2017, when over twelve thousand physical stores closed in a single year. Borders had outsourced its online sales to Amazon in 2001. Its last profitable year was 2006. It filed for bankruptcy in 2011. But the broader shakeout came years later — JCPenney didn’t file until 2020, a full quarter-century after Amazon’s founding.

Finance watched both media and retail get restructured, and had even more time. Global fintech investment grew from $930 million in 2008 to $12 billion by 2015. Traditional banking wasn’t seriously threatened until the late 2010s.

Mobile followed the same sequential logic, just compressed. Phone manufacturers fell first — Nokia’s market capitalization dropped from $150 billion in 2007 to a $7.2 billion sale to Microsoft in 2013. Then media and cameras. Then transportation and hospitality, as Uber and Airbnb reached scale around 2014-2015. Then mobile banking and payments, with Apple Pay launching in late 2014 and Venmo reaching mainstream adoption around 2016.

Each wave took years. Each industry had time to watch the previous one fall and — at least in theory — prepare. The internet took roughly twenty years to move from its first serious casualties to broad cross-sector restructuring. Mobile compressed that to about a decade. Cloud took fourteen years from AWS’s launch in 2006 to mainstream enterprise adoption during COVID.

The sequential model is deeply embedded in how the technology industry thinks about change. It is also why the consulting firms can sell disruption indices: if disruption comes in order, there’s value in knowing your number.

Why the Framing Is Wrong

The problem is that AI doesn’t follow this pattern.

In 2024, researchers at Harvard and the NBER found that 39.4% of Americans aged 18-64 had used generative AI within two years of ChatGPT’s launch — nearly double the 20% who had used the internet after the same period, and exceeding PC adoption at the three-year mark. ChatGPT itself reached a hundred million monthly active users in two months. The internet took seven years to hit the same number.

But the speed of consumer adoption is the less interesting part. What’s different — what the disruption indices miss entirely — is the breadth.

Consider what happened across industries in roughly the same eighteen-month window. Chegg, the education company, lost 99% of its stock value. Stack Overflow saw monthly questions collapse 76%. Klarna cut its workforce nearly in half. Duolingo terminated 10% of its contractors. Salesforce reduced its support organization by thousands. Software stocks fell broadly enough that the WisdomTree Cloud Computing Fund dropped 20% in early 2026 alone.

Education, developer tools, fintech, language learning, enterprise software, SaaS broadly — these companies do not share an industry. They were all hit simultaneously. Not one sector watching another fall and having time to prepare. All at once.

McKinsey’s 2025 State of AI survey quantified what should have been impossible under the sequential model: AI adoption reached 88% of organizations, and in every industry besides technology, adoption had “meaningfully increased.” Media and telecommunications respondents were now just as likely as technology respondents to report regular AI use. Insurance — insurance — had caught up to tech.

None of the prior technology waves worked like this. When the internet disrupted newspapers in 1998, insurance companies were not simultaneously adopting web-based business models. When mobile disrupted phone manufacturers in 2008, education wasn’t simultaneously restructuring around smartphone-native products. Each technology moved through the economy in a recognizable sequence because each required industry-specific adaptation: digitizing a newspaper is a different problem from digitizing a retail supply chain is a different problem from digitizing a bank.

AI is different because it targets something every industry shares — not a specific business process or a particular supply chain, but the universal substrate: cognitive work itself.

Language is the interface, and that changes everything about adoption. Every knowledge worker, in every industry, can use the technology immediately — no new hardware, no new infrastructure, no industry-specific digitization required. The existing internet is the delivery mechanism and the existing computer is the terminal. The thing being automated isn’t any particular industry’s workflow. It’s the thinking that every industry’s workflow requires.

That’s why the disruption rankings are the wrong frame. They assume AI is like the internet — a technology that disrupts specific industries in a knowable sequence, where you can watch the disruption happen to other people first and learn from their mistakes. That there’s a number on the curve, and if your number is high enough, you have time.

But there is no sequence. There is no curve. There is no time.

What’s happening is something different — not a disruption that moves through the economy but a change in the conditions of competition itself. And that’s what this series is about.


Intelligence Is the Input

To understand why AI behaves differently from prior technologies, you have to look at what’s actually becoming cheap.

When the internet arrived, what became cheap was distribution. Moving information from one place to another dropped toward zero cost. That was transformative, but only in specific, predictable ways. Industries that depended on controlling distribution were exposed: newspapers that controlled local advertising, retailers that controlled physical shelf space, brokers who controlled access to listings and quotes. Industries that didn’t depend on distribution control were largely unaffected, at least initially.

When mobile arrived, what became cheap was access. Computing moved from the desk to the pocket. Again, specific industries were exposed in a knowable order: phone manufacturers first, then media companies, then services that could be coordinated through location-aware devices.

Each prior technology cheapened a specific input to economic activity — distribution, access, communication, logistics — and the industries most dependent on that input were hit first and hardest. The pattern was sequential because the input was specific.

AI cheapens something different. AI cheapens intelligence.

Not in the narrow sense of IQ or consciousness, but in the economic sense: the capacity to process information, recognize patterns, generate analysis, produce judgment-adjacent output, and make decisions under uncertainty. In 2018, Ajay Agrawal, Joshua Gans, and Avi Goldfarb framed this precisely in Prediction Machines: AI is a drop in the cost of prediction, where prediction means not forecasting the future but the broader act of filling in missing information. Classification, translation, summarization, code generation, diagnosis — all prediction tasks in the economic sense. They are also the tasks that constitute the vast majority of cognitive work across every industry.

This is the mechanism that the disruption indices miss. They ask “which industries will AI disrupt?” as though AI is targeting specific business processes the way the internet targeted distribution. But intelligence is not a specific input. It is the universal input, the substrate beneath every business process in every industry.

What a general-purpose technology actually means

Economists have a framework for this. In 1995, Timothy Bresnahan and Manuel Trajtenberg published a paper identifying what they called general-purpose technologies: technologies so foundational that they reshape not just one industry but the entire economy. Their examples were the steam engine, the electric motor, and the semiconductor. They identified three defining properties. Pervasiveness: the technology is used as an input across most sectors. Technological dynamism: it keeps getting better over time. Innovational complementarities: advances in the technology make downstream innovation in every sector more productive.

AI fits all three more completely than any technology since electrification. Pervasiveness: McKinsey’s 2025 survey found AI deployed across 88% of organizations, with adoption converging across sectors. Media, telecommunications, and insurance now match the technology industry. Dynamism: the cost of LLM inference has fallen roughly 50x per year at the median, and since January 2024, closer to 200x per year, according to Epoch AI. Complementarities: a company that adopts AI for customer service doesn’t just improve customer service. It generates data and operational patterns that make AI adoption in sales, product development, and logistics more productive. The OECD concluded in 2025 that generative AI has “considerable potential to qualify as a new general-purpose technology.”

But labeling AI a general-purpose technology, while accurate, understates the point. Steam cheapened physical force. Electricity cheapened the delivery of energy. Semiconductors cheapened computation. Each was pervasive, each improved over time, each made downstream innovation cheaper. None of them cheapened the thinking that decides what to build, how to compete, or where to deploy resources.

Kevin Kelly put it directly in 2016: “Everything that we formerly electrified we will now cognify.” The electricity analogy is useful, but it needs an addendum. Imagine if electricity didn’t just power the machines but could also design new machines, optimize the production line, analyze the market, and draft the regulatory filings. That is closer to what intelligence on tap actually means.

Why the electricity parallel matters — and where it breaks

The economic historian Paul David documented what happened when electricity, the last comparable general-purpose technology, moved through the economy. In a 1990 paper titled “The Dynamo and the Computer,” David showed that electrification took roughly four decades to produce measurable productivity gains. Not because the technology was slow, but because organizations had to be reorganized around it.

When factories first adopted electric power, they did exactly what you’d expect: they ripped out the steam engine and installed a dynamo in its place. Everything else stayed the same. The multi-story layout, the central drive shaft, the belt-and-pulley system that distributed power from the center outward. The gains were marginal because the factory was still organized around the constraints of steam.

It took a generation of managers to realize the real transformation: attaching a small electric motor to each individual machine. That meant single-story layouts organized around the flow of materials rather than proximity to a power source. Lighter, modular, flexible production. The factory could be redesigned around the work, not around the power system. The productivity surge of the 1920s, when electrification finally accounted for roughly half of all manufacturing productivity growth, came not from the technology itself but from the organizational restructuring it enabled.

David’s point was about patience: transformative technologies take time to produce results because organizations must be redesigned around them. The usual lesson drawn is that AI, too, will take decades to reach full impact.

But there is a difference that matters for everything that follows in this series.

Electricity required physical restructuring. New factories had to be built, old ones razed or retrofitted. Workforce training had to happen in person. New management practices had to be invented through trial and error in specific facilities. The co-invention that David described was slow because it was physical, local, and expensive.

AI requires none of that. The infrastructure already exists: internet-connected devices, cloud computing, API endpoints. Andrew McAfee at MIT has noted that AI’s effects “will be felt more quickly” than prior general-purpose technologies because “much of the required infrastructure is immediately available.” The organizational co-invention is happening in software, not in physical space, which means it can be copied, shared, and iterated at the speed of code.

When a company figures out how to use AI to automate underwriting, that innovation doesn’t stay local. It propagates through open-source models, published case studies, employee mobility, and competitors reverse-engineering the approach. The 40-year lag that David documented for electricity has no structural equivalent in AI because the barriers to co-invention (physical plant, geographic isolation, slow knowledge transfer) don’t exist.

This is why the adoption data looks the way it does. Not four decades from invention to productivity impact, but two years from ChatGPT’s launch to nearly 40% of working-age Americans using the technology. Not sequential industry adoption, but simultaneous convergence across every sector, driven by a technology that requires no industry-specific adaptation because its input, language, is universal.

The consulting firms draw their disruption matrices with a sequential model because that’s how every prior technology worked. But intelligence isn’t distribution, or access, or logistics, or computation. Intelligence is the thing that coordinates all of those. The input to every economic process. The capacity that every moat was ultimately built on.

When that becomes cheap, everything downstream changes at once. And the interesting question is no longer which industries are affected. It’s what happens to competition itself.


This Is About Competition, Not Technology

Most analysis of AI focuses on what the technology can do. Which tasks it automates, which jobs it threatens, which industries it enters. This is natural — the technology is genuinely novel, and its capabilities expand monthly. But it’s the wrong level of analysis for understanding what happens next.

The interesting question isn’t what AI can do. It’s what happens to the structure of markets when every participant has access to cheap intelligence simultaneously.

In 1942, Joseph Schumpeter described the force that actually drives capitalism. Not the textbook version, where firms compete on price within stable industries, shaving pennies off margins in a polite contest for market share. The real engine, Schumpeter argued, was something more violent. Competition “from the new commodity, the new technology, the new source of supply, the new type of organization” — competition that “commands a decisive cost or quality advantage and which strikes not at the margins of the profits and the outputs of the existing firms but at their foundations and their very lives.” He called this process creative destruction and argued it was not an occasional disruption but “the essential fact about capitalism.” Not a bug. The operating system.

Schumpeter was writing about what happens when a new input doesn’t just improve existing competitors but enables entirely new ones. The existing firms don’t get outcompeted on price. They get outcompeted on relevance. Their products, their processes, their entire business models become the wrong answer to a question the market has moved past.

Half a century later, Richard D’Aveni looked at what was happening to competitive dynamics in the 1980s and 1990s and concluded that Schumpeter’s gale had become permanent weather. In his 1994 book Hypercompetition, with Robert Gunther, D’Aveni argued that sustainable competitive advantage — the foundational concept of corporate strategy since Michael Porter — was a myth.

His evidence was straightforward. In industry after industry, competitive moves and countermoves were escalating so fast that advantages were being created and destroyed in months, not years. A firm would establish a cost advantage; a rival would match it within a quarter. A firm would launch a differentiated product; competitors would clone the differentiating features before the first sales cycle closed. A firm would build a geographic stronghold; a new entrant would leapfrog it entirely.

D’Aveni identified four arenas where this escalation played out: price and quality, timing and know-how, stronghold creation and invasion, and deep pockets. In each arena, the competitive dynamic ratcheted upward until the advantage was neutralized. The only viable strategy, he argued, was not to defend any single advantage but to keep moving — constantly creating, exploiting, and abandoning positions before competitors could respond.

“The period of exploitation,” D’Aveni wrote, “is at best measured in months, not years.”

That was 1994. Before the commercial internet. Before mobile. Before cloud computing, before open source reached critical mass, before any of the infrastructure that now allows a competitor to be built in weeks.

Rita Gunther McGrath extended D’Aveni’s thesis in 2013 with The End of Competitive Advantage. Her argument: the default state of competition had shifted from what evolutionary biologists call punctuated equilibrium — long stable periods interrupted by brief disruption — to something she called transient advantage. Competitive positions are inherently temporary. The job of strategy is managing a portfolio of short-lived advantages, not defending a single durable one.

In September, McGrath noted that AI was validating this framework faster than expected. “Industries are colliding unexpectedly,” she wrote, “and competitive advantages can come and go in the blink of an eye.”

Now consider what intelligence on tap does to these dynamics.

D’Aveni’s four arenas all depended on the speed and cost of competitive response. How fast could a rival match your cost position? How quickly could they clone your product? How cheaply could they enter your stronghold? In the 1990s, the escalation was fast by the standards of the time — months, sometimes quarters. The limiting factor was the cost of the cognitive work required to compete: the engineers, analysts, strategists, and designers needed to mount a competitive response.

When that cognitive work becomes cheap — when intelligence is on tap — the rate of competitive escalation doesn’t just increase. It changes in kind.

There is a useful analogy here, and it will be the subject of Essay 5. When high-frequency trading arrived on Wall Street, it didn’t just make trading faster. It transformed the entire structure of financial markets. Traditional market makers — firms like Spear, Leeds & Kellogg, which Goldman Sachs bought for $6.5 billion in 2000 — were replaced by algorithms hunting every inefficiency, every spread, every friction in the market, competing it away in milliseconds. Bid-ask spreads collapsed. Margins compressed to fractions of a penny. The firms that had built decades of advantage on their relationships, their seat on the exchange floor, their human judgment — gone. Not over years. Over months.

What if something structurally similar is happening to the broader economy? Not trading algorithms hunting spreads, but AI-enabled competitors hunting friction across every market — at a pace that makes traditional competitive cycles look like slow motion.

That is the thesis of this series. Not that AI disrupts specific industries in sequence (the consulting-firm model). Not that AI creates a few dominant platforms (the winner-take-all model). But that intelligence on tap moves competition itself toward a permanent state of hypercompetition: continuous, overlapping, high-frequency competitive pressure with no stable equilibrium. The economy doesn’t settle into a new configuration. Unsettled is the configuration.

The rest of the series builds that case:

  • Essay 2 examines the mechanism precisely: which barriers to entry does cheap intelligence erode, which does it not, and how fast?
  • Essay 3 shows the pattern has historical precedent — the telegraph, containerized shipping, and the internet each triggered hypercompetition when they cheapened a fundamental economic input.
  • Essay 4 presents evidence that this dynamic is already measurable in current markets.
  • Essay 5 develops the HFT analogy in full: what the economy looks like when competition operates at the speed of algorithmic trading.
  • Essay 6 steel-mans the counterarguments — regulation, network effects, physical constraints, brand — and assesses which hold.
  • Essay 7 describes the steady state: not what the economy becomes when the instability ends, but what it looks like when continuous instability is the end state.

Throughout, the approach is analytical, not prescriptive. This is not an argument that hypercompetition is good or bad. It is an argument that it is happening — driven by a mechanism that is identifiable, historically precedented, and currently observable — and that the mental models most people use to think about AI underestimate the structural change already underway.

The disruption indices have the question wrong. The question isn’t which industry AI disrupts next. The question is what happens to competition when the cost of intelligence approaches zero.