02
Supercycle
Why compute, data, capital, and tooling can produce compounding capability and adoption dynamics.


Overview

When I say “supercycle” here, I’m using it narrowly. It refers to a multi-year period in which a general-purpose capability produces compounding second-order effects because it:

  • lowers the marginal cost of producing certain classes of work,
  • can be reused across many workflows with the same underlying tooling,
  • and creates feedback loops between usage, measurement, and improvement.

This is not a claim of inevitability. It is a claim about mechanisms that may, under some conditions, sustain investment and adoption beyond a single product cycle.

Supercycles as waves (working model)

I think of supercycles as overlapping waves: a general-purpose capability becomes widely deployable, and deployment itself then creates compounding second-order effects, namely tooling reuse, measurement loops, and unit-economics pressure that pull adoption forward.

The diagram below sketches multiple prior waves and highlights how Generative AI could form a new trajectory if it continues to translate into reliable, governed, cost-declining systems rather than isolated demos.

Figure 1. Technology supercycles as overlapping waves, with labeled prior waves and Generative AI highlighted as a potential new supercycle.
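The overlapping-waves picture can be sketched numerically: each wave as a logistic adoption curve with its own midpoint and growth rate, so later waves climb while earlier ones saturate. Every parameter below is illustrative, not fitted to any data, and the wave names are only labels.

```python
import math

def logistic(t, midpoint, rate, ceiling=1.0):
    """Logistic adoption curve: slow start, steep middle, saturation."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative waves as (name, midpoint year, growth rate) -- hypothetical
# parameters chosen only to show overlap, not measured adoption figures.
WAVES = [
    ("PC", 1988, 0.35),
    ("Internet", 1999, 0.40),
    ("Mobile", 2010, 0.50),
    ("GenAI", 2026, 0.55),  # the potential new wave
]

for year in range(1980, 2031, 10):
    adoption = {name: round(logistic(year, mid, rate), 2)
                for name, mid, rate in WAVES}
    print(year, adoption)
```

The point of the sketch is structural: at any given year several curves are partway up, which is what makes the waves overlapping rather than strictly sequential.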

What “supercycle” means in this context

The relevant unit is not “models” in isolation, but the stack:

  • model capability (what tasks can be attempted),
  • reliability engineering (how often it works under constraints),
  • and integration surfaces (APIs, tooling, evaluation, and governance).

A supercycle exists when improvements in that stack propagate across domains faster than organizations can absorb them, producing a persistent backlog of economically viable applications.

The distinction from a bubble is operational:

  • A bubble is sustained primarily by expectations.
  • A supercycle is sustained by measured unit economics and durable workflow replacement or augmentation.

Enabling technical conditions

Mechanisms that plausibly drive compounding:

  • Cost curves: inference and fine-tuning costs decline through hardware efficiency, model optimization, caching, and specialization. Even when frontier training costs rise, many deployments are dominated by inference and integration.
  • Capability generalization: improvements in representation learning and instruction following can transfer across tasks, especially where tasks share structure (e.g., classification, extraction, transformation, and synthesis).
  • Tooling reuse: once an organization builds a secure tool interface, retrieval layer, and evaluation harness, those investments can be reused across many applications.
  • Distribution effects: deployments generate interaction data (errors, edge cases, operator feedback) that can be converted into evaluation suites, policy rules, and training data.

These conditions are not always present. Privacy constraints, limited feedback, and high error costs can prevent compounding.

Economic and organizational implications

If these mechanisms hold in a given domain, the near-term effect is often not “automation,” but a shift in where the bottleneck sits:

  • Work shifts from producing first drafts to specifying constraints, validating outputs, and maintaining evaluation suites.
  • Reliability work (tests, checks, monitoring, incident response) becomes a core competency for AI-enabled systems.
  • Security posture must account for tool access, data exfiltration risk, and failures that may be high-frequency and low-salience.
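A minimal sketch of what "maintaining evaluation suites" means in practice, assuming a hypothetical `generate` function standing in for any model call: each case pairs a prompt with a plain predicate on the output, so errors and edge cases observed in deployment can be folded back in as new cases.

```python
# Minimal evaluation-harness sketch. `generate` is a stand-in for a real
# model call; the cases and checks are hypothetical examples.
def generate(prompt: str) -> str:
    # Placeholder for a real model call.
    return "PARIS" if "capital of France" in prompt else ""

# Each case is (prompt, check). Checks are plain predicates on the output,
# which keeps the suite easy to extend from deployment feedback.
CASES = [
    ("What is the capital of France? Answer in uppercase.",
     lambda out: out.strip() == "PARIS"),
    ("What is the capital of France? Answer in uppercase.",
     lambda out: out.isupper()),
]

def run_suite():
    """Run every case; return (passed, total)."""
    results = [check(generate(prompt)) for prompt, check in CASES]
    return sum(results), len(results)

passed, total = run_suite()
print(f"{passed}/{total} checks passed")
```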

The predictable failure case: organizations capture headline capability but fail to realize value because integration and governance dominate total cost.
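That failure case is easy to state numerically. With hypothetical quarterly figures, the headline capability can be a small fraction of total cost while the project as a whole is underwater:

```python
# Hypothetical total-cost decomposition for one AI-enabled workflow.
COSTS = {
    "inference": 10_000,    # the headline capability cost per quarter
    "integration": 45_000,  # connectors, permissioning, UI, data plumbing
    "governance": 30_000,   # review, audit, evaluation upkeep, incidents
}
VALUE_DELIVERED = 70_000  # hypothetical value captured per quarter

total = sum(COSTS.values())
net = VALUE_DELIVERED - total
share = COSTS["inference"] / total

print(f"total cost: {total}, value: {VALUE_DELIVERED}, net: {net}")
print(f"capability share of total cost: {share:.0%}")
# Capability is a minority of spend, yet the workflow loses money overall.
```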

What would falsify this claim

This framing is falsified if, over time, deployments converge to a stable set of narrow use cases without meaningful expansion in scope because the limiting factors do not improve.

Observable falsifiers include:

  • Integration costs remain the dominant driver and do not meaningfully decline with tooling reuse.
  • Reliability plateaus at levels that are unacceptable for high-value workflows even with substantial evaluation investment.
  • Regulation, privacy constraints, or liability pressure systematically prevent feedback loops (data for evaluation and improvement cannot be collected or used).
  • The economics invert: inference remains expensive relative to the value captured, or supervision costs dominate.

Counterargument / failure case

Another possibility is that GenAI behaves more like a cluster of narrow product cycles than a supercycle:

  • Most value is captured by a few categories (search/assistants, code assistance, summarization, support automation).
  • The long tail is constrained by data access, permissioning, and the difficulty of measuring correctness.
  • Apparent progress reflects better demos and better packaging rather than underlying reliability.

This is plausible in domains where errors are costly, feedback is sparse, and accountability is strict.

Key points

  • Compute and tooling can compound.
  • Data quality, feedback, and evaluation can bottleneck.
  • Regulation, cost curves, and trust can invert trajectories.