Technological and Organisational Evolution in the AI Infrastructure Supercycle
The contemporary debate over whether artificial intelligence is in a “bubble” is usually framed as a valuation question: are capital markets overpaying for AI equities and startups relative to their likely cash flows? That framing badly underspecifies what is actually happening. The data now emerging from capital-expenditure plans, infrastructure build-outs, and organisational adoption suggests something more structural: a shift from software-led digital transformation to an “infrastructural intelligence” regime in which AI becomes a capital-intensive general-purpose technology embedded into power grids, industrial supply chains, and organisational routines. Hyperscalers are planning cumulative datacentre and semiconductor investment on the order of five to seven trillion dollars by 2030, with one detailed forecast placing their own spend at roughly $6.4 trillion across chips, buildings, power and cooling.
Global estimates from investment banks and consultancies converge on similar magnitudes, typically projecting several trillion dollars of investment in AI-ready datacentre capacity and pointing to an additional funding gap of more than a trillion dollars that must be filled by private credit and public financing. At the same time, surveys of public-market fund managers show that a clear majority now believe companies are overspending on capital equipment, and AI startup valuations have re-inflated to levels exceeding those of the last zero-interest-rate bubble.
This paper argues that these apparently conflicting signals can be reconciled if we treat AI not simply as another tech hype cycle but as the emergence of a new socio-technical regime with its own infrastructure logic. The core thesis is that the true “bubble” risk lies less in aggregate over-investment in compute and more in organisational under-adaptation to a regime where intelligence behaves like an elastic infrastructure good constrained by heavy-industry bottlenecks. To develop this thesis, the paper integrates recent evidence from advanced-computing market analyses with conceptual lenses from diffusion of innovation, socio-technical systems theory, and organisational evolution.
1. Elasticity of Intelligence: From Software Margins to Infrastructure Economics
The advanced-computing launch report formalises a powerful idea: an “elasticity of intelligence” in which gains in model capability are tightly coupled to extra compute, and thus to physical capital.
In contrast to traditional SaaS economics—highly scalable code running on relatively modest infrastructure—state-of-the-art AI requires enormous clusters of accelerators, high-bandwidth networking, and advanced cooling. Forecasts suggest that datacentre processors alone could generate nearly $3 trillion of revenue over the next five years, with the share of all processors located in datacentres rising from the mid-teens a few years ago to roughly two-thirds by 2030.
This is infrastructure, not just IT. The same analyses project more than a trillion dollars of cumulative spending on power and cooling as legacy air-cooled facilities, typically designed for roughly 20–30 kW per rack, prove inadequate for AI clusters pushing beyond 100 kW and heading toward 1 MW per rack on future GPU roadmaps.
Independent research points to liquid cooling penetration in AI datacentres rising at extraordinary speed and to grid-connection queues of three to five years becoming a binding constraint in major hubs such as Northern Virginia.
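The rack-density figures above imply a dramatic compression of physical footprint per unit of IT load. The sketch below makes that arithmetic explicit; the 100 MW campus size is a hypothetical illustration, while the per-rack densities echo the ranges cited in the text.

```python
import math

def racks_needed(it_load_kw: float, rack_density_kw: float) -> int:
    """Number of racks required to host a given IT load at a given density."""
    return math.ceil(it_load_kw / rack_density_kw)

# Hypothetical campus with 100 MW of IT load.
campus_it_load_kw = 100_000

legacy = racks_needed(campus_it_load_kw, 25)      # ~25 kW/rack air-cooled design
ai_today = racks_needed(campus_it_load_kw, 100)   # ~100 kW/rack liquid-cooled
ai_next = racks_needed(campus_it_load_kw, 1000)   # ~1 MW/rack roadmap figure

print(legacy, ai_today, ai_next)  # 4000 1000 100
```

The same load shrinks from four thousand racks to one hundred, which is why cooling architecture, rather than floor space, becomes the binding design constraint.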
In effect, intelligence—specifically the ability of foundation models and agents to perform expert-level reasoning—has acquired the economic characteristics of a utility service, but one that is still in the build-out phase of its infrastructure. This is reminiscent of earlier techno-economic paradigms, such as electrification and railways, where the “installation” phase required huge, lumpy capex and created asset-heavy networks that later generations of firms could exploit at lower marginal cost. The difference is that AI’s marginal cost curve is much more sensitive to model architecture: mixture-of-experts designs and reasoning-focused modes produce orders-of-magnitude swings in compute intensity, making the demand for chips effectively unbounded so long as organisations can find problems worth solving.
Seen through this lens, headline capex numbers are not themselves evidence of a bubble; they are the structural price of converting intelligence into an on-demand, utility-like resource. Where bubble dynamics arise is in the timing: public markets tend to capitalise future productivity before organisational and infrastructural adjustments have fully materialised. That time lag is where diffusion-of-innovation theory becomes useful.
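The timing risk described above can be made concrete with a toy discounted-cash-flow calculation: the same stream of productivity gains is worth much less if organisational adoption delays its start. All figures here are hypothetical illustrations, not estimates from the analyses cited.

```python
def pv_of_gains(annual_gain: float, discount_rate: float,
                lag_years: int, horizon: int = 20) -> float:
    """Present value of a flat productivity stream that starts only after a lag."""
    return sum(annual_gain / (1.0 + discount_rate) ** t
               for t in range(lag_years + 1, horizon + 1))

# Hypothetical: $100/yr of gains over 20 years, discounted at 8%.
fast_adopter = pv_of_gains(100.0, 0.08, lag_years=2)
slow_adopter = pv_of_gains(100.0, 0.08, lag_years=8)

# A six-year adoption lag destroys a large share of the capitalised value,
# even though the underlying technology is identical.
print(round(fast_adopter), round(slow_adopter))
```

Markets that price in the fast-adopter path while organisations deliver the slow-adopter path are, in this framing, the mechanism by which a genuine technology produces a financial bubble.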
2. Stacked S-Curves: Diffusion of AI Across Infrastructure, Capability and Organisation
Classic diffusion theory emphasises S-curves: new technologies move from experimentation to rapid diffusion and then saturation. In the case of AI, however, the diffusion process is layered. We can distinguish at least three interlocking S-curves:
Infrastructure diffusion – the build-out of AI-ready datacentres, chip supply chains, and power and cooling capacity.
Capability diffusion – the spread of high-fidelity foundation models, agent frameworks and domain-specific AI tools.
Organisational diffusion – the redesign of workflows, governance, and business models to exploit AI at scale.
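The layered diffusion argument can be sketched with standard logistic S-curves that share a functional form but differ in midpoint and steepness. The parameters below are hypothetical illustrations chosen only to show the shape of the mismatch, not empirical estimates.

```python
import math

def logistic(t: float, midpoint: float, rate: float, ceiling: float = 1.0) -> float:
    """Classic diffusion S-curve: fraction of ceiling reached at time t."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical parameters: infrastructure diffuses fastest, organisations slowest.
curves = {
    "infrastructure": dict(midpoint=2027, rate=1.2),
    "capability":     dict(midpoint=2028, rate=0.9),
    "organisation":   dict(midpoint=2032, rate=0.5),
}

year = 2030
levels = {name: logistic(year, **p) for name, p in curves.items()}

# The gap between the fastest and slowest curve is the "temporal mismatch"
# discussed later in the paper.
gap = levels["infrastructure"] - levels["organisation"]
```

Under any parameterisation with these orderings, there is a multi-year window in which infrastructure is largely built out while organisational adoption remains below half of its eventual ceiling.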
The infrastructure S-curve is already steep. Hyperscaler capex has roughly tripled since 2023, and projections from banks and consultancies suggest annual AI-related datacentre investment could exceed half a trillion dollars by the second half of the decade. Sovereigns and private-equity infrastructure funds are piling in, forming multi-billion-dollar joint ventures to build regional AI hubs, and even exploring orbital datacentres to exploit abundant solar energy and avoid terrestrial constraints.
The capability S-curve has also accelerated dramatically. Mixture-of-experts architectures and “thinking” modes have driven steep improvements in benchmark performance at lower inference cost, and major labs are targeting average-human or better scores on reasoning benchmarks as the next milestone.
This is the core of the “elasticity of intelligence”: when extra compute can be reliably converted into better reasoning and higher task fidelity, demand for compute becomes tied not to user counts but to the complexity and economic value of the problems being tackled.
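This demand logic can be expressed as a toy model: compute is purchased for every problem whose economic value exceeds its compute cost, so demand expands as the price of capable compute falls. The portfolio and prices below are hypothetical illustrations of the mechanism, nothing more.

```python
def compute_demanded(problems: list[tuple[float, float]],
                     price_per_unit_compute: float) -> float:
    """Total compute bought: one purchase per problem whose value clears its cost."""
    demand = 0.0
    for value, compute_needed in problems:
        if value > compute_needed * price_per_unit_compute:
            demand += compute_needed
    return demand

# (economic value, compute units required) for a hypothetical problem portfolio.
portfolio = [(10.0, 1.0), (100.0, 20.0), (1000.0, 500.0), (5.0, 10.0)]

high_price = compute_demanded(portfolio, 5.0)  # expensive compute: few problems clear
low_price = compute_demanded(portfolio, 0.5)   # cheap compute: demand expands sharply

print(high_price, low_price)  # 1.0 521.0
```

Note that demand here scales with the value and complexity of the problem portfolio, not with the number of users; that is the sense in which the elasticity of intelligence makes chip demand effectively unbounded.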
By contrast, the organisational S-curve is much flatter. Early-adopter firms are experimenting with copilots, code assistants and domain-specific models, but most large organisations remain in the “pilot purgatory” familiar from earlier waves of digital transformation. Governance, risk management, skills, and operating models still reflect a world where IT was a cost centre and intelligence resided almost exclusively in human professionals.
This misalignment between S-curves helps explain why capital markets interpret the same data as both a supercycle and a potential bubble. Infrastructure and capability diffusion are racing ahead, funded by hyperscalers’ own cash flows and unprecedented access to debt markets. Organisational diffusion, which ultimately determines whether the trillions in hardware produce sustainable productivity growth, is lagging behind. Valuations of AI software firms may therefore be discounting a pace of organisational change that is difficult to achieve in practice.
3. AI as Socio-Technical Regime: Heavy Industry Meets Cloud Culture
Socio-technical systems theory suggests that technologies, organisations, and institutions co-evolve into relatively stable “regimes” characterised by aligned infrastructures, regulations, skills, and cultural norms. By this logic, the AI boom is not just an upgrade to the existing digital regime but the early phase of a new one in which advanced compute, energy systems, and organisational forms must align.
The advanced-computing stack is already revealing this regime logic. At the bottom are semiconductor supply chains dominated by a handful of foundries and equipment vendors; above them sit GPU and accelerator designers, OEMs, and datacentre operators; on top rest foundation model labs, AI orchestration platforms, and finally the enterprise and sectoral applications that create end-user value.
Power utilities, grid operators, and thermal-equipment manufacturers—actors historically peripheral to “tech”—are becoming central nodes. Utilities and infrastructure funds are repurposing coal plants into datacentres, building on-site gas-turbine generation, and experimenting with new cooling architectures to serve hyperscale AI loads.
Yet most digital-native organisations still operate with a cloud culture shaped by the previous regime: pay-as-you-go compute, rapid iteration, and relatively low capital intensity. In that world, “move fast and break things” was economically rational. In the infrastructural intelligence regime, the economics favour different behaviours:
Long-duration commitments: multi-year capacity reservations for accelerators and power, often backed by long-term leases and project finance.
Vertical partnerships: tight integration between chip designers, OEMs, hyperscalers, utilities and sovereigns to secure location, power, and advanced packaging capacity.
Industrial-grade risk management: AI facilities with hundreds of megawatts of load and complex cooling systems look more like refineries or power plants than software projects, with corresponding safety, environmental and regulatory obligations.
Socio-technical theory would predict friction at the interface between these logics. Cloud-era product teams accustomed to short planning cycles and OPEX-heavy models must now coordinate with finance, infrastructure and public-affairs functions on 10- to 20-year assets. Boards and regulators—many of whom still treat AI as a software or ethics issue—are confronted with questions about grid stability, water use, and industrial zoning. Until these organisational and institutional adjustments occur, the system operates in a transitional, high-volatility state: capital commitments are long-lived, but governance and norms are still experimental.
4. Organisational Evolution: From AI Projects to AI Operating Systems
Against this backdrop, the popular narrative of AI as a sequence of isolated “use cases” is increasingly misleading. From an organisational-evolution perspective, what matters is not the number of pilots but the degree to which AI becomes an operating assumption baked into structure, processes and capability systems.
Three shifts are particularly important.
First, AI demands a new form of organisational ambidexterity. Firms must both exploit current AI capabilities—copilots, retrieval-augmented generation (RAG) systems, domain-specific models—and explore agentic and physical AI that may reconfigure entire value chains. The advanced-computing report warns that without substantial improvements in language-model fidelity, autonomous agents and physical robots will remain constrained by the need for human overseers, blunting their productivity impact.
This implies that organisational design must accommodate two tempos: cautious, risk-managed deployment of today’s brittle systems, and high-risk investment in capabilities that assume near-perfect task execution in the future.
Second, AI compresses the boundary between knowledge work and operations. Historically, organisations separated “thinking” (strategy, design, planning) from “doing” (execution, manufacturing, service delivery). High-fidelity AI blurs this distinction: code-generation, design exploration, synthetic experimentation, and real-time optimisation all sit at the intersection of cognition and execution. Socio-technical theory suggests that such boundary shifts trigger renegotiations of roles, identities, and power. Professional groups—from software engineers to clinicians—are already contesting where responsibility lies when AI tools co-produce work. Without deliberate redesign, organisations risk a patchwork of local solutions and ungoverned shadow systems.
Third, AI infrastructures lengthen feedback loops between investment and observed productivity. A hyperscaler can commit tens of billions to accelerators and power in a given year, but the downstream organisational changes in client firms—process redesign, skills, regulatory approvals—may unfold over a decade. From a dynamic-capabilities perspective, the winners will be organisations that can shorten this lag by building “learning infrastructure”: systematic mechanisms for capturing, validating and scaling AI-enabled process changes across business units and geographies.
These shifts indicate that the decisive bottleneck in the AI supercycle is likely to be organisational rather than purely technical. Firms that treat AI as a bolt-on project risk becoming the stranded assets of the cognitive era: locked into long-term contracts for infrastructure they are structurally unable to exploit.
5. Reframing the Bubble: Temporal Mismatch, Not Pure Overvaluation
Returning to the “AI bubble” debate, we can now be more precise. There are at least four different “bubble” hypotheses in circulation:
Asset-price bubble: equities and private valuations overshoot realistic cash-flow prospects.
Capacity bubble: the world builds far more datacentre and chip capacity than future workloads require.
Financing bubble: the capital stack supporting AI infrastructure—investment-grade bonds, securitisations, private credit—rests on overly optimistic assumptions about utilisation and pricing.
Institutional bubble: organisations and policymakers implicitly assume that AI will automatically convert into productivity, without investing in the socio-technical transformations required.
The evidence so far is mixed for the first three, but strong for the fourth. Public-market valuations of some AI equities will almost certainly prove excessive; history suggests this is the rule, not the exception, in transformative technology waves. Capacity overshoot at the physical layer is less clear. Unlike the dot-com era, the bulk of current AI infrastructure spending is being financed by hyperscalers’ own cash flows and long-term leases, anchored in real customer demand for cloud and AI services.
Long-term investors in datacentre real estate continue to view the sector as attractive precisely because power constraints and regulatory hurdles make supply hard to ramp quickly, which tends to support pricing.
Financing risks are more nuanced. The projected need for trillions of dollars of debt and private credit to close the AI infrastructure funding gap means that a downturn in AI-related revenues could transmit stress across bond, loan and securitisation markets. Yet even here, the assets being built—power-rich campuses, grid-connected sites, modular datacentres—are highly re-deployable for other digital workloads.
The institutional bubble, by contrast, is already visible. Boardrooms and policymakers routinely invoke AI as a panacea while under-resourcing the unglamorous work of process mapping, change management, regulatory adaptation and workforce development. Surveys showing growing investor concern about AI capex “getting out of hand” can be interpreted not only as fear of overbuilding but as anxiety that organisations will fail to turn infrastructure into productivity at the required pace.
From this perspective, the principal systemic risk is a temporal mismatch: infrastructure and capital markets moving at installation-phase speed, while organisational and institutional change remain stuck in an earlier paradigm. If that mismatch persists, the system will resolve it through one of two mechanisms: either a painful repricing of AI-linked assets, or an accelerated wave of organisational innovation and consolidation in which firms incapable of adaptation are acquired, displaced, or effectively hollowed out by more agile competitors.
6. Governing the Infrastructural Intelligence Era
The current AI cycle is best understood as the early stage of an infrastructural intelligence regime rather than a simple speculative mania. Intelligence is being industrialised: converted into a capital-intensive, utility-like service delivered through a global stack of chips, grids, and datacentres. The economics of this regime are fundamentally different from those of the prior cloud-software wave, and they are already pulling in new actors—from utilities and sovereign wealth funds to private-equity infrastructure platforms and even space companies.
This shift has three main implications for technological and organisational evolution.
First, strategic advantage is migrating down the stack. Control over leading-edge nodes, power-dense sites, cooling technologies, and long-term energy contracts will be at least as important as algorithmic innovation. Organisations that ignore these physical constraints are, in effect, strategising for the wrong regime.
Second, diffusion of AI must be managed as a multi-layered socio-technical transition. Policymakers and boards cannot rely on S-curves to take care of themselves. Instead, they need deliberate alignment across infrastructure planning, workforce development, regulatory frameworks and organisational design, recognising that each layer moves at a different speed.
Third, the most dangerous bubble is institutional complacency—the belief that investment in models and hardware automatically yields productivity. In reality, the binding constraint is the capacity of organisations to absorb and operationalise intelligence at scale. That requires new forms of ambidexterity, learning infrastructure, and cross-functional governance that treat AI not as an add-on project but as an organising principle.
If there is a single practical conclusion, it is this: leaders should worry less about whether the world is spending “too much” on AI in aggregate, and more about whether their own organisations are evolving quickly enough to belong to the regime those trillions are building. The infrastructures of intelligence now under construction will shape economic possibilities for decades. Whether they crystallise into a durable wave of productivity or a long hangover of under-used capacity will depend less on the physics of chips and power than on our willingness to re-engineer the organisations and institutions that sit on top of them.
Stay informed of these developments via my LinkedIn updates at https://www.linkedin.com/in/zenkoh/ and subscribe to my newsletter.
Legal Disclaimer
This article is intended for informational purposes only and does not constitute professional advice. The content is based on publicly available information and should not be used as a basis for investment, business or strategic decisions. Readers are encouraged to conduct their own research and consult professionals before taking action. The author and publisher disclaim any liability for actions taken based on this content.