Sunday Deep Dive: The Watt Wall
How Energy Physics Is Repricing the AI Boom
Core thesis. AI infrastructure is not being constrained primarily by capital, chips, or even model ambition. It is being constrained by the slowest-moving layers of the power system—transmission, interconnection, equipment lead times, and the regulatory permission to place very large loads next to scarce generation. That is why power access is becoming a moat and why the AI buildout is being repriced from the inside out. It is also why the investable hierarchy of the AI cycle runs through physical infrastructure before it reaches software—and why the current geopolitical and credit environment is stress-testing that hierarchy in real time.
Executive summary
The physical thesis
The AI buildout has collided with a physical bottleneck. U.S. data centers already consumed roughly 4–5% of national electricity in 2024, and even the low end of current 2030 projections implies a much larger grid burden.
The market still talks about power as if it were a fuel problem. In practice, the near-term binding constraints are wires, interconnection rights, transformers, turbine delivery slots, and cost-allocation rules.
Hyperscalers are therefore behaving more like quasi-utilities: underwriting generation, arranging long-term power contracts, exploring co-location next to existing plants, and using temporary or “bridging” power when the grid cannot move fast enough.
Financial and political corollaries
The financial split in AI infrastructure is best understood as a strong-balance-sheet core and a financing-sensitive edge. The core hyperscalers are far more resilient than telecom carriers were in 2000, but the broader ecosystem is increasingly moving from internal cash flow toward debt, leases, and private-credit structures.
Geography is becoming a first-order variable. Regions that can align power, land, and regulatory throughput will capture AI infrastructure; regions that cannot will see projects delayed, downsized, or pushed elsewhere.
The setup
The most revealing AI announcement of early 2026 was not a model release. It was a utility filing. On March 27, 2026, Entergy Louisiana and Meta announced a power-and-transmission package for Meta’s Richland Parish campus that included seven new combined-cycle gas units totaling more than 5.2 GW, roughly 240 miles of new 500-kV transmission, additional solar and storage, and a cost structure under which Meta pays the full cost of service.[^1] Meta has separately described the site as its Hyperion cluster and said it has the potential to scale to 5 GW, making it the company’s largest multi-gigawatt AI training campus.[^2] Secondary reporting often cites ten plants and roughly 7.5 GW because it combines earlier and later project phases. The clean primary-source March 2026 number is seven new units and more than 5.2 GW.[^1]
That is the point. A technology company is now underwriting utility-scale generation and transmission because the ordinary grid process cannot deliver power at the speed the AI capex cycle demands. The deal also reveals something about how the utility sector itself is being repriced. Entergy is not just selling kilowatt-hours; it is operating as an infrastructure platform for AI, with the customer bearing the full cost of service in exchange for speed and scale. Utilities that can offer that kind of arrangement—regulatory latitude, available generation, appetite for bespoke large-load structures—are attracting multi-billion-dollar commitments that would have been unimaginable five years ago. Utilities that cannot are watching those commitments go elsewhere, a divergence that will not show up cleanly in earnings for several quarters but is already visible in where the hyperscalers are signing their next deals. The sector is bifurcating into AI-enabling platforms and AI-constrained incumbents, and the market has only partly noticed.
In Memphis, xAI demonstrated the same logic in rougher form. Reporting indicated that the company installed as many as 35 methane turbines in a historically Black South Memphis neighborhood while only seeking permits for a smaller subset; EPA later concluded the turbines required air permits.[^3] The tactic was crude, but the economic logic was clear: when GPUs depreciate by the quarter and grid studies take years, temporary self-supply becomes rational even when it is politically or environmentally explosive.
The Three Mile Island restart tells the same story from the opposite direction. Constellation spent heavily to restart Unit 1, signed a headline-grabbing 20-year power purchase agreement with Microsoft, and targeted a 2027 return to service. Then PJM told the company that the restarted reactor’s output would likely not be deliverable until 2031 because the transmission configuration was not ready.[^4] The reactor is not the binding constraint. The power system around the reactor is.
How we got here
Demand shock, not just model hype
The Electric Power Research Institute’s February 2026 report put the baseline problem in stark terms. U.S. data centers consumed an estimated 177–192 TWh of electricity in 2024, or roughly 4–5% of national demand. By 2030, EPRI projects 380–793 TWh, equal to about 9–17% of U.S. electricity consumption.[^5] The range is wide because no one knows exactly how fast inference will scale, how much enterprise adoption will stick, or how durable model-efficiency gains will be. But the directional conclusion is unambiguous: even the conservative case implies a much larger power burden inside only a few years.
Virginia is the clearest preview. EPRI estimates that data centers already account for about a quarter of in-state electricity use there and could reach 39–57% by 2030, with seven more states crossing the 20% threshold.[^5] Once a single industry reaches that scale, it stops behaving like an ordinary customer segment. Its siting choices, outage risks, and tariff disputes become de facto energy policy.
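A quick back-of-envelope check, using only the EPRI figures cited above, shows why even the conservative case is a demand shock. This is a sketch of the arithmetic implied by the cited range, not EPRI's methodology:

```python
# Back-of-envelope using the EPRI figures cited above (all values in TWh).
dc_2024_low, dc_2024_high = 177, 192   # estimated 2024 U.S. data-center demand
dc_2030_low, dc_2030_high = 380, 793   # EPRI's 2030 projection range

# Growth multiple implied by pairing the range endpoints.
growth_low = dc_2030_low / dc_2024_high    # most conservative pairing
growth_high = dc_2030_high / dc_2024_low   # most aggressive pairing

print(f"2030 demand is {growth_low:.1f}x to {growth_high:.1f}x the 2024 level")
```

Even the conservative pairing roughly doubles the 2024 grid burden within six years; the aggressive pairing more than quadruples it.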
The real bottleneck is wires and connection rights
The U.S. power system does not lack projects on paper. Lawrence Berkeley National Laboratory’s 2025 edition of Queued Up found roughly 1,400 GW of generation and 890 GW of storage in active interconnection queues at the end of 2024, or about 2.3 TW in total.[^6] The problem is conversion from paperwork into operating assets. For projects built between 2018 and 2024, the median duration from interconnection request to commercial operation exceeded four years, far above the timelines typical in the early 2000s. And for projects entering queues between 2000 and 2019, only 13% had reached commercial operation by the end of 2024, while 77% had been withdrawn.[^6]
Those numbers matter because they explain why AI demand is colliding with a system that looks superficially oversupplied. There is ample capacity in line. There is not ample capacity that can be connected, financed, permitted, and delivered on the AI timetable. The International Energy Agency notes that building transmission in advanced economies can take 4–8 years and warns that around 20% of announced data-center projects could face delay risk if grids do not keep pace.[^7]
The corollary is that energized capacity—a site with a live grid connection and deliverable power today—is becoming a scarce asset in its own right, worth a structural premium over a site with a queue position and a 2029 target date. That reprices colocation providers, infrastructure REITs, and land plays in ways the market has been slow to absorb. The traditional method of valuing data-center real estate—square footage, fiber density, cooling capacity—increasingly takes a back seat to a simpler question: can the site actually draw power, and how much? For any data-center platform, the gap between energized capacity and contracted capacity is where the queue risk lives, and investors who are not asking about that ratio are evaluating the wrong variable.
The underinvestment decade
The roots of the transmission bottleneck are not abstract. They show up in a single, striking divergence: the amount of money spent on the U.S. transmission system rose steadily over the past fifteen years, but the amount of high-voltage transmission actually built collapsed. Grid Strategies and Americans for a Clean Energy Grid documented the trend in detail. The U.S. averaged roughly 1,700 miles of new high-voltage transmission per year from 2010 to 2014, much of it driven by proactive regional planning efforts like the Texas Competitive Renewable Energy Zones and MISO’s Multi-Value Portfolio. That rate dropped to about 925 miles per year from 2015 to 2019 and then to roughly 350 miles per year from 2020 to 2023, bottoming at just 55 miles in all of 2023. In 2024, only about 322 miles were completed—the third-slowest year on record.[^25]
Meanwhile, annual transmission spending rose from around $10 billion in 2010 to over $25 billion by 2023. More than 90% of that spending went to lower-voltage, reliability-driven projects: replacing aging equipment, hardening local substations, and performing maintenance on infrastructure built in the 1950s and 1960s. Very little went to the kind of high-capacity, long-distance lines that would expand the system’s ability to move bulk power across regions.[^25]
A Bank of America estimate underscores the scale of the mismatch: 31% of U.S. transmission infrastructure and 46% of distribution infrastructure is now within five years of, or already beyond, its expected useful life. In 2024, only about a third of total transmission and distribution spending—roughly $32 billion out of $95 billion—went to expansion. The rest went to replacements and upgrades.[^26]
The shale-era logic made this pattern seem rational at the time. U.S. electricity demand grew at just 0.5% per year from 2014 to 2024. Cheap gas had crushed power prices. Utilities, regulators, and investors all optimized for a world of flat demand and incremental grid maintenance, not for a world in which a single campus might suddenly request gigawatts of incremental load. Clean-energy investment flowed more easily into generation—solar panels, wind farms, battery arrays—than into the less glamorous business of building wires across state lines.
The result was a system that looked adequately funded in aggregate but was structurally unprepared for a demand shock. DOE’s 2024 National Transmission Planning Study put the gap in blunt terms: the lowest-cost electricity system portfolios that meet future demand growth and reliability needs require expanding the total U.S. transmission system by 2.1 to 2.6 times its 2020 size by 2050, and roughly quadrupling interregional transfer capacity. That implies something on the order of 5,000 miles of high-capacity regional transmission per year—more than ten times the recent pace.[^27]
The equipment backlog tells the same story. DOE has repeatedly warned that large power transformers are custom-made, globally concentrated, and subject to procurement lead times of a year or longer.[^8] The IEA says large power transformers can now take up to four years to procure, while cable lead times can stretch two to three years.[^7] On the generation side, GE Vernova reported in January 2026 that its Gas Power equipment backlog plus slot reservations had risen from 62 GW to 83 GW.[^9] When the key components of a buildout are sold in delivery slots rather than inventory, capital loses much of its ability to accelerate the schedule.
The current geopolitical environment makes this worse, not better. The disruption of Middle East energy infrastructure—including damage to Gulf petrochemical and refining capacity, the effective closure of the Strait of Hormuz to normal commercial transit, and the resulting surge in global gas and diesel prices—is adding a layer of cost and competition for the same equipment the AI buildout requires. Gas turbines ordered for AI campuses compete with gas turbines ordered to replace disrupted generation elsewhere. Transformer manufacturers serving U.S. utilities are also fielding orders from European grids under energy-security stress. The Watt Wall was already a binding constraint before the conflict; the conflict is tightening it further by compressing the global equipment supply that AI builders depend on.
How the machine works
Given the structural mismatch just described—surging demand meeting a grid built for a flat-load era—the major actors in the AI buildout are adapting in ways that reshape the traditional boundaries between technology companies, utilities, and regulators.
Hyperscalers as quasi-utilities
A firm spending tens of billions of dollars a year on AI infrastructure cannot treat electricity as a commodity input purchased at the last step. It has to secure generation, transmission, and service rights years in advance. That pushes the business model toward long-term power purchase agreements, direct utility partnerships, co-location with existing generation, on-site backup that begins to resemble primary supply, and in some cases direct underwriting of new power plants.
The strategic consequence is simple: power access becomes a moat. A data-center operator that can obtain reliable electricity in 24 months has fundamentally different economics from one waiting five years in an interconnection queue, even if both can buy the same chips.
Bridging power is not a sideshow
The market still tends to treat temporary power as an anecdote. It is better understood as an asset class created by the Watt Wall. IEEE Spectrum’s 2025 reporting on ProEnergy captured why. The company buys retired CF6-80C2 jet-engine cores, converts them into 48 MW PE6000 generators, and sells them into data-center projects that need power before new grid connections arrive. Each unit can start in minutes and engines can be swapped quickly. ProEnergy had already sold 21 turbines for two projects totaling more than 1 GW.[^10]
That is not a curiosity. It is a symptom of a broader “shadow power” ecosystem that most energy analysts have not fully modeled, because they are still thinking in terms of normal utility procurement cycles. Bridging power exists because the value of an energized GPU cluster now exceeds the value of waiting for a textbook grid solution.
It also means that gas-turbine manufacturers, balance-of-plant suppliers, switchgear vendors, transformer makers, and transmission contractors are not merely second-derivative beneficiaries of AI. In many cases they are closer to the binding constraint than the chip vendors are. GE Vernova and Siemens Energy have already re-rated substantially on this recognition, but much of the supply chain below them—specialty transformer manufacturers, medium-voltage switchgear producers, EPC firms with transmission construction capabilities—has not moved nearly as far. For anyone comfortable going deeper into the industrial stack, the bottleneck suppliers with long backlogs and pricing power are where the unpriced asymmetry sits.
Co-location is real, but it is not a free pass
The co-location debate is where the physics of the grid meets the law of the tariff. The intuition is easy to understand: if a data center can sit beside a power plant, take power “behind the meter,” and avoid years of interconnection delay, it can get to revenue much faster. The regulatory question is whether that arrangement still relies on the broader transmission system in ways that should require network-service charges, reliability obligations, or revised tariff treatment.
FERC’s handling of PJM’s Susquehanna case made clear that this is not settled law. Over 2024 and 2025, the commission first rejected PJM’s proposal without prejudice, then opened a show-cause proceeding, and finally directed PJM in December 2025 to create clearer transmission-service options and revise its behind-the-meter rules.[^11] That does not amount to a ban on co-location. It means co-location is moving from improvisation to regulated structure.
The Three Mile Island delay discussed earlier sits inside the same story. Even when a reactor is technically ready, investors still need transmission service, deliverability, and a tariff treatment that survives regulatory scrutiny. Nuclear is therefore best understood not as a near-term escape hatch but as a promising source of firm power whose value depends on the same transmission and service constraints affecting everything else.
Financing the buildout: strong core, increasingly levered edge
The right comparison with the telecom bubble is not that today’s AI buildout is equally fragile. The core hyperscalers have stronger balance sheets, broader revenue bases, and lower existential risk than the telecom carriers of the late 1990s. If AI demand disappoints, they can slow the pace of capex, absorb write-downs, and continue operating.
But it is no longer accurate to describe the whole system as self-funded. The BIS argued in January 2026 that the scale of AI investment is pushing the ecosystem from operating cash flow toward debt financing.[^12] By March 2026, the BIS noted that major U.S. tech firms had issued more than $100 billion of bonds in 2025 while also relying on off-balance-sheet structures, leases, joint ventures, and private-credit-style funding vehicles backed by long-dated offtake commitments.[^13]
That matters because the edge of the AI system is far more financing-sensitive than the core. Customer-concentrated infrastructure providers, neoclouds, and monetization-dependent software businesses do not all have the luxury of waiting for product-market fit while servicing expensive capital. The same BIS research shows how deep private credit has already penetrated software: direct loans to SaaS firms grew from almost $8 billion in 2015 to more than $500 billion by the end of 2025.[^14]
The clean analytical divide is therefore not debt-free versus indebted. It is low-probability-of-distress core versus funding-sensitive edge. And there is growing evidence that the turn in the financing cycle is not merely a future risk—it may already be underway. In the first quarter of 2026, private credit funds faced roughly $20 billion in redemption requests, several major alternative asset managers imposed or tightened withdrawal caps, and junk bond funds saw $14 billion in outflows. Meanwhile, the corporate bond issuance required to fund hyperscaler capex—Amazon, Meta, and others issued tens of billions in 2025—is absorbing credit capacity at the same moment that higher rates, geopolitical stress, and rising default risk are compressing what is available for everyone else. The private-credit layer, where underwriting standards are hardest to observe from the outside, is the natural place for the earliest stress signals to appear—and the first-quarter data suggest they already have.
For those who prefer a simpler heuristic: customer concentration in an infrastructure provider’s revenue mix is the single best proxy for fragility. A builder whose backlog depends on one or two hyperscaler contracts is structurally different from one with twenty enterprise customers, even if the near-term revenue looks identical. But there is a second heuristic that matters in the current environment: the gap between a company’s contracted revenue and the credit quality of the counterparty funding the contract. In a period when private credit is under institutional stress and the rate regime offers no relief, that gap is where the surprises live.
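The concentration heuristic above can be made concrete with a toy calculation. The function and the two example books are hypothetical, chosen only to show how sharply the metric separates the two builders described in the text:

```python
# Hypothetical fragility screen (illustrative only): revenue share of the
# top two customers, per the concentration heuristic above.
def top2_share(customer_revenues: list[float]) -> float:
    total = sum(customer_revenues)
    top_two = sum(sorted(customer_revenues, reverse=True)[:2])
    return top_two / total if total else 0.0

# Builder A: backlog dominated by two hyperscaler contracts.
print(top2_share([900, 800, 50, 50]))   # ~0.94
# Builder B: twenty enterprise customers of similar size.
print(top2_share([50] * 20))            # 0.1
```

Both builders can show the same near-term revenue; the metric captures which one is structurally exposed to a single counterparty impairment.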
The ratepayer fight is about cost allocation as much as scarcity
Electricity politics is where the Watt Wall becomes visible to households. PJM’s capacity market provides the cleanest evidence. The clearing price moved from $28.92/MW-day for the 2024–25 delivery year to $269.92 for 2025–26, then to $329.17 for 2026–27. The 2027–28 auction also cleared at a very elevated level, roughly $333/MW-day, while still falling short of PJM’s installed reserve target.[^15] Those are not normal moves. They reflect both genuine physical scarcity—load growth outrunning available supply—and PJM’s capacity-market design, including the shape of the demand curve and the capacity-performance requirements imposed after the 2014 polar vortex. Separating how much of the repricing is scarcity and how much is mechanism matters for assessing duration: scarcity-driven prices persist until supply catches up, while design-driven prices can be reformed away faster. For the moment, both forces are pushing in the same direction.
But the policy lesson is subtler than “Big Tech is raising your bill.” The rate story is a combination of real load growth, PJM’s market design, and a rapidly intensifying fight over who pays for incremental infrastructure. POWER Magazine’s summary of the DELTa database counted 66 large-load tariffs or service rules across 34 states and 51 utilities as of November 2025, with 36 approved and 29 pending or proposed.[^16] That is where the lasting policy change is occurring: in state commissions, special service agreements, and “very large customer” categories that try to force cost causation and cost recovery to line up.
The March 2026 White House “Ratepayer Protection Pledge” matters mostly as a signal of political salience, not as binding law. It was a proclamation and fact sheet, not a tariff, statute, or FERC order.[^17] By contrast, Entergy’s Meta arrangement—in which the customer pays the full cost of service—looks more like a template. It does not solve the capacity shortage. It solves the political question of who should bear the bill.
What the market is still missing
The GDP mirage
The most repeated AI macro statistic of the last year has been that data-center investment accounted for 39% of U.S. GDP growth in the third quarter of 2025. That is not what the St. Louis Fed analysis found. The January 2026 piece concluded that four AI-related investment categories—software, R&D, information-processing equipment, and data-center construction—accounted for 39% of total GDP growth across the first three quarters of 2025.[^18] In the third quarter alone, those four categories contributed 0.48 percentage points of growth, or about 11% of the total, and data-center construction by itself contributed only 0.03 percentage points.[^18]
That correction matters because it changes the macro frame. AI capex is clearly large enough to show up in the national accounts, but the domestic value-added story is smaller and more contested than many headlines imply—in part because imported servers and GPUs inflate gross investment while contributing less to domestic output.[^19] The right investor question is not whether AI capex is real. It plainly is. The right question is whether the buildout is generating enough domestic output, profits, and durable end-demand to justify the scale of capital and electricity being committed.
The import-adjustment point has a less obvious but investable implication: the domestic beneficiaries of AI capex are narrower than headline spending suggests. The real domestic value-add sits in construction, electrical infrastructure, power generation, and the services layer around them—not in the hardware itself, which is overwhelmingly imported. General contractors with data-center books, electrical distributors, specialized construction firms, and the utilities themselves capture a larger share of domestic AI value creation than the semiconductor importers do, even though the semiconductor names dominate the narrative. If you are trying to express a “domestic AI buildout” thesis, the trade is in the physical plant, not in the box.
Efficiency is a real counter-thesis, not a footnote
The Watt Wall argument becomes much weaker if model capability can keep rising while power intensity falls faster than demand rises. That is no longer a hypothetical concern. DeepSeek-R1, published in Nature in September 2025, showed that reinforcement learning could induce sophisticated reasoning behavior without relying on human-labeled reasoning traces.[^20] Whatever one thinks of DeepSeek as a company, the paper strengthened the case that frontier capability can be improved through algorithmic efficiency rather than brute-force scaling alone.
The mistake is to jump from that insight to the claim that the power problem disappears. The IEA’s scenarios still span a very large range for data-center electricity demand by 2035—roughly 700 to 1,700 TWh globally—and even the high-efficiency path leaves a substantial power footprint.[^7] Efficiency can flatten the slope of the curve without removing the wall.
In market terms, it is the strongest counter-thesis to a straight-line energy bull case, but not yet evidence that the power trade is a mirage. Even if the efficiency thesis is directionally correct, there is a lag—likely measured in years, not quarters—between algorithmic gains and their propagation through deployed infrastructure and committed capex plans. Hyperscalers do not cancel $10 billion campuses because next year’s model is 30% more efficient. That lag creates a window in which infrastructure names continue to benefit even as the slope of the demand curve flattens. The asymmetry cuts the other way for anyone who is long pure-play power scarcity on a five-year horizon: efficiency is the risk that turns a structural thesis into a cyclical one.
There is also a second, less discussed version of the counter-thesis: not that AI power demand falls, but that the capital cycle supporting the buildout peaks before the physical infrastructure catches up. Since early 2026, there have been several signs that the AI investment cycle is entering a more selective phase. OpenAI scaled back its Nvidia infrastructure agreement as it prepared for public markets. Micron sold off on an earnings beat—classic sell-the-news behavior suggesting the hardware narrative was already priced. Governance failures at hardware distributors, model-layer commoditization, and prominent warnings that most AI stocks may not survive have all accumulated. None of these signals individually threatens the Watt Wall thesis—they are about the capital cycle, not the physics—but together they raise the possibility that the urgency of the buildout moderates even while the structural power deficit persists. The infrastructure layer is more insulated from this risk than compute or software, precisely because its value derives from the physical bottleneck rather than from the pace of spending. But anyone positioning for indefinite emergency-level scarcity should at least acknowledge that the emergency may gradually become an ordinary shortage.
Geography follows available electrons
The next geography of AI will be determined less by where engineers want to live than by where power can be built, moved, and permitted. China is the clearest benchmark. The EIA reported that China installed 277 GW of utility-scale solar in 2024 alone, more than twice the total U.S. utility-scale solar fleet at the end of the same year.[^21] In the U.S., total utility-scale generating-capacity additions in 2024 were 48.6 GW, with roughly 63 GW expected in 2025.[^22] Even before counting China’s wind, coal, hydro, nuclear, or grid buildout, the contrast makes the point. If AI is increasingly power-constrained, then countries expanding power capacity at that pace hold a structural advantage.
The U.S. Gulf Coast is the domestic version of the same story. States with available gas, land, and relatively pragmatic utility structures can move faster than states where every large load requires years of transmission battles. Outside the U.S., the Persian Gulf has been positioning itself as a serious contender. Microsoft says its Saudi Arabia East cloud region will be available in the fourth quarter of 2026. Microsoft and G42 have announced a 200 MW UAE expansion through Khazna. Oracle has launched a Blackwell-based OCI Supercluster in Abu Dhabi.[^23] The common trait is throughput: power, land, capital, and a permitting regime that can line up quickly enough to matter.
However, the geopolitical environment as of early April 2026 complicates this thesis substantially. The ongoing conflict involving Iran, the effective closure of the Strait of Hormuz to normal commercial shipping, and direct attacks on Gulf energy infrastructure—including strikes on UAE and Saudi facilities—have introduced a level of operational and political risk that the pre-conflict data-center geography thesis did not account for. Microsoft’s Saudi Arabia East timeline, Oracle’s Abu Dhabi deployment, and the broader Gulf data-center pipeline all depend on assumptions about physical security, energy availability, and investor willingness to commit long-duration capital to a region under active military threat. These projects have not been cancelled, and the structural advantages of the Gulf—cheap power, available land, sovereign capital—remain real. But the risk discount the market should attach to Gulf data-center commitments is higher today than it was six months ago. The more relevant near-term geographic winners may be the U.S. Gulf Coast and other domestic regions that offer a version of the same speed-and-power formula without the geopolitical tail risk.
The geographic rebalancing carries implications for infrastructure adjacencies that tend to get overlooked. Fiber and connectivity providers serving the Gulf Coast corridors are early beneficiaries of capacity being planned now but needing lit connectivity within two to three years. Water utilities and industrial water-treatment companies in regions where cooling demand is about to surge—particularly in warm climates where evaporative cooling is less effective—face a demand step-change that most forward estimates have not incorporated. In an infrastructure cycle this large, these second-order effects are often where the risk-adjusted returns are best, because the market is slower to price them.
Community opposition is becoming a build-speed variable
The most underappreciated risk to the AI infrastructure timetable is not a new technology. It is local politics. In Memphis, opposition centered on air pollution and environmental justice. In Texas, local resistance has focused on water use, land use, and the scale mismatch between data centers and surrounding communities. In Indianapolis, opposition escalated so far that a city councilman’s home was shot at after his stance on a data-center rezoning fight.[^24]
The point is not that every project will face that level of conflict. It is that siting risk is no longer a soft variable. Once data centers begin to look like power plants, transmission corridors, and water-intensive industrial sites, they inherit the politics of infrastructure. That slows projects directly and also pushes regulators to create more explicit large-load rules, which can change economics even when a project is technically feasible.
Scenarios that matter
The analysis above suggests a range of plausible outcomes depending on how the physical, financial, and regulatory variables interact. Four scenarios deserve particular attention.
1. Constraint without collapse
In the base case, AI capex stays high but slows from the initial surge. Utilities, regulators, and equipment vendors gradually expand the system, but not fast enough to erase the queue. Power remains scarce in the best markets, early movers keep their advantage, and the center of value stays with those who control energized capacity rather than those who simply forecast the largest model.
The current geopolitical environment makes this scenario more likely and more intense than it would otherwise be. Elevated energy prices, disrupted global equipment supply chains, and a rate regime that offers no relief from financing costs all extend the duration of the constraint. A ceasefire or de-escalation would ease some of these pressures but would not resolve the underlying transmission, interconnection, and permitting bottlenecks that existed before the conflict began.
The market consequence is that the most durable beneficiaries are still the physical bottlenecks: utilities with credible cost recovery, gas-turbine and electrical-equipment suppliers, transmission contractors, and hyperscalers that locked in power early.
2. Edge financing unwind
In the second scenario, the AI application layer and the more weakly financed infrastructure layer fail to monetize fast enough to support the debt and lease structures now building around them. The core hyperscalers keep spending, but the edge of the stack—customer-concentrated builders, neoclouds, and software borrowers dependent on private credit—begins to de-rate or consolidate.
This scenario is not purely hypothetical. As of early April 2026, there are concrete indicators consistent with its early stages: private credit redemptions have accelerated, several alternative asset managers have imposed or tightened withdrawal limits, leveraged loans are underperforming high-yield bonds due in part to AI-driven disruption of specific borrowers, and at least one prominent short-seller has begun recommending bearish credit derivatives tied to AI labor-displacement risk. None of these individually constitutes a crisis. But together they describe a credit environment in which the financing-sensitive edge of the AI ecosystem is under more stress than equity markets are pricing.
The defining feature of this scenario is that the infrastructure still matters but the question of who owns it changes. The strongest balance sheets absorb distressed assets at a discount. Credit spreads in the AI infrastructure stack widen before equity prices fall, which means the signal shows up in bond and loan markets first. For investors positioned in the bottleneck layer, the risk is not that the assets become worthless—it is that the counterparty behind the contract becomes impaired.
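The “credit leads equity” sequencing in this scenario can be made concrete with a small monitoring sketch. This is a hypothetical illustration with synthetic numbers, not a data feed or a trading signal; the function name and every figure below are invented for exposition:

```python
# Illustrative sketch: flag when leveraged-loan spreads widen relative to
# high-yield, the leading signal this scenario predicts. All figures are
# synthetic placeholders, not market data.

def spread_divergence(loan_spreads, hy_spreads, window=5):
    """Change in the (loan minus high-yield) spread gap over the trailing window, in bps."""
    gaps = [l - h for l, h in zip(loan_spreads, hy_spreads)]
    if len(gaps) <= window:
        return 0.0
    return gaps[-1] - gaps[-1 - window]

# Synthetic weekly spreads in basis points: loans widening while HY holds steady.
loans = [450, 455, 460, 470, 485, 505]
high_yield = [320, 322, 321, 323, 325, 324]

signal = spread_divergence(loans, high_yield, window=5)
print(f"Loan-vs-HY divergence over window: {signal:+.0f} bps")  # → +51 bps
```

The design choice matters: the sketch tracks the relative move between the two markets rather than absolute spread levels, which is the divergence pattern described above, where loan-market stress shows up before equities reprice.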
3. Efficiency surprise
In the upside-to-software, downside-to-infrastructure scenario, model and system efficiency improve faster than expected. Compute becomes cheaper, adoption broadens, and some of today’s most aggressive power-demand assumptions prove too high. AI still grows, but it grows with lower energy intensity than current bullish infrastructure cases assume.
Under this path, the application layer wins more than the physical layer does. Some power and equipment names still benefit because the installed base is large, but valuations built on a straight-line extrapolation of emergency scarcity would need to reset. The nearer-term infrastructure trades—equipment backlogs, utility partnerships already signed, construction in progress—are more defensible than the longer-duration bets premised on that scarcity persisting indefinitely.
4. Policy breakthrough
The least discussed scenario is regulatory rather than technological, and it is more concrete than most investors realize. FERC’s Order 1920, finalized in May 2024 and affirmed with modifications in Order 1920-A (November 2024) and Order 1920-B (April 2025), requires transmission providers for the first time to conduct long-term regional planning on a 20-year horizon, evaluate projects against seven specified benefit categories, and develop cost allocation methods with meaningful state input.[^28] Compliance filings from RTOs and transmission providers began arriving in mid-2025, and FERC will be reviewing and acting on them through 2026. The rule faces legal challenge in the Fourth Circuit, where eleven consolidated lawsuits are pending, but FERC has defended it vigorously, and Order 1920-A’s expanded state role was designed in part to reduce the legal attack surface.[^29]
If the rule survives judicial review and is implemented as designed, it would represent the most significant structural reform to U.S. transmission planning in over a decade—shifting the system from reactive, utility-by-utility maintenance spending toward the kind of proactive, multi-value regional planning that drove the high-voltage buildout of the early 2010s. Combined with workable co-location rules, faster large-load tariff adoption, and continued DOE action through the Transmission Facilitation Program, the queue could shorten meaningfully without requiring a miracle in hardware.
In market terms, a policy breakthrough would compress the advantage of the earliest power holders and broaden the investable geography of AI. It would also make today’s most distressed “stranded by power” projects more valuable. The risk for current bottleneck holders is that a policy acceleration reduces the duration of scarcity faster than the market expects.
Where this meets the current macro
The Watt Wall thesis is not an abstract infrastructure argument. It is the structural foundation for an investable hierarchy that runs through the AI cycle: power and physical infrastructure first, then optical networking, then compute, then memory, then software. The logic is simple and follows directly from the analysis above. The scarcer the layer, the harder it is to substitute, and the longer the advantage persists. Chips can be redesigned, models can be retrained, and software can be commoditized. But a live grid connection with deliverable power cannot be replicated by a competitor with a better algorithm. That is why the physical layer sits at the top of the hierarchy, and it is why that hierarchy has held up through every rotation in the AI narrative over the past several months—including the maturation signals discussed in the efficiency section.
The current geopolitical and credit environment is stress-testing the thesis from two directions simultaneously. On the supply side, the disruption of Middle East energy infrastructure has tightened the global equipment market, elevated gas and diesel prices, and kept the rate regime in a posture where financing costs offer no relief. That makes the Watt Wall harder, not softer. On the demand side, the credit stress building in the private-credit and leveraged-loan markets—driven partly by energy exposure and partly by AI-specific disruption of borrowers—threatens the financing-sensitive edge of the AI ecosystem. The core of the buildout, funded by hyperscaler balance sheets, is largely insulated. The edge, dependent on private credit, customer-concentrated contracts, and unproven monetization, is not.
The scenarios described in this deep dive are not academic. As of early April 2026, the base case (constraint without collapse) and the edge-financing-unwind scenario are both partially in progress. The daily briefs will be tracking which of these is materializing and at what speed. TSMC’s 35% revenue beat confirms that AI hardware demand is real and growing, which supports the demand side of the Watt Wall thesis. The $300 billion in committed hyperscaler capex creates power demand that the grid cannot serve on the AI timetable. The question is not whether the wall is real—it plainly is—but whether the capital and credit structure supporting the buildout can hold together long enough for the physical infrastructure to catch up.
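To give the $300 billion figure physical scale, a back-of-envelope conversion from capex to grid load can be sketched. Only the capex total comes from the text; the facility share, cost per megawatt, and load factor below are illustrative assumptions, not sourced estimates:

```python
# Back-of-envelope: translating committed hyperscaler capex into grid load.
# The $300B figure is from the text; every other input is an ASSUMPTION
# chosen for illustration and should be replaced with the reader's own.

committed_capex_usd = 300e9   # from the text
facility_share = 0.5          # ASSUMPTION: share of capex spent on facility/power
capex_per_mw_usd = 30e6       # ASSUMPTION: all-in cost per critical MW
avg_utilization = 0.8         # ASSUMPTION: average load factor once energized

critical_mw = committed_capex_usd * facility_share / capex_per_mw_usd
avg_load_gw = critical_mw * avg_utilization / 1000
annual_twh = avg_load_gw * 8760 / 1000  # 8,760 hours per year

print(f"Implied critical capacity: {critical_mw:,.0f} MW")
print(f"Average load: {avg_load_gw:.1f} GW (~{annual_twh:.0f} TWh/yr)")
```

Under these assumed inputs, the committed spend implies several gigawatts of continuous load. The point is the order of magnitude, not the specific figure: even conservative assumptions produce demand that must be served by the same interconnection queues and equipment backlogs described above.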
Positioning and watchpoints
The cleanest way to express the thesis is not through maximalism about any single company. It is through hierarchy—and the hierarchy follows directly from the scarcity structure described above.
First, prefer the bottlenecks over the narrative. The most durable value sits with businesses that control or supply the scarce layers of the system. That means utility relationships, energized sites, gas-turbine delivery slots, transformers, switchgear, transmission construction, and credible cost-recovery mechanisms. In practice, this is why power infrastructure and electrical-equipment suppliers have remained at the top of the AI positioning framework even as the compute, memory, and software layers have rotated through periods of enthusiasm and disillusionment. The bottleneck layer earns its premium from physics, not from narrative momentum. That premium persists across all four scenarios described above, including the efficiency surprise, where nearer-term infrastructure plays backed by signed contracts and equipment in production are more defensible than longer-duration scarcity bets.
Second, prefer strong balance sheets and diversified demand over customer concentration and financing dependence. The core hyperscalers can survive a slower AI monetization curve. The thinner-capital edge of the ecosystem may not. This principle applies doubly in the current credit environment: with private credit under redemption pressure, leveraged loans diverging from high-yield on AI-specific impairment, and rate policy offering no near-term relief, counterparty quality in infrastructure contracts is no longer an abstraction. The right question for any infrastructure position is not just “is the asset valuable?” but “can the entity funding the contract survive a 12-month monetization delay?”
Third, be selective with nuclear. Firm zero-carbon power is strategically valuable, but near-term nuclear exposure should be treated as a transmission and regulatory thesis as much as a generation thesis. A reactor without deliverability is not a solved problem. The Three Mile Island delay is the clearest example: the generation asset is ready before the grid around it is. Until the co-location and transmission-service framework matures—which FERC is actively working on—nuclear plays carry queue risk that the market has not fully discounted.
Fourth, watch PJM, FERC, and state utility commissions more closely than product-launch calendars. In this phase of the cycle, a tariff order or transmission-service ruling can create or destroy more enterprise value than a benchmark improvement. The 66 large-load tariffs tracked across 34 states are the real-time regulatory expression of the Watt Wall, and the outcomes of those proceedings will determine which utilities can operate as AI-enabling platforms and which remain constrained.
Fifth, assume geography matters—and update the geographic risk map for current conditions. The U.S. Gulf Coast and regions that can pair generation with fast permitting should gain relative importance. The Persian Gulf’s structural advantages remain real, but the current conflict demands a higher risk discount on Gulf-based data-center commitments than the pre-conflict consensus assumed. Markets where data centers are already consuming a huge share of local electricity—especially in PJM and Northern Virginia—deserve a higher political-risk discount from the ratepayer and community-opposition dynamics described above.
Sixth, monitor credit markets as the leading indicator for the AI infrastructure cycle. The edge-financing-unwind scenario described above predicts that stress will show up in credit spreads, covenant quality, and private-credit fund flows before it shows up in equity prices. That is consistent with what the data has been showing in early 2026. For anyone positioned in the bottleneck layer, the primary risk is not that demand evaporates—it is that the counterparty or financing structure behind the contract becomes impaired. Watching the bond and loan markets for AI-adjacent stress is therefore not merely a credit-market exercise; it is a direct input into the durability of infrastructure positions.
What would change my mind
Four developments would materially weaken the Watt Wall thesis.
The first would be a sustained efficiency break—not a single model improvement, but several years in which algorithmic and systems gains consistently outrun demand growth. The second would be genuine interconnection and transmission reform that cuts multi-year queue times down to something closer to normal industrial project schedules. The third would be faster AI monetization, especially if the application layer begins to produce cash flows that validate today’s capex and financing structures rather than simply carrying them forward. The fourth—and the one most relevant to the current environment—would be a resolution of the geopolitical disruptions that are compounding the infrastructure constraint, combined with a normalization of the credit environment that allows the financing-sensitive edge of the buildout to fund itself without stress.
If the first three happened together, the current power scarcity would look more cyclical than structural. If the fourth happened in isolation, the Watt Wall would persist but the urgency—and the valuation premium attached to bottleneck holders—would moderate. Until then, the burden of proof still sits with anyone arguing that electrons are a side issue.
[^1]: Entergy Louisiana, “Entergy Louisiana announces a new agreement with Meta that will deliver an additional $2B in customer savings,” Mar. 27, 2026.
[^2]: Meta, “Meta’s Richland Parish data center supports Louisiana economy, $875 million in contracts,” Dec. 2025.
[^3]: Politico, May 6, 2025, on xAI’s methane turbines in Memphis; The Guardian, Jan. 15, 2026, on EPA’s conclusion that the turbines required air permits.
[^4]: Reuters, Mar. 26, 2026, reporting that PJM told Constellation the restarted Three Mile Island unit may not connect until 2031 even if the reactor is ready in 2027.
[^5]: Electric Power Research Institute, Powering Intelligence: Analyzing Artificial Intelligence and Data Center Energy Consumption (Feb. 2026).
[^6]: Lawrence Berkeley National Laboratory, Queued Up: Characteristics of Power Plants Seeking Transmission Interconnection (2025 edition).
[^7]: International Energy Agency, Energy and AI (2025).
[^8]: U.S. Department of Energy materials on large power transformer supply constraints and procurement lead times; see also IEA, Energy and AI, on transformer and cable lead times.
[^9]: GE Vernova, fourth-quarter and full-year 2025 results, Jan. 28, 2026, reporting Gas Power equipment backlog plus slot reservations rising from 62 GW to 83 GW.
[^10]: IEEE Spectrum, Oct. 20, 2025, reporting on ProEnergy’s PE6000 jet-engine conversions and the scarcity of new gas turbines.
[^11]: Federal Energy Regulatory Commission materials on PJM co-located load and Susquehanna, including the Feb. 20, 2025 show-cause proceeding and the Dec. 18, 2025 order directing PJM to create clearer transmission-service options and revise its behind-the-meter rules.
[^12]: Bank for International Settlements, Bulletin No. 120, “Financing the AI boom: from cash flows to debt,” Jan. 7, 2026.
[^13]: BIS Quarterly Review, Mar. 16, 2026, “Financing the AI infrastructure boom: on- and off-balance sheet borrowing.”
[^14]: BIS Quarterly Review research on direct lending to SaaS firms, noting growth from almost $8 billion in 2015 to more than $500 billion by end-2025.
[^15]: PJM auction reports for the 2024–25, 2025–26, and 2026–27 Base Residual Auctions; American Public Power Association summary of the 2027–28 auction.
[^16]: POWER Magazine, Nov. 2025, summarizing the DELTa database and large-load tariff activity across U.S. states and utilities.
[^17]: White House fact sheet and proclamation on the Mar. 4, 2026 “Ratepayer Protection Pledge.”
[^18]: Federal Reserve Bank of St. Louis, “Tracking AI’s Contribution to GDP Growth,” Jan. 2026.
[^19]: Standard GDP accounting subtracts imports because imports are not domestic production; see Federal Reserve educational materials and FRED explainers on why imports are subtracted in GDP.
[^20]: DeepSeek-AI et al., “Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” Nature, Sept. 17, 2025.
[^21]: U.S. Energy Information Administration, Apr. 22, 2025, reporting that China installed 277 GW of utility-scale solar in 2024.
[^22]: U.S. Energy Information Administration, Jan. 2025 outlook on U.S. utility-scale capacity additions, reporting 48.6 GW added in 2024 and about 63 GW expected in 2025.
[^23]: Microsoft, Jan. 2026, on Saudi Arabia East availability in Q4 2026; Reuters, Jan. 2026, on Microsoft and G42’s 200 MW UAE expansion through Khazna; Oracle, Mar. 2026, on its OCI Supercluster launch in Abu Dhabi with Nvidia Blackwell GPUs.
[^24]: Politico and The Guardian on Memphis; The Texas Tribune, Feb. 10, 2026, on data-center opposition in Hood County, Texas; Associated Press, Apr. 7, 2026, on violence directed at an Indianapolis councilman after a data-center zoning fight.
[^25]: Grid Strategies and Americans for a Clean Energy Grid, Fewer New Miles: The U.S. Transmission Grid in the 2020s (Jul. 2024); updated 2025 edition, Fewer New Miles: Strategic Industries Held Back by Slow Pace of Transmission (2025). Data on high-voltage transmission mileage from FERC Energy Infrastructure Reports.
[^26]: Bank of America Global Research, “Transformation Power Check: Watt’s Going On with the Grid?” (2025), citing 31% of transmission and 46% of distribution infrastructure near or beyond useful life, and EEI member data on expansion vs. replacement spending.
[^27]: U.S. Department of Energy, National Transmission Planning Study (Oct. 2024), finding that lowest-cost system portfolios require expanding the U.S. transmission system by 2.1–2.6 times its 2020 size by 2050; Grid Strategies estimate of roughly 5,000 miles per year of high-capacity regional transmission needed to meet that target.
[^28]: Federal Energy Regulatory Commission, Order No. 1920, Building for the Future Through Electric Regional Transmission Planning and Cost Allocation, 187 FERC ¶ 61,068 (May 13, 2024); Order No. 1920-A (Nov. 21, 2024); Order No. 1920-B (Apr. 11, 2025).
[^29]: Consolidated appeals pending in the Fourth Circuit, Appalachian Voices et al. v. FERC, No. 24-1650; FERC defended the rule in a Jan. 2026 brief; see also Utility Dive, “FERC in 2026,” Jan. 29, 2026, on compliance filings and judicial review status.


