The most important variable in AI infrastructure is no longer GPU availability. It is power, and how fast you can get it.

What's Happening

Three structural shifts are converging this week, and together they reframe the entire colocation sourcing decision for any organization standing up serious AI capacity.

First, hyperscaler demand is accelerating. Microsoft has committed to doubling its AI infrastructure within two years, per The Next Platform. That pledge alone is enough to strain power, land, and fiber availability across every major US and EU market. When a single hyperscaler signals a doubling, colocation operators price and allocate accordingly, and the clients who haven't reserved space yet pay for that signal.

Second, the grid is not keeping pace. A Texas AI campus is being forced into a fully off-grid, 200,000-square-foot build because its interconnection queue date stretches to 2029, with a $35 million upgrade cost attached, per Data Center Knowledge. This is not a fringe case. Behind-the-meter (on-site power generation that bypasses the public grid) gas microgrids are becoming primary, not backup, power sources for AI campuses as grid queue timelines push past 2028. Mitsubishi Heavy Industries is ramping gas turbine production 30% specifically to meet AI data center demand, a clear signal that turbine supply is itself becoming a bottleneck.

Third, a new asset class is emerging between raw land and operational capacity. Chad Williams, the former CEO of QTS, has returned with a gigawatt-scale powered land venture, QII, targeting multi-gigawatt sites. Core Scientific is converting crypto mining infrastructure into roughly 3 GW of AI and HPC (high-performance computing) colocation capacity across Oklahoma and Texas. The supply picture is not static, but it is slow, and the gap between announced capacity and operational capacity is measured in years.

Why It Matters

The mechanism here is straightforward but underappreciated. Hyperscalers absorb enormous blocks of powered shell and build-to-suit capacity years in advance. When Azure or GCP signs a 200 MW commitment with an operator like Equinix, Digital Realty, or QTS, that capacity is gone from the spot and near-term reserved market. What remains for sovereign AI programs, Fortune 500 enterprises, and frontier labs is whatever the hyperscalers did not pre-empt, plus newly announced supply that is 18 to 36 months from delivery.

The power constraint adds another layer. AI workloads are not just power-hungry at peak. They create volatile, high-amplitude swings in consumption that utilities were not engineered to absorb. Grid operators are now actively rethinking stability models because AI load patterns do not resemble any prior large industrial customer. That instability is pushing developers toward rural, unincorporated sites where permitting is faster, parcels are larger, and community opposition is lower. It is also pushing serious operators toward PPAs (Power Purchase Agreements, long-term electricity contracts) and behind-the-meter generation as structural solutions, not workarounds.

For neocloud operators (specialized GPU cloud providers, an alternative to hyperscalers), the infrastructure pressure is visible in traffic patterns as well. Sustained GPU-to-storage transfer flows from GPU-dense deployments are rendering legacy colo network designs, built around bursty traffic, obsolete. Operators who cannot support the new traffic architecture are losing deals.

In the EU, the capital is following the power. Nscale just secured $790 million in financing for a GPU-dense AI data center in Norway, with an accordion option to expand the facility by an equal amount. Norway's combination of hydroelectric power and cooler ambient temperatures is pulling European AI workloads north, away from the constrained Frankfurt and Amsterdam markets.

What Clients Should Do

If you are a frontier lab or large-scale AI research program planning capacity for 2026 or 2027, the window for Tier III (99.982% uptime, redundant systems) colocation at favorable rates in primary US markets (Northern Virginia, Dallas, Phoenix, Chicago, and Silicon Valley) is closing. Operators are already allocating that inventory. The clients who engage now, even at the pre-commitment stage, get first access to the inventory that hyperscalers have not yet absorbed.
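That uptime percentage translates directly into an annual downtime budget, which is often the more intuitive way to compare tiers. A quick sketch of the arithmetic:

```python
# Convert an uptime percentage into allowed downtime per year.
def annual_downtime_hours(uptime_pct: float) -> float:
    hours_per_year = 24 * 365  # 8,760 hours in a non-leap year
    return hours_per_year * (1 - uptime_pct / 100)

# Tier III at 99.982% allows roughly 1.6 hours of downtime per year.
print(round(annual_downtime_hours(99.982), 2))
```

In other words, a Tier III facility's redundancy budget works out to under two hours of outage per year, which is the practical bar for production training and inference clusters.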

If you are a Fortune 500 enterprise standing up AI infrastructure for the first time, resist the assumption that AWS or Azure is the only path. Hyperscalers (the largest cloud providers: AWS, Azure, GCP, Oracle) sell direct and do not need a broker. But if your workload justifies owned or reserved GPU capacity, a combination of neocloud operators for training and inference, plus dedicated colocation space for data gravity and compliance reasons, is almost always cheaper and more controllable. Neocloud operators we work with regularly price 30 to 50 percent below hyperscaler reserved instance rates, with ramp times measured in weeks rather than quarters.
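As a back-of-envelope illustration of that 30 to 50 percent spread (the fleet size and hourly rates below are hypothetical placeholders, not quotes from any operator):

```python
# Rough annual cost of a reserved GPU fleet at a given hourly rate.
# All figures are illustrative assumptions, not actual market pricing.
def annual_cost(gpus: int, hourly_rate_per_gpu: float) -> float:
    return gpus * hourly_rate_per_gpu * 24 * 365

hyperscaler = annual_cost(512, 4.00)  # hypothetical reserved-instance rate
neocloud = annual_cost(512, 2.40)     # 40% lower, mid-range of the spread

print(f"Hyperscaler: ${hyperscaler:,.0f}/yr")
print(f"Neocloud:    ${neocloud:,.0f}/yr")
print(f"Savings:     ${hyperscaler - neocloud:,.0f}/yr")
```

At fleet scale, a mid-range 40 percent rate difference compounds into seven-figure annual savings, which is why the make-versus-rent question deserves a real analysis rather than a default to a hyperscaler.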

If you are a sovereign AI program or government-aligned initiative in the US or EU, power procurement is your longest lead-time item. Site selection decisions made today should be driven first by grid access or behind-the-meter generation feasibility, and second by proximity to fiber density. Colocation operators with existing PPAs and on-site generation in place, available from several Tier III operators in secondary US markets, are worth serious evaluation before you anchor to a primary market on name recognition alone.

For scaleups moving from proof-of-concept to production inference, the GPU capacity and the colocation decision are often linked. A deployment that starts on a neocloud and migrates to a colo-hosted private cluster at scale should be architected with that transition in mind from day one.

XIRR Advisors brokers reserved GPU capacity from neocloud operators and Tier III colocation space across the US. We do not broker hyperscalers. AWS, Azure, GCP, and Oracle sell direct. Our value is in the neocloud and colo markets, where operator relationships and deal timing materially affect what you pay and when you can deploy. The provider pays our fee. Clients pay nothing.

Share your requirements (region, GPU type, capacity volume, timing, or megawatts for colocation) and we will canvas the market and return a shortlist within 48 hours. Earlier conversations get better terms. Capacity that exists today may not exist in 90 days. Reach out at contact@xirradvisors.com or DM @XIRRAdvisors.

References

[1] The Next Platform: Microsoft Committed to Doubling AI Infrastructure Within Two Years

[2] Data Center Knowledge: Grid Delays Drive Texas AI Campus to Off-Grid 200KSF Build

[3] Data Center Knowledge: Interconnection Delays Accelerate Gas Microgrid Adoption for AI Sites

[4] Data Center Dynamics: Mitsubishi Heavy Boosts Gas Turbine Output 30% for AI Data Center Demand

[5] Data Center Dynamics: Former QTS CEO Targets Multi-Gigawatt Powered Land Platform

[6] Data Center Knowledge: Core Scientific Plans 3 GW AI and HPC Campus Across Oklahoma and Texas

[7] Data Center Knowledge: AI Load Volatility Forces Utilities to Rethink Grid Stability Models

[8] Data Center Dynamics: Nscale Secures $790M Financing for Norway AI Data Center
