The hyperscalers are consuming the grid faster than utilities can build it, and that compression is creating the best opening in years for clients who move early on Tier III colocation.

What Happened

Three stories this week tell the same structural story from different angles.

First, American Electric Power's contracted capacity has surged to 63 gigawatts, with roughly 90% of that demand tied to data centers, per Data Center Dynamics. AEP added 9 gigawatts in a single quarter and has committed to a $78 billion capital plan to keep pace. That is not a utility responding to demand. That is a utility sprinting and still falling behind.

Second, Meta's exploration of space-based solar, reported by Data Center Knowledge, illustrates what grid lag looks like at the frontier. When one of the world's best-capitalized technology companies is evaluating orbital power generation as a credible hedge, the conventional grid queue is effectively broken for new entrants.

Third, Iren energized the first phase of its 2-gigawatt Sweetwater campus in Texas, per Data Center Dynamics. Full build-out targets 2028. That is a significant near-term capacity injection into a state that has become the de facto overflow valve for data center demand priced out of Northern Virginia (NoVa, the largest US data center market) and Phoenix.

Separately, Microsoft has committed to doubling its AI infrastructure within two years, according to The Next Platform, while Google continues to deepen its full-stack vertical integration from TPUs (Tensor Processing Units, Google's custom AI chips) through cloud services. These are not announcements about available capacity. They describe where hyperscaler capex (capital expenditure, infrastructure spending) is going internally, not what they are offering third-party clients on reasonable timelines.

Why It Matters

The mechanism here is straightforward but underappreciated. Hyperscalers like AWS, Azure, and GCP are simultaneously the largest consumers of new data center power and the entities with the deepest utility relationships. When AEP's queue is 90% data center and that demand is dominated by a handful of hyperscaler campuses, independent operators and enterprise clients are structurally disadvantaged in grid interconnection timelines.

This matters differently depending on who you are. For sovereign AI programs in the US and EU, this is a supply chain sovereignty issue. Depending entirely on hyperscaler regions for national AI infrastructure means operating inside someone else's queue. For Fortune 500 enterprises in financial services or pharma beginning to build dedicated AI infrastructure, the lesson is that waiting for a hyperscaler reserved instance to become available is not a neutral decision. Every quarter of delay is a quarter of lost optionality as power-adjacent land in key markets tightens.

For frontier labs like Anthropic, OpenAI, or Mistral, the power constraint is already a known input in site selection. The more interesting question is whether Tier III colocation operators (Tier III is the data center reliability tier guaranteeing 99.982% uptime) in markets like Dallas, Chicago, or Atlanta can bring capacity online faster than hyperscaler campuses in constrained geographies. The evidence increasingly says yes.

Neocloud operators (specialized GPU cloud providers, an alternative to hyperscalers) are navigating this same power market, but their smaller initial footprints mean they can co-locate inside existing Tier III campuses at operators like Equinix, Digital Realty, CyrusOne, QTS, Aligned, or Stack rather than waiting years for greenfield utility interconnection. That is a structural speed advantage that translates directly into faster capacity access for clients, often by quarters.

What Clients Should Do

If you are a frontier lab or large AI scaleup planning a training cluster of meaningful scale, the first question is no longer which GPU model but where the power is. Identifying colocation campuses with secured utility contracts and available critical load, before you sign an MSA (Master Service Agreement, the parent contract), is now as important as the hardware spec itself. Texas markets with operating campuses are worth evaluating seriously against NoVa and Phoenix, where absorption rates are compressing available capacity.
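Matching a cluster to available critical load starts with a power estimate. A minimal back-of-the-envelope sketch, where the per-GPU draw, server overhead multiplier, and PUE are all illustrative assumptions rather than vendor specifications:

```python
# Back-of-the-envelope critical load estimate for a training cluster.
# All default figures below are illustrative assumptions, not vendor specs.

def required_facility_load_mw(gpu_count: int,
                              watts_per_gpu: float = 700.0,   # assumed accelerator draw
                              server_overhead: float = 1.5,   # assumed CPU/network/storage multiplier
                              pue: float = 1.3) -> float:     # assumed facility PUE
    """Estimate total facility power (MW) needed for a GPU cluster."""
    it_load_watts = gpu_count * watts_per_gpu * server_overhead
    return it_load_watts * pue / 1e6

# Example: a 16,384-GPU cluster under these assumptions
print(f"{required_facility_load_mw(16_384):.1f} MW")  # → 22.4 MW
```

Even with conservative inputs, a cluster of this size lands in the tens of megawatts, which is why available critical load, not GPU allocation, is usually the binding constraint on site shortlists.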

If you are a Fortune 500 enterprise rolling out dedicated AI inference infrastructure for the first time, resist the instinct to default entirely to hyperscaler managed services. A hybrid model, where you retain reserved GPU capacity from neocloud operators for cost-sensitive workloads and secure your own Tier III colocation footprint for latency-sensitive or data-residency-constrained workloads, typically delivers better blended economics. Neocloud pricing commonly runs 30 to 50% below equivalent hyperscaler reserved instances, with ramp times (deployment timelines for capacity coming online) measured in weeks rather than quarters.
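The hybrid economics can be sketched with simple arithmetic. The rates below are assumptions for illustration only, not quotes: a hyperscaler reserved rate of $4.00 per GPU-hour and a neocloud rate 40% below it, in the middle of the 30 to 50% discount range discussed above.

```python
# Illustrative hybrid-economics sketch. All rates are assumptions, not quotes.

HOURS_PER_YEAR = 8_760

def annual_cost(gpus: int, rate_per_gpu_hr: float) -> float:
    """Annual reserved-capacity cost in dollars."""
    return gpus * rate_per_gpu_hr * HOURS_PER_YEAR

hyperscaler_rate = 4.00                         # assumed $/GPU-hr, reserved
neocloud_rate = hyperscaler_rate * (1 - 0.40)   # assumed 40% discount

# 1,000 GPUs: all-hyperscaler vs. a 70/30 neocloud/hyperscaler split
all_hyper = annual_cost(1_000, hyperscaler_rate)
hybrid = annual_cost(700, neocloud_rate) + annual_cost(300, hyperscaler_rate)

print(f"all-hyperscaler: ${all_hyper / 1e6:.1f}M/yr")   # → $35.0M/yr
print(f"hybrid:          ${hybrid / 1e6:.1f}M/yr")      # → $25.2M/yr
print(f"savings:         {1 - hybrid / all_hyper:.0%}") # → 28%
```

The exact split depends on workload mix, but under these assumptions a 70/30 hybrid saves roughly a quarter of annual spend before accounting for colocation's ramp-time advantage.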

If you are a system integrator sourcing for an enterprise or government client, the power picture should inform your site shortlist before the hardware conversation starts. Campuses that have already energized, like Sweetwater's first phase, represent real available capacity today, not a projected interconnection date.

For all client types: earlier conversations produce better terms. Power-adjacent colocation slots at competitive pricing are allocated to relationships, not RFPs.

How XIRR Can Help

XIRR Advisors brokers reserved GPU capacity from neocloud operators and Tier III colocation space across the US. We do not broker hyperscalers; AWS, Azure, and GCP sell direct. Our value lies where they cannot or do not serve efficiently: flexible reserved GPU contracts, faster ramp, better pricing, and colocation space in the markets where power is actually available now.

Share your requirements, including region, GPU type and quantity, timing, and megawatt needs for colocation, and we will canvass the neocloud and colocation markets and return a shortlist within 48 hours. Many clients need both GPU capacity and physical colocation space. We source both. Our fee is paid by providers; clients pay nothing. Reach out at contact@xirradvisors.com or DM @XIRRAdvisors. The clients who start the conversation in May are securing Q3 capacity. The ones who wait until Q3 are negotiating for Q1.

References

[1] The Next Platform: Microsoft Committed to Doubling AI Infrastructure Within Two Years

[2] The Next Platform: Google Executes Full-Stack AI Strategy Across Compute and Cloud

[3] Data Center Dynamics: AEP Contracted Capacity Surges to 63GW, 90% Data Center-Tied

[4] Data Center Knowledge: Meta Space Solar Bet Exposes Widening AI Data Center Power Gap

[5] Data Center Dynamics: Iren Energizes First Phase of 2GW Sweetwater Texas Campus

Tags: Colocation, Data Center Power, GPU Markets, Enterprise AI, Hyperscaler