Equinix Q3 Shows Strong Demand for Distributed AI Infrastructure

The News

Equinix reported a standout third quarter with record bookings of $394M (+25% YoY), 8% growth in monthly recurring revenue, and steady expansion across all major performance metrics. Revenue reached $2.316B (+5% YoY), adjusted EBITDA hit $1.148B (+10% YoY), and AFFO climbed to $965M (+11% YoY).

The company is rapidly scaling global capacity, adding land in Amsterdam, Chicago, Johannesburg, London, and Toronto, and now has more than 900MW of planned future build-out. It also entered its 77th market (Chennai, India), has 58 development projects underway (12 of them xScale), grew to more than 499,000 interconnections, and launched its new Distributed AI infrastructure solution.

Analysis

A market signaling real demand for distributed AI

What stands out in this quarter is not only Equinix’s growth, but why it’s happening. Enterprises are shifting AI workloads closer to their data, customers, and compliance boundaries. That shift favors Equinix’s metro-proximate data centers, dense interconnection, and global on-ramp footprint.

This aligns with our findings that 61.8% of enterprises now run primarily hybrid workloads and 73.4% plan to increase AI/ML investment. For developers, this means the old model (i.e., centralizing everything in a single cloud region) is being replaced by regionally distributed pipelines for training, tuning, and inference. Equinix is one of the few providers positioned to support that architecture at scale.

Capacity expansion tied directly to developer needs

Equinix’s long-term plan to double capacity by 2029 isn’t speculative; it’s a response to how AI applications are really being built. Developers increasingly need:

  • reliable places to run GPU clusters
  • fast, private links between data sources and AI services
  • predictable performance across hybrid environments

The launch of Distributed AI reinforces this. It gives teams a controlled environment to test multi-region AI deployments, experiment with latency-sensitive inference, and validate data-sovereign architectures. These are the exact pressure points we see in ECI and theCUBE data, especially as 59.4% of organizations increase AIOps and automation spending.

Interconnection becomes part of the AI stack

Interconnection revenue grew 10% YoY, with Equinix adding more than 7,100 new physical and virtual connections this quarter. This matters because in the AI era, the network isn’t just plumbing; it’s part of the model performance path. Low-latency connections improve inference response times, RAG retrieval speed, vector database performance, and cost efficiency by reducing cloud egress.
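To make "the network as part of the model performance path" concrete, here is a minimal sketch of latency-aware endpoint selection: an application probes its regional inference endpoints and routes to the nearest one. The endpoint names are illustrative placeholders (not real Equinix or cloud identifiers), and the probe is simulated; a real implementation would time an actual TCP connect or health-check request.

```python
import random

# Hypothetical regional inference endpoints (illustrative names only,
# not real Equinix or cloud identifiers).
ENDPOINTS = [
    "ams1.inference.example.net",
    "chi2.inference.example.net",
    "lon3.inference.example.net",
]

def probe_latency_ms(endpoint: str) -> float:
    """Stand-in for a real RTT probe (e.g., timing a TCP connect).
    Simulated with random values for this sketch."""
    return random.uniform(5.0, 80.0)

def pick_nearest(endpoints):
    """Take the best of three probes per endpoint, then route to the
    endpoint with the lowest measured latency."""
    samples = {
        ep: min(probe_latency_ms(ep) for _ in range(3))
        for ep in endpoints
    }
    return min(samples, key=samples.get)

best = pick_nearest(ENDPOINTS)
```

The same pattern applies to RAG retrieval and vector database reads: measuring and acting on latency per region, rather than hard-coding a single endpoint, is what "treating the network as part of the stack" looks like in application code.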

Examples like Bristol Myers Squibb using Equinix to connect distributed oncology datasets to AI workloads highlight why enterprises want neutral, interconnected hubs rather than isolated regions.

Sustainability becomes part of the value proposition

Equinix is also leaning into sustainability as a competitive differentiator. Its green bond investments, clean energy programs, and recent power strategy updates show a clear shift toward delivering more sustainable AI capacity.

This resonates with developers and enterprises alike, especially as energy usage and carbon impact start appearing on AI cost reports and compliance dashboards.

Looking Ahead

Equinix’s Q3 confirms a structural shift in how companies build and run AI systems. Growth in bookings, interconnection, and capacity all point to a model where AI does not live in one place. Instead, it lives across clouds, across metros, and increasingly across borders.

For developers and platform teams, the implications are clear:

  • build for distributed inference and regional data
  • treat connectivity as part of application design
  • use programmable interconnects to manage cost and performance
  • expect sustainability requirements to tighten across AI workloads
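The cost half of "manage cost and performance" can be illustrated with a back-of-envelope comparison of moving data over public cloud egress versus a private interconnect. The per-GB rates and port fee below are illustrative assumptions for the sketch, not published pricing from any provider.

```python
def transfer_cost_usd(gb: float, per_gb_rate: float,
                      fixed_monthly: float = 0.0) -> float:
    """Monthly cost of moving `gb` of data at a flat per-GB rate,
    plus any fixed port/circuit fee."""
    return gb * per_gb_rate + fixed_monthly

# Illustrative rates only (not published pricing):
# public internet egress ~$0.09/GB; private interconnect ~$0.02/GB
# plus a flat $500/month port fee.
monthly_gb = 50_000  # e.g., 50 TB of feature/RAG data moved per month

public_cost = transfer_cost_usd(monthly_gb, 0.09)
private_cost = transfer_cost_usd(monthly_gb, 0.02, fixed_monthly=500.0)
monthly_savings = public_cost - private_cost
```

Under these assumed rates the private path wins comfortably at this volume; the general point is that the fixed port fee amortizes quickly once AI pipelines move tens of terabytes a month, which is why egress-heavy architectures gravitate toward interconnection hubs.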

Looking into 2026, the key watch areas will be:

  1. How quickly enterprises adopt multi-region inference patterns
  2. Whether sustainable power becomes a gating factor for AI scale
  3. How much developer tooling Equinix introduces around its Distributed AI offerings

If Equinix continues executing at this pace, it is positioned to remain a foundational piece of the global AI infrastructure layer, giving developers the capacity, connectivity, and proximity they need to run modern applications at scale.

Author

  • Paul Nashawaty

Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release, and operations. His expertise in digital transformation initiatives spans front-end and back-end systems, along with the underlying infrastructure ecosystem that supports modernization efforts. With over 25 years of experience, Paul has a proven track record in go-to-market strategy, including identifying new market channels, growing and cultivating partner ecosystems, and executing strategic plans that deliver positive business outcomes for his clients.