The backbone of modern AI supercomputers. An $18.4B market accelerating at 24.1% CAGR — from InfiniBand NDR to NVLink 5.0, co-packaged optics, and the open UALink consortium reshaping the fabric of 100,000-GPU clusters.
Interconnect infrastructure has evolved from a supporting role into a primary determinant of AI cluster performance. As GPU clusters scale toward hundreds of thousands of accelerators, the fabric connecting them — its latency, bandwidth, and topology — dictates the speed of every training run. The market is bifurcating structurally between scale-up networks (NVLink, UALink) and scale-out fabrics (InfiniBand NDR/XDR, 800GbE), while the transition to co-packaged optics (CPO) is moving faster than projected.
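The headline figures — an $18.4B market compounding at 24.1% annually — can be sanity-checked with straightforward compound growth. A minimal sketch; the base year and projection horizon are assumptions for illustration, not figures from the report:

```python
# Compound-growth check on the headline market figures:
# $18.4B base growing at 24.1% CAGR. Base year is assumed.

def project(base_billions: float, cagr: float, years: int) -> float:
    """Compound the base market size forward by `years` years."""
    return base_billions * (1 + cagr) ** years

if __name__ == "__main__":
    for y in range(1, 6):
        print(f"year +{y}: ${project(18.4, 0.241, y):.1f}B")
```

At this rate the market roughly triples within five years, which is why interconnect vendors are racing multiple product generations in parallel.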
The most consequential market tension is InfiniBand vs. Ethernet for AI scale-out. InfiniBand maintains a 3–5× latency advantage for the collective operations critical to distributed training. However, 800GbE with RoCEv2 and congestion-control offloads is narrowing the gap — and hyperscalers (Meta, Google, Microsoft) increasingly choose open Ethernet stacks for vendor flexibility, signaling a durable split in the market.
| Technology | Generation | Bandwidth | Latency | Use Case | Status |
|---|---|---|---|---|---|
| NVLink 5.0 | Blackwell B200 | 1.8 TB/s / GPU | ~1 µs | Scale-up: intra-pod | Production 2025 |
| InfiniBand XDR | Quantum-3 (800G) | 800 Gb/s / port | ~100 ns | Scale-out: cluster fabric | Sampling 2025 |
| InfiniBand NDR | Quantum-2 (400G) | 400 Gb/s / port | ~100 ns | Scale-out: AI clusters | Deployed 2023+ |
| 800GbE Ethernet | IEEE 802.3df | 800 Gb/s / port | ~1–5 µs | Scale-out: hyperscale AI | Deploying 2025-26 |
| UALink 1.0 | Open Consortium | 200 Gb/s / port | ~500 ns | Scale-up: multi-vendor | GA 2025 |
| HPE Slingshot-11 | Cray / DOE HPC | 200 Gb/s / port | ~200 ns | HPC: national labs | Deployed |
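The latency and bandwidth columns above translate into training time through collective operations. A minimal alpha-beta (latency-bandwidth) cost sketch of a ring all-reduce illustrates the tradeoff; the per-hop latencies and link rates below are illustrative assumptions, not vendor-measured figures:

```python
# Alpha-beta cost model for a ring all-reduce, the collective at the
# heart of data-parallel training. A ring all-reduce over n GPUs takes
# 2*(n-1) steps; each step moves a 1/n-sized chunk of the message and
# pays one hop of fabric latency.
# All fabric numbers below are illustrative assumptions, not specs.

def ring_allreduce_time(n_gpus: int, msg_bytes: float,
                        alpha_s: float, bw_bytes_per_s: float) -> float:
    """Return seconds for one ring all-reduce of msg_bytes."""
    steps = 2 * (n_gpus - 1)
    per_step = alpha_s + (msg_bytes / n_gpus) / bw_bytes_per_s
    return steps * per_step

# Hypothetical fabrics: (per-hop latency in s, link bandwidth in B/s)
FABRICS = {
    "low-latency fabric (IB-like, assumed)": (1e-6, 50e9),   # ~400 Gb/s
    "high-bandwidth Ethernet (assumed)":     (5e-6, 100e9),  # ~800 Gb/s
}

if __name__ == "__main__":
    n = 1024
    for label, msg in [("1 MiB bucket", 2**20), ("1 GiB bucket", 2**30)]:
        for name, (alpha, bw) in FABRICS.items():
            t = ring_allreduce_time(n, msg, alpha, bw)
            print(f"{label:12s} {name}: {t * 1e3:6.2f} ms")
```

Under these assumed numbers, the lower-latency fabric wins on small gradient buckets (latency-bound regime) while the higher-bandwidth link wins on large ones (bandwidth-bound regime) — which is why the InfiniBand vs. Ethernet split tracks workload profile rather than a single winner.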
The full Module 06 report delivers 60+ pages of primary research: vendor competitive intelligence, topology benchmarks, RFP selection frameworks, CPO transition playbooks, and buy-side procurement guidance for AI interconnect infrastructure.