Module 06 / 08 · Intelligence Brief
HPC-AI Market Intelligence Series · 2026
Q1 2026 Edition  ·  www.castlerockdigital.com
01 Storage
02 Systems
03 Facilities
04 Quantum
05 Processing
06 Interconnects
07 Memory
08 TCO
Intelligence Brief · Interconnects & Networking

HPC & AI
Interconnects &
Networking

The backbone of modern AI supercomputers. An $18.4B market accelerating at a 24.1% CAGR — from InfiniBand NDR to NVLink 5.0, co-packaged optics, and the open UALink consortium reshaping the fabric of 100,000-GPU clusters.

Market Size 2026
$18.4B
↑ +28.4% YoY
Total AI/HPC fabric
5-Year CAGR
24.1%
↑ To $54.2B by 2031
2026 – 2031
NVLink 5.0 BW
1.8TB/s
↑ 2× vs. NVLink 4.0
Per GPU · Blackwell
NDR InfiniBand
400Gb/s
↑ 2× vs. HDR
Per port · Quantum-2
Executive Summary
The Fabric of AI Supercomputing

Interconnect infrastructure has evolved from a supporting role into a primary determinant of AI cluster performance. As GPU clusters scale toward hundreds of thousands of accelerators, the fabric connecting them — its latency, bandwidth, and topology — dictates the speed of every training run. The market is experiencing a structural bifurcation between scale-up networks (NVLink, UALink) and scale-out fabrics (InfiniBand NDR/XDR, 800GbE), while the transition to co-packaged optics (CPO) accelerates faster than projected.

Market Revenue by Segment · 2022–2026
AI/HPC fabric total market ($B) — stacked by technology segment
2026 Revenue Share by Segment
$18.4B total — protocol / technology split
Scale-Up
NVLink 5.0 Lock-In
Blackwell B200 delivers 1.8 TB/s of bidirectional bandwidth per GPU — roughly 7× the 256 GB/s of a PCIe 6.0 x16 link. Fourth-generation NVSwitch enables all-to-all connectivity across up to 576 GPUs in a single non-blocking domain.
1.8 TB/s · 576-GPU domain
Open Standard
UALink Consortium
AMD, Intel, Broadcom, Cisco, Google, Meta & Microsoft back UALink 1.0 at 200 Gb/s per lane. The UALink 2.0 roadmap targets NVLink 5.0 parity by 2027 — the first credible open alternative for scale-up.
UALink 1.0 GA · 200 Gb/s
Optics Revolution
CPO Replaces Pluggable
Co-packaged optics (CPO) eliminates the long electrical SerDes reach between switch ASIC and optics, cutting power 40–60% at 800G and beyond. Intel, Broadcom, Marvell & Ayar Labs are in production or advanced sampling. Full CPO dominance is projected by 2028.
40–60% power reduction
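At cluster scale the 40–60% figure compounds quickly. A back-of-envelope sketch of fleet-level savings (the 15 W pluggable-module draw and the 50,000-port count are illustrative assumptions, not figures from this brief):

```python
# Illustrative fleet-level optics power savings from a CPO transition.
# Assumptions (not from the brief): a pluggable 800G module draws ~15 W,
# and a large AI cluster has ~50,000 optical ports.
PLUGGABLE_W = 15.0      # assumed watts per 800G pluggable module
PORTS = 50_000          # assumed optical port count

def cpo_savings_kw(reduction: float) -> float:
    """Total optics power saved (kW) at a given CPO power-reduction ratio."""
    return PLUGGABLE_W * PORTS * reduction / 1000.0

low, high = cpo_savings_kw(0.40), cpo_savings_kw(0.60)
print(f"Optics power saved: {low:.0f}-{high:.0f} kW")
```

Under these assumptions the 40–60% band is worth a few hundred kilowatts of continuous draw per cluster, before any cooling multiplier.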
2026 Inflection Point

The most consequential market tension is InfiniBand vs. Ethernet for AI scale-out. InfiniBand maintains a 3–5× latency advantage for the collective operations critical to distributed training. However, 800GbE with RoCEv2 and congestion-control offloads is narrowing the gap — and hyperscalers (Meta, Google, Microsoft) increasingly choose open Ethernet stacks for vendor flexibility, signaling a durable split in the market.
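Why the latency gap matters: every step of a collective operation pays the fabric's latency term. A toy latency-bandwidth (alpha-beta) cost model for a ring all-reduce, using illustrative per-step latencies rather than measured values:

```python
# Alpha-beta cost model for a ring all-reduce over n nodes:
#   T = 2*(n-1)*alpha + 2*(n-1)/n * S/bw
# where alpha is per-step latency (s), S is message size (bytes),
# and bw is per-port bandwidth (bytes/s).
def ring_allreduce_s(n: int, size_bytes: float, alpha_s: float, bw_Bps: float) -> float:
    return 2 * (n - 1) * alpha_s + 2 * (n - 1) / n * size_bytes / bw_Bps

N = 1024        # nodes in the ring
S = 1e9         # 1 GB gradient buffer
BW = 50e9       # 400 Gb/s port rate ~= 50 GB/s
t_ib = ring_allreduce_s(N, S, 1e-6, BW)    # assume ~1 us per-step latency (IB)
t_eth = ring_allreduce_s(N, S, 5e-6, BW)   # assume ~5 us per-step latency (RoCE)
print(f"InfiniBand: {t_ib * 1e3:.1f} ms   Ethernet: {t_eth * 1e3:.1f} ms")
```

At small message sizes the latency term dominates, which is where InfiniBand's 3–5× advantage shows up most; at gigabyte scale the bandwidth term washes much of it out.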

Castle Rock Digital LLC
HPC-AI Market Intelligence Series · Module 06 / 08
Q1 2026 · castlerockdigital.com · © 2026 Castle Rock Digital LLC
LinkedIn: /castle-rock-digital-llc
01 / 02
Module 06 / 08 · Technology & Forecast
Technology Landscape
Protocol Generations & Specifications
Technology | Generation | Bandwidth | Latency | Use Case | Status
NVLink 5.0 | Blackwell B200 | 1.8 TB/s per GPU | ~1 µs | Scale-up: intra-pod | Production 2025
InfiniBand XDR | Quantum-3 (800G) | 800 Gb/s per port | ~100 ns | Scale-out: cluster fabric | Sampling 2025
InfiniBand NDR | Quantum-2 (400G) | 400 Gb/s per port | ~100 ns | Scale-out: AI clusters | Deployed 2023+
800GbE Ethernet | IEEE 802.3df | 800 Gb/s per port | ~1–5 µs | Scale-out: hyperscale AI | Deploying 2025–26
UALink 1.0 | Open Consortium | 200 Gb/s per port | ~500 ns | Scale-up: multi-vendor | GA 2025
HPE Slingshot 11 | Cray / DOE HPC | 200 Gb/s per port | ~200 ns | HPC: national labs | Deployed
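The bandwidth column translates directly into time on the wire. A quick line-rate sketch (ideal rates only; real transfers pay protocol overhead and congestion, and the NVLink figure is NVIDIA's bidirectional number):

```python
# Idealized time to move 1 GB at each fabric's line rate (no overhead).
GB = 1e9  # bytes
links_Bps = {
    "NVLink 5.0 (1.8 TB/s)": 1.8e12,      # bidirectional aggregate per GPU
    "InfiniBand XDR (800 Gb/s)": 800e9 / 8,
    "InfiniBand NDR (400 Gb/s)": 400e9 / 8,
    "800GbE (800 Gb/s)": 800e9 / 8,
    "UALink 1.0 (200 Gb/s)": 200e9 / 8,
}
for name, bw in links_Bps.items():
    print(f"{name}: {GB / bw * 1e3:.2f} ms per GB")
```

These are lower bounds: encoding, headers, and congestion control all shave usable throughput off the headline rate.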
Market Forecast 2026–2031
Base / Bull / Bear scenario ($B) — 5-year outlook
Vendor Revenue Share 2026
AI/HPC interconnect market — top vendors by revenue %
NVIDIA / Mellanox: 38%
Broadcom: 14%
Arista Networks: 11%
Cisco: 8%
HPE / Cray: 7%
Marvell: 5%
Cornelis + Others: 17%
Top 5 vendors capture 78% of 2026 AI/HPC interconnect revenue.
NCCL used in 94% of GPU cluster deployments worldwide.
2031 Scenario Analysis
Bull Case
$71B
CAGR 31.2% · 2026–2031
AGI-scale million-GPU clusters by 2029. CPO cost parity by 2027. XDR/1.6T on 2-year GPU cadence. Sustained sovereign AI infrastructure programs globally.
Base Case
$54B
CAGR 24.1% · 2026–2031
800GbE mainstream, NVLink 5.0/6.0 generational upgrades, gradual CPO transition. UALink achieves traction by 2028, introducing price competition in the scale-up segment.
Bear Case
$36B
CAGR 14.4% · 2026–2031
Algorithmic efficiency reduces cluster scaling need. AI investment cycle cools post-2027. Export control disruptions slow APAC deployments. CPO delayed to post-2029.
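The scenario endpoints can be sanity-checked against their stated CAGRs from the $18.4B 2026 base; the results land within rounding of the $71B / $54B / $36B figures above:

```python
# Compound the 2026 base forward five years at each scenario's CAGR.
BASE_2026_B = 18.4  # 2026 market size, $B
scenarios = {"Bull": 0.312, "Base": 0.241, "Bear": 0.144}

results = {name: BASE_2026_B * (1 + cagr) ** 5 for name, cagr in scenarios.items()}
for name, v2031 in results.items():
    print(f"{name}: ${v2031:.1f}B in 2031")
```

The base case compounds to roughly $54B and the bear case to roughly $36B, matching the stated endpoints; the bull case lands at about $71.5B, consistent with the $71B figure after rounding.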
Full Research Report Available Now
Get the Complete HPC & AI
Interconnects Module

The full Module 06 report delivers 60+ pages of primary research: vendor competitive intelligence, topology benchmarks, RFP selection frameworks, CPO transition playbooks, and buy-side procurement guidance for AI interconnect infrastructure.

60+ page report · Vendor scorecards · Topology benchmarks · CPO playbook · RFP templates · 2031 forecasts
02 / 02