The News
At COMPUTEX 2025, NVIDIA announced NVLink Fusion™, a new silicon architecture that enables partners to build semi-custom AI infrastructure by integrating directly into NVIDIA’s NVLink ecosystem. Launch partners include MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys, and Cadence, with Fujitsu and Qualcomm Technologies planning to integrate custom CPUs with NVIDIA GPUs using NVLink scale-up and Spectrum-X scale-out technologies. Learn more at NVIDIA.com.
Analysis
NVLink Fusion is more than a connectivity upgrade — it’s a strategic move that modularizes NVIDIA’s AI infrastructure stack, giving silicon partners and cloud providers the tools to build AI factories tailored to their unique needs. As trillion-parameter models push the limits of monolithic systems, NVIDIA’s partner-first, rack-scale approach enables sustainable growth across industries, geographies, and use cases.
With Mission Control software, NVLink bandwidth, and a rapidly growing partner ecosystem, NVIDIA is turning the AI data center into a programmable, customizable, and globally distributed compute fabric for the frontier era of AI.
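The trillion-parameter scale invoked above can be made concrete with back-of-envelope arithmetic. The sketch below is illustrative only; the parameter count, 2-byte precision, and 192 GB per-GPU memory figure are assumptions for the calculation, not numbers from the announcement:

```python
# Back-of-envelope memory footprint for a trillion-parameter model.
# Assumptions (illustrative, not from the announcement):
#   - 1e12 parameters
#   - 2 bytes per parameter (FP16/BF16 weights only, no optimizer state)
params = 1_000_000_000_000
bytes_per_param = 2
weights_tb = params * bytes_per_param / 1e12  # total weight footprint, in TB

# A single GPU with an assumed 192 GB of HBM holds only a fraction of this,
# which is why weights must be sharded across many interconnected GPUs.
gpu_hbm_gb = 192
min_gpus = -(-(params * bytes_per_param) // (gpu_hbm_gb * 1e9))  # ceiling division

print(f"Weights alone: {weights_tb:.1f} TB")
print(f"Minimum GPUs at {gpu_hbm_gb} GB each: {int(min_gpus)}")
```

Even before activations, KV caches, or optimizer state, the weights alone exceed any single accelerator's memory, which is the pressure on "monolithic systems" the analysis refers to.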
Opening the NVIDIA AI Platform for Semi-Customization
NVLink Fusion marks a strategic expansion of NVIDIA’s AI infrastructure playbook. By opening its NVLink™ computing fabric to silicon partners, NVIDIA is enabling:
- Custom chipmakers to scale their own ASICs for high-performance inference and training
- CPU vendors like Qualcomm and Fujitsu to interconnect proprietary CPUs with NVIDIA GPUs
- Hyperscalers to adopt NVIDIA’s rack-scale systems without sacrificing architectural choice
This signals a new era of modular AI infrastructure, where compute can be semi-customized for specific performance, latency, and power efficiency needs.
Rack-Scale Architecture and Ecosystem Expansion
The NVLink Fusion rollout supports NVIDIA’s strategy to dominate AI infrastructure from silicon to software to systems. Key components include:
- NVIDIA GB300 NVL72 and GB200 NVL72 systems, delivering 1.8 TB/s of bandwidth per GPU (14x PCIe Gen5)
- ConnectX-8 SuperNICs, Quantum-X800 InfiniBand, and Spectrum-X Ethernet for high-speed I/O
- NVIDIA Mission Control™, an orchestration suite to manage AI factory operations and workloads
By integrating these elements with third-party silicon, NVIDIA is creating a heterogeneous AI factory blueprint that can serve hyperscalers, sovereign clouds, and industrial R&D hubs alike.
Strategic Partner Commitments
Launch partners provide diverse capabilities:
- MediaTek: Extending its automotive and datacenter silicon with high-speed interconnects
- Marvell: Custom ASICs to meet trillion-parameter model demands
- Alchip: Design + manufacturing ecosystem to scale NVLink Fusion access
- Astera Labs: Low-latency, memory-semantic interconnect solutions
- Synopsys & Cadence: EDA and IP ecosystems enabling rapid design and production
These integrations reinforce NVIDIA’s full-stack strategy while accelerating time-to-market for specialized AI infrastructure.
Fujitsu and Qualcomm: Sovereign and Efficient Compute
Fujitsu will integrate its 2nm Arm-based FUJITSU-MONAKA CPU with NVIDIA GPUs, offering sovereign, power-efficient performance for AI factories.
Qualcomm’s custom CPU roadmap, now compatible with NVIDIA rack-scale architecture, brings energy-efficient performance to data-center-grade AI deployments, positioning both companies as leaders in next-generation, sustainable compute design.
Sovereignty and Openness in AI Factories
As AI moves into mission-critical and regulated sectors, NVLink Fusion enables:
- Vertical integration without vendor lock-in
- Support for sovereign hardware configurations
- Interoperability with NVIDIA DGX Cloud Lepton marketplace and tools
These attributes are increasingly important as countries and hyperscalers seek control over foundational AI infrastructure while leveraging best-in-class performance.

