NVQLink Adoption and RIKEN Partnership Signal Ambition

The News

NVIDIA announced that more than a dozen supercomputing centers across Asia, Europe, the Middle East, and the U.S. will adopt NVQLink, a universal interconnect for linking quantum processors with GPU-accelerated computing. NVQLink delivers 40 petaflops of AI performance at FP4 precision with 400 Gb/s of GPU-QPU throughput and sub-four-microsecond latency, enabling hybrid quantum-classical workflows via the CUDA-Q platform.
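
For context on what a hybrid quantum-classical workflow looks like in practice, the sketch below uses the publicly available cudaq Python package. It is a minimal illustration only: it runs on NVIDIA’s GPU-accelerated simulator target rather than NVQLink-attached quantum hardware, and the kernel, target choice, and shot count are illustrative assumptions, not part of NVIDIA’s announcement.

    # Minimal sketch of a hybrid quantum-classical workflow with CUDA-Q (cudaq).
    # Illustrative only: this runs on NVIDIA's GPU-accelerated simulator target,
    # not NVQLink-attached quantum hardware.
    import cudaq

    cudaq.set_target("nvidia")  # GPU-accelerated statevector simulation

    @cudaq.kernel
    def bell():
        # Prepare a two-qubit Bell state and measure both qubits.
        qubits = cudaq.qvector(2)
        h(qubits[0])
        x.ctrl(qubits[0], qubits[1])
        mz(qubits)

    # Classical side of the loop: launch the quantum kernel, then post-process
    # the measurement counts on the host.
    counts = cudaq.sample(bell, shots_count=1000)
    print(counts)

In a production NVQLink deployment, the classical post-processing step is where latency-sensitive work such as error-correction decoding would run on the attached GPUs.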

Analyst Take

Quantum Computing Remains Experimental, Not Enterprise-Ready

NVIDIA’s NVQLink interconnect addresses legitimate technical challenges in quantum-classical integration: low-latency communication, high-throughput data transfer, and real-time error correction. Quantinuum’s demonstration of 67-microsecond decoding for quantum error correction is a meaningful technical milestone, showing that GPU-accelerated classical computing can support real-time quantum error correction at speeds well beyond current requirements.

Our research consistently emphasizes that solutions must not look like experimental toys but be enterprise-ready, with governance, security, observability, and unified lifecycle management as non-optional requirements. By that standard, quantum computing in 2025 remains firmly in the experimental and research phase, with no clear path to production-grade enterprise applications. Organizations should recognize that NVQLink is a research infrastructure play, not a near-term enterprise technology.

Before wider adoption can occur, NVIDIA will need to address critical enterprise concerns, including total cost of ownership, operational complexity, skills requirements, and realistic timelines for production deployment. Supercomputing centers and national labs are the appropriate early adopters, but enterprises should not interpret this announcement as a signal that quantum-GPU systems are ready for business use.

RIKEN Partnership Raises Questions About Vendor Lock-In

RIKEN’s deployment of 2,140 NVIDIA Blackwell GPUs across two new supercomputers reflects Japan’s commitment to sovereign AI infrastructure and domestic computational capability. This aligns with a global trend toward regional AI infrastructure investments, driven by data sovereignty, national security, and industrial competitiveness concerns. However, the exclusive reliance on NVIDIA hardware raises questions about vendor lock-in and long-term flexibility. 

Our research shows that organizations increasingly prefer multi-vendor, best-of-breed component approaches over unified, single-vendor platforms, and ecosystem partnerships (NVIDIA, hyperscalers, global system integrators) are cited as very important in vendor selection. RIKEN’s deep integration with NVIDIA’s architecture, including the planned use of NVLink Fusion to connect Fujitsu’s MONAKA-X CPUs with NVIDIA GPUs in FugakuNEXT, creates significant switching costs and limits future architectural flexibility. Organizations evaluating sovereign AI infrastructure should weigh these trade-offs and insist on interoperability, open standards, and vendor-neutral integration points, or risk building national infrastructure on proprietary architectures that may constrain future innovation.

“100x Performance” Claims for FugakuNEXT Lack Context and Benchmarking Details

NVIDIA and RIKEN claim that FugakuNEXT will deliver “100x greater application performance compared with supercomputers based on CPUs or other existing systems,” but at this time we have not seen benchmarking methodology, baseline comparisons, or workload specifics. Performance claims without context are marketing, not technical validation. 

Organizations should ask: 100x faster than what? On which workloads? At what cost? With what power consumption? The claim that FugakuNEXT will “integrate production-level quantum computers in the future” is aspirational, not a committed roadmap, and the 2030 timeline suggests that even RIKEN views quantum integration as a long-term research goal, not a near-term capability. Enterprises should be skeptical of vendor positioning that conflates research milestones with production readiness and should insist on rigorous, workload-specific benchmarks before making infrastructure commitments.

AI Infrastructure Cost Remains a Top Concern

Our research consistently identifies AI infrastructure cost as a top concern for organizations, alongside scaling for AI and skills shortages. Quantum-GPU hybrid systems will amplify these challenges since quantum processors require specialized cryogenic infrastructure, ultra-low-latency networking, and expert operators; GPU clusters demand massive power, cooling, and data center capacity; and the integration of both adds architectural complexity and operational overhead. 

As noted above, total cost of ownership, operational complexity, and the skills required to operate quantum-GPU systems have not yet been addressed. Organizations should recognize that even if quantum-GPU systems eventually deliver transformative performance, the cost, complexity, and skills requirements will likely confine them to hyperscale research institutions and national labs for the foreseeable future. Mid-market enterprises and even large enterprises should focus on optimizing existing AI infrastructure and addressing current cost, governance, and skills challenges before considering experimental quantum-GPU architectures.

Looking Ahead

NVIDIA’s NVQLink and RIKEN partnership announcements signal long-term ambition and technical progress in quantum-classical integration, but organizations should not mistake research infrastructure investments for near-term enterprise opportunities. Quantum computing remains experimental, with no clear path to production-grade business applications in the next five years. The real enterprise story is the continued expansion of sovereign AI infrastructure, driven by national competitiveness and data sovereignty concerns, but organizations must balance ambition with pragmatism. Vendor lock-in, interoperability, and total cost of ownership are critical considerations that must be addressed.

The maturity path from prototype to production to scale requires governance, security, observability, and unified lifecycle management. Organizations should prioritize optimizing existing AI infrastructure, addressing cost and skills gaps, and aligning with vendor-neutral, interoperable architectures before investing in experimental quantum-GPU systems. As the quantum computing ecosystem matures, the winners will be those who deliver enterprise-ready solutions with clear ROI, not just impressive technical milestones in research labs.

Author

  • Paul Nashawaty

    Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release and operations. With a wealth of expertise in digital transformation initiatives spanning front-end and back-end systems, he also possesses comprehensive knowledge of the underlying infrastructure ecosystem crucial for supporting modernization endeavors. With over 25 years of experience, Paul has a proven track record in implementing effective go-to-market strategies, including the identification of new market channels, the growth and cultivation of partner ecosystems, and the successful execution of strategic plans resulting in positive business outcomes for his clients.
