The News
At Oracle AI World 2025, NVIDIA and Oracle unveiled a major expansion of their collaboration to accelerate enterprise AI and data processing: the OCI Zettascale10 computing cluster, powered by NVIDIA GPUs and NVIDIA Spectrum-X Ethernet, the first Ethernet platform purpose-built for AI. The new cluster delivers up to 16 zettaflops of AI compute and is designed for large-scale training and inference workloads.
The announcement also includes deeper integration of NVIDIA AI Enterprise, NIM microservices, and RAPIDS for Apache Spark into Oracle Cloud Infrastructure (OCI), as well as Oracle Database 26ai and Oracle’s new AI Data Platform, bringing GPU acceleration, AI vector processing, and retrieval-augmented generation (RAG) capabilities to enterprise workloads. Read the full announcement on NVIDIA’s newsroom.
Analysis
Enterprise AI Moves to Zettascale
The NVIDIA–Oracle partnership represents an inflection point in enterprise AI infrastructure, converging high-performance computing with operational data systems. OCI Zettascale10 is not simply an infrastructure expansion; it signals the next generation of distributed AI compute that is designed to interconnect millions of GPUs with NVIDIA Spectrum-X Ethernet to achieve hyperscale efficiency.
This partnership responds directly to a growing market need: the ability to process, govern, and operationalize AI data pipelines end-to-end. According to theCUBE Research and ECI Day 2 findings, 59.4% of organizations are prioritizing AIOps and automation to accelerate operations, while 61.3% plan to expand observability and monitoring investments over the next 24 months. OCI Zettascale10 brings the same principle of scale used in training foundation models into enterprise data environments, which could give developers and data engineers a platform for real-time inference and vector analytics at zettascale speeds.
Accelerating the Intelligent Data Layer
Oracle’s integration of NVIDIA NeMo Retriever, NIM microservices, and cuVS libraries into Oracle Database 26ai aims to address one of the most urgent challenges in enterprise AI: the performance gap in vector search and index creation. As organizations shift toward RAG-based and multimodal AI applications, vector workloads have become integral to how enterprises extract value from their data.
theCUBE Research and ECI Day 1 study found that 69.1% of teams are confident in pre-deployment functional validation, but only 53.4% report high scalability confidence when workloads reach production. GPU-accelerated database integration will likely provide the missing bridge between AI inference and data management, potentially allowing enterprises to scale vector operations without sacrificing latency or control. By embedding NVIDIA’s acceleration directly into Oracle’s flagship data and analytics stack, this collaboration may extend AI capabilities into the core of enterprise operations and not just the edge or cloud.
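The workload being accelerated here can be made concrete. At its core, vector search scores a query embedding against a table of stored embeddings and returns the closest matches; the sketch below shows a brute-force cosine-similarity version in plain NumPy. It is illustrative only: the function name, corpus size, and dimensions are assumptions, not Oracle's or NVIDIA's APIs, and libraries such as cuVS exist precisely because this brute-force approach does not scale to production corpus sizes without GPU acceleration and approximate indexing.

```python
# Minimal sketch of the vector-search workload that GPU acceleration targets:
# brute-force cosine similarity over an embedding table. Names and sizes are
# illustrative; production systems use GPU-accelerated, indexed search instead.
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to the query."""
    # Normalize rows so that dot products equal cosine similarity.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                        # one dot product per corpus row
    return np.argsort(scores)[::-1][:k]   # highest similarity first

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 128))                # 1,000 vectors, 128 dims
query = corpus[42] + 0.01 * rng.normal(size=128)     # near-duplicate of row 42
print(top_k(query, corpus))                          # row 42 should rank first
```

The cost is one dot product per stored vector per query, which is exactly the kind of embarrassingly parallel arithmetic that moves well onto GPUs.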
Closing the Gap Between AI and Data Engineering
The inclusion of NVIDIA RAPIDS Accelerator for Apache Spark in the Oracle AI Data Platform offers a response to a widespread developer frustration: slow extract, transform, and load (ETL) cycles that stall model training and analytics. A Day 0 survey from theCUBE Research and ECI shows that 42.1% of developers have automated only half of their pipeline processes, and 24% cite complexity as a top barrier.
By coupling RAPIDS GPU acceleration with Spark's distributed computing model, Oracle and NVIDIA target that pain point, offering near real-time data processing and ML pipeline execution with no code changes. This capability could let developers reuse existing Spark workloads while achieving order-of-magnitude performance gains, creating a more seamless path between data engineering and model deployment.
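The "no code changes" claim rests on the fact that the RAPIDS Accelerator loads as a Spark plugin at submit time, rewriting supported SQL and DataFrame operations to run on the GPU. A minimal sketch of how such a job might be launched is below; the jar filename, `<version>` placeholder, and script name are assumptions, while `spark.plugins=com.nvidia.spark.SQLPlugin` is the documented mechanism for enabling the accelerator.

```shell
# Sketch: enable the RAPIDS Accelerator on an existing Spark job with no
# application-code changes. Jar path/version and script name are placeholders.
spark-submit \
  --jars rapids-4-spark_2.12-<version>.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  existing_etl_job.py
```

Operations the plugin cannot accelerate simply fall back to the CPU, which is what makes incremental adoption on existing pipelines practical.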
Market Challenges and Strategic Importance
As AI adoption accelerates, data gravity, security, and compliance remain persistent enterprise obstacles. The ECI and theCUBE Research DevSecOps survey shows that 48.3% of organizations strongly agree with the concept of "security-as-code," but 41% still struggle with limited time and expertise to manage compliance effectively.
By allowing enterprises to deploy AI securely across public, sovereign, and dedicated regions, NVIDIA and Oracle may enable a consistent governance framework for global AI deployment where AI pipelines, data privacy, and inference workloads coexist under a unified compliance model. For developers, this could mean reduced friction between model innovation and operational delivery. For enterprises, it may provide the ability to scale responsibly by bringing AI compute and data processing under the same umbrella of security, performance, and cost predictability.
Looking Ahead
The NVIDIA–Oracle collaboration marks a turning point for AI-ready cloud infrastructure. With OCI Zettascale10, NVIDIA GPUs, and integrated AI data platforms, enterprises could gain the foundation to run agentic, multimodal, and data-intensive workloads with performance and governance baked in.
AI infrastructure maturity is no longer measured by GPU count alone; it’s measured by how effectively intelligence can move across the data stack. By bridging AI compute and database acceleration, NVIDIA and Oracle could change how developers and enterprises operationalize intelligence at scale.
Looking ahead, this partnership sets the stage for a broader industry shift, one where cloud providers and AI hardware leaders unite to build vertically integrated AI ecosystems. The result could be a new standard for enterprise AI, one where compute, data, and governance converge into a single, intelligent fabric powering the next decade of digital transformation.