The News:
At KubeCon Europe 2025, Mirantis announced that Netherlands-based private cloud provider Nebul has deployed the open-source k0rdent platform to power an on-demand AI inference service. The solution integrates NVIDIA GPU Operator and Gcore Everywhere Inference, enabling customers to run AI inference workloads with high performance and data sovereignty. Read the original announcement here.
Analysis:
According to industry analysts, by 2027 more than 50% of enterprises will have deployed AI workloads outside centralized data centers, including in edge, sovereign, and hybrid clouds. The k0rdent deployment at Nebul demonstrates how open-source platforms are meeting this demand. With support for NVIDIA's AI stack and real-time orchestration of inference workloads, this model enables lower total cost of ownership (TCO), faster time-to-inference, and stronger data governance assurances. The future of enterprise AI depends on scalable, sovereign-ready platforms, and open source is leading the charge.
European Cloud Landscape Shifts Toward Open-Source AI Infrastructure
The European cloud ecosystem is undergoing a strategic transformation driven by the need for data sovereignty, cost-efficiency, and AI performance at scale. Amid regulatory pressures like GDPR and the growing demand for inference workloads, cloud service providers are seeking alternatives to proprietary virtualization stacks. This is where Mirantis’ open-source k0rdent platform becomes a differentiator.
According to theCUBE Research, the convergence of open-source infrastructure, container orchestration, and AI is redefining the private cloud landscape. Nebul’s implementation of k0rdent demonstrates how providers can reduce operational complexity while modernizing for scalable AI—especially across sovereign infrastructure.
Nebul Brings Inference-as-a-Service to the Edge
With support for OpenStack and bare-metal Kubernetes, and with its VMware estate being sunset, Nebul's deployment of k0rdent delivers on the promise of composability and multi-cluster orchestration. By layering in NVIDIA GPU acceleration and integrating with Gcore's Everywhere Inference, Nebul can deliver GPU-optimized workloads dynamically and securely.
Inference-as-a-Service provides a new economic model for private cloud providers. Instead of provisioning static GPU workloads, Nebul can now dynamically allocate resources based on user demand—minimizing idle time and maximizing ROI on expensive infrastructure investments. This shift aligns with the broader industry trend of operationalizing AI across hybrid and edge environments.
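To make that model concrete, the sketch below shows one generic way demand-based allocation can be expressed on Kubernetes, using a stock HorizontalPodAutoscaler to grow and shrink a GPU-backed inference deployment with load. This is ordinary Kubernetes rather than Nebul's actual configuration; the deployment name, namespace, and thresholds are hypothetical.

```yaml
# Minimal sketch: scale a (hypothetical) GPU-backed inference
# deployment with demand instead of provisioning it statically.
# Names and thresholds are illustrative, not Nebul's configuration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-hpa          # hypothetical name
  namespace: inference         # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llm-inference        # hypothetical inference deployment
  minReplicas: 1               # keep one warm replica
  maxReplicas: 8               # cap spend on scarce GPU nodes
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # scale out when average CPU crosses 70%
```

True scale-to-zero for idle GPUs would need an event-driven layer such as KEDA or Knative on top, since in standard configurations the HorizontalPodAutoscaler keeps at least one replica running.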
Prior Challenges in Private Cloud AI
Previously, enterprises seeking to run AI workloads faced hurdles like fragmented infrastructure, vendor lock-in, and inefficient GPU utilization. VMware-based stacks weren’t designed to support the dynamism of inference workloads or the composability required for modern multi-cloud deployments. Platform teams were left managing infrastructure across silos—slowing time-to-value.
Nebul’s journey reflects a common pivot: moving away from legacy infrastructure toward open, policy-driven platforms that offer unified control planes and declarative automation. This allows organizations to bring trained models to their data with minimal friction—crucial for regulated industries like finance and healthcare.
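To illustrate what "declarative" means here, the sketch below shows the general shape of a k0rdent-style cluster definition: a single object that references a reusable template and a credential, with provisioning left to the control plane. The API group follows k0rdent's published pattern, but the template, credential, and sizing values are hypothetical placeholders, not Nebul's configuration.

```yaml
# Sketch of a k0rdent-style declarative cluster definition.
# Template, credential, and config values are hypothetical placeholders.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: sovereign-gpu-cluster       # hypothetical cluster name
  namespace: kcm-system
spec:
  template: openstack-standalone-cp # hypothetical ClusterTemplate reference
  credential: openstack-credential  # hypothetical Credential reference
  config:
    controlPlaneNumber: 3
    workersNumber: 4                # e.g. GPU worker nodes
```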
The Impact of k0rdent on Platform Engineering
With k0rdent, Nebul achieves multi-cloud orchestration with a Kubernetes-native, open-source solution that integrates seamlessly with NVIDIA’s full-stack AI tools. Features like GPU-aware scheduling, dynamic resource provisioning, and zero-touch deployment templates give platform engineering teams the power to scale infrastructure without increasing complexity.
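At the workload level, GPU-aware scheduling builds on the NVIDIA GPU Operator, which installs the device plugin that advertises GPUs to Kubernetes as the nvidia.com/gpu resource. The minimal, generic pod below shows how an inference container requests one; the pod name and container image are illustrative placeholders.

```yaml
# Generic sketch: a pod that requests one NVIDIA GPU.
# The GPU Operator's device plugin exposes nvidia.com/gpu as a
# schedulable resource, so this pod lands only on a node with a free GPU.
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker     # hypothetical name
spec:
  containers:
  - name: triton
    image: nvcr.io/nvidia/tritonserver:24.05-py3  # illustrative inference image
    resources:
      limits:
        nvidia.com/gpu: 1    # request exactly one GPU
```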
The integration with Gcore ensures that inference tasks are routed to the nearest available GPU—boosting performance and reducing latency. Policy-driven automation, as enabled by k0rdent, also improves governance and utilization, two top challenges for enterprise AI operations, according to industry analysts.
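One simple, generic form such policy can take is a namespace-level quota on GPU requests, which enforces governance while protecting utilization of scarce accelerators. The namespace and limit below are hypothetical.

```yaml
# Generic sketch: a namespace quota that caps GPU consumption,
# a basic form of policy-driven governance over accelerators.
# Namespace and limit are hypothetical.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: team-inference        # hypothetical tenant namespace
spec:
  hard:
    requests.nvidia.com/gpu: "4"   # at most four GPUs requested in this namespace
```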
Looking Ahead:
The Nebul-Mirantis partnership signals broader market moves as private and sovereign cloud providers pivot to support AI-native workloads. Nebul is now positioned to offer turnkey inference capabilities that rival hyperscaler services while maintaining compliance with EU data sovereignty laws. Future enhancements could include support for foundation model hosting, retrieval-augmented generation (RAG), or fine-tuning pipelines—all deployed via the same composable control plane.
Meanwhile, Mirantis’ growing ecosystem of validated integrations, including GPU operators and data plane accelerators, positions k0rdent as a cornerstone for open AI infrastructure. As more organizations move toward distributed AI, open-source solutions like k0rdent offer the transparency, flexibility, and cost-efficiency needed to scale responsibly.