GPU Isolation for Multi-Tenant AI Workloads
The News
At KubeCon North America 2025, Mirantis announced a new virtualization platform built on k0rdent, positioning it as an extension of cloud-native modernization rather than a one-to-one VMware replacement. The platform addresses customer demand for virtualization that supports cloud-native transformation while delivering GPU orchestration and isolation capabilities critical for multi-tenant AI workloads.
Mirantis emphasizes GPU isolation as a major driver for the platform, enabling customers to prevent data leaks when multiple tenants share GPU resources, addressing scenarios where competing organizations require guaranteed isolation.
The solution layers isolation across VMs, CPU, memory, and the GPU itself, using technologies such as NVIDIA MIG (Multi-Instance GPU), and orchestrates that complexity so each tenant receives a secure, isolated slice of resources that is properly cleaned up after use.
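To make the resource-slice concept concrete, the following is a minimal sketch, not Mirantis’s implementation, of how MIG slices surface as separately addressable devices that an orchestrator can assign to individual tenants. It assumes a MIG-capable NVIDIA GPU with MIG already enabled and the nvidia-ml-py (pynvml) bindings installed.

    # Minimal sketch: enumerate MIG slices on a MIG-enabled NVIDIA GPU.
    # Illustrative only; assumes the nvidia-ml-py package and a MIG-capable GPU.
    from pynvml import (
        nvmlInit, nvmlShutdown,
        nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
        nvmlDeviceGetMaxMigDeviceCount, nvmlDeviceGetMigDeviceHandleByIndex,
        nvmlDeviceGetMemoryInfo, NVMLError,
    )

    nvmlInit()
    try:
        gpu = nvmlDeviceGetHandleByIndex(0)  # first physical GPU
        print("Physical GPU:", nvmlDeviceGetName(gpu))

        # Each MIG instance appears as its own device handle with dedicated
        # memory, which is what allows an orchestrator to hand one slice to
        # one tenant and reclaim it afterward.
        for i in range(nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
            except NVMLError:
                continue  # slot not populated with a MIG instance
            mem = nvmlDeviceGetMemoryInfo(mig)
            print(f"  MIG slice {i}: {mem.total // (1024 ** 2)} MiB dedicated memory")
    finally:
        nvmlShutdown()

Because each slice carries its own dedicated memory and compute, handing a tenant one slice handle rather than the whole GPU is what makes both the isolation and the post-use cleanup tractable.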
Analyst Take
Mirantis’s go-to-market strategy targets three customer segments: neo-clouds (NCPs), whose skilled teams face monetization challenges and time-to-market pressure; forward-looking enterprises, including banks and government agencies with innovation agendas but skills gaps that demand rapid AI operationalization; and the lagging majority, seeking VMware alternatives and a path to modernization by unifying VMs and containers. The k0rdent platform aims to deliver standardized automation and meta-controlled infrastructure across bare metal, public cloud, and private cloud environments.
Mirantis’s positioning as a modernization platform rather than a VMware replacement reflects strategic repositioning away from direct competition with Broadcom’s VMware and emerging alternatives like Nutanix, instead targeting organizations viewing virtualization as a bridge to cloud-native rather than an end state.
This positioning addresses a real market segment: enterprises with substantial VM estates that recognize containers and Kubernetes as the future but cannot execute a wholesale migration. It also creates messaging complexity, however. Organizations seeking a simple VMware replacement may overlook Mirantis if the modernization narrative suggests complexity and transformation rather than continuity. The company’s challenge is attracting customers motivated by VMware licensing changes while simultaneously positioning for long-term cloud-native transformation rather than perpetuating legacy virtualization patterns.
The emphasis on GPU isolation for multi-tenant AI workloads addresses a critical gap as organizations attempt to monetize GPU infrastructure or share expensive GPU resources across teams with conflicting security requirements. The “Coke and Pepsi” scenario, in which competing organizations require guaranteed isolation on shared infrastructure, represents a real constraint for cloud service providers and for enterprises operating internal GPU-as-a-service platforms.
However, the technical complexity of delivering true isolation across GPU, memory, and data plane while maintaining performance and utilization efficiency remains substantial. NVIDIA MIG provides GPU partitioning but with limitations around supported GPU models, partition granularity, and performance overhead. Mirantis’s ability to orchestrate multi-layer isolation (VM, CPU, memory, GPU) through k0rdent, in the spirit of mainframe LPARs that carve a system into logical partitions, could differentiate the platform, but success depends on whether the orchestration complexity remains manageable for operations teams or becomes another layer of abstraction that obscures problems rather than solving them.
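To illustrate the granularity limitation, here is a simplified sketch using NVIDIA’s published MIG profiles for the A100 40GB. It checks only the seven-way compute-slice budget and deliberately ignores memory-slice placement rules, so it is an approximation rather than a real scheduler.

    # Simplified illustration of MIG partition granularity on an A100 40GB.
    # Tenants cannot request arbitrary fractions of a GPU; they pick from fixed
    # profiles, and a mix of profiles either fits the compute-slice budget or
    # does not. Real placement also depends on memory-slice layout, which this
    # sketch ignores.
    A100_40GB_PROFILES = {
        "1g.5gb": 1,   # 1 of 7 compute slices, 5 GB memory
        "2g.10gb": 2,  # 2 of 7 compute slices, 10 GB memory
        "3g.20gb": 3,  # 3 of 7 compute slices, 20 GB memory
        "4g.20gb": 4,  # 4 of 7 compute slices, 20 GB memory
        "7g.40gb": 7,  # the whole GPU
    }
    COMPUTE_SLICES = 7

    def fits(requested: list[str]) -> bool:
        """Rough check: does a tenant mix fit the GPU's compute-slice budget?"""
        return sum(A100_40GB_PROFILES[p] for p in requested) <= COMPUTE_SLICES

    print(fits(["3g.20gb", "3g.20gb"]))            # True: two mid-size tenants
    print(fits(["4g.20gb", "3g.20gb", "1g.5gb"]))  # False: exceeds 7 slices

Any orchestration layer has to map tenant requests onto these coarse profiles across many GPUs and GPU generations, which is where the complexity Mirantis is taking on accumulates.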
The three-segment go-to-market strategy (neo-clouds, forward-looking enterprises, and the lagging majority) reflects recognition that different customer types have fundamentally different needs and buying motivations. Neo-clouds prioritize time-to-market and operational efficiency over feature completeness; forward-looking enterprises need to bridge skills gaps while operationalizing AI quickly; and the lagging majority seeks safe migration paths from legacy infrastructure.
However, this segmentation creates product and messaging challenges: features that appeal to neo-clouds (flexibility, extensibility, control) may overwhelm lagging majority customers seeking turnkey solutions, while enterprise-focused governance and compliance capabilities may add complexity that neo-clouds view as unnecessary overhead. Mirantis must determine whether k0rdent can serve all three segments with configuration and packaging variations or whether attempting to address disparate needs dilutes the platform’s value proposition for each segment.
The emphasis on standardized automation and meta-controlled infrastructure across bare metal, public cloud, and private cloud environments positions k0rdent as an infrastructure abstraction layer, but this creates both opportunity and risk. Organizations operating hybrid and multi-cloud environments face genuine complexity in managing inconsistent APIs, tooling, and operational models across infrastructure types.
A unified control plane that abstracts these differences could reduce operational burden, but abstraction layers also introduce indirection that complicates troubleshooting and can mask underlying infrastructure problems until they become critical. Our Day 1 research found that 43% of organizations struggle with “too many disparate tools,” suggesting demand for consolidation. The open question is whether organizations prefer infrastructure-level abstraction (Mirantis’s approach) or application-level abstraction through Kubernetes and service mesh that treats infrastructure as a commodity.
Looking Ahead
Mirantis’s success with the k0rdent-based virtualization platform depends on correctly timing the market transition from VMware-centric virtualization to cloud-native infrastructure. If enterprises rapidly abandon virtualization in favor of containers and Kubernetes, Mirantis’s positioning as a modernization bridge becomes a short-lived opportunity.
Conversely, if VM workloads persist longer than cloud-native advocates expect, due to application constraints, operational inertia, or regulatory requirements, Mirantis’s platform could capture sustained demand. The next 12-18 months will reveal whether the VMware licensing disruption accelerates cloud-native adoption or drives customers toward alternative virtualization platforms that perpetuate existing patterns. Mirantis must balance investment in virtualization capabilities that address near-term VMware migration with cloud-native features that position it for long-term relevance as the market evolves.
The competitive landscape for GPU orchestration and multi-tenant AI infrastructure is intensifying as hyperscalers, Kubernetes distributions, and specialized AI platforms all target the same workloads. Mirantis competes with native Kubernetes GPU scheduling, with emerging platforms like Run.ai and NVIDIA DGX Cloud focused specifically on AI infrastructure, and with hyperscaler offerings that bundle GPU access with managed services.
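For comparison, native Kubernetes GPU scheduling works through extended resources advertised by NVIDIA’s device plugin. The sketch below, using the official kubernetes Python client, requests a single MIG slice; the resource name assumes the device plugin’s “mixed” MIG strategy, and the pod and image names are placeholders rather than anything from Mirantis’s platform.

    # Minimal sketch: request one MIG slice via native Kubernetes scheduling.
    # Assumes a cluster running the NVIDIA device plugin with the "mixed" MIG
    # strategy, which exposes slices as extended resources such as
    # "nvidia.com/mig-1g.5gb". Pod and image names are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="tenant-a-inference"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="inference",
                    image="nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04",
                    command=["nvidia-smi", "-L"],
                    resources=client.V1ResourceRequirements(
                        # The scheduler places the pod only on a node with a
                        # free slice of this profile; the container then sees
                        # that slice, not the whole GPU.
                        limits={"nvidia.com/mig-1g.5gb": "1"},
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

This is the baseline Mirantis has to beat: it handles slice placement, but the multi-layer isolation across VM, CPU, memory, and GPU described above is left to the operator, which is the gap Mirantis positions k0rdent to fill.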
The company’s differentiation depends on delivering superior multi-tenant isolation and hybrid cloud orchestration that justifies the complexity of an additional infrastructure layer. As GPU availability increases and pricing becomes more competitive, the value proposition of complex orchestration and isolation may diminish; organizations may prefer simpler dedicated GPU allocation over sophisticated sharing and isolation if the cost difference narrows. Mirantis’s ability to demonstrate clear ROI through improved GPU utilization, faster time-to-market, and reduced operational complexity will determine whether k0rdent becomes essential infrastructure or remains a niche solution for specific use cases where multi-tenant isolation requirements justify the additional complexity.

