Cilium’s Next Act Is About Securing and Connecting AI-Ready Kubernetes Infrastructure

At KubeCon EU 2026, Isovalent’s message was less about novelty than continuity. In a show increasingly dominated by GPUs, agents, and AI infrastructure, Nico’s argument was that the underlying networking and runtime foundations still matter just as much, and arguably more.

That is the core takeaway from the conversation. As Kubernetes expands to support AI workloads, sovereign cloud ambitions, and virtualization migration, the control points around networking, security, and workload connectivity are becoming more strategic. Cilium’s position is that eBPF-based networking and runtime security are no longer niche infrastructure choices. They are becoming part of the default architecture for modern Kubernetes environments.

AI infrastructure still depends on networking fundamentals

One of the more revealing parts of the discussion was Nico’s observation that KubeCon itself is in transition. The event is increasingly shaped by AI topics, but many attendees are still trying to reconcile that shift with Kubernetes’ original infrastructure focus.

That tension is important because AI infrastructure does not replace cloud-native fundamentals. It intensifies them.

As organizations move toward AI-as-a-service, GPU-backed multi-tenancy, and distributed inference, networking becomes more (not less) important. High-performance connectivity, secure workload isolation, encryption, and policy enforcement all become harder when environments are more distributed, more resource-intensive, and more sensitive to latency.

This is where Cilium’s continued traction is meaningful. Nico pointed out that Cilium has already been selected by major hyperscalers including Google, AWS, and Microsoft, and is also being adopted by newer AI-focused cloud providers and sovereign cloud environments. That pattern suggests the same networking stack is proving relevant across both conventional Kubernetes deployments and newer AI-oriented infrastructure models.

Why Cilium’s architecture still resonates

Cilium’s long-term argument has been that networking and security functions can be handled more efficiently in the kernel using eBPF, rather than by layering more proxies and sidecars into every workload path.

That argument has aged well.

Nico revisited the earlier service mesh debate, where sidecar-based architectures were often treated as the default answer for traffic management and policy enforcement. The issue was not that sidecars were irrational. They solved a real portability problem at the time. But the cost of attaching a sidecar to every pod has become harder to justify as environments scale.

The market appears to be moving in Cilium’s direction. Nico noted that sidecarless approaches have since been echoed elsewhere, including in ambient mesh models. The broader point is that efficiency is no longer optional. In AI-heavy environments especially, wasting resources on unnecessary infrastructure overhead becomes more visible and more expensive.

That is why Cilium’s value proposition remains durable. It is not just about networking performance. It is about reducing architectural drag while still enforcing connectivity, observability, and security.
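To make the sidecarless model concrete, here is a minimal sketch of an L7-aware Cilium network policy. Enforcement happens in the kernel via eBPF (with Cilium's embedded proxy handling the HTTP rules) rather than through a per-pod sidecar. The labels, port, and path below are illustrative placeholders, not details from the conversation:

```shell
# Minimal CiliumNetworkPolicy sketch: backend pods accept HTTP GETs on
# /api/* from frontend pods only, with no sidecar attached to either
# workload. All names (app=frontend/backend, port 8080) are illustrative.
cat > cnp-demo.yaml <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/.*"
EOF
# In a live cluster this would be applied with:
#   kubectl apply -f cnp-demo.yaml
echo "policy manifest written to cnp-demo.yaml"
```

The design point is that the policy is pure configuration: the data path it shapes already lives in the kernel, so scaling the mesh does not mean scaling a fleet of proxies.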

Runtime security is moving closer to the infrastructure layer

The conversation also highlighted a second shift: runtime security is becoming more operational and more immediate.

Nico described how runtime security capabilities are now extending into Cisco’s own infrastructure products, including switches and routers running Linux-based environments. The most interesting implication was the idea of virtual patching at runtime. Instead of waiting for a full upgrade cycle, reboot window, or maintenance event, organizations can remediate certain vulnerabilities tactically while systems remain in operation.

That is a meaningful change in posture.

In traditional infrastructure environments, patching often creates a tradeoff between security urgency and operational disruption. Runtime security changes that equation by making remediation more continuous and less dependent on maintenance windows. That will resonate in environments where uptime, distributed operations, and security exposure all have to be managed simultaneously.

The point is bigger than one feature. It reflects a market shift in which security is moving from periodic intervention toward embedded enforcement at the infrastructure layer.
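One way to picture enforcement embedded at the infrastructure layer: Tetragon, the Cilium project's runtime security component, expresses in-kernel observation and enforcement as declarative policies. The sketch below is purely illustrative; the conversation did not reference this policy, and the probed call and kill action stand in for whatever a real virtual patch would actually target.

```shell
# Illustrative Tetragon TracingPolicy sketch: attach a kprobe to a
# syscall and kill any process that reaches it, approximating a
# "virtual patch" that closes an exploit path without a reboot or
# upgrade window. The probed call and policy name are placeholders,
# not a fix for any real CVE.
cat > tracingpolicy-demo.yaml <<'EOF'
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: virtual-patch-demo
spec:
  kprobes:
    - call: "sys_pivot_root"
      syscall: true
      selectors:
        - matchActions:
            - action: Sigkill
EOF
# In a live cluster with Tetragon installed:
#   kubectl apply -f tracingpolicy-demo.yaml
echo "tracing policy sketch written"
```

Because the policy is applied and enforced while the system keeps running, remediation becomes an operational action rather than a maintenance event, which is the posture shift described above.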

Virtualization migration is becoming a networking problem

Another notable theme was Isovalent’s work on network virtualization, aimed at helping organizations migrate from traditional virtualization environments into Kubernetes.

That is an important framing. Migration is often discussed as a compute or platform issue, but in practice networking is one of the biggest sources of friction. Enterprises may be willing to move workloads into Kubernetes, but they still need continuity across policies, topology, and operational models.

Nico described the new capability as a bridge between legacy VMware-style environments and Kubernetes-based infrastructure. That matters because many organizations are not just looking for a replacement platform. They are looking for a migration path that reduces operational discontinuity.

This is where networking becomes strategic. If Kubernetes is going to absorb more virtualized workloads, the transition has to feel manageable from a connectivity and policy perspective, not just from a compute perspective.

Sovereignty and multi-tenancy raise the stakes

The European context also matters here. Nico pointed to growing interest from sovereign cloud providers and newer AI cloud operators that need stronger security, encryption, and multi-tenant isolation.

That aligns with broader market pressure. As enterprises and governments push for more control over where workloads run and how infrastructure is governed, networking and runtime security become part of the sovereignty conversation. It is not enough to host workloads locally. Organizations also need confidence that multi-tenant environments can be segmented, monitored, and secured appropriately.

That is especially true for GPU-backed AI infrastructure, where expensive shared resources increase both the operational value and the security sensitivity of the environment.

Bottom line

Isovalent’s message at KubeCon EU 2026 was not that AI changes everything. It was that AI makes the existing infrastructure layers more important.

Enterprises still need high-performance networking, efficient service connectivity, runtime security, and a realistic migration path from older virtualization environments. If anything, AI workloads, sovereign cloud requirements, and multi-tenant GPU environments raise the cost of getting those layers wrong.

Cilium’s bet is that eBPF-based networking and runtime security belong at the center of that transition. Given its adoption across hyperscalers, neo-clouds, and sovereign environments, that is looking less like a specialized open-source position and more like a blueprint for how Kubernetes infrastructure is evolving.

Author

  • With over 15 years of hands-on experience in operations roles across the legal, financial, and technology sectors, Sam Weston brings deep expertise in the systems that power modern enterprises, from ERP and CRM to HCM, CX, and beyond. Her career has spanned the full spectrum of enterprise applications, from optimizing business processes and managing platforms to leading digital transformation initiatives.

    Sam has transitioned her expertise into the analyst arena, focusing on enterprise applications and the evolving role they play in business productivity and transformation. She provides independent insights that bridge technology capabilities with business outcomes, helping organizations and vendors alike navigate a changing enterprise software landscape.
