The Context
The AI era is accelerating at an extraordinary pace. Industry projections show that by 2028, 95% of organizations will have integrated generative AI into their daily operations, compared to just 15% in 2025. That exponential adoption curve is transforming not only how applications are built but also how they must be secured and scaled.
At VMware Explore 2025, Broadcom previewed a series of enhancements to VMware vDefend and VMware Avi Load Balancer designed specifically for this new landscape. The updates focus on three areas: providing AI-driven insights into firewall and load balancing operations, extending Zero Trust models into AI workloads, and supporting emerging protocols like the Model Context Protocol (MCP) that underpin agentic AI ecosystems. Read the full press release here.
Why Developers Should Care
From a developer perspective, these innovations tackle some of the most pressing pain points when deploying AI-native applications. Traditional firewall and rule-based security models were not designed for the dynamic communication patterns of AI inference or agentic workflows. Likewise, standard load balancers were built for web apps, not large language model endpoints that demand multi-terabit throughput and elastic scale-out. VMware’s move to bring these capabilities natively into its Private AI Foundation and Cloud Foundation stack is an attempt to simplify the developer experience while reducing operational risk.
Key Innovations for AI/GenAI Workloads
- vDefend GenAI Assistant: Provides AI-driven recommendations for firewall operations, going beyond static documentation lookups to surface dynamic insights (e.g., blocked applications, real-time policy violations).
- Zero-Trust Lateral Security: Tech preview for securing AI workloads inside VMware Private AI Foundation (PAIF), ensuring model runtimes, agent builders, and retrieval services are segmented and locked down.
- 20 Tbps Distributed Firewall Performance: High-scale throughput for enterprise AI environments with rapid changes in workload communication.
- Avi GenAI Assistant: Natural language reasoning for troubleshooting, upgrades, and configuration. Simplifies Day 0 setup and Day 1+ operations through conversational AI.
- AI-Ready Load Balancing: Tech preview of Avi delivering elastic load balancing for AI workloads (e.g., LLM endpoints with RAG integration).
- MCP Intelligence in Avi: Load balancing and securing Model Context Protocol (MCP) traffic with JWT authorization and session persistence, enabling MCP-based agent ecosystems.
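To make the MCP item above concrete, the sketch below illustrates the general pattern a load balancer applies to MCP traffic: verify a JWT bearer token before admitting a request, check that it carries an MCP scope, then hash a session claim so every call in one agent session lands on the same backend (persistence). This is a conceptual illustration in plain Python, not VMware Avi's API; the pool names, the `make_jwt` helper, and the `scope`/`sid` claim names are all assumptions for the example.

```python
# Conceptual sketch (not Avi's API): JWT authorization plus
# session persistence for MCP requests, using only the stdlib.
import base64
import hashlib
import hmac
import json

BACKENDS = ["mcp-pool-a", "mcp-pool-b"]  # hypothetical MCP server pools


def _b64url_decode(part: str) -> bytes:
    # JWTs strip base64url padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def make_jwt(claims: dict, secret: bytes) -> str:
    """Mint an HS256 JWT (test helper, for illustration only)."""
    enc = lambda obj: (
        base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()
    )
    head, body = enc({"alg": "HS256", "typ": "JWT"}), enc(claims)
    sig = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    return f"{head}.{body}." + base64.urlsafe_b64encode(sig).rstrip(b"=").decode()


def verify_jwt(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT signature and return its claims."""
    head, body, sig = token.split(".")
    expected = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        raise PermissionError("bad signature")
    return json.loads(_b64url_decode(body))


def route_mcp_request(token: str, secret: bytes) -> str:
    """Authorize the request, then hash the session claim to a backend
    so every call in one agent session hits the same pool member."""
    claims = verify_jwt(token, secret)
    if "mcp" not in claims.get("scope", ""):
        raise PermissionError("token lacks MCP scope")
    digest = hashlib.sha256(claims["sid"].encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]
```

The key design point is that persistence is derived from an authenticated claim rather than a client-supplied cookie, so an agent cannot hop backends mid-session without re-authenticating.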
Industry Impact
From an industry perspective, these previews illustrate a broader shift. As we have noted, AI adoption will succeed or fail based on the ability of enterprises to operationalize workloads securely and reliably. VMware is betting that embedding AI into the infrastructure stack, rather than relying on bolt-on tools, will provide enterprises with the speed and governance required to move beyond proofs of concept.
Developer Takeaway
For developers, security and load balancing should no longer be obstacles to deploying AI-native applications. With GenAI-driven assistants, Zero Trust enforcement for agentic workloads, and AI-aware load balancing, VMware aims to reduce the friction between infrastructure complexity and application delivery. In a landscape where velocity and security are often at odds, these innovations point toward a more balanced future where developers can focus on building intelligent applications while trusting the platform to scale and protect them.