At a recent Cloud Field Day event, Fortinet demonstrated something that should concern every cloud architect: generative AI applications are creating security vulnerabilities that traditional defenses weren’t built to handle. The presentation revealed how AI agents introduce complex traffic flows that attackers are already learning to exploit.
The Problem: Speed Meets Vulnerability
Cloud platforms offer the GPU and TPU scalability that enterprises need to rapidly deploy AI services. This competitive advantage comes with a cost: organizations are moving so fast that security often lags.
The numbers tell a concerning story: enterprises have fully implemented just 12% of their AI, yet nearly half of enterprises report theft or unauthorized access to their models, with 35% experiencing prompt injection attacks.
The challenge extends beyond AI-specific threats. Cloud networks remain fundamentally complex environments with flat architectures, multiple ingress and egress points, and persistent hygiene issues. Misconfigurations, vulnerable applications, and exposed credentials continue to plague organizations, and AI applications amplify these existing weaknesses rather than replacing them.
When Traditional Attacks Meet AI: A Three-Stage Escalation
Fortinet’s Cloud Field Day demonstration walked through a realistic attack sequence that illustrates how attackers combine established techniques with AI-specific vulnerabilities. The scenario used an AWS environment with microservices architecture, application VPCs, and a dedicated Security Services VPC connected through a Transit Gateway.
Stage One: The Entry Point
The attack began with a classic SQL injection through a chatbot interface targeting a vulnerable e-commerce application. This decades-old attack vector remains effective, granting the attacker unauthorized admin access. What made the demonstration valuable was showing how FortiWeb, operating as both a Web Application Firewall and reverse proxy, detected the malformed input. Its machine learning engine analyzed the payload and provided blocking recommendations—moving from monitor mode to active prevention in real time.
Stage Two: Exploiting Trust
The attacker then escalated using Server-Side Request Forgery (SSRF). Through prompt injection, they manipulated the AI agent to use an exposed tool to query the AWS metadata service. This exploitation of the trusted internal environment yielded temporary access keys—credentials that opened the door to deeper compromise. The attack succeeded because the AI agent had access to capabilities it shouldn’t have exposed, highlighting a configuration risk specific to AI deployments.
Stage Three: Model Corruption
With stolen credentials in hand, the attacker located an S3 bucket used for Retrieval-Augmented Generation (RAG) and uploaded a malicious file. This poisoned the training data, causing the chatbot to respond nonsensically to legitimate queries. The demonstration called this attack “making the chatbot respond like a duck”—an actual technique that corrupts the model’s behavior by introducing conflicting instructions into its knowledge base.
Fortinet’s Integrated Response Strategy
What distinguished Fortinet’s presentation was the emphasis on coordination across their security portfolio rather than isolated point solutions. The defense architecture spanned four key platforms:
- FortiWeb manages Web Application and API Protection, functioning as the first line of defense. Beyond standard WAF capabilities, it applies OWASP Top 10 for LLMs protections and employs machine learning to detect anomalous activity. A critical feature for AI environments: FortiWeb can automatically discover and document continuously changing APIs in LLM traffic, even generating Swagger documentation for development teams. This addresses a fundamental problem—if an API isn’t fully visible, it cannot be secured.
- FortiGate inspects all outbound traffic to external LLMs like OpenAI, inserting Layer 7 security services and enforcing intent-based security policies dynamically.
- FortiCNP (Cloud-Native Application Protection) provides visibility across the entire AWS account, scanning AI workloads from development through runtime. It monitors misconfigurations, vulnerabilities, and suspicious API calls made by compromised roles.
- FortiAnalyzer and FortiSOAR complete the loop by parsing logs into actionable events and automating response. In the demonstration, when FortiCNP detected a login from a new geolocation and the assumption of an over-permissioned role, FortiSOAR automatically cleaned the malicious file from S3, blocked the attacker’s IP via FortiWeb, and revoked all temporary credentials for the compromised role.
The Integration Challenge
The demonstration environment reflected real-world complexity: segmented VPCs, an LLM, an MCP server, and multiple microservices all generating traffic and potential vulnerabilities. Managing security across this architecture requires more than deploying individual tools. Protection profiles must be configured correctly, policies must adapt to daily changes, and crucially, security teams need visibility into API specifications that evolve continuously as agentic services develop.
This is where Fortinet’s approach diverges from traditional security strategies. Rather than expecting security teams to manually track every API change or configuration update in a rapidly evolving AI environment, the portfolio leverages automation and machine learning to discover, learn, and apply protection dynamically.
Why This Matters Now
The threats demonstrated at Cloud Field Day align with the OWASP Top 10 for LLMs—a framework that security professionals are still learning to implement. The attack paths are increasingly sophisticated, combining traditional vulnerabilities with AI-specific exploitation techniques. AI-enabled bots are already targeting AI services, creating a cycle of escalation.
For cloud and security architects, the takeaway from Fortinet’s presentation is clear: securing AI applications requires defense in depth across the entire lifecycle. Inline network controls must work in concert with posture management and automated response. The velocity of AI development demands security that can keep pace through orchestration rather than manual intervention.
The integration of AI into core business functions is no longer a future concern—it’s the predominant use case facing security teams today. Organizations deploying these services need security architectures capable of defending against both the SQL injections of yesterday and the model corruption attacks of tomorrow, ideally within a unified framework that provides visibility from development through runtime.
Twilio Report Exposes Trust Gaps in Enterprise Conversational AI
New data highlights rapid adoption of conversational AI in addition to a widening divide between…

