Developers Face Growing Consumer Distrust Over AI Code Security

The News

To mark National Cybersecurity Awareness Month, Legit Security released a new consumer study revealing growing public anxiety about AI-generated code in apps. The survey of 1,000 U.S. consumers found that 47% are concerned about AI in apps, and 1 in 4 would lose trust in their favorite app if they discovered it used AI-written code.

Analysis

The report highlights a critical tension emerging in the software industry: developers are racing to adopt AI tools while consumers grow more skeptical of AI-generated code. According to theCUBE Research and ECI Research, 89.6% of developers already use AI-based tools, even as 41.3% identify APIs and identity management as the most vulnerable elements of their cloud-native stack.

This dissonance points to a widening trust gap between the people building AI systems and those using them. Developers see AI as a catalyst for productivity and innovation, but users perceive it as a potential risk to privacy and reliability, particularly when vulnerabilities or unpredictable behavior arise.

A New Mandate for AI-Native Security

The findings from Legit Security point to a new phase in software security: AI-native risk management. Nearly half of consumers believe developers are responsible for protecting their data, reflecting the growing expectation that security must be built into applications rather than added later.

Yet theCUBE Research's Day 2 operations data shows that 71% of organizations already use AIOps to accelerate operations, even as 45.7% admit they spend too much time identifying the root causes of incidents. These metrics suggest that while AI may be improving efficiency, it is also introducing new complexity, especially around code provenance, model explainability, and vulnerability remediation. The shift from manual code reviews to AI-assisted coding amplifies the need for continuous, automated risk detection across the software supply chain.

Managing Trust is Getting a Makeover

Before the rise of generative coding, developers relied on deterministic pipelines with static analysis tools, human reviews, and dependency scans to ensure quality and security. These methods worked well when code origin and behavior were predictable.
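For readers who want a concrete picture of that older model, below is a minimal sketch of a deterministic pre-merge gate in Python. The specific tools (bandit for static analysis, pip-audit for dependency scanning) and the src/ path are illustrative assumptions, not recommendations drawn from the research:

```python
import subprocess
import sys

# Illustrative deterministic pre-merge gate: run each check in order and
# fail the pipeline on the first non-zero exit code. Tool choices and the
# "src/" path are assumptions for this sketch, not from the cited research.
CHECKS = [
    ["bandit", "-r", "src/"],  # static analysis for common Python security issues
    ["pip-audit"],             # scan installed dependencies for known CVEs
]

def run_gate() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed on: {cmd[0]}", file=sys.stderr)
            return result.returncode
    print("all deterministic checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```

The defining property of this approach is determinism: the same code and the same dependency set produce the same verdict on every run, which is exactly the assumption that AI-generated code now strains.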

However, only 52.6% of organizations report addressing vulnerabilities in AI-generated code “very effectively” before release. Traditional scanning tools may fail to detect logic-level flaws introduced by LLMs, leaving gaps that undermine user confidence. The result is a new category of “AI-induced technical debt”: vulnerabilities that emerge not from negligence, but from opaque model behavior.
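To make “logic-level flaw” concrete, here is a hypothetical snippet of the kind an assistant might emit. Everything in it is invented for illustration; the point is that the code is syntactically clean and carries no signature a pattern-based scanner would match, yet the authorization logic is wrong:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    is_authenticated: bool

@dataclass
class Resource:
    owner_id: int

# Hypothetical AI-generated access check: type-correct and lint-clean,
# but the boolean operator is wrong, so ANY authenticated user can
# delete ANY resource. The `or` should be `and`.
def can_delete(user: User, resource: Resource) -> bool:
    return user.is_authenticated or user.id == resource.owner_id

# An authenticated non-owner is incorrectly allowed:
print(can_delete(User(id=2, is_authenticated=True), Resource(owner_id=1)))  # True
```

No rule-based scanner has a signature for “this `or` should have been an `and`”; catching it takes semantic tests or a human reviewer, which is precisely where AI-induced technical debt accumulates.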

Responsible AI and Visible Security

Going forward, developers will need to adopt AI-aware AppSec practices that verify model output, trace AI-generated contributions, and integrate threat modeling at the prompt level. The research suggests that transparency may be just as critical as prevention, with 53% of consumers saying app store validation influences their sense of security.
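As one hedged illustration of what tracing AI-generated contributions might look like, the sketch below records a provenance entry at generation time, including a digest of the prompt to support prompt-level threat modeling. The AIContribution structure, its field names, and the model identifier are all hypothetical, not part of any tooling cited in the research:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical provenance record for an AI-assisted change, so generated
# code can be traced later and routed to stricter review.
@dataclass
class AIContribution:
    file_path: str
    model: str          # which assistant produced the change
    prompt_digest: str  # hash of the prompt, enabling prompt-level threat modeling
    code_digest: str    # hash of the generated code
    timestamp: str

def record_contribution(file_path: str, prompt: str, code: str, model: str) -> AIContribution:
    return AIContribution(
        file_path=file_path,
        model=model,
        prompt_digest=hashlib.sha256(prompt.encode()).hexdigest(),
        code_digest=hashlib.sha256(code.encode()).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = record_contribution(
    "src/auth.py", "write a delete-permission check",
    "def can_delete(user, resource): ...", "example-model",
)
print(json.dumps(asdict(record), indent=2))
```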

As Liav Caspi of Legit Security emphasizes, “visible signals of accountability” will become essential. This could include verifiable AI-code attestations, vulnerability SLAs for generative code, and real-time posture management to detect drift in model-driven pipelines.
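As a rough sketch of what a verifiable AI-code attestation could involve, the following uses an HMAC from Python's standard library as a stand-in for a real signing scheme; a production system would more likely use asymmetric signatures backed by a key management service. All names, and the key itself, are illustrative:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # placeholder only; a real key would come from a KMS

# Minimal attestation sketch: sign a canonical payload binding the code's
# hash to its metadata, then verify it before trusting the change.
def attest(code: str, metadata: dict) -> dict:
    payload = json.dumps(
        {"code_sha256": hashlib.sha256(code.encode()).hexdigest(), **metadata},
        sort_keys=True,
    )
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify(attestation: dict) -> bool:
    expected = hmac.new(SIGNING_KEY, attestation["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

att = attest("def can_delete(user, resource): ...", {"model": "example-model"})
print(verify(att))  # True
```

Paired with vulnerability SLAs and continuous posture monitoring, records like this are the kind of “visible signal of accountability” the quote points toward.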

Looking Ahead

As AI moves deeper into the software supply chain, the boundary between development speed and user trust is narrowing. The consumer data from Legit Security reinforces what theCUBE Research has observed in enterprise behavior: automation without accountability creates systemic risk.

The challenge (and opportunity) lies in building a new kind of trust architecture, where AI-native security is a design principle, not a compliance afterthought. The winners in this new era of application development will be those who can code at AI speed while proving, continuously and transparently, that their software remains safe, stable, and worthy of user trust.

Author

  • Paul Nashawaty

    Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release and operations. With a wealth of expertise in digital transformation initiatives spanning front-end and back-end systems, he also possesses comprehensive knowledge of the underlying infrastructure ecosystem crucial for supporting modernization endeavors. With over 25 years of experience, Paul has a proven track record in implementing effective go-to-market strategies, including the identification of new market channels, the growth and cultivation of partner ecosystems, and the successful execution of strategic plans resulting in positive business outcomes for his clients.
