The Google 2025 DORA research report, now titled The State of AI-Assisted Software Development, marks a notable turning point: it extends beyond pure software delivery metrics into a more holistic view of team health, well-being, and the evolving influence of AI. Key shifts include:
- Addition of a “rework rate” metric to the traditional DORA four, giving more visibility into the cost of defects and unplanned fixes.
- Evidence that throughput (how fast teams ship) and stability do not need to be in trade-off; high throughput can correlate with high reliability in well-engineered organizations.
- A richer “team archetype” framework (seven profiles) that clusters teams not just by performance but by their experience of friction, burnout, and value delivery.
- Articulation of an AI Capabilities Model: seven core capabilities that enable teams to harness AI safely and effectively.
- Findings that AI is now nearly ubiquitous in software teams (~90% adoption), but its benefits (and risks) amplify existing organizational strengths or weaknesses.
- A warning: AI can accelerate dysfunction if foundational practices (platform engineering, observability, testing, version control) are not mature.
From my research at theCUBE (via “The State of DevOps and AI’s Growing Influence”), we see an additional emphasis on how the speed vs. stability myth is being challenged, and how AI is reshaping developer workflows in real-time.
In what follows, I break down the elements of this update: the metrics, the team and frontline perspective, the AI influences, and the implications. Throughout, I indicate what is known versus what is inferred or speculative, and highlight areas for future measurement.
Team Performance, Metrics, and the Impact of AI
The 2025 DORA (DevOps Research & Assessment) report marks a significant evolution in how the software industry measures and understands engineering performance. Moving beyond the traditional four “DORA metrics” that focused purely on software delivery efficiency, this year’s report expands its scope to assess holistic team performance, blending throughput and stability metrics with measures of well-being, collaboration, and AI adoption. For the first time, the DORA research introduces new performance archetypes, a fifth metric called rework rate, and an exploration of how AI is reshaping the social and cognitive fabric of software engineering teams.
At the core of the report’s updates are five software delivery metrics: deployment frequency, lead time for changes, mean time to restore, change failure rate, and the newly added rework rate. This fifth metric quantifies how often teams must deploy unplanned fixes or patches to correct user-facing defects. DORA researchers note that rework has become an increasingly important signal in the age of AI-assisted development, where code can be produced faster but not always validated with equal rigor. Importantly, the data dispels the long-held myth that speed and stability are mutually exclusive; high-performing teams demonstrate that throughput and reliability often improve together when underpinned by strong automation, version control, and testing practices. As theCUBE Research has also observed, developers typically spend less than a quarter of their time writing new code, with the remainder spent navigating reviews, approvals, and debugging, meaning that process health now matters as much as technical speed.
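The rework-rate metric described above reduces to a simple ratio: the share of deployments that are unplanned fixes for user-facing defects. The sketch below is a minimal illustration of that calculation; the `Deployment` record and its field names are hypothetical, not the report's survey definitions.

```python
from dataclasses import dataclass

# Hypothetical deployment record; field names are illustrative,
# not taken from the DORA survey instrument.
@dataclass
class Deployment:
    service: str
    is_unplanned_fix: bool  # deployed to correct a user-facing defect

def rework_rate(deployments: list[Deployment]) -> float:
    """Share of deployments that were unplanned fixes or patches."""
    if not deployments:
        return 0.0
    fixes = sum(1 for d in deployments if d.is_unplanned_fix)
    return fixes / len(deployments)

history = [
    Deployment("checkout", False),
    Deployment("checkout", True),   # hotfix after a defect escaped review
    Deployment("search", False),
    Deployment("search", False),
]
print(f"{rework_rate(history):.0%}")  # prints "25%"
```

Tracked over time, a rising ratio like this is exactly the "hidden instability" signal the report associates with fast but under-validated AI-assisted code.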
This year’s report also expands the lens from delivery to team experience. DORA identifies eight drivers of holistic performance, including burnout, friction, and time spent on “valuable work.” These dimensions form the foundation for seven team archetypes, ranging from “Legacy Bottleneck” (about 11 percent of respondents) to “Harmonious High Achievers” (roughly 20 percent). These archetypes offer a nuanced view of how organizational health, team structure, and platform maturity influence performance outcomes. Teams in the higher-performing clusters show low friction, strong platform engineering practices, and healthy workload balance, while those in lower clusters often struggle with cognitive overload, unclear ownership, and bottlenecks in deployment or review workflows. Burnout and friction now serve as early warning indicators of declining performance, often visible before metrics like throughput or failure rates begin to degrade.
A major thematic addition in the 2025 report is the role of artificial intelligence in reshaping software engineering. According to Google’s findings, roughly 90 percent of developers now use AI tools at work, engaging with them for an average of two hours per day. Most report productivity gains and improved code quality, with 80 percent citing faster delivery and 59 percent noting better outcomes. However, the report underscores that AI’s benefits are not universal; it tends to amplify what already exists within a team. High-maturity organizations (i.e., those with strong version control, observability, and internal platforms) see outsized benefits. In contrast, teams with weak foundations experience greater instability, hidden technical debt, and mounting rework. DORA calls this the “mirror and multiplier” effect: AI reflects the quality of an organization’s practices and multiplies their impact, for better or worse.
The social and cognitive dimensions of this transformation are captured in the section “The Sociocognitive Impact of AI on Professional Developers” (p. 44). Here, DORA researchers discuss how AI alters not just workflow efficiency but the nature of developer cognition. Developers report that AI assistance changes how they think about problem-solving: rather than writing code from scratch, they focus more on prompt engineering, code review, and synthesis. This shift reduces rote cognitive load but increases the need for discernment, context management, and ethical awareness. The report suggests that as AI co-authors code, the human role evolves from creator to curator, a change with deep implications for team collaboration and professional identity. Developers also describe a subtle erosion of “flow” when AI suggestions interrupt natural reasoning processes, indicating that psychological safety and team communication remain essential to maintaining creativity and ownership in AI-driven environments.
The Sociocognitive Transformation
The DORA report situates this sociocognitive transformation within a broader organizational warning (p. 74): “The greatest risk today isn’t falling behind, it’s pouring massive investment into chaotic activity that doesn’t move the needle.” This statement captures one of the report’s central cautions. Many organizations, in their rush to adopt AI, risk mistaking activity for progress, layering new tools on top of fragmented processes, misaligned metrics, and weak data foundations. Without intentional design, these investments generate noise rather than measurable improvement. DORA’s message is clear: AI success requires a systems approach, integrating platform engineering, clear governance, and continuous measurement. Otherwise, organizations risk accelerating dysfunction instead of innovation.
To guide that integration, the 2025 report introduces an AI Capabilities Model outlining seven essential competencies that correlate with effective AI adoption. These include a clear organizational stance on AI governance, high-quality data ecosystems, AI-accessible internal systems, robust version control, small-batch delivery practices, user-centric feedback loops, and strong internal platforms. Together, these form the scaffolding that allows teams to use AI safely and productively. Teams strong in these areas leverage AI to enhance throughput and stability, while those lacking them often experience an increase in rework, burnout, or coordination friction. Observability, in particular, emerges as a key differentiator. Without visibility into system behavior, AI-generated changes can introduce silent instability that only manifests later as technical debt.
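One practical way for a team to act on the Capabilities Model is a simple self-assessment that surfaces the weakest capabilities first. The sketch below assumes an illustrative 1-5 scoring scale and a threshold of 3; the scale and threshold are my assumptions, not part of the DORA methodology, though the capability names follow the report's list as summarized above.

```python
# Capability names follow the report's AI Capabilities Model as summarized
# above; the 1-5 self-assessment scale and threshold of 3 are illustrative
# assumptions, not part of the DORA survey instrument.
CAPABILITIES = [
    "clear stance on AI governance",
    "high-quality data ecosystems",
    "AI-accessible internal systems",
    "robust version control",
    "small-batch delivery",
    "user-centric feedback loops",
    "strong internal platforms",
]

def weakest_capabilities(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return capabilities scored below the threshold, weakest first."""
    rated = [(scores.get(name, 0), name) for name in CAPABILITIES]
    return [name for score, name in sorted(rated) if score < threshold]

team = {
    "clear stance on AI governance": 2,
    "high-quality data ecosystems": 4,
    "AI-accessible internal systems": 1,
    "robust version control": 5,
    "small-batch delivery": 3,
    "user-centric feedback loops": 4,
    "strong internal platforms": 3,
}
print(weakest_capabilities(team))
# prints ['AI-accessible internal systems', 'clear stance on AI governance']
```

The value of an exercise like this is sequencing: it directs investment toward the foundational gaps (here, AI-accessible systems and governance) before expanding AI usage on top of them.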
Skill development is another critical axis, explored in “AI: A Skill Development Threat — and Opportunity” (p. 88). The report warns that while AI accelerates productivity, it may also narrow the pathways for junior developer growth. As generative tools handle more of the entry-level coding work, early-career engineers risk losing the “muscle memory” and problem-solving depth that come from direct practice. DORA frames this as a paradox: AI can both erode and enable skill development depending on how organizations structure learning. When used thoughtfully, AI can accelerate mentorship, pair programming, and knowledge transfer; when used carelessly, it can create a hollowed-out talent pipeline lacking future senior expertise. The report encourages leaders to redesign training and apprenticeship models to ensure that AI supplements, rather than replaces, human learning.
The Future Impact of Burnout and Friction
From a performance perspective, DORA’s expanded dataset supports the conclusion that AI adoption correlates with improved throughput, marking a shift from prior years where early experimentation often coincided with instability. Teams now appear to be maturing in their AI integration practices. Yet, as both DORA and theCUBE Research note, metrics alone do not capture the full picture. Burnout, friction, and cognitive overload remain limiting factors, particularly in organizations scaling AI without proper governance or workload management. High-performing teams balance automation with intentional human oversight, using AI to remove toil while preserving developer agency and creativity.
The 2025 DORA findings carry clear implications for engineering leaders. AI should be treated as an organizational transformation, not a point-solution rollout. Foundational capabilities, internal platforms, data pipelines, and observability systems must come first. Leaders should track rework rate as a leading indicator of hidden instability, diagnose which archetype each team most closely resembles, and tailor their interventions accordingly. Teams with legacy bottlenecks may need architecture modernization before AI expansion; teams suffering burnout may need friction reduction and workload rebalancing; and top performers should focus on cross-team learning and scalable governance. Across all cases, investment in psychological safety, continuous feedback, and professional growth remains essential to sustain high performance.
Ultimately, the 2025 DORA report reframes the narrative around software delivery performance in the AI era. It argues that engineering excellence is no longer measured solely by speed or frequency, but by the harmony between human cognition, machine augmentation, and organizational design. AI can be either a multiplier of capability or a magnifier of chaos; the outcome depends on structure, leadership, and learning. As DORA succinctly puts it, the challenge ahead is not whether teams can adopt AI, but whether they can do so deliberately, coherently, and sustainably.