What’s Happening
Liveops has published its 2026 AI Maturity Benchmark for Customer Experience, produced in partnership with Ryan Strategic Advisory and based on a survey of 815 enterprise executives across global markets and industries. The report examines how enterprises are operationalizing AI within contact center and CX environments, introducing a four-stage maturity framework spanning from AI-assisted observation (“Crawl”) through autonomous real-time optimization (“Fly”). The headline finding is unambiguous: 73% of enterprise leaders prefer a hybrid AI-plus-human delivery model, with only 6% favoring full AI automation. Perhaps more importantly, the research identifies organizational readiness, not technology capability, as the primary constraint on AI-driven CX transformation.
The Bigger Picture
The Hybrid Model Is a Strategic Choice, Not a Transitional State
The report’s most consequential finding deserves more emphasis than it typically receives in coverage of AI announcements. When 73% of enterprise leaders actively prefer hybrid AI-human models, that is not a proxy for fear of the technology or lack of ambition. It reflects a considered architectural decision about where AI generates value and where human judgment remains structurally necessary.
ECI Research data reinforces this directly. According to our 2025 AI Builder Summit survey, enterprise AI leaders envision a future where humans and AI agents actively collaborate on complex tasks and shared goals, not one replacing the other. The Liveops benchmark lands in exactly the same place, and the convergence across two independent research programs is a meaningful signal. When CX leaders and enterprise AI leaders reach the same conclusion independently, the hybrid model is consolidating as the dominant operational design pattern, not a temporary hedge.
The practical implication is that the framing of AI-versus-human in contact centers has been wrong for some time. The more productive question is how to design the handoff layer: which interaction types, which risk profiles, and which customer emotional states warrant AI handling versus human escalation. Organizations that have clarity on those boundaries are outperforming those that default to maximizing automation volume.
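Those boundary decisions can be made explicit rather than implicit. As a minimal sketch, assuming hypothetical interaction attributes (`intent`, `risk`, `sentiment`) that a real system would derive from intent classification and sentiment analysis, the handoff layer reduces to a small set of inspectable routing rules:

```python
from dataclasses import dataclass

# Hypothetical interaction attributes; a production system would derive
# these from intent classification and sentiment-analysis models.
@dataclass
class Interaction:
    intent: str      # e.g. "billing_dispute", "order_status"
    risk: str        # "low" | "medium" | "high"
    sentiment: str   # "calm" | "frustrated" | "distressed"

def route(interaction: Interaction) -> str:
    """Return 'ai' or 'human' based on explicit boundary rules."""
    # High-risk or emotionally charged interactions always go to a human.
    if interaction.risk == "high" or interaction.sentiment == "distressed":
        return "human"
    # Medium risk combined with visible frustration also escalates.
    if interaction.risk == "medium" and interaction.sentiment == "frustrated":
        return "human"
    # Everything else is a candidate for AI handling.
    return "ai"

print(route(Interaction("order_status", "low", "calm")))      # ai
print(route(Interaction("billing_dispute", "high", "calm")))  # human
```

The point is not the specific rules, which any organization would tune to its own risk profile, but that the boundary lives in one reviewable place instead of being scattered across prompt templates and vendor defaults.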
The Maturity Distribution Reveals Where Most Enterprises Actually Are
The four-stage distribution is worth sitting with. Only 14% of organizations have reached “Fly” maturity, where AI continuously optimizes in real time. The bulk of the market sits in “Walk” (32%) and “Run” (29%), with a meaningful 25% still at the “Crawl” stage. This is not a story of laggards failing to keep pace. It is a realistic portrait of where the implementation curve stands in mid-2025 for a domain as operationally complex as CX.
The industry-level variation is instructive for ITDMs making sector-specific investment decisions. Gaming (61% Run/Fly), FinTech (58%), and e-commerce (54%) lead because they share structural advantages: high transaction volume, mature digital infrastructure, and competitive economics that reward automation. Public sector (47% Crawl), energy and utilities (43% Crawl), and pharmaceuticals (39% Crawl) lag not from indifference but from legitimate governance requirements. Those sectors need explainability, auditability, and human-in-the-loop accountability that current AI architectures handle imperfectly.
For ITDMs in regulated industries, this should recalibrate expectations. Aiming for “Fly” maturity in a compliance-intensive CX environment by 2026 is almost certainly the wrong goal. Aiming for well-governed “Run” maturity, with strong human oversight and measurable customer outcome improvements, is a defensible and realistic target.
For Developers: The Governance Gap Is the Actual Technical Problem
The report’s finding that change management and workforce readiness (3.7 out of 5) now outrank immature AI technologies (3.2) as transformation barriers might look like an HR problem. For developers building and operating CX AI systems, it is a product architecture problem in disguise.
Workforce readiness failures often trace back to systems that lack clear escalation logic, poor observability into agent decision paths, and handoff mechanisms that force human agents to inherit context-free mid-stream conversations. These are engineering problems. Data security and compliance (3.6) and internal alignment and ownership (3.5) are similarly downstream of architectural decisions made during platform design. A CX AI system that cannot produce an auditable decision trail makes compliance alignment nearly impossible regardless of organizational intent.
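The context-free handoff problem in particular has a straightforward structural fix. As an illustrative sketch (the field names here are assumptions, not a standard schema), an escalation can carry a structured briefing so the human agent joins mid-conversation with the AI's state rather than a cold transcript:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical handoff payload: escalation passes structured context
# instead of dropping the human agent into an unexplained transcript.
@dataclass
class HandoffContext:
    conversation_id: str
    customer_goal: str             # AI's summary of what the customer wants
    steps_attempted: List[str]     # what the AI already tried
    escalation_reason: str         # why the AI handed off
    transcript: List[str] = field(default_factory=list)

def briefing(ctx: HandoffContext) -> str:
    """One-line summary the human agent sees before taking over."""
    return (f"Goal: {ctx.customer_goal} | "
            f"Tried: {', '.join(ctx.steps_attempted)} | "
            f"Escalated because: {ctx.escalation_reason}")

ctx = HandoffContext(
    conversation_id="c-1042",
    customer_goal="reverse duplicate charge",
    steps_attempted=["verified account", "located transaction"],
    escalation_reason="refund exceeds automated approval limit",
)
print(briefing(ctx))
```

A system that cannot populate an object like this at escalation time is, in effect, guaranteeing the workforce readiness failures the survey respondents are reporting.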
This connects directly to a broader challenge in AI operations. According to ECI Research’s 2025 AI Builder Summit survey, 44% of enterprise AI leaders have only moderate confidence that AI agents can act autonomously without human intervention. That confidence gap is unlikely to close through better models alone. It closes through better governance instrumentation: visibility into what the agent decided, why, and what fallback path it took.
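That instrumentation need not be elaborate to be useful. A minimal sketch, assuming an illustrative record schema (these field names are not from any standard), shows the three facts the text identifies, what was decided, why, and what the fallback path is, captured as an append-only audit record:

```python
import json
import time

def log_decision(agent_id: str, decision: str, rationale: str,
                 fallback: str, sink: list) -> dict:
    """Append an auditable record of an AI agent decision.

    Illustrative only: a real deployment would write to durable,
    append-only storage rather than an in-memory list.
    """
    record = {
        "ts": time.time(),
        "agent": agent_id,
        "decision": decision,    # what the agent decided
        "rationale": rationale,  # why it decided that
        "fallback": fallback,    # path taken if the decision fails
    }
    sink.append(json.dumps(record))
    return record

audit_log: list = []
log_decision("cx-bot-1", "issue_refund",
             "matched refund policy rule", "escalate_to_human", audit_log)
```

Serializing each record at write time keeps the trail replayable by compliance reviewers without access to the live system, which is precisely the kind of visibility the confidence gap demands.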
What ITDMs Should Take from the Geographic Data
That Japan and South Korea lead on “Fly” maturity (27% and 21%, respectively) while markets like Canada, Spain, and Singapore concentrate in “Walk” reflects more than cultural variation. It reflects investment climate, regulatory architecture, and workforce adaptation programs. This matters for global enterprises building consistent CX AI strategies across regions. A centrally designed AI automation layer may need significant regional calibration to operate within compliance boundaries and meet local workforce readiness conditions. Treating CX AI as a global rollout problem rather than a regional orchestration challenge is a common and expensive mistake.
What’s Next
Operational Design Becomes the Core Competency
The Liveops benchmark signals a market transition that ECI Research has been tracking across multiple domains. AI transformation is shifting from a technology procurement exercise to an operational design discipline. The organizations moving up the maturity curve fastest are not those with access to better models. They are those with sharper answers to workflow questions: who owns the AI decision in a given scenario, how exceptions are handled, how quality is measured, and how improvement is fed back into the system.
For CX specifically, we expect to see accelerating investment in the human-AI handoff layer over the next 18–24 months. Vendors that can offer configurable escalation logic, real-time agent assist that genuinely improves mid-conversation rather than generating post-hoc summaries, and audit-ready interaction logging will have a structural advantage over those selling raw automation volume.
The Governance Infrastructure Investment Wave
ECI Research’s 2025 AI Builder Summit data found that two-thirds of enterprise AI leaders have already implemented multi-agent collaboration in live or pilot workflows. As that number grows in CX environments, the governance infrastructure demand will follow. Organizations currently at “Walk” and “Run” maturity that want to reach “Fly” without incurring unacceptable compliance risk will need to invest in QA frameworks, ownership structures, and observability tooling purpose-built for AI-human hybrid workflows. That is not a niche market. Given the 73% preference for hybrid models documented in the Liveops research, it is the center of the CX technology market for the next several years.
The practical near-term ask for ITDMs: assess your current CX AI maturity honestly against the four-stage framework, identify whether your primary constraint is organizational readiness, governance infrastructure, or integration quality, and prioritize accordingly. For developers: the next generation of CX AI systems will be evaluated less on deflection rates and more on auditability, escalation quality, and operational resilience. Build for that.
