The News
At KubeCon North America 2025, Splunk, a Cisco company, announced a strategic shift from data storage to insights-driven observability, introducing the OpenTelemetry Injector project in collaboration with Dash0 and other vendors to simplify instrumentation to two commands for full host coverage. The company is decoupling data collection and storage from analytics through a “system data fabric” architecture that allows customers to store telemetry in third-party platforms such as Snowflake or Azure Data Lake while Splunk manages metadata, cataloging, and the analytical engine. This is intended to address customer complaints about observability platform storage costs and to reduce vendor lock-in around data repositories. Splunk emphasized its position as the largest contributor to OpenTelemetry and announced a collaboration with Red Hat to deliver a certified, all-in-one OpenTelemetry solution for OpenShift that provides standardized best practices. The company is developing AI-powered capabilities including natural language querying for its “machine data lake,” LLM-based log interpretation tools, and triaging agents for incident response that keep humans in the loop while automating data collection and root cause analysis. Splunk positions these AI assistants as productivity enhancers for generalist IT staff rather than as specialist replacements, with support for customer-provided LLMs to ensure trustworthy AI implementation.
Analyst Take
Splunk’s strategic pivot from data storage to insights represents a fundamental repositioning driven by customer economics and competitive pressure. The acknowledgment that customers complain about “high cost of data storage on observability platforms” validates a market shift we’ve observed where organizations are drowning in telemetry data but struggling to extract proportional value. By decoupling storage from analytics, Splunk is attempting to address the core tension in observability business models where vendors have historically monetized data volume, creating incentives misaligned with customer cost optimization. This architectural separation allows customers to leverage lower-cost storage options while retaining Splunk’s analytical capabilities, but it also commoditizes a significant revenue stream. The success of this strategy depends on whether Splunk can demonstrate sufficient value in its insights layer to justify pricing when customers control the underlying data. This mirrors patterns we’ve seen in other infrastructure markets where vendors transition from infrastructure provision to value-added services.
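As a rough illustration of that separation, the sketch below shows one way a catalog-plus-engine pattern can work: the vendor keeps only metadata about where telemetry lives and pushes queries down to customer-owned storage. This is a minimal sketch of the pattern described above, not Splunk’s implementation; every class and function name here is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class DatasetEntry:
    """Catalog metadata the vendor manages; the bytes live elsewhere."""
    name: str
    storage_uri: str      # e.g., a customer-owned Snowflake table or data lake path
    schema_version: str


class MetadataCatalog:
    """Vendor-side catalog: knows where telemetry lives, never holds the data."""

    def __init__(self) -> None:
        self._entries: dict[str, DatasetEntry] = {}

    def register(self, entry: DatasetEntry) -> None:
        self._entries[entry.name] = entry

    def resolve(self, name: str) -> DatasetEntry:
        return self._entries[name]


def run_query(catalog: MetadataCatalog, dataset: str, predicate: str) -> str:
    """The analytics engine resolves the dataset via the catalog, then pushes
    the query down to whichever store the customer chose. A real engine would
    dispatch to a Snowflake or data-lake connector; here we just return the
    pushed-down query for illustration."""
    entry = catalog.resolve(dataset)
    return f"SELECT * FROM '{entry.storage_uri}' WHERE {predicate}"


catalog = MetadataCatalog()
catalog.register(DatasetEntry("web-logs", "snowflake://acme/telemetry/web_logs", "v2"))
print(run_query(catalog, "web-logs", "status >= 500"))
```

The design point is that raw telemetry never needs to transit the vendor’s platform; only the catalog entry and the query result do, which is exactly what makes the storage line item separable from the analytics line item.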
The OpenTelemetry Injector project addresses a critical adoption barrier we’ve documented in our Day 0 research: 29% of respondents cite “lack of internal expertise” as a barrier to adopting new development practices, and OpenTelemetry’s complexity has been a persistent complaint despite its technical advantages. Reducing instrumentation to two commands dramatically lowers the entry threshold, potentially accelerating adoption among organizations that recognize OpenTelemetry’s strategic value but lack specialized expertise. However, simplification often comes with trade-offs around customization and control. The Injector project’s value will depend on how well the default configuration serves common use cases versus requiring extensive post-installation tuning. The collaboration with Dash0 and other vendors suggests an ecosystem approach that could establish de facto standards, but it also introduces coordination complexity and potential fragmentation if implementations diverge.
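To make the expertise barrier concrete, the snippet below shows the per-service boilerplate that manual tracing setup requires with the standard upstream OpenTelemetry Python SDK. This is the general SDK pattern, not the Injector itself, whose pitch is precisely that baseline host coverage needs none of this per-service code.

```python
# Manual OpenTelemetry setup with the upstream Python SDK: every service
# needs provider, processor, and exporter wiring like this before a single
# span is emitted. ConsoleSpanExporter is used so the example runs without
# a backend; production setups swap in an OTLP exporter plus endpoint,
# authentication, and resource-attribute configuration.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("process-order"):
    pass  # application logic goes here
```

Multiply that across languages, teams, and exporter configurations, and the appeal of a two-command, host-level injection path becomes obvious, as do the questions about how much of this wiring the defaults can choose correctly on a user’s behalf.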
Splunk’s emphasis on OpenTelemetry as a “business strategy” rather than just a technology reflects market maturation we’ve tracked across multiple customer conversations. Organizations increasingly view OpenTelemetry as insurance against vendor lock-in, a standardization layer for M&A integration, and a foundation for multi-vendor observability strategies. Our Day 1 research found that 43% of organizations struggle with “too many disparate tools,” and OpenTelemetry’s promise of unified instrumentation addresses this pain point. However, the observation that scalability is a “sociological challenge as much as a technical one” highlights the gap between OpenTelemetry’s technical capabilities and organizational readiness. Ensuring consistent versions, configurations, and best practices across teams requires governance frameworks, training programs, and cultural alignment that many organizations lack. The Red Hat OpenShift certified solution attempts to address this by providing “a set of best practices” rather than “a bag of Legos,” but the effectiveness depends on whether these opinionated defaults match diverse customer requirements.
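The governance gap is straightforward to picture: someone has to notice when a team drifts from the blessed versions and settings. Below is a minimal sketch of that kind of conformance check, assuming a hypothetical baseline format; real governance would operate on actual Collector configurations through CI checks and admission policies.

```python
# Illustrative only: one way a platform team might enforce the consistency
# the paragraph describes, comparing each team's declared OpenTelemetry
# Collector version and exporter against an approved baseline. The config
# layout and version numbers here are hypothetical.
APPROVED = {"collector_version": "0.112.0", "exporter": "otlp"}

team_configs = {
    "payments": {"collector_version": "0.112.0", "exporter": "otlp"},
    "search":   {"collector_version": "0.98.0",  "exporter": "jaeger"},
}

for team, cfg in team_configs.items():
    drift = {k: v for k, v in cfg.items() if APPROVED.get(k) != v}
    if drift:
        print(f"{team}: drift from baseline -> {drift}")
    else:
        print(f"{team}: conforms to baseline")
```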
The AI-powered capabilities Splunk is developing align with the broader industry trend toward AI-assisted operations that we’ve observed across multiple vendors at KubeCon. Our Day 2 research indicates that 41% of development and operations teams spend more than 25% of their time on troubleshooting and incident response, creating clear demand for automation that reduces this burden. Splunk’s positioning of AI as productivity enhancement for generalists rather than specialist replacement reflects pragmatic recognition that enterprises are shifting hiring toward generalists due to talent scarcity and cost pressures. The effectiveness of these AI tools depends entirely on the quality of their outputs. Log interpretation and root cause analysis require deep contextual understanding that current LLMs struggle to provide reliably. If Splunk’s AI assistants generate incorrect hypotheses or miss critical patterns, they will increase rather than decrease troubleshooting time. The emphasis on “human in the loop” and “trustworthy AI” suggests awareness of these risks, but the proof will be in customer adoption and measured impact on mean time to resolution (MTTR).
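A hedged sketch of what “human in the loop” can mean in practice appears below: the agent automates evidence gathering and hypothesis generation, but a person approves any action. Everything here is illustrative, including the stubbed LLM call; it is not Splunk’s implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Incident:
    service: str
    symptom: str
    evidence: list[str] = field(default_factory=list)


def gather_evidence(incident: Incident) -> None:
    # Automated step: a real agent would pull logs, traces, and recent
    # deploy events from the observability backend; stubbed here.
    incident.evidence += [
        f"error-rate spike on {incident.service}",
        "deploy event 7 minutes before the spike",
    ]


def propose_hypothesis(incident: Incident) -> str:
    # Stand-in for an LLM call that interprets the collected evidence.
    return f"recent deploy likely caused {incident.symptom} on {incident.service}"


def triage(incident: Incident) -> None:
    gather_evidence(incident)                  # automated
    hypothesis = propose_hypothesis(incident)  # automated
    print("Evidence:", *incident.evidence, sep="\n  ")
    answer = input(f"Roll back based on '{hypothesis}'? [y/N] ")
    if answer.strip().lower() == "y":          # human stays in the loop
        print("rollback initiated")
    else:
        print("escalated to on-call engineer")


triage(Incident(service="checkout", symptom="HTTP 500s"))
```

The gate is the design choice: automation handles collection and interpretation, but the irreversible step stays behind a human decision, which is where MTTR gains will either survive or evaporate.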
Looking Ahead
Splunk’s decoupled architecture strategy creates both opportunity and risk as the observability market evolves. If customers embrace the flexibility to store data in lower-cost repositories while leveraging Splunk’s analytics, the company successfully transitions to a higher-margin, insights-focused business model. But this requires demonstrating that Splunk’s analytical capabilities deliver sufficient differentiated value to justify pricing when competitors can access the same underlying data. The rise of open source observability tools and cloud-native analytics platforms creates competitive pressure on the insights layer that didn’t exist when Splunk controlled the entire stack. The next 12-18 months will reveal whether customers perceive Splunk’s analytics as uniquely valuable or whether they increasingly substitute lower-cost alternatives once data is decoupled from the platform.
The OpenTelemetry Injector project and Red Hat collaboration represent Splunk’s bet on simplification and standardization as key adoption drivers. If these initiatives successfully reduce OpenTelemetry complexity and establish widely adopted best practices, Splunk positions itself as the de facto enterprise observability platform for OpenTelemetry-instrumented environments. Yet this strategy depends on ecosystem coordination and community adoption that Splunk cannot fully control. Competing vendors may develop alternative simplification approaches, and the OpenTelemetry community’s governance structure could evolve in directions that conflict with Splunk’s commercial interests. The company’s status as the largest OpenTelemetry contributor provides influence but not control, and maintaining community trust while pursuing commercial objectives requires careful balance. As OpenTelemetry matures and adoption broadens beyond early adopters, the vendors that successfully bridge the gap between open source flexibility and enterprise operational simplicity will capture disproportionate market share in the post-proprietary-agent observability landscape.
