Dell–SUSE AI Factory: Why Enterprise AI Governance Comes First

The Announcement

SUSE, Dell Technologies, and a panel of ecosystem partners including Krumware, a CNCF-aligned platform engineering specialist, and regional systems integrators convened to outline a shared vision for enterprise resilience built on sovereign, open, and governance-first infrastructure. The core argument: organizations must standardize workflows and skill sets before layering AI on top, not after. The Dell–SUSE collaboration is formalizing what they call an “AI factory” framework, a structured approach that aligns infrastructure (including Dell PowerStore and PowerFlex storage integrations with SUSE Rancher/RKE2), data engineering, data science, and CI/CD into a coherent operating model. Rancher is positioned as the orchestration layer for NVIDIA GPU workloads at both edge and core, with MLOps tooling such as ClearML and Run:AI sitting above it. The message to enterprise buyers: you don’t have to choose between legacy virtualization and cloud-native; you can run both side by side while you modernize.

The Bigger Picture

Governance Before AI: A Necessary Inversion

The instinct of most organizations is to bolt AI onto whatever infrastructure already exists. That approach is failing. What the Dell–SUSE panel described is an inversion of the typical sequence: establish standardized workflows, replatform skills, and put governance guardrails in place first, then introduce AI agents and automation. This isn’t conservatism for its own sake. Uncontrolled AI agents operating inside complex enterprise ecosystems without clear expectations, feedback loops, or integration constraints are a genuine operational risk, not a theoretical one.

The governance-first framing aligns directly with where enterprise AI investment is actually going. According to ECI Research’s Developer Pulse survey, 59% of organizations are investing in Agentic AI for IT Operations today. That’s a majority of the market moving fast on agentic deployments, and a significant portion of those organizations are doing so without mature governance frameworks in place. The AI factory construct that Dell and SUSE are promoting is, in part, a response to this gap. It gives IT buyers a structured on-ramp that doesn’t require them to solve governance as an afterthought.

What This Means for ITDMs

For IT decision-makers, the most commercially significant element of this announcement is the side-by-side modernization strategy. Ripping out legacy virtualization to go cloud-native in one move is expensive, risky, and organizationally disruptive. The Dell–SUSE model allows enterprises to run virtualization and cloud-native workloads concurrently, migrating incrementally while maintaining operational continuity. That’s a meaningful reduction in migration risk, and it addresses one of the most common reasons enterprise modernization programs stall.

The storage integration story also carries weight. Deep certifications and CLI plugin support between SUSE Rancher/RKE2 and Dell PowerStore and PowerFlex mean that enterprises with existing Dell infrastructure investments don’t have to treat those assets as stranded. That’s a defensible ROI argument, particularly in budget-constrained environments like the Latin American markets the panel referenced, where demonstrable returns and vendor trust are prerequisites for any technology decision.

The open-source angle matters for economic reasons, not just philosophical ones. Vendor lock-in is a tangible cost driver. When hardware supply chains tighten or a vendor shifts pricing models, organizations with proprietary dependencies pay a premium they can’t easily escape. CNCF-aligned, open containerized foundations give buyers negotiating leverage and architectural flexibility that closed platforms don’t.

What This Means for Developers and Platform Engineers

The “harness engineering” concept introduced by Colin Griffin deserves attention from platform and infrastructure teams. The idea is to surface environment-level metadata, including standards, endpoints, security posture, and available services, to both human developers and AI agents operating on Kubernetes-first stacks. Think of it as contextual scaffolding: rather than having developers (or agents) operate with incomplete knowledge of their operating environment, the platform exposes that context explicitly, reducing ambiguity and shortening time-to-value.
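To make the idea concrete, here is a minimal sketch of what a harness-style context document might look like. All names and fields here are illustrative assumptions, not a real Krumware, SUSE, or Kubernetes API: the point is simply that environment metadata (standards, endpoints, security posture, available services) is aggregated into one structured artifact that both a developer and an AI agent can read.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch of a "harness" context document. Field names and
# values are invented for illustration; a real implementation would
# likely source this data from the platform itself (cluster APIs,
# policy engines, service catalogs).

@dataclass
class ServiceEndpoint:
    name: str
    url: str
    auth: str  # e.g. "oidc" or "mtls"

@dataclass
class HarnessContext:
    cluster: str
    standards: list = field(default_factory=list)
    security_posture: dict = field(default_factory=dict)
    services: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the context so an agent can consume it as plain JSON."""
        return json.dumps(asdict(self), indent=2)

harness = HarnessContext(
    cluster="edge-rke2-01",
    standards=["OCI images only", "signed manifests required"],
    security_posture={"pod_security_restricted": True, "network_policies": True},
    services=[ServiceEndpoint("model-registry", "https://registry.internal", "oidc")],
)
print(harness.to_json())
```

The design choice worth noting is that the context is explicit and machine-readable rather than tribal knowledge: an agent handed this document knows which services exist, how to authenticate, and which standards constrain it before it acts.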

This is a specific and technically actionable response to a real problem. ECI Research data shows that nearly three in four enterprise IT leaders name AI and machine learning as a top spending priority for the next 12 months. That investment will land on developers who are being asked to build and operate AI-native applications on platforms they may not fully understand. Harness engineering is a workflow-level intervention that helps close that gap, and it’s a capability that platform engineering teams should be evaluating now rather than after AI agent deployments are already in flight.

Visual Cortex’s choice of Rancher and RKE2, deployed in some genuinely harsh edge environments including 4G-constrained police deployments and high-security airport infrastructure, is also instructive. When a platform is good enough for containerized computer vision at the edge under extreme conditions, it’s almost certainly good enough for enterprise datacenter use cases. The interoperability and openness that make it viable in those edge contexts are the same properties that matter for multi-cloud and hybrid deployments more broadly.

Competitive Positioning

SUSE’s positioning here is deliberate and worth noting. By anchoring the narrative in openness, sovereignty, and choice, SUSE is drawing a clear line against proprietary Kubernetes distributions and hyperscaler-native orchestration platforms that carry lock-in risk. The partnership with Dell strengthens this position considerably: Dell brings hardware credibility, storage depth, and a large existing customer base; SUSE brings the open-source platform story and CNCF alignment. Together, they’re targeting the substantial middle of the enterprise market that wants AI infrastructure without betting the organization on a single vendor’s roadmap.

The NVIDIA dimension is also strategically important. Rapid GPU hardware cycles are compressing economics in ways that stress traditional enterprise lifecycle models. Organizations that build on open orchestration layers (Rancher managing NVIDIA GPU workloads) retain the ability to swap underlying hardware as the market evolves, which is a meaningful hedge given the pace of NVIDIA product releases.

What’s Next

Agentic AI Governance Will Become Non-Negotiable

The governance-first framing that Dell and SUSE are promoting today will likely become table stakes within 18–24 months as agentic AI deployments scale. Organizations that treat agent governance as an optional layer rather than a foundational requirement will encounter the exact failure modes the panel described: agents operating without environmental context, without integration constraints, and without feedback loops that catch errors before they propagate. The AI factory framework, or something functionally equivalent, will become a standard architectural requirement rather than a differentiating capability.

Platform engineering teams should treat the harness engineering concept as an early indicator of where the market is heading. As AI agents become more autonomous and more deeply embedded in software delivery workflows, the platforms that expose rich environmental context will have a significant advantage over those that don’t. That’s a capability worth building now.

Open Infrastructure as a Strategic Hedge

The sovereign, open-source infrastructure story will continue to gain traction as geopolitical and regulatory pressures around data locality and vendor dependency intensify. SUSE’s positioning on sovereignty, particularly the extension of sovereignty beyond data locality to include protection of learned AI patterns and people-centric exposure, reflects a more sophisticated understanding of what enterprise customers in regulated industries and sensitive geographies actually need. ECI Research finds that 82% of AI/ML teams report skill gaps in AI/ML operations, with 31.3% describing these gaps as extremely prevalent. Organizations that can deliver open platforms with strong partner ecosystems and regional proximity, as Miguel described for the Latin American market, will be better positioned to address those skill gaps than vendors who offer software alone without service and training infrastructure. The Dell–SUSE model, with its emphasis on integrated software, hardware, services, and regional partner networks, is a reasonable architectural response to a market that needs more than a product catalog.

Authors

  • Sam Weston

    With over 15 years of hands-on experience in operations roles across legal, financial, and technology sectors, Sam Weston brings deep expertise in the systems that power modern enterprises such as ERP, CRM, HCM, CX, and beyond. Her career has spanned the full spectrum of enterprise applications, from optimizing business processes and managing platforms to leading digital transformation initiatives.

    Sam has transitioned her expertise into the analyst arena, focusing on enterprise applications and the evolving role they play in business productivity and transformation. She provides independent insights that bridge technology capabilities with business outcomes, helping organizations and vendors alike navigate a changing enterprise software landscape.

  • Paul Nashawaty

    Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release, and operations. He brings deep expertise in digital transformation initiatives spanning front-end and back-end systems, along with comprehensive knowledge of the underlying infrastructure that supports modernization efforts. With over 25 years of experience, Paul has a proven track record of executing effective go-to-market strategies, including identifying new market channels, growing and cultivating partner ecosystems, and delivering strategic plans that result in positive business outcomes for his clients.
