The Announcement
Starburst has announced AI & Datanova 2026, a two-day conference scheduled for May 27–28 in Miami Beach, Florida, with a virtual attendance option. The event brings together enterprise data and AI leaders from NVIDIA, GEICO, Highmark Health, Citizens Bank, and others to address a problem that is quietly stalling enterprise AI programs: data infrastructure was built for reporting, not for AI. The conference centers on how organizations move AI from pilot environments into production by building governed, distributed data foundations that AI systems can actually rely on.
The Bigger Picture
The conference announcement is, at one level, a marketing event. But the agenda Starburst has constructed tells a more substantive story about where enterprise AI is actually stuck.
The Real Bottleneck Is Not the Model
The most important sentence in the Starburst announcement is this one: “AI isn’t being held back by models. It’s being held back by data that is fragmented, inconsistently defined, and difficult to govern across systems.” That framing is blunt, accurate, and increasingly common among practitioners who have spent the last two years watching capable models produce unreliable outputs because the data feeding them is a mess.
This is not a new problem, but it has a new urgency. ECI Research’s 2025 report “Stop Managing Cloud Costs. Start Managing Cloud Strategy.” observed that many FinOps initiatives fail by fixating on savings instead of systems: automation is implemented without strategy, and governance becomes a checklist rather than a discipline. The same dynamic applies directly to enterprise AI. Organizations have invested heavily in model capabilities and inference infrastructure while treating data governance as an afterthought, and they’re now paying for it in failed pilots and production AI that can’t be trusted.
The Datanova speaker lineup makes this concrete. The GEICO session focuses on putting AI agents into production across coordinated workflows. The Highmark Health conversation addresses the economics of scaling AI while managing cost and complexity. Financial services leaders from Citizens Bank, TIAA, and MinIO are discussing data readiness in regulated environments specifically because governance, consistency, and auditability are non-negotiable in those sectors. These are not theoretical problems. They are the operational barriers that separate a proof of concept from a production system.
What This Means for ITDMs
For IT decision-makers, the strategic question the conference surfaces is whether your data architecture is a first-class consideration in AI planning, or an afterthought being retrofitted after the model decisions were already made. Retrofitting is expensive, slow, and often politically complicated because it touches systems owned by multiple teams.
The fragmentation problem is significant. According to ECI Research, the average enterprise now uses more than two public cloud platforms, with Kubernetes, Snowflake, and GenAI often coexisting across a patchwork of teams, workloads, and tools. That operational reality means AI systems need to draw on data that is distributed across environments with different access controls, different schema conventions, and different ownership structures. Building consistent context for AI under those conditions requires federated data access, not centralization, which is precisely the architectural bet Starburst has made.
ITDMs evaluating this space should ask a specific set of questions: Can your current data platform provide governed, consistent context to AI systems across hybrid and multi-cloud environments without requiring you to move the data? Do your data products have well-defined semantics that AI systems can rely on, or are definitions inconsistent across teams? And critically, what happens to AI model reliability when a data source changes without notice?
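The last of those questions lends itself to a concrete safeguard: a schema contract check that flags upstream changes before they silently degrade model inputs. The sketch below is illustrative only; the contract format, column names, and types are assumptions, not any vendor's API.

```python
# Minimal sketch of a schema "contract" check for a data product feeding an
# AI system. Contract format and column names are hypothetical.

EXPECTED_CONTRACT = {
    "customer_id": "bigint",
    "region": "varchar",
    "lifetime_value": "double",
}

def schema_drift(contract: dict, live_schema: dict) -> dict:
    """Compare a live table schema against the agreed contract.

    Returns missing columns, unexpected columns, and type changes.
    """
    return {
        "missing": sorted(set(contract) - set(live_schema)),
        "unexpected": sorted(set(live_schema) - set(contract)),
        "type_changes": {
            col: (contract[col], live_schema[col])
            for col in set(contract) & set(live_schema)
            if contract[col] != live_schema[col]
        },
    }

# An upstream team renamed a column and changed a type without notice:
live = {
    "customer_id": "bigint",
    "geo": "varchar",
    "lifetime_value": "decimal(18,2)",
}
report = schema_drift(EXPECTED_CONTRACT, live)
```

Wiring a check like this into the pipeline that refreshes an AI system's context turns "a source changed without notice" from a silent reliability failure into an alert with a named owner.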
What This Means for Developers
For the engineering teams building and operating AI systems, the Datanova agenda points to a category of infrastructure problem that doesn’t get enough attention in the AI tooling conversation. Most of the energy in the developer tooling market has gone toward inference infrastructure, model fine-tuning, and agent orchestration. Data access governance and federated query across distributed sources are less exciting but arguably more directly responsible for whether an AI system works reliably in production.
Starburst’s technical positioning, built on Trino and Apache Iceberg, is relevant here. Open standards matter because they reduce the risk of data layer lock-in at a moment when the rest of the AI stack is also consolidating around a small number of platforms. Developers building AI applications that need to query across on-premises systems, multiple clouds, and third-party data sources have a practical interest in whether their query engine can do that without requiring data movement, and whether the access controls are consistent enough to pass enterprise security review.
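To make the federated-query idea concrete: in Trino, a single SQL statement can join tables that live in different catalogs, so the data never has to be copied into one system first. The catalog, schema, and table names below are hypothetical, and the submission path is only sketched in a comment.

```python
# Sketch of a federated Trino query joining an on-premises source with a
# cloud warehouse in one statement. All catalog/schema/table names are
# illustrative assumptions, not a real deployment.

FEDERATED_QUERY = """
SELECT c.customer_id, c.segment, t.total_spend
FROM hive_onprem.crm.customers AS c            -- on-prem Hive catalog
JOIN snowflake_cloud.sales.transactions AS t   -- cloud warehouse catalog
  ON c.customer_id = t.customer_id
WHERE t.txn_date >= DATE '2025-01-01'
"""

# Against a live cluster this would be submitted through the Trino Python
# client (trino.dbapi.connect(...).cursor().execute(FEDERATED_QUERY));
# the underlying data stays where it is, and only result rows move.
catalogs = {line.split(".")[0].split()[-1]
            for line in FEDERATED_QUERY.splitlines()
            if "." in line and ("FROM" in line or "JOIN" in line)}
```

The design point for security review is that access controls can be enforced at the query layer across both catalogs, rather than re-implemented in every system the data is copied into.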
The financial services sessions at Datanova are worth particular attention for developers working in regulated industries. The combination of HIPAA or GDPR compliance requirements, strict access controls, and AI systems that need real-time or near-real-time data access creates an architecture problem that most developer tutorials don’t address. ECI Research has found that more than 40% of cloud governance breakdowns stem not from malicious misuse but from ambiguous ownership and inaction on known recommendations. AI systems that operate autonomously against poorly governed data will surface those ownership ambiguities quickly, and in production.
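Because those breakdowns trace back to ambiguous ownership rather than malice, the cheapest mitigation is often a mechanical audit of the data-product catalog itself. The sketch below assumes a simple catalog structure of the author's invention; real metadata stores will differ.

```python
from datetime import date

# Sketch of a governance audit over a data-product catalog. The catalog
# entries and field names here are illustrative assumptions.

CATALOG = [
    {"name": "claims_history", "owner": "health-data-team",
     "last_reviewed": date(2025, 11, 1)},
    {"name": "member_profiles", "owner": None,
     "last_reviewed": date(2024, 3, 15)},
    {"name": "provider_network", "owner": "network-ops",
     "last_reviewed": None},
]

def governance_gaps(catalog, review_cutoff: date):
    """Flag data products with no accountable owner or an overdue review."""
    gaps = []
    for product in catalog:
        issues = []
        if not product["owner"]:
            issues.append("no owner")
        if product["last_reviewed"] is None or product["last_reviewed"] < review_cutoff:
            issues.append("review overdue")
        if issues:
            gaps.append((product["name"], issues))
    return gaps

gaps = governance_gaps(CATALOG, review_cutoff=date(2025, 6, 1))
```

An autonomous agent querying `member_profiles` above would be acting on data nobody owns; surfacing that before production is the whole point of the audit.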
What’s Next
From Conference Signal to Market Movement
The specific conference agenda Starburst has assembled suggests the company is targeting a particular inflection point in the enterprise AI market: the moment when organizations that have successfully run AI pilots are trying to figure out why those pilots won’t scale into production systems. That’s a real and growing segment of the buyer market, and the problem is well-defined enough that vendors with credible answers will find receptive audiences.
The Governance Gap Will Define 2026–2027 AI Deployment Outcomes
Based on the trajectory visible in the Datanova agenda and the broader market, the organizations that successfully operationalize AI at scale over the next 18 months will be the ones that treated data governance as an architectural requirement, not a compliance checkbox. ECI Research has found that enterprises that successfully operationalize FinOps achieve faster product delivery, improved cross-functional alignment, and more predictable financial outcomes without compromising innovation velocity. That finding maps directly to enterprise AI. The discipline of making infrastructure decisions strategically, with clear ownership and governance, produces measurably better outcomes than chasing capability without attending to the operational foundations.
The 2026–2027 window will likely produce a significant divergence between organizations that scaled AI reliably and those that accumulated expensive, fragmented AI infrastructure that underperforms expectations. Conferences like Datanova, whatever their marketing function, are increasingly where that playbook gets written.
