HPE Expands AI Factory Portfolio with Deepened NVIDIA Integration and Infrastructure Advancements

The News

Hewlett Packard Enterprise (HPE) has announced expanded co-engineered offerings with NVIDIA to advance its AI Factory portfolio. Key highlights include new capabilities in HPE Private Cloud AI, an SDK that connects HPE Alletra Storage MP X10000 to the NVIDIA AI Data Platform, enhanced HPE ProLiant Compute DL380a Gen12 systems with NVIDIA RTX PRO 6000 Blackwell GPUs, and expanded support in HPE OpsRamp Software for AI infrastructure optimization. These updates aim to accelerate enterprise and sovereign adoption of generative and agentic AI solutions at scale. Full details are available in HPE's press release.

Analysis

AI compute is evolving from experimentation to enterprise-scale deployment. With this announcement, HPE positions itself not only as a supplier of best-in-class infrastructure but also as an orchestrator of the AI lifecycle—from data pipeline to model training to production inference. Co-engineering with NVIDIA allows HPE to bring validated, secure, and efficient AI factory blueprints to enterprise, government, and research markets.

As generative AI becomes central to enterprise innovation, this full-stack partnership provides the architecture, performance, and manageability enterprises need to scale AI with confidence.

Full-Stack AI Integration for the Enterprise

With AI adoption expanding across industries, organizations need a cohesive stack to develop, deploy, and scale AI models and applications. HPE and NVIDIA are delivering a vertically integrated solution encompassing:

  • Compute (HPE ProLiant, Cray XD, DL384 Gen12)
  • Storage (Alletra MP X10000)
  • Cloud (HPE Private Cloud AI)
  • Optimization (OpsRamp AI Ops)

This positions HPE as a leading provider of AI-ready infrastructure with the flexibility to support hybrid cloud and air-gapped deployments.

HPE Private Cloud AI: Developer-Centric AI Factories

Now supporting feature branch model updates from NVIDIA AI Enterprise, HPE Private Cloud AI enables developers to test experimental AI models before moving them to production. Built-in support for NVIDIA NIM microservices, AI frameworks, and SDKs ensures agile experimentation while maintaining enterprise-grade governance. Combined with NVIDIA Enterprise AI Factory validated designs, HPE is offering a turnkey private AI cloud tailored for generative and agentic workloads.

Storage as a Strategic Enabler: HPE Alletra X10000 SDK

The new SDK for Alletra Storage MP X10000 bridges HPE’s unstructured data platform with the NVIDIA AI Data Platform. Benefits include:

  • Inline vector indexing and metadata enrichment
  • RDMA acceleration between GPU memory and storage
  • Composable scaling of storage and performance

These capabilities enable real-time ingestion, inference, and training workflows across edge-to-cloud pipelines, advancing HPE’s vision for intelligent data fabrics.
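To make "inline vector indexing and metadata enrichment" concrete, the sketch below builds a tiny in-memory vector index in Python. This is an illustrative stand-in for the kind of indexing a storage-layer SDK might perform at ingest time; the `VectorIndex` class, its methods, and the sample metadata fields are all hypothetical and are not part of the X10000 SDK's actual API.

```python
import numpy as np

class VectorIndex:
    """Minimal in-memory vector index with per-item metadata.
    Illustrative only -- a simplified stand-in for storage-layer
    inline indexing; not the X10000 SDK API."""

    def __init__(self, dim):
        self.dim = dim
        self.vectors = np.empty((0, dim))
        self.metadata = []

    def add(self, vector, meta):
        """Index one embedding and enrich it with metadata at ingest."""
        v = np.asarray(vector, dtype=float).reshape(1, self.dim)
        v = v / np.linalg.norm(v)  # unit-normalize for cosine similarity
        self.vectors = np.vstack([self.vectors, v])
        self.metadata.append(meta)

    def query(self, vector, k=3):
        """Return the k nearest items as (metadata, score) pairs."""
        q = np.asarray(vector, dtype=float)
        q = q / np.linalg.norm(q)
        scores = self.vectors @ q  # cosine similarity on unit vectors
        top = np.argsort(scores)[::-1][:k]
        return [(self.metadata[i], float(scores[i])) for i in top]

# Ingest: each object is indexed and tagged with metadata inline
idx = VectorIndex(dim=3)
idx.add([1.0, 0.0, 0.0], {"doc": "a.pdf", "source": "edge"})
idx.add([0.0, 1.0, 0.0], {"doc": "b.pdf", "source": "cloud"})
idx.add([0.9, 0.1, 0.0], {"doc": "c.pdf", "source": "edge"})

print(idx.query([1.0, 0.05, 0.0], k=2))
```

In a real pipeline the embeddings would come from a model and the index would live alongside the data path, so retrieval-augmented inference can query content as soon as it lands on storage rather than after a separate batch indexing pass.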

Leadership in AI Compute: ProLiant DL380a Gen12 + RTX PRO 6000

HPE’s DL380a Gen12 server, now available with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, leads in more than 50 scenarios of the MLPerf Inference: Datacenter v5.0 benchmark suite. This makes it ideal for:

  • Multimodal AI inference and physical AI
  • Model fine-tuning and generative design
  • High-throughput graphics and video applications

The system offers both air and direct liquid cooling (DLC) options, along with HPE Silicon Root of Trust and FIPS 140-3 Level 3 readiness, making it secure and resilient for high-performance workloads.

Intelligent AI Ops: HPE OpsRamp for NVIDIA Infrastructure

HPE OpsRamp now supports NVIDIA RTX PRO 6000 GPUs with features for:

  • Telemetry: temperature, power, utilization, memory
  • Automation: alerts, remediation, workload scheduling
  • AI-Powered Insights: resource forecasting, anomaly detection

Tightly integrated with NVIDIA Base Command, BlueField, InfiniBand, and Spectrum-X Ethernet, OpsRamp delivers real-time observability and predictive analytics across AI compute nodes.
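The anomaly-detection capability described above can be illustrated with a minimal sketch: a rolling z-score detector over a stream of GPU power readings. This is a toy model of the technique, not OpsRamp's implementation; the `TelemetryMonitor` class and the simulated readings are assumptions for illustration.

```python
from collections import deque
from statistics import mean, stdev

class TelemetryMonitor:
    """Rolling z-score anomaly detector over a GPU telemetry stream.
    A deliberately simple stand-in for the statistical anomaly
    detection an AIOps platform applies to temperature, power,
    utilization, and memory metrics."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        """Record one reading; return True if it is anomalous
        relative to the recent window."""
        anomalous = False
        if len(self.window) >= 5:  # need a few samples for stats
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

# Simulated GPU power draw (watts): steady load, then a spike
monitor = TelemetryMonitor(window=10, threshold=3.0)
readings = [300, 302, 299, 301, 300, 303, 298, 300, 301, 450]
flags = [monitor.observe(w) for w in readings]
print(flags)  # only the final spike is flagged
```

A production system would layer forecasting and automated remediation on top of detection, but the core idea is the same: learn a baseline from recent telemetry and alert on statistically significant deviations.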

Looking Ahead

At HPE Discover 2025, expect deeper orchestration of NVIDIA’s AI stack with HPE GreenLake, expanded SDK and RDMA support across storage tiers, and reference architectures for sovereign and regulated environments. HPE’s focus on air-gapped and secure deployment models reinforces its appeal to governments and critical industries.

Author

  • Paul Nashawaty

    Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release, and operations. He brings deep expertise in digital transformation initiatives spanning front-end and back-end systems, along with comprehensive knowledge of the underlying infrastructure ecosystem that supports modernization efforts. With over 25 years of experience, Paul has a proven track record of implementing effective go-to-market strategies, including identifying new market channels, growing and cultivating partner ecosystems, and executing strategic plans that deliver positive business outcomes for his clients.