The News
Hewlett Packard Enterprise (HPE) and KDDI Corporation announced plans to open the Osaka Sakai Data Center by early 2026. The facility will feature NVIDIA GB200 NVL72 systems, built by HPE and optimized for large-scale AI workloads, including generative AI model development and inferencing.
Read the original press release here.
Analysis
The Asia-Pacific application development ecosystem is seeing rapid growth in demand for high-performance AI infrastructure. Research from theCUBE Research finds that regional AI adoption is accelerating, driven by both government-led digital transformation efforts and private-sector investment in generative AI. In Japan in particular, developers face mounting pressure to build and deploy large language models (LLMs) and AI-driven applications. This trend parallels broader industry shifts in which developers contend with growing challenges around compute availability, energy efficiency, and workload scalability, especially as trillion-parameter models become the norm.
The Osaka Sakai Data Center aims to address the compute and performance bottlenecks limiting AI application development. By offering cloud-based access to NVIDIA GB200 NVL72 systems, with built-in high-performance networking and liquid cooling, the facility could give developers scalable infrastructure for both training and inferencing. For application developers in Japan and globally, this may mean faster time-to-deployment for LLMs, improved performance on latency-sensitive AI services, and greater access to AI-ready compute without massive capital investment. KDDI’s WAKONX platform further extends this reach by enabling on-demand consumption models.
Before initiatives like this, developers often relied on fragmented infrastructure or leased compute capacity from hyperscalers with limited regional presence. Many organizations struggled with high latency when accessing GPU clusters located outside Japan and faced unpredictable costs when scaling LLM training workloads. In addition, the power and cooling requirements of local deployments presented major operational hurdles for enterprises attempting to build AI capabilities in-house.
The Osaka Sakai Data Center presents developers with a more localized, energy-efficient alternative for AI training and inferencing workloads. Developers may see lower latency, improved sustainability metrics, and access to pre-configured NVIDIA software stacks such as NVIDIA AI Enterprise. This change could allow for more experimentation with larger model sizes and faster iteration cycles on AI projects. The inclusion of HPE’s liquid cooling technology also aims to address growing developer concerns about the carbon footprint and energy costs associated with AI infrastructure.
Looking Ahead
The application development market in Japan is poised for a new wave of AI-driven innovation, especially in sectors such as manufacturing, telecommunications, and financial services. With localized, high-performance infrastructure now coming online, developers could have more control over deployment, performance, and cost. This announcement signals growing momentum toward decentralized, regionally available AI compute clusters that serve both startups and enterprises. Moving forward, the collaboration between HPE and KDDI could serve as a model for partnerships in other regions facing similar AI infrastructure gaps.

