Broadcom to integrate Nvidia AI technologies into VMware Cloud Foundation

VMware Cloud Foundation 9.0 is now generally available as an AI‑native release that bundles Private AI Services. This launch aims at enterprises that need proven infrastructure and unified cloud operations.

The platform pairs enterprise software leadership with trusted hardware partners. It brings GPUs, advanced networking, and familiar workflows into a single cloud foundation that customers can adopt with minimal disruption.

Adoption momentum is clear: nine of the top 10 Fortune 500 firms have committed, and more than 100 million cores are licensed. That level of traction shows strong demand for solutions that balance innovation and reliability.

The article covers performance, workloads, and platform-level benefits in practical detail, including how this release simplifies operations, speeds deployment, and helps enterprises modernize infrastructure across private and hybrid cloud models.

VMware Explore 2025 announcement: VCF 9.0 becomes AI‑native for the modern private cloud

VCF 9.0 was announced at VMware Explore 2025 as an AI‑native platform for secure private cloud deployments. The release is now generally available and bundles new capabilities that aim to simplify operations for enterprises.

VMware Private AI Services bundled with VCF 9.0 for unified AI and non‑AI workloads

VMware Private AI Services are included in the base subscription. The suite adds GPU Monitoring, Model Store, Model Runtime, Agent Builder, a Vector Database, and Data Indexing/Retrieval. These services standardize governance and observability across mixed workloads.

Customer momentum and timing: Fortune 500 adoption and present availability context

Nine of the top 10 Fortune 500 have committed, and more than 100 million cores are licensed worldwide. Entitlement for the bundled services as part of a VCF 9.0 subscription is expected in Q1 FY26. That timing removes separate purchases and helps customers move models from development into production faster.

The unified approach reduces handoffs, speeds provisioning, and simplifies data workflows. For enterprises seeking a single cloud foundation for diverse workloads, VCF 9.0 offers a consistent service experience and clearer path to scale.

Broadcom to integrate Nvidia AI across GPUs, networking, and software

Enterprises gain access to the latest GPUs and high‑speed fabrics while keeping familiar operational controls. This update extends the private cloud foundation with NVIDIA accelerated computing and network plumbing that supports demanding multi‑node workloads.

Support for NVIDIA Blackwell accelerated computing

VCF will support NVIDIA Blackwell GPUs, including the B200 and the RTX PRO 6000 Server Edition. Deployments can scale up to eight GPUs per server, giving teams options for training, inference, and visualization without changing core workflows.

High‑speed fabric and Enhanced DirectPath I/O

Networking includes NVIDIA ConnectX‑7 NICs and BlueField‑3 400G DPUs using Enhanced DirectPath I/O. That fabric enables GPUDirect RDMA and GPUDirect Storage for faster multi‑node training and data transfer.

Preserving enterprise workflows for mixed workloads

Operational continuity remains a priority: vMotion, High Availability (HA), Distributed Resource Scheduler (DRS), and Live Patching continue to work for mixed workloads. The result is higher performance across compute and networking while keeping production controls intact.

What it means for enterprises: Private AI at scale with governance, performance, and lower TCO

Enterprises can now run model workloads on a secure private cloud that enforces policy and delivers predictable GPU performance.

Run, move, and govern models in the private cloud with GPU precision

This platform lets teams operate models with clear role‑based controls and centralized logging. IT keeps policy guardrails while developers get fast access to GPUs and optimized networking for higher performance.

Governance features include policy enforcement, auditing, and observability across mixed workloads. That makes it easier to move models across environments without sacrificing compliance.

From fine‑tuning to inference: consolidating applications and services on VMware Cloud Foundation

With VMware Private AI Services built into the base subscription, organizations can consolidate development, fine‑tuning, and inference on a single cloud foundation. This reduces vendor sprawl and simplifies lifecycle management.

Capabilities span model store, runtime, and monitoring, so teams standardize data handling, promotion, and endpoints under one operating model. The result is lower TCO and a scalable platform for mission‑critical workloads.

Technical capabilities fueling accelerated computing on VCF

Under the hood, the platform layers exclusive GPU access and high-speed fabrics to streamline multi-node training. This approach aligns compute, networking, and storage for consistent results.

DirectPath enablement for exclusive device mapping

DirectPath enablement gives a single VM exclusive access to a GPU. That simplifies project start-up and yields predictable performance for accelerated computing tasks.
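As an illustrative sketch, exclusive device mapping of this kind is expressed as PCI passthrough in a VM's .vmx configuration. The device address and device ID below are placeholders, not values from the article (0x10de is NVIDIA's PCI vendor ID):

```
# Hypothetical .vmx fragment: pass one GPU through to this VM exclusively.
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x10de"
pciPassthru0.deviceId = "0x2901"
pciPassthru0.id = "0000:3b:00.0"
```

Because the device is mapped to a single VM, there is no scheduling contention on the GPU, which is where the predictable performance comes from.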

Enhanced DirectPath I/O for multi‑host training

Enhanced DirectPath I/O with ConnectX‑7 and BlueField‑3 unlocks GPUDirect RDMA and GPUDirect Storage. The improved data paths lower latency and raise throughput for multi-node training runs.

Model Runtime, endpoint sharing, and multi‑tenant services

Model Runtime is now GA in VCF 9.0. Upcoming endpoint sharing will enable secure, multi‑tenant Models‑as‑a‑Service while isolating tenant data and scaling centrally.

Multi‑accelerator deployments and dense interconnect

Multi‑accelerator Model Runtime supports deployments across NVIDIA and AMD GPUs without refactoring. NVLink, NVSwitch, and the HGX platform with Blackwell parts add dense interconnect for massive LLM work.

Combined, these capabilities reduce complexity and standardize operations on VMware Private AI Foundation with NVIDIA. The cohesive infrastructure balances compute, networking, and data so teams can focus on models and applications.

AI services and tooling natively integrated into the platform

Built into the release, a set of native services streamlines model lifecycle operations and runtime management on the private cloud. This gives a strong, unified control plane for developers and operators.

GPU Monitoring, Model Store, Agent Builder, Vector Database, and Data Indexing/Retrieval

GPU Monitoring and Model Store centralize visibility and artifact management. Model Runtime standardizes deployment and scaling so teams run models with predictable results.

Agent Builder, Vector Database, and Data Indexing/Retrieval speed application builds. They provide secure, policy-aware access to data and reduce custom plumbing for services.
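The article does not document the Vector Database API, but the retrieval pattern these services implement is standard: embed documents, index the vectors, and return the nearest matches for a query. A minimal self‑contained sketch, using toy hand‑written embeddings in place of a real embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "index": document id -> embedding vector.
# In a real deployment these vectors come from an embedding model.
index = {
    "gpu-guide":     [0.9, 0.1, 0.0],
    "network-notes": [0.1, 0.8, 0.2],
    "hr-policy":     [0.0, 0.1, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2, 0.1]))  # nearest documents first
```

A production vector database adds persistence, approximate-nearest-neighbor indexing, and the policy-aware access controls the article describes, but the query contract is the same.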

Intelligent Assist for VCF: AI‑driven troubleshooting in connected and air‑gapped deployments

Intelligent Assist is in tech preview and helps diagnose issues using Broadcom’s knowledge base. It works with on-prem or cloud-hosted language models and surfaces solutions fast for mixed environments.

Model Context Protocol (MCP) roadmap: standardized, secure integration with enterprise tools

MCP will offer a common integration fabric for Oracle, Microsoft SQL Server, ServiceNow, GitHub, Slack, and PostgreSQL. End‑to‑end authentication and RBAC ensure auditable flows across the private cloud stack.
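MCP builds on JSON‑RPC 2.0, so while the integrations above are roadmap items, the wire shape of a tool invocation is already defined by the protocol. A sketch of that envelope; the tool name and arguments here are hypothetical placeholders, not documented connectors:

```python
import json

# Minimal sketch of an MCP-style tool call (JSON-RPC 2.0 envelope).
# "query_database" and its arguments are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"table": "incidents", "limit": 10},
    },
}

wire = json.dumps(request)   # what actually crosses the connection
decoded = json.loads(wire)
print(decoded["method"])     # -> tools/call
```

Standardizing on one envelope is what lets a single authentication and RBAC layer audit calls to otherwise unrelated systems.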

These capabilities help enterprises accelerate time to production while keeping governance, observability, and consistent interfaces across the platform, with embedded services that lower overhead and speed operations.

Developer velocity and ecosystem: building modern apps on a secure private cloud

Developers can move faster when platform services and GitOps are built directly into the private cloud. VCF embeds vSphere Kubernetes Service, Argo CD, and Istio so teams get end-to-end workflows without extra glue.

vSAN S3 Object Store: native, multi‑tenant object storage for unstructured data

The native vSAN S3 Object Store delivers S3‑compatible storage without third‑party licenses. It consolidates block, file, and object under unified policies and cuts operational overhead for platform teams.
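Because the store speaks the S3 API, standard S3 tooling should work against it. An illustrative usage sketch with the AWS CLI; the endpoint and bucket names are placeholders, not documented values:

```shell
# Point any S3-compatible client at the object store endpoint.
aws s3api create-bucket --bucket app-artifacts \
    --endpoint-url https://objectstore.example.internal
aws s3 cp model.bin s3://app-artifacts/models/model.bin \
    --endpoint-url https://objectstore.example.internal
```

S3 compatibility is the design point here: existing applications and pipelines keep their client libraries and only change the endpoint.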

GitOps with Argo CD and Istio Service Mesh for secure, auditable app delivery

Git acts as the source of truth while Argo CD enforces consistent, auditable deployments across clusters. Istio provides zero‑trust networking, traffic management, and rich observability for modern applications.
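In a typical Argo CD setup, each app is declared as an Application resource pointing at a Git repo, and the controller keeps the cluster converged on what Git says. A minimal example; the repo URL, path, and namespaces are placeholders:

```yaml
# Illustrative Argo CD Application; names and URLs are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-frontend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.internal/platform/web-frontend.git
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: web-frontend
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band changes
```

Every change lands as a Git commit, which is what makes the delivery pipeline auditable end to end.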

Enhanced multi‑cluster management centralizes policy and visibility so customers scale secure application operations across hybrid environments.

Broader ecosystem: AMD ROCm Enterprise AI, Instinct MI350, and Canonical partnership

The release expands hardware and software choice with AMD ROCm Enterprise support and Instinct MI350 options. A Canonical partnership aligns container tooling and secure runtime support for enterprise software delivery.

Together, these platform integrations reduce toolchain stitching, increase developer productivity, and standardize how applications land on VMware Cloud Foundation.

Conclusion

Organizations can adopt Blackwell‑class parts and high‑speed fabrics while keeping standard enterprise controls intact.

The platform now supports NVIDIA Blackwell hardware, including the B200 and RTX PRO 6000 Server Edition, plus NVIDIA ConnectX‑7 and BlueField‑3 with DirectPath and Enhanced DirectPath I/O. That preserves vMotion, HA, DRS, and Live Patching while adding GPUDirect RDMA and GPUDirect Storage for faster data movement in the data center.

Embedded private services and Model Runtime simplify model operations and planned endpoint sharing. The roadmap — including MCP, NVLink/NVSwitch, and HGX support — gives enterprises clear capabilities for larger training and inference workloads.

Customers can apply VMware best practices, plan migrations, and optimize infrastructure for predictable performance and lower operational friction in cloud and on‑prem environments.
