In a significant shift from its traditionally closed ecosystem approach, Nvidia has announced the NVLink Fusion program at Computex 2025 in Taipei. This initiative allows partners to integrate Nvidia's proprietary high-speed interconnect technology with non-Nvidia CPUs and custom AI accelerators, potentially reshaping the landscape of enterprise AI computing architectures.
*Figure: CEO Jen-Hsun Huang introduces the NVLink Fusion program at Computex 2025, showcasing the spine technology*
Breaking Down Barriers in AI Infrastructure
For years, Nvidia's NVLink technology has been a cornerstone of its dominance in AI workloads, providing superior bandwidth and latency compared to standard PCIe interfaces. Until now, this proprietary interconnect has been largely restricted to Nvidia's own silicon, with limited exceptions like early collaborations with IBM. The new NVLink Fusion program marks a strategic pivot, opening this technology to a broader ecosystem of partners including Qualcomm and Fujitsu for CPUs, as well as Marvell and MediaTek for custom AI accelerators.
Technical Capabilities That Redefine Scale
NVLink Fusion's technical specifications are striking. According to Nvidia CEO Jen-Hsun Huang, a single NVLink spine can transfer data at rates of up to 130 TB/s, which he claimed exceeds the entire Internet's traffic capacity. That comparison is debatable: 130 terabytes per second works out to roughly 1,040 terabits per second, while some sources put the Internet's peak transfer rate above 1,200 Tb/s. Even so, the technology undeniably offers transformative bandwidth for AI computing clusters, allowing up to 72 GPUs to be connected through a single spine architecture.
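The byte-versus-bit distinction behind that caveat is easy to check. A quick sketch, using only the figures quoted above (130 TB/s for the spine, a 1,200 Tb/s estimate for Internet peak traffic):

```python
# Sanity-check the "exceeds the entire Internet" claim by converting the
# spine's terabytes/s into terabits/s and comparing with the cited estimate.
SPINE_TBYTES_PER_S = 130          # NVLink spine bandwidth, TB/s (terabytes)
INTERNET_PEAK_TBITS_PER_S = 1200  # estimated Internet peak traffic, Tb/s (terabits)

spine_tbits_per_s = SPINE_TBYTES_PER_S * 8  # 1 byte = 8 bits
print(f"NVLink spine: {spine_tbits_per_s} Tb/s")                      # 1040 Tb/s
print(spine_tbits_per_s > INTERNET_PEAK_TBITS_PER_S)                  # False
```

On these numbers the spine comes in at about 1,040 Tb/s, just under the Internet-peak estimate, which is why the claim hinges on which traffic figure one accepts.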
Strategic Partnerships Expanding the Ecosystem
The NVLink Fusion ecosystem has already attracted significant partners. Qualcomm, which recently confirmed plans to enter the server CPU market, can now ensure compatibility with Nvidia's dominant AI infrastructure. Fujitsu will integrate the technology with its upcoming 144-core Monaka CPUs, which feature innovative 3D-stacked cores over memory. Silicon partners including Marvell, MediaTek, and Alchip will develop custom AI accelerators compatible with the system, while Astera Labs joins to provide specialized interconnectivity components. Design tool providers Cadence and Synopsys complete the ecosystem with necessary software and IP resources.
*Figure: A diagram showing the integration of third-party custom CPUs and accelerators into Nvidia's NVLink Fusion architecture*
Implementation Through Chiplet Architecture
The technical implementation of NVLink Fusion involves integrating the functionality into chiplets positioned adjacent to the compute package. This approach allows partners to maintain their core CPU or accelerator designs while adding NVLink compatibility. Nvidia is also releasing new Mission Control software to unify operations and orchestration, streamlining system-level validation and workload management to accelerate time-to-market for these complex integrated systems.
Industry Competition and Standards
Notably absent from the NVLink Fusion partnership roster are Nvidia's primary competitors: AMD, Intel, and Broadcom. These companies have instead joined the Ultra Accelerator Link (UALink) consortium, which aims to create an open industry-standard interconnect as an alternative to Nvidia's proprietary technology. This parallel development highlights the ongoing tension between proprietary ecosystems and open standards in the rapidly evolving AI hardware landscape.
Implications for Enterprise AI Deployment
The opening of NVLink technology represents a strategic move by Nvidia to extend its influence while addressing customer demands for more flexible AI infrastructure options. By allowing integration with non-Nvidia CPUs, the company maintains its GPU dominance while accommodating organizations that prefer alternative CPU architectures for specific workloads or sovereignty requirements. This approach could expand Nvidia's addressable market while still leveraging its core strengths in AI acceleration.
Future Outlook
While NVLink Fusion primarily targets data center and enterprise applications rather than consumer computing, its development signals important shifts in how AI computing infrastructure may evolve. The technology enables new possibilities for heterogeneous computing at scale, potentially influencing future approaches to system architecture across the industry. As AI workloads continue to grow in complexity and scale, the ability to efficiently connect diverse computing resources becomes increasingly critical for performance and energy efficiency.