NVIDIA AI Landscape: NVLink, InfiniBand, and Ethernet Technologies

NADDOD | Adam, Connectivity Solutions Consultant | Apr 22, 2024

Nvidia's presence in the field of artificial intelligence is comprehensive, with systems and networks, hardware, and software forming the three pillars of a robust technological moat. In 2020, Nvidia completed its acquisition of Mellanox, gaining capabilities in InfiniBand, Ethernet, SmartNICs, DPUs, and LinkX interconnect products.

NVIDIA Network

To address GPU-to-GPU interconnectivity, Nvidia has developed the NVLink interconnect and the NVLink network to scale up GPU computing. This creates a differentiated competitive edge over InfiniBand-based networks and Ethernet-based RoCE networks. Since its announcement in 2014, NVLink has evolved through four shipping generations: 20G NVLink 1.0 with Pascal in 2016, 25G NVLink 2.0 with Volta in 2017, 50G NVLink 3.0 with Ampere in 2020, and 100G NVLink 4.0 with Hopper in 2022. In 2024, NVLink advanced to its fifth generation, 200G NVLink 5.0, introduced alongside the Blackwell platform.

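As a back-of-the-envelope check on these per-lane rates, the sketch below tabulates how lane rate, lanes per link, and links per GPU combine into the headline per-GPU bandwidth figures. The lane and link counts are assumptions drawn from NVIDIA's public P100/V100/A100/H100 specifications rather than from this article:

```cpp
// Back-of-the-envelope check of NVLink per-GPU bandwidth by generation.
// Lane rates follow the article; lanes-per-link and links-per-GPU are
// assumptions taken from NVIDIA's public P100/V100/A100/H100 spec sheets.
#include <cstdio>

struct NvlinkGen {
    const char* name;
    double lane_gbps;    // per-lane signaling rate, Gbit/s, one direction
    int lanes_per_link;  // differential pairs per link, one direction
    int links_per_gpu;   // NVLink links on the flagship GPU
};

int main() {
    const NvlinkGen gens[] = {
        {"NVLink 1.0 (P100)",  20.0, 8,  4},
        {"NVLink 2.0 (V100)",  25.0, 8,  6},
        {"NVLink 3.0 (A100)",  50.0, 4, 12},
        {"NVLink 4.0 (H100)", 100.0, 2, 18},
    };
    for (const NvlinkGen& g : gens) {
        double link_GBps  = g.lane_gbps * g.lanes_per_link / 8.0; // one direction
        double total_GBps = link_GBps * g.links_per_gpu * 2.0;    // bidirectional
        printf("%-20s %4.0f GB/s bidirectional per GPU\n", g.name, total_GBps);
    }
    return 0;
}
```

Running it reproduces the familiar 160/300/600/900 GB/s figures: each generation raises per-GPU bandwidth by increasing the lane rate and adding links while trimming lanes per link.
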
In terms of application scenarios, NVLink 1.0 through 3.0 primarily addressed the same board-to-board and chassis-level interconnect needs as PCIe, using SerDes acceleration to gain a significant bandwidth advantage in its competition with PCIe.

It is worth noting that, except for NVLink 1.0, which used a special 20G rate, NVLink 2.0 through 4.0 adopted signaling rates identical or close to Ethernet's. This allows NVLink to reuse the mature Ethernet interconnect ecosystem and lays the groundwork for connecting boxes or chassis into super nodes. NVSwitch 1.0, 2.0, and 3.0, paired with NVLink 2.0, 3.0, and 4.0 respectively, form the basis of the NVLink bus domain network, and the NVLink 4.0 plus NVSwitch 3.0 combination establishes the foundation of the super node network. The outward sign of this shift is that NVSwitch has moved off the compute board to become a standalone networking device, while NVLink has been upgraded from a board-level interconnect into a device-to-device interconnect.

Nvidia has also laid out two network families for scale-out scenarios: traditional InfiniBand and Ethernet networks, and the NVLink bus domain network. Among the traditional networks, Ethernet targets the AIGC Cloud, serving multi-tenant AI training and inference cloud services, while InfiniBand targets the AI Factory, meeting the requirements of large-scale model training and inference.

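Whichever fabric carries the scale-out traffic, applications typically reach it through a collective-communication library such as NCCL, which selects NVLink, InfiniBand, or RoCE transports beneath one API. The single-process sketch below is a minimal illustration of that portability; the device-count cap and buffer size are arbitrary assumptions:

```cpp
// Minimal single-process NCCL all-reduce across all local GPUs.
// NCCL chooses the transport (NVLink, InfiniBand, or RoCE/Ethernet)
// available between ranks; the application code does not change.
// Assumes at most 8 local GPUs; error checking omitted for brevity.
#include <nccl.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > 8) ndev = 8;

    ncclComm_t comms[8];
    float* bufs[8];
    cudaStream_t streams[8];
    const size_t count = 1 << 20;  // 1M floats per rank (arbitrary size)

    ncclCommInitAll(comms, ndev, nullptr);  // one communicator per local GPU
    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaMalloc(&bufs[i], count * sizeof(float));
        cudaMemset(bufs[i], 0, count * sizeof(float));
        cudaStreamCreate(&streams[i]);
    }

    // In-place sum: every GPU ends up with the element-wise global sum.
    ncclGroupStart();
    for (int i = 0; i < ndev; ++i)
        ncclAllReduce(bufs[i], bufs[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    printf("all-reduce complete on %d GPUs\n", ndev);
    return 0;
}
```

The same ncclAllReduce call runs unchanged whether ranks are linked by NVLink within a node or by InfiniBand/Ethernet across nodes, which is why the choice of fabric is an infrastructure decision rather than an application-code one.
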
In terms of switch silicon, Nvidia offers Spectrum-X switch chips based on enhanced open Ethernet and Quantum switch chips based on InfiniBand. Meanwhile, the Ultra Ethernet Consortium (UEC) is attempting to define an open, interoperable, high-performance full-stack architecture on top of Ethernet to meet growing AI and HPC network demands and to counterbalance Nvidia's network technology. The UEC's goal is to build an open protocol ecosystem comparable to InfiniBand's; technically, this can be understood as enhancing Ethernet to reach InfiniBand-class performance, or realizing an InfiniBand-like Ethernet. In a sense, the UEC is retracing the path InfiniBand has already taken.

GPU Scale Up

The main feature of the NVLink bus domain network is memory-semantic communication within the super node, with memory shared across the NVLink bus domain. It is essentially a Load-Store bus network after scale-up expansion. The evolution of the NVLink interface shows that versions 1.0 through 3.0 were clearly benchmarked against PCIe, while version 4.0 actually targets the application scenarios of InfiniBand and Ethernet; its primary goal, however, remains GPU scale-up expansion.

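To make "memory-semantic communication" concrete: once peer access is enabled, a CUDA kernel on one GPU can dereference a pointer into another GPU's memory with plain loads and stores, and the NVLink fabric carries the resulting memory transactions. The sketch below assumes two peer-capable GPUs (IDs 0 and 1) and omits error checking:

```cpp
// Memory-semantic (load/store) communication between GPUs over NVLink,
// using CUDA peer-to-peer access. Device IDs 0 and 1 are assumptions;
// error checking omitted for brevity.
#include <cuda_runtime.h>
#include <cstdio>

// Runs on GPU 0 but writes directly into GPU 1's memory: an ordinary
// store instruction, not a send/receive pair -- the fabric (NVLink,
// if present) carries the memory transaction.
__global__ void store_remote(int* peer_buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) peer_buf[i] = i;
}

int main() {
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);  // can GPU 0 reach GPU 1?
    if (!can_access) { printf("no P2P path between GPU 0 and GPU 1\n"); return 1; }

    const int n = 1 << 20;
    int* buf_on_gpu1 = nullptr;

    cudaSetDevice(1);
    cudaMalloc(&buf_on_gpu1, n * sizeof(int));  // lives in GPU 1's memory

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // map GPU 1's memory for GPU 0
    store_remote<<<(n + 255) / 256, 256>>>(buf_on_gpu1, n);
    cudaDeviceSynchronize();

    printf("GPU 0 wrote %d ints directly into GPU 1's memory\n", n);
    return 0;
}
```

Without a peer-capable fabric, the same data movement would fall back to explicit copies or message passing; this Load-Store model is the defining trait of the NVLink bus domain.
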
GPU Scale Up and Scale Out

From the perspective of its original requirements, the NVLink network must, as it evolves, take on some basic capabilities of traditional networks, such as addressing, routing, load balancing, scheduling, congestion control, management, and measurement. At the same time, it must retain the basic characteristics of a bus network: low latency, high reliability, unified memory addressing and sharing, and memory-semantic communication, features that current InfiniBand and Ethernet networks either lack or support only partially. Because the functional positioning and design philosophy of the NVLink bus domain network differ fundamentally from those of traditional InfiniBand and Ethernet networks, it is hard to say whether the NVLink network will ultimately converge with traditional InfiniBand or with enhanced Ethernet.

In the competitive landscape of AI clusters, Nvidia has built a layout covering computing (chips and superchips) and networking (super nodes and clusters). On the compute side, its portfolio spans CPUs, GPUs, and CPU-CPU/CPU-GPU superchips. At the super node network level, Nvidia offers two customized options, NVLink and InfiniBand, while in the cluster network domain it fields Ethernet switch chips and DPU chips.

AMD follows closely behind, focusing more on CPU and GPU computing chips and adopting Chiplet designs based on advanced packaging. Unlike Nvidia, AMD does not currently have a superchip concept; instead, it uses advanced packaging to place CPU and GPU dies in a single package. AMD uses its proprietary Infinity Fabric Link, a memory-coherent interface, for GPU-GPU and CPU-CPU interconnection, while CPU-GPU interconnection still relies on traditional PCIe. Additionally, AMD plans to introduce XSwitch switch chips, and its next-generation MI450 accelerator is expected to adopt a new interconnect fabric, presumably to compete with Nvidia's NVSwitch.

BRCM focuses exclusively on the network domain: its Jericho3-AI plus Ramon DDC solution targets super node networks comparable to InfiniBand, while its Tomahawk and Trident series Ethernet switch chips address the cluster network domain. Recently, BRCM launched the software-programmable Trident 5-X12 switch, which integrates the NetGNT neural-network engine to recognize traffic patterns in real time and invoke congestion control before network performance degrades, improving overall efficiency and performance. Cerebras and Tesla Dojo take a different approach, relying on wafer-scale advanced packaging for a deeply customized hardware route.

NADDOD is a leading provider of comprehensive optical network solutions, dedicated to building a digitally intelligent world of interconnected everything through innovative computing and networking products. Continuously delivering innovative, efficient, and reliable products, solutions, and services, NADDOD offers an integrated switch + AOC/DAC/optical module + smart NIC + DPU + GPU solution for data centers, high-performance computing, edge computing, artificial intelligence, and other applications, significantly enhancing customers' business acceleration at low cost and with outstanding performance.

NADDOD InfiniBand NDR Product

Customer-centric, NADDOD consistently creates outstanding value for customers across industries. With a professional technical team and extensive experience implementing and servicing diverse application scenarios, NADDOD's products and solutions have earned customers' trust for their high quality and exceptional performance. Widely applied in high-performance computing, data centers, education and research, biomedicine, finance, energy, autonomous driving, internet, manufacturing, telecommunications, and other critical sectors, NADDOD's lossless network solutions based on InfiniBand and RoCE build a seamless network environment and high-performance computing capability for users.

Tailoring solutions to different application scenarios and user requirements, NADDOD selects the optimal option for each case, providing users with high-bandwidth, low-latency, high-performance data transmission. By effectively addressing network bottlenecks, NADDOD improves network performance and user experience. Explore more about NADDOD's optical interconnect solutions today!