
NVIDIA Quantum-2 InfiniBand NDR 400Gb/s

NADDOD Abel InfiniBand Expert Mar 17, 2023


The NVIDIA® Quantum-2 InfiniBand platform is designed to deliver the most advanced networking performance available to scientific researchers and AI developers working on complex challenges. The platform includes cutting-edge NVIDIA In-Network Computing acceleration engines that provide ultra-low latency and double the data throughput of the previous generation, making it ideal for supercomputers, hyperscale cloud data centers, and artificial intelligence applications that demand extreme scalability and a rich feature set.

The NVIDIA Quantum-2 platform takes In-Network Computing acceleration technology to the next level by offering preconfigured and programmable engines. These engines include NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, Message Passing Interface (MPI) tag matching, MPI_Alltoall, and programmable cores. Additionally, the platform offers full transport offload with RDMA, GPUDirect RDMA, and GPUDirect Storage. By providing these features, the NVIDIA Quantum-2 platform delivers the best cost per node and return on investment (ROI).
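For a concrete sense of what these engines accelerate, the minimal MPI sketch below (in C) exercises the two collectives named above: MPI_Allreduce, the aggregation operation that SHARP can execute inside the switch fabric, and MPI_Alltoall, which the dedicated hardware engine can offload. This is ordinary, portable MPI code rather than NVIDIA-specific API calls; whether the offloads actually engage depends on the MPI stack and fabric configuration of a given cluster.

```c
/* Minimal MPI sketch of the two collectives that Quantum-2's
 * In-Network Computing engines target. Compile with an
 * InfiniBand-aware MPI, e.g.: mpicc -o collectives collectives.c */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI_Allreduce: the aggregation/reduction pattern that SHARP
     * can execute inside the switches instead of on the hosts. */
    double local = (double)rank, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* MPI_Alltoall: the all-to-all exchange handled by the
     * dedicated hardware engine. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * size + i;
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Allreduce sum of ranks = %.0f\n", global);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```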

World-Leading Performance

The NVIDIA Quantum-2 InfiniBand platform is setting new records for high-performance networking, achieving 400Gb/s per port – which is double the bandwidth of the previous generation – and boasting 3 times higher switch silicon port density. It can connect over a million nodes at 400Gb/s in a 3-hop Dragonfly+ topology.

The platform includes the third generation of NVIDIA SHARP technology which offers virtually limitless scalability for large data aggregation through the network and 32 times higher AI acceleration power compared to the previous generation. Additionally, this technology enables multiple tenants or parallel applications to share the infrastructure without any performance degradation. The platform also includes hardware engines for MPI_Alltoall acceleration and MPI tag matching, as well as advanced congestion control, adaptive routing, and self-healing networking features. These enhancements make the NVIDIA Quantum-2 InfiniBand platform ideal for use in high-performance computing (HPC) and AI clusters, enabling them to achieve even greater levels of performance.

NVIDIA Quantum-2 Portfolio

The NVIDIA Quantum-2 platform is designed to provide AI developers and scientific researchers with the fastest networking performance and advanced feature sets, enabling them to tackle the most complex challenges. The platform offers software-defined networking (SDN), In-Network Computing, performance isolation, advanced acceleration engines, remote direct memory access (RDMA), and the highest speeds and feeds available – reaching up to 400Gb/s. This makes NVIDIA Quantum-2 a powerful tool for the world’s leading supercomputing data centers.

400G InfiniBand NDR Switches

NVIDIA Quantum-2 switches provide 64 ports of 400Gb/s each, which can be split to provide up to 128 ports of 200Gb/s each. The switches come in a compact 1U fixed configuration with both internally and externally managed options. They offer an aggregated bidirectional throughput of 51.2 terabits per second (Tb/s) and can handle more than 66.5 billion packets per second. The switches can be used in a variety of topologies, including Fat Tree, Dragonfly+, and multi-dimensional Torus.
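As a quick sanity check, the headline throughput follows directly from the port count and per-port rate:

```latex
64 \times 400\ \mathrm{Gb/s} = 25.6\ \mathrm{Tb/s}\ \text{(one direction)}, \qquad
2 \times 25.6\ \mathrm{Tb/s} = 51.2\ \mathrm{Tb/s}\ \text{(bidirectional)}.
```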

400G InfiniBand NDR Network Adapters (NIC)

NVIDIA ConnectX®-7 InfiniBand adapters provide 400Gb/s data throughput and support 32 lanes of PCIe Gen5 or Gen4 for host connectivity. They come in a variety of form factors, including Open Compute Project (OCP) 3.0 and CEM PCIe x16 cards, with either OSFP or QSFP112 connectors for physical connectivity. The ConnectX-7 adapters also include advanced In-Network Computing features such as MPI_Alltoall and MPI tag matching hardware engines, as well as quality of service (QoS), congestion control, and more. Some adapters also have the option to connect an additional 16-lane auxiliary card using NVIDIA Socket Direct™ technology to achieve 32 lanes of PCIe Gen4.
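As an illustration of how a host might confirm that a ConnectX-7 port has come up at the expected rate, here is a hedged libibverbs (rdma-core) sketch that lists the local HCAs and prints the state, width, and encoded speed of each device's first port. The speed encoding noted in the comment reflects the rdma-core documentation; verify the exact values against your installed headers.

```c
/* A minimal libibverbs sketch (rdma-core) that lists local HCAs such as
 * ConnectX-7 and prints each device's first port state, width and speed.
 * Build with: gcc -o ibports ibports.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr attr;
        if (ibv_query_port(ctx, 1, &attr) == 0) {
            /* active_speed is an encoded value; rdma-core documents
             * 64 as HDR and 128 as NDR. */
            printf("%s: state=%d width=%u speed=%u\n",
                   ibv_get_device_name(devs[i]),
                   (int)attr.state,
                   (unsigned)attr.active_width,
                   (unsigned)attr.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```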

400G InfiniBand NDR Transceivers and Cables

The 400Gb/s InfiniBand (NDR) connectivity provides flexibility in building a preferred topology using connectorized transceivers with passive fiber cables, active copper cables (ACCs), and direct attached copper cables (DACs). MPO fiber cables can be used to create complex infrastructures with easier installation, maintenance, and upgrades.

  • Connector options for 400Gb/s InfiniBand include the 8-channel and 4-channel octal small form-factor pluggable (OSFP) and the 4-channel quad small form-factor pluggable 112G (QSFP112).
  • Twin-port OSFP transceivers have two separate 400Gb/s NDR transceiver engines inside with two 4-channel, MPO/APC optical connectors.
  • Each 4-channel NDR port can be split into two 2-channel, 200Gb/s NDR200 ports using fiber splitter cables.
  • Switch-side, twin-port copper DACs and ACCs are also available in straight, 1:2, or 1:4 splitter configurations with single-port OSFP or QSFP112 endpoints to adapters and DPUs.
  • The air-cooled Quantum-2 switches only accept twin-port, finned-top OSFP devices. The four InfiniBand OSFP cages of the DGX-H100 only accept twin-port, flat-top, 800Gb/s transceivers and ACCs. HCAs and DPUs use flat-top OSFP or QSFP112 devices.

The large selection of device combinations creates multiple interconnect choices.

(1) Quantum-2 switches can be connected to HCAs and DPUs using switch-side twin-port, 800Gb/s DACs, ACCs, and transceivers:

  • Two 400Gb/s NDR links
  • Four 200Gb/s NDR200 links
  • Single-port OSFP or QSFP112 endpoints on the adapter or DPU side

(2) Connectivity to previous InfiniBand generations is also possible using Quantum-2 switch-side twin-port OSFP (2x HDR) DAC and AOC splitter cables with QSFP56 connector endpoints:

  • Two 200Gb/s HDR, two 100Gb/s EDR, or four 100Gb/s HDR100 links to Quantum switches
  • ConnectX-6 HCAs
  • BlueField-2 DPUs

(3) Flat-top, twin-port, 800Gb/s transceivers and ACC cables can connect DGX-H100 Hopper GPU systems to Quantum-2 switches, creating 400Gb/s NDR GPU compute fabrics with minimal cabling.

UFM Cyber-AI

The NVIDIA Unified Fabric Manager (UFM) Cyber-AI platform provides advanced analytics and real-time network telemetry backed by AI-powered intelligence. It helps IT managers detect operational irregularities and predict network outages, leading to improved security, higher data center uptime, and reduced operating expenses.
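As a rough illustration of how such telemetry might be consumed programmatically, the following libcurl sketch issues a single GET request against a UFM REST endpoint. The host name, route, and credentials shown are placeholders assumed for illustration; consult the UFM REST API documentation for the exact routes and authentication used by your deployment.

```c
/* A hedged sketch of pulling fabric inventory from UFM's REST interface
 * with libcurl. The endpoint path and credentials below are assumptions
 * for illustration only; check the UFM REST API documentation for the
 * routes available in your deployment.
 * Build with: gcc -o ufm_query ufm_query.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Assumed endpoint: adjust host and route to match your UFM server. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://ufm.example.local/ufmRest/resources/systems");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:password"); /* placeholder */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);        /* lab use only */

    CURLcode rc = curl_easy_perform(curl);  /* response body goes to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```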

NVIDIA Quantum-2 InfiniBand NDR Products Information

NVIDIA Quantum-2 InfiniBand NDR Switches

Part Number (PN) Description
MQM9790-NS2F NVIDIA Quantum-2-based 400Gb/s InfiniBand switch, 64 400Gb/s ports, 32 OSFP ports, non-blocking switching capacity of 51.2Tb/s, two power supplies (AC), standard depth, unmanaged, power-to-connector (P2C) airflow, rail kit
MQM9790-NS2R NVIDIA Quantum-2-based 400Gb/s InfiniBand switch, 64 400Gb/s ports, 32 OSFP ports, non-blocking switching capacity of 51.2Tb/s, two power supplies (AC), standard depth, unmanaged, connector-to-power (C2P) airflow, rail kit
MQM9700-NS2F NVIDIA Quantum-2-based 400Gb/s InfiniBand switch, 64 400Gb/s ports, 32 OSFP ports, non-blocking switching capacity of 51.2Tb/s, two power supplies (AC), standard depth, managed, P2C airflow, rail kit
MQM9700-NS2R NVIDIA Quantum-2-based 400Gb/s InfiniBand switch, 64 400Gb/s ports, 32 OSFP ports, non-blocking switching capacity of 51.2Tb/s, two power supplies (AC), standard depth, managed, C2P airflow, rail kit

ConnectX-7 Network Adapters (InfiniBand NDR NIC)

PCIe Standup Adapters

Part Number (PN) Description
MCX755106AC-HEAT NVIDIA ConnectX-7 HHHL adapter card, 200GbE (default mode) / NDR200 IB, dual-port QSFP112, PCIe 5.0 x16 with x16 PCIe extension option, crypto enabled, Secure Boot enabled, tall bracket
MCX755106AS-HEAT NVIDIA ConnectX-7 HHHL adapter card, 200GbE (default mode) / NDR200 IB, dual-port QSFP112, PCIe 5.0 x16 with x16 PCIe extension option, crypto disabled, Secure Boot enabled, tall bracket
MCX75310AAC-NEAT NVIDIA ConnectX-7 HHHL adapter card, 400GbE / NDR IB (default mode), single-port OSFP, PCIe 5.0 x16, crypto enabled, Secure Boot enabled, tall bracket
MCX75310AAS-NEAT NVIDIA ConnectX-7 HHHL adapter card, 400GbE / NDR IB (default mode), single-port OSFP, PCIe 5.0 x16, crypto disabled, Secure Boot enabled, tall bracket
MCX75310AAS-HEAT NVIDIA ConnectX-7 HHHL adapter card, 200GbE / NDR200 IB (default mode), single-port OSFP, PCIe 5.0 x16, crypto disabled, Secure Boot enabled, tall bracket
MCX75510AAS-NEAT NVIDIA ConnectX-7 adapter card, 400Gb/s NDR IB, single-port OSFP, PCIe 5.0 with x16 extension option (Socket Direct ready), crypto disabled, Secure boot enabled, tall bracket
MCX75510AAS-HEAT NVIDIA ConnectX-7 adapter card, 200Gb/s NDR200 IB, single-port OSFP, PCIe 5.0 with x16 extension option (Socket Direct ready), crypto disabled, Secure boot enabled, tall bracket

Auxiliary Cards

Part Number (PN) Description
MTMK9100-T15 NVIDIA auxiliary kit for additional PCIe Gen4 x16 connection, PCIe Gen4 x16 passive auxiliary card, two 150 millimeter (mm) IPEX cables
MTMK9100-T25 NVIDIA auxiliary kit for additional PCIe Gen4 x16 connection, PCIe Gen4 x16 passive auxiliary card, two 250mm IPEX cables
MTMK9100-T35 NVIDIA auxiliary kit for additional PCIe Gen4 x16 connection, PCIe Gen4 x16 passive auxiliary card, two 350mm IPEX cables

Open Compute Project (OCP) Adapters

Part Number (PN) Description
MCX753436MC-HEAB NVIDIA ConnectX-7 OCP3.0 SFF adapter card, 200GbE (default mode) / NDR200 IB, dual-port QSFP112, multi-host and Socket Direct capable, PCIe 5.0 x16, crypto enabled, Secure Boot enabled, thumbscrew (pull tab) bracket
MCX753436MS-HEAB NVIDIA ConnectX-7 OCP3.0 SFF adapter card, 200GbE (default mode) / NDR200 IB, dual-port QSFP112, multi-host and Socket Direct capable, PCIe 5.0 x16, crypto disabled, Secure Boot enabled, thumbscrew (pull tab) bracket
MCX75343AMS-NEAC NVIDIA ConnectX-7 OCP3.0 TSFF adapter card, 400GbE / NDR IB (default mode), single-port OSFP, multi-host and Socket Direct capable, PCIe 5.0 x16, crypto disabled, Secure Boot enabled, thumbscrew (pull tab) TSFF bracket
MCX75343AMC-NEAC NVIDIA ConnectX-7 OCP3.0 TSFF adapter card, 400GbE / NDR IB (default mode), single-port OSFP, multi-host and Socket Direct capable, PCIe 5.0 x16, crypto enabled, Secure Boot enabled, thumbscrew (pull tab) TSFF bracket

400G InfiniBand NDR Transceivers and Cables

Transceivers for Air-Cooled Quantum-2 Switches

Part Number (PN) Description
Multimode Transceivers
MMA4Z00-NS NVIDIA twin-port 850nm transceiver, 800Gb/s, 2x NDR, 2x SR4, finned-top OSFP, 2x MPO/APC, up to 50m (switch-side, air-cooled)
MMA4Z00-NS-FLT NVIDIA twin-port 850nm transceiver, 800Gb/s, 2x NDR, 2x SR4, flat-top OSFP, 2x MPO/APC, up to 50m (for liquid-cooled switches and DGX-H100)
MMA4Z00-NS400 NVIDIA single-port 850nm transceiver, 400Gb/s, NDR, SR4, flat-top OSFP, MPO/APC, up to 50m (for OSFP HCAs)
MMA1Z00-NS400 NVIDIA single-port 850nm transceiver, 400Gb/s, NDR, SR4, flat-top QSFP112, MPO/APC, up to 50m (for HCA/DPUs)
Single-Mode Transceivers
MMS4X00-NM NVIDIA twin-port 1310nm transceiver, 800Gb/s, 2x NDR, 2x DR4, finned-top OSFP, 2x MPO/APC, up to 500m (switch-side, air-cooled)
MMS4X00-NS NVIDIA twin-port 1310nm transceiver, 800Gb/s, 2x NDR, 2x DR4, finned-top OSFP, 2x MPO/APC, up to 100m (switch-side, air-cooled)
MMS4X00-NM-FLT NVIDIA twin-port 1310nm transceiver, 800Gb/s, 2x NDR, 2x DR4, flat-top OSFP, 2x MPO/APC, up to 500m (liquid-cooled switches and DGX-H100)
MMS4X00-NS-FLT NVIDIA twin-port 1310nm transceiver, 800Gb/s, 2x NDR, 2x DR4, flat-top OSFP, 2x MPO/APC, up to 100m (liquid-cooled switches and DGX-H100)
MMS4X00-NS400 NVIDIA single-port 1310nm transceiver, 400Gb/s, NDR, DR4, flat-top OSFP, MPO/APC, up to 100m (for OSFP HCAs)
MPO/APC Single-Mode Crossover Fiber
MFP7E30-Nxxx NVIDIA passive fiber cable, SMF, MPO to MPO, xxx indicates length in meters: 001, 002, 003, 005, 007, 010, 015, 020, 030, 040, 050, 060, 070, 100
MPO/APC Single-Mode Crossover Fiber Splitter
MFP7E40-Nxxx NVIDIA passive fiber cable, SMF, MPO to 2x MPO, xxx indicates length in meters: 003, 005, 007, 010, 015, 020, 030, 040, 050
MPO/APC Multimode Crossover Fiber
MFP7E10-Nxxx NVIDIA passive fiber cable, MMF, MPO to MPO, xxx indicates length in meters: 003, 005, 007, 010, 015, 020, 030, 040, 050
MPO/APC Multimode Crossover Fiber Splitter
MFP7E20-Nxxx NVIDIA passive fiber cable, MMF, MPO to 2x MPO, xxx indicates length in meters: 003, 005, 007, 010, 015, 020, 030, 050
Direct Attached Copper (DAC) Switch to Switch OSFP
MCP4Y10-Nxxx NVIDIA passive copper cable, InfiniBand NDR 800Gb/s to 800Gb/s, OSFP to OSFP, xxx indicates length in meters: 00A (0.5m), 00B (0.75m), 001, 002
Direct Attached Copper (DAC) Switch to ConnectX-7 HCA OSFP
MCP7Y00-Nxxx NVIDIA passive copper splitter cable, InfiniBand NDR 800Gb/s to 2x 400Gb/s, OSFP to 2x OSFP, xxx indicates length in meters: 001, 01A (1.5m), 002, 02A (2.5m), 003
MCP7Y50-Nxxx NVIDIA passive copper splitter cable, InfiniBand NDR 800Gb/s to 4x 200Gb/s, OSFP to 4x OSFP, xxx indicates length in meters: 001, 01A (1.5m), 002, 02A (2.5m), 003
Direct Attached Copper (DAC) Switch to BlueField-3 DPU QSFP112
MCP7Y10-Nxxx NVIDIA passive copper splitter cable, InfiniBand NDR 800Gb/s to 2x 400Gb/s, OSFP to 2x QSFP112, xxx indicates length in meters: 001, 01A (1.5m), 002, 02A (2.5m), 003
Direct Attached Copper (DAC) Switch to ConnectX-7 HCA or BlueField-3 DPU QSFP112
MCP7Y40-Nxxx NVIDIA passive copper splitter cable, InfiniBand NDR 800Gb/s to 4x 200Gb/s, OSFP to 4x QSFP112, xxx indicates length in meters: 001, 01A (1.5m), 002, 02A (2.5m), 003
Active Copper (ACC) Switch to Switch OSFP
MCA4J80-Nxxx NVIDIA active copper cable, InfiniBand NDR 800Gb/s to 800Gb/s, OSFP to OSFP, xxx indicates length in meters: 003, 004, 005
Active Copper Cable (ACC) Switch to ConnectX-7 HCA OSFP
MCA7J60-Nxxx NVIDIA active copper splitter cable, InfiniBand NDR 800Gb/s to 2x 400Gb/s, OSFP to 2x OSFP, xxx indicates length in meters: 004, 005
MCA7J70-Nxxx NVIDIA active copper splitter cable, InfiniBand NDR 800Gb/s to 4x 200Gb/s, OSFP to 4x OSFP, xxx indicates length in meters: 004, 005
Active Copper Cable (ACC) Switch to BlueField-3 DPU QSFP112
MCA7J65-Nxxx NVIDIA active copper splitter cable, InfiniBand NDR 800Gb/s to 2x 400Gb/s, OSFP to 2x QSFP112, xxx indicates length in meters: 004, 005
Active Copper Cable (ACC) Switch to ConnectX-7 HCA or BlueField-3 DPU QSFP112
MCA7J75-Nxxx NVIDIA active copper splitter cable, InfiniBand NDR 800Gb/s to 4x 200Gb/s, OSFP to 4x QSFP112, xxx indicates length in meters: 004, 005
Backward-Compatible Active Optical Cable (AOC), 1:2 Splitter
Switch to ConnectX-6, BlueField-2, or Quantum Switch - HDR
MFA7U10-Hxxx NVIDIA AOC 1:2 splitter, InfiniBand 400Gb/s (2x HDR) to 2x 200Gb/s HDR, OSFP to 2x QSFP56, xxx indicates length in meters: 003, 005, 010, 015, 020, 030
Backward-Compatible Direct Attach Copper Cable (DAC), 1:2 Splitter
Switch to ConnectX-6, BlueField-2, or Quantum Switch - HDR
MCP7Y60-Hxxx NVIDIA DAC 1:2 splitter, InfiniBand 400Gb/s (2x HDR) to 2x 200Gb/s HDR, OSFP to 2x QSFP56, xxx indicates length in meters: 001, 01A (1.5m), 002
Backward-Compatible Direct Attach Copper Cable (DAC), 1:4 Splitter
Switch to ConnectX-6, BlueField-2, or Quantum Switch - HDR100
MCP7Y70-Hxxx NVIDIA DAC 1:4 splitter, InfiniBand 400Gb/s (2x HDR) to 4x 100Gb/s HDR100, OSFP to 4x QSFP56, xxx indicates length in meters: 001, 01A (1.5m), 002
Backward-Compatible Direct Attach Copper Cable (DAC), 1:2 Splitter
Switch to ConnectX-6, BlueField-2, or Quantum Switch - EDR
MCP7Y60-Hxxx NVIDIA DAC 1:2 splitter, InfiniBand 200Gb/s (2x EDR) to 2x 100Gb/s EDR, OSFP to 2x QSFP28, xxx indicates length in meters: 001, 01A (1.5m), 002

About NADDOD
NADDOD provides high-performance Ethernet and InfiniBand switches, intelligent network cards, and AOC/DAC/optical module combinations tailored to different user application scenarios. We deliver cost-effective, high-value optical networking products and end-to-end solutions for data centers, high-performance computing (HPC), edge computing, artificial intelligence (AI), and other applications, greatly improving customers' business acceleration capabilities.

Related Resources:
InfiniBand NDR: The Future of High-Speed Data Transfer in HPC and AI
NADDOD NDR InfiniBand Network Solution for High-Performance Networks
What Is InfiniBand and How Is It Different from Ethernet?
NADDOD High-Performance Computing (HPC) Solution
Case Study: NADDOD Helped the National Supercomputing Center to Build a General-Purpose Test Platform for HPC
MMA4Z00-NS: The InfiniBand NDR Transceiver for 800G High-Speed Data Transfer
MMA4Z00-NS400: The Ultimate InfiniBand NDR Transceiver for High-Performance Computing