NVIDIA MQM9700-NS2F Quantum-2 NDR InfiniBand Switch, 64 x 400Gb/s Ports, 32 OSFP Cages, Managed, Power-to-connector(P2C) Airflow(forward), with 5-year Service

#101908
Model: MQM9700-NS2F|790-SN7N0Z+P2CMI60
Available
Brand:
NVIDIA/Mellanox (InfiniBand)

Item Spotlights

  • 64 x 400Gb/s non-blocking ports with aggregate data throughput of up to 51.2Tb/s.
  • Supports Remote Direct Memory Access (RDMA), adaptive routing, and the NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™.
  • Supports Fat Tree, SlimFly, DragonFly+, multi-dimensional Torus, and other topologies.
  • 1+1 redundant, hot-swappable power supplies and 6+1 hot-swappable fan units.
  • Supports CLI, WebUI, SNMP, JSON interface, or the UFM® platform for flexible operation.
Description


The NVIDIA QM9700 switch provides 64 x 400Gb/s ports on 32 physical octal small form-factor pluggable (OSFP) connectors, which can be split to deliver up to 128 x 200Gb/s ports. The compact, 1U, fixed-configuration switch is offered in internally managed and externally managed (unmanaged) versions. It carries an aggregate bidirectional throughput of 51.2 terabits per second (Tb/s), with a landmark capacity of more than 66.5 billion packets per second. As an ideal rack-mounted InfiniBand solution, the NVIDIA Quantum-2 switch allows maximum flexibility, enabling a variety of topologies, including Fat Tree, DragonFly+, multi-dimensional Torus, and more.
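The headline figures above follow directly from the port configuration; a quick back-of-the-envelope check (variable names are illustrative, values taken from this description):

```python
# Port and speed figures from the product description above.
PORTS = 64                 # NDR ports exposed by the switch
PORT_SPEED_GBPS = 400      # per port, per direction
SPLIT_FACTOR = 2           # each 400Gb/s port splits into 2 x 200Gb/s

unidirectional_tbps = PORTS * PORT_SPEED_GBPS / 1000   # 25.6 Tb/s
bidirectional_tbps = 2 * unidirectional_tbps           # 51.2 Tb/s aggregate
split_200g_ports = PORTS * SPLIT_FACTOR                # up to 128 x 200Gb/s

print(bidirectional_tbps, split_200g_ports)
```

The 51.2Tb/s switching capacity is therefore the bidirectional sum across all 64 ports, and the 128-port figure assumes every NDR port is split.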

Specifications
Part Number: MQM9700-NS2F
Rack Mount: 1U rack mount
Ports: 32 x OSFP cages, 2 x 400Gb/s each (64 x 400Gb/s ports total)
System Power Usage: 747W
Switching Capacity: 51.2Tb/s
Latency: 130ns
CPU: x86 Coffee Lake i3
System Memory: 8GB
Software: MLNX-OS
Power Supply: 1+1 redundant and hot-swappable
Dimensions (HxWxD): 1.7" (H) x 17" (W) x 23.2" (D); 43.6mm (H) x 433.2mm (W) x 590.6mm (D)
Connectivity Solutions
Compute Fabric Topology for 127-node DGX SuperPOD

Each DGX H100 system has eight NDR400 connections to the compute fabric. The fabric design maximizes performance for AI workloads while also providing some redundancy in the event of hardware failures.
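A rough port-count sketch for the fabric described above, using only the figures in this section (127 nodes, 8 NDR400 links each, 64-port switches); the non-blocking leaf sizing is a standard fat-tree assumption, not a statement of the actual SuperPOD reference design:

```python
import math

NODES = 127            # DGX H100 systems in the SuperPOD
LINKS_PER_NODE = 8     # NDR400 connections per system
SWITCH_PORTS = 64      # ports per Quantum-2 switch

host_links = NODES * LINKS_PER_NODE   # total NDR host connections

# In a non-blocking two-level fat tree, each leaf switch dedicates
# half its ports to hosts and half to spine uplinks.
host_ports_per_leaf = SWITCH_PORTS // 2
min_leaf_switches = math.ceil(host_links / host_ports_per_leaf)

print(host_links, min_leaf_switches)
```

With 1,016 host links and 32 host-facing ports per leaf, at least 32 leaf switches are needed before accounting for rails, redundancy, or management fabric.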

Applications
Product Highlights
SHARP: low-latency data reduction and streaming aggregation
Adaptive Routing: optimizes network routing for low latency and dynamic load balancing
SHIELD: enhanced network availability through fast, self-healing network links
GPUDirect RDMA

GPUDirect RDMA reduces CPU load and frees CPU compute resources, lowers transmission latency, speeds up data exchange, improves bandwidth utilization, and accelerates HPC and deep-learning workloads.

NCCL: accelerates GPU-to-GPU communication with a library supporting collective and point-to-point messaging
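Collectives such as all-reduce are commonly implemented over a ring, which is also the schedule SHARP offloads into the switch fabric. A pure-Python sketch of the ring reduce-scatter + all-gather pattern over simulated ranks (a teaching model, not NCCL's actual pipelined implementation):

```python
def ring_allreduce(bufs):
    """Sum all-reduce across `bufs` (one equal-length list per rank),
    simulating the ring reduce-scatter + all-gather schedule used by
    collective libraries such as NCCL."""
    n = len(bufs)
    dim = len(bufs[0])
    assert dim % n == 0, "sketch assumes the vector splits evenly"
    c = dim // n  # chunk size owned per rank

    def seg(k):  # index range of chunk k
        return range(k * c, (k + 1) * c)

    # Reduce-scatter: after n-1 steps, rank r holds the complete sum
    # of chunk (r + 1) % n.
    for step in range(n - 1):
        for r in range(n):
            k = (r - step) % n      # chunk rank r forwards this step
            nxt = (r + 1) % n
            for i in seg(k):
                bufs[nxt][i] += bufs[r][i]

    # All-gather: circulate the completed chunks around the ring.
    for step in range(n - 1):
        for r in range(n):
            k = (r + 1 - step) % n  # completed chunk rank r forwards
            nxt = (r + 1) % n
            for i in seg(k):
                bufs[nxt][i] = bufs[r][i]
    return bufs
```

For example, `ring_allreduce([[1, 2], [3, 4]])` leaves every rank holding the elementwise sum `[4, 6]`. Each rank sends only `2 * (n - 1) / n` of the data volume, which is why ring schedules scale well as GPU counts grow.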
Questions & Answers
Q:
Can the same module on the NDR switch have one port connected to an NDR cable and the other port connected to a one-to-two cable for NDR200?
A:
Yes, it is possible, but the switch needs to be configured to split the NDR port.
Q:
How does the switch's onboard openSM subnet manager differ from UFM (Unified Fabric Manager) in management capability, and which is more suitable for customer deployment?
A:
The switch's onboard subnet manager is suitable for fabrics of up to 2K nodes. UFM and the openSM subnet manager in OFED have no such limit, but the CPU and hardware capacity of the management nodes must be taken into account.
Q:
Can the NDR switch be backward compatible?
A:
Yes, backward compatibility is achieved through speed reduction: a 400G NDR port on the switch can be negotiated down to 200G to connect to a ConnectX-6 (CX6) VPI 200G HDR network card.
Quality Certification
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE