NVIDIA MQM8790-HS2F Quantum HDR InfiniBand Switch, 40 x HDR QSFP56 Ports, Two Power Supplies (AC), Unmanaged, x86 Dual Core, Standard Depth, P2C Airflow, Rail Kit, with 1-year Service

#101873
Model: MQM8790-HS2F|790-SHQN0Z+P2CMI12
Sold: 0
In stock: 26
Brand:
NVIDIA/Mellanox (InfiniBand)

Key Features

  • 40x HDR 200Gb/s ports or 80x HDR100 100Gb/s ports
  • Onboard subnet manager enables simple, out-of-the-box fabric bring-up for up to 2048 nodes
  • Delivers 7.2 billion packets per second (Bpps), or 390 million pps per port
  • 1+1 hot-swappable power supplies, N+1 hot-swappable fans
  • 1U form factor delivering up to 16Tb/s of non-blocking bandwidth with sub-130ns port-to-port latency
Description

NVIDIA MQM8790-HS2F Quantum HDR InfiniBand Switch, 40 x HDR QSFP56 ports, two power supplies (AC), unmanaged, x86 dual core, standard depth, P2C airflow, rail kit

NVIDIA QM8790 switch systems provide the highest-performing fabric solution in a 1U form factor, delivering up to 16Tb/s of non-blocking bandwidth with sub-130ns port-to-port latency. These switches deliver 7.2 billion packets per second (Bpps), or 390 million pps per port. They are among the industry's most cost-effective building blocks for embedded and storage systems that need low port counts. Whether measured by price-to-performance or energy-to-performance, these systems offer superior performance, power, and space efficiency, reducing capital and operating expenses and providing the best return on investment.
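
As a quick sanity check of the 16Tb/s figure, the short sketch below multiplies port count by line rate, assuming aggregate capacity is counted full duplex (both directions), which is the usual convention in switch datasheets:

```python
# Quick sanity check of the datasheet's 16Tb/s switching-capacity figure.
# Assumption: aggregate capacity is counted full duplex (both directions),
# the usual convention in switch datasheets.

ports = 40
port_speed_gbps = 200                  # HDR QSFP56 line rate per port

one_way_gbps = ports * port_speed_gbps     # 8,000 Gb/s in each direction
aggregate_gbps = one_way_gbps * 2          # full duplex

print(f"One-way capacity:   {one_way_gbps / 1000:.0f} Tb/s")    # 8 Tb/s
print(f"Aggregate capacity: {aggregate_gbps / 1000:.0f} Tb/s")  # 16 Tb/s
```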

Specifications
Part Number: MQM8790-HS2F
Rack Mount: 1U rack mount
Ports: 40x QSFP56 200Gb/s
System Power Usage: 253W
Switching Capacity: 16Tb/s
Latency: 130ns
CPU: x86 ComEx Broadwell D-1508
System Memory: 8GB
Software: MLNX-OS
Power Supply: 1+1 redundant, hot-swappable
Dimensions (H x W x D): 1.7" x 17" x 23.2" (43.6mm x 433.2mm x 590.6mm)
Connectivity Solutions
Compute Fabric Topology for 140-node DGX SuperPOD

For the complete 140-node DGX SuperPOD, all three switch layers use 40-port NVIDIA QM8790 switches. Every 20 DGX A100 systems in the cluster form a scalable unit (SU), and each SU has 8 leaf switches. The design is rail-optimized through both the leaf and spine levels: each InfiniBand HCA on a DGX A100 system connects to its own rail of the fat-tree topology. This rail-optimized network architecture substantially improves deep-learning training performance; a back-of-the-envelope switch count is sketched below.
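
As a rough illustration of the scale described above, the sketch below counts SUs and leaf switches for the 140-node design. The 8 compute HCAs per DGX A100 (one per rail) are an assumption taken from the DGX SuperPOD reference architecture, not stated on this page:

```python
# Back-of-the-envelope switch count for the 140-node DGX SuperPOD compute
# fabric described above. The per-node HCA count is an assumption from the
# DGX SuperPOD reference architecture, not stated on this page.

nodes = 140
nodes_per_su = 20      # every 20 DGX A100 systems form one scalable unit (SU)
leaves_per_su = 8      # 8 leaf switches per SU, one per rail
hcas_per_node = 8      # assumed: 8 compute HCAs per DGX A100, one per rail

sus = nodes // nodes_per_su             # 7 SUs
leaf_switches = sus * leaves_per_su     # 56 QM8790 leaf switches
host_links = nodes * hcas_per_node      # 1,120 node-to-leaf links

print(f"SUs: {sus}, leaf switches: {leaf_switches}, host links: {host_links}")
print(f"Downlinks per leaf: {nodes_per_su} of 40 ports (the rest face spines)")
```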

Product Highlights
SHARP (Scalable Hierarchical Aggregation and Reduction Protocol): low-latency data reduction and streaming aggregation
Adaptive Routing

Selects optimal network routes to reduce latency, minimize congestion, and dynamically load-balance traffic.

SHIELD (Self-Healing Interconnect Enhancement for Intelligent Datacenters): enhanced network availability through fast network-link self-healing
GPUDirect RDMA

Reduces CPU load, frees CPU compute resources, lowers transmission latency, speeds up data exchange, improves bandwidth utilization, and accelerates HPC and deep-learning workloads.

NCCL: accelerates GPU-to-GPU communication with a library that supports collective and point-to-point messaging (see the sketch below)
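
A minimal sketch of an NCCL collective, using PyTorch's torch.distributed with the nccl backend. The script name and launch command are illustrative, and one process per GPU is assumed (as set up by torchrun):

```python
# allreduce.py -- minimal NCCL all-reduce sketch (assumes a CUDA-capable
# node and a torchrun launcher that sets RANK/WORLD_SIZE/LOCAL_RANK).
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL carries GPU-to-GPU traffic
    rank = dist.get_rank()
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Each rank contributes its rank id; all-reduce sums across all GPUs.
    x = torch.full((1024,), float(rank), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: reduced value = {x[0].item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched, for example, with `torchrun --nproc_per_node=8 allreduce.py` on a single DGX node; across nodes on an HDR fabric, NCCL uses RDMA (and GPUDirect RDMA where available) for the inter-node legs.
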
Questions & Answers
Q:
Is there a difference in how many nodes can be managed by the switch's onboard subnet manager, by opensm on a host network card, and by UFM? Which is more suitable for customer deployments?
A:
The switch's onboard subnet manager is suitable for fabrics of up to 2K nodes. UFM, or opensm running on an OFED host network card, has no node-count limit, but it requires CPU and hardware processing capacity on the management node.
Q:
Are HDR switches backward compatible?
A:
Yes, through speed reduction: the 200G HDR ports on HDR switches can be negotiated down to 100G to connect EDR network cards such as the ConnectX-6 VPI 100G.
Q:
Do IB switches support Ethernet?
A:
Currently, no switch supports both InfiniBand and Ethernet simultaneously. Per the InfiniBand standard, IP traffic (IPv4, IPv6) can be carried across the IB network in tunneled form via IPoIB; the sketch below shows that ordinary IP applications run unchanged.
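
A minimal sketch of the point above: once an IPoIB interface (e.g. ib0) is configured, standard IP sockets work without modification. The interface address used here is hypothetical:

```python
# Ordinary TCP server bound to a hypothetical IPoIB address; the socket API
# is unchanged -- the IB fabric simply carries the tunneled IP traffic.
import socket

IPOIB_ADDR = "10.10.0.1"   # hypothetical address assigned to ib0

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((IPOIB_ADDR, 5000))
srv.listen(1)
print(f"listening on {IPOIB_ADDR}:5000 over the InfiniBand fabric")
conn, peer = srv.accept()   # blocks until a client connects via IPoIB
print(f"connection from {peer}")
conn.close()
srv.close()
```
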
Quality Certifications
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE
What We Provide