NVIDIA/Mellanox MCX75510AAS-NEAT ConnectX®-7 InfiniBand Adapter Card, NDR 400Gb/s, Single-Port OSFP, PCIe 4.0/5.0 x16, Tall Bracket

#100676
Model: MCX75510AAS-NEAT
Sold: 325
Available
Brand: NVIDIA/Mellanox (InfiniBand)

Item Spotlights

  • PCIe 4.0/5.0 x16 Host Interface and NDR 400Gb/s Single-Port Transmission
  • RDMA Delivering Low Latency and High Performance
  • ASAP2 technology accelerates software-defined networking
  • Provides Security from Edge to Core with Inline Encryption/Decryption of TLS, IPsec, and MACsec
  • End-to-end QoS and Congestion Control to Anticipate and Eliminate Congestion
  • Smart Interconnect for x86, Power, Arm, GPU and FPGA-based Compute and Storage Platforms
Description


The ConnectX®-7 family of Remote Direct Memory Access (RDMA) network adapters supports both the InfiniBand and Ethernet protocols at a range of speeds up to 400Gb/s. It enables a wide range of smart, scalable, and feature-rich networking solutions, addressing everything from traditional enterprise needs to the world's most demanding AI, scientific computing, and hyperscale cloud data center workloads.
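To illustrate how applications discover an RDMA adapter such as this one, the minimal sketch below uses the standard libibverbs API to list RDMA devices and report each port's link layer (InfiniBand or Ethernet) and state. It assumes libibverbs and its headers are installed; the file name and build command are examples only.

```c
/*
 * Minimal sketch: enumerate RDMA devices with libibverbs and report each
 * port's link layer and state. Assumes libibverbs is installed; build with
 * e.g. `gcc list_rdma.c -o list_rdma -libverbs` (file name is illustrative).
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Port numbering in the verbs API starts at 1. */
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                printf("%s port %d: link_layer=%s, state=%d\n",
                       ibv_get_device_name(devs[i]), port,
                       port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet" : "InfiniBand",
                       port_attr.state);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```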

Specifications
Part Number: MCX75510AAS-NEAT
Data Transmission Rate: InfiniBand SDR/FDR/EDR/HDR100/HDR/NDR200/NDR
Network Connector Type: Single-port OSFP
Application: InfiniBand
Host Interface: PCIe x16 Gen 4.0/5.0 @ SERDES 16GT/s / 32GT/s
Technology: RDMA
Adapter Card Size: 6.6 in. x 2.71 in. (167.65 mm x 68.90 mm)
RoHS: RoHS Compliant
Temperature: Operational: 0°C to 55°C; Storage: -40°C to 70°C
Supported Operating Systems: Linux, Windows, VMware
Applications
Product Highlights
GPU Direct RDMA

GPUDirect RDMA allows data to be transferred directly from one GPU's memory to another GPU's memory, including across hosts, without staging through system memory. This greatly enhances the efficiency of GPU cluster operations, offering significant improvements in both bandwidth and latency.
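As a rough illustration of what GPUDirect RDMA means at the API level: with GPUDirect RDMA kernel support loaded (for example the nvidia-peermem module), a CUDA device buffer can be registered directly with the verbs API, so the NIC performs DMA to and from GPU memory without a bounce through host RAM. The sketch below is a minimal example under those assumptions; queue-pair setup and the actual data transfer are omitted, and the build command is indicative only.

```c
/*
 * GPUDirect RDMA sketch (illustrative only): register a CUDA device buffer
 * with the verbs API so the NIC can DMA directly to/from GPU memory.
 * Assumes CUDA, libibverbs, and GPUDirect RDMA kernel support (e.g. the
 * nvidia-peermem module) are present. Build e.g. with:
 *   gcc gdr_reg.c -o gdr_reg -libverbs -lcudart \
 *       -I/usr/local/cuda/include -L/usr/local/cuda/lib64   (paths vary)
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void)
{
    const size_t len = 1 << 20;   /* 1 MiB GPU buffer */
    void *gpu_buf = NULL;

    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "failed to open device / allocate PD\n");
        return 1;
    }

    /* With GPUDirect RDMA support loaded, the GPU pointer can be registered
     * like ordinary host memory; the NIC then accesses GPU memory directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr on GPU memory");
    } else {
        printf("registered %zu bytes of GPU memory, rkey=0x%x\n", len, mr->rkey);
        ibv_dereg_mr(mr);
    }

    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}
```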

Advanced Network Offloads

The adapter accelerates data-plane, networking, storage, and security processing, enabling in-network computing and in-network memory capabilities. Offloading these CPU-intensive I/O operations improves host efficiency.

Accelerating Network Performance

By employing accelerated switching and packet processing (ASAP2) technologies, the adapter improves network performance while reducing the CPU overhead of Internet Protocol (IP) packet transmission, freeing up more processor cycles to run applications.

Questions & Answers
Q:
Can the CX7 network card in Ethernet mode be interconnected with other vendors' 400G Ethernet switches that support RDMA?
A:
Yes, interconnection with other vendors' 400G Ethernet switches is possible. In Ethernet mode, RDMA runs as RoCE, which works in this scenario, although performance is not guaranteed. For 400G Ethernet deployments, the Spectrum-X platform, consisting of BlueField-3 DPUs and Spectrum-4 switches, is recommended.
Q:
What are the differences between the OSFP interface modules on the network card side and the switch side?
A:
The network card side uses flat-top modules; the CX7 network card uses 400G OSFP modules. On the switch side, twin-port 400G modules are used, which are finned-top and thicker than the flat-top modules. The DGX H800 compute network interface (host side) also uses twin-port 400G modules, but they are flat-top and carry a "-flt" suffix that distinguishes them from the switch modules.
Q:
Does the IB card in Ethernet mode not support RDMA?
A:
It does. In Ethernet mode the card supports RDMA through RoCE (RDMA over Converged Ethernet). For large-scale Ethernet-based RDMA networking, the NVIDIA Spectrum-X solution is recommended.
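For context on how RoCE fits into existing Ethernet networks: RoCE endpoints are addressed with ordinary IP addresses, so connection setup does not depend on the switch vendor. The librdmacm sketch below resolves a placeholder peer address into an RDMA route; the peer IP (192.0.2.10) and port are hypothetical, and it assumes librdmacm is installed and a RoCE-capable port is up.

```c
/*
 * RoCE addressing sketch (illustrative only): resolve a peer's IP address to
 * an RDMA route with librdmacm. Because RoCE peers are addressed by ordinary
 * IP, the path can cross standards-compliant Ethernet switches from any
 * vendor. Build e.g. with `gcc roce_resolve.c -o roce_resolve -lrdmacm`.
 */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id = NULL;

    if (!ec || rdma_create_id(ec, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(18515);                       /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);   /* placeholder peer  */

    /* Ask the RDMA CM to pick a local RDMA device/port for this IP route. */
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000)) {
        perror("rdma_resolve_addr");
    } else {
        struct rdma_cm_event *ev = NULL;
        if (rdma_get_cm_event(ec, &ev) == 0) {
            printf("CM event: %s\n", rdma_event_str(ev->event));
            rdma_ack_cm_event(ev);
        }
    }

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}
```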
Customer Reviews
Quality Certification
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE