NVIDIA/Mellanox MCX653105A-HDAT-SP ConnectX®-6 InfiniBand Adapter Card, HDR/200G, Single-Port QSFP56, PCIe 3.0/4.0 x16, Tall & Short Bracket - NADDOD

#101438
Model: MCX653105A-HDAT-SP
Sold: 2834
In Stock: 115
Brand:
NVIDIA/Mellanox (InfiniBand)

Item Spotlights

  • Up to 200 Gb/s connectivity per port
  • Sub-0.6 µs latency
  • Advanced storage capabilities including block-level encryption and checksum offloads
  • Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
  • Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
  • Flexible programmable pipeline for new network flows
Description

ConnectX®-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing one or two ports of HDR InfiniBand or 200 GbE Ethernet connectivity, sub-600 ns latency, and 215 million messages per second, ConnectX-6 VPI cards deliver the highest-performing and most flexible solution for the continually growing demands of data center applications.
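The dual PCIe 3.0/4.0 host interface matters at this line rate. A quick back-of-the-envelope sketch (my own arithmetic, not datasheet figures) shows why a Gen4 x16 slot is needed to carry a full 200G HDR port, while Gen3 x16 falls short:

```python
# Back-of-the-envelope check: why this card exposes PCIe 3.0 *and* 4.0.
# All figures are standard line-rate arithmetic, not vendor-measured data.

def pcie_usable_gbytes(transfer_rate_gt: float, lanes: int,
                       encoding: float = 128 / 130) -> float:
    """Approximate usable PCIe bandwidth in GB/s (128b/130b encoding,
    ignoring transaction-layer protocol overhead)."""
    return transfer_rate_gt * lanes * encoding / 8

hdr_gbytes = 200 / 8                        # 200 Gb/s HDR port -> 25 GB/s
gen3_x16 = pcie_usable_gbytes(8.0, 16)      # ~15.75 GB/s
gen4_x16 = pcie_usable_gbytes(16.0, 16)     # ~31.51 GB/s

print(f"HDR port: {hdr_gbytes:.2f} GB/s")
print(f"Gen3 x16: {gen3_x16:.2f} GB/s")     # cannot saturate a 200G port
print(f"Gen4 x16: {gen4_x16:.2f} GB/s")     # headroom for the full HDR rate
```

This is also why Gen3-only hosts pair other SKUs in this family with dual-slot (Socket Direct) configurations to reach full HDR bandwidth.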

Specifications
Part Number: MCX653105A-HDAT-SP
Data Transmission Rate: InfiniBand SDR/DDR/QDR/FDR/EDR/HDR100/HDR; Ethernet 10/25/40/50/100/200 Gb/s
Network Connector Type: Single-port QSFP56
Application: InfiniBand/Ethernet
Host Interface: PCIe Gen 3.0/4.0 SERDES @ 8.0 GT/s / 16.0 GT/s
Technology: RDMA/RoCE
Adapter Card Size: 6.6 in. x 2.71 in. (167.65 mm x 68.90 mm)
RoHS: RoHS Compliant
Temperature: Operational 0°C to 55°C; Storage -40°C to 70°C
Supported Operating Systems: Linux, Windows, VMware
Applications
Product Highlights
GPU Direct RDMA

GPUDirect RDMA allows the adapter to read and write GPU memory directly, enabling remote access between GPU memories without staging data through host memory. This greatly improves the efficiency of GPU cluster operations, with significant gains in both bandwidth and latency.
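As an illustrative sketch only (device names, module names, and flags are assumptions about a typical host with an NVIDIA GPU driver and MLNX_OFED/DOCA installed, not taken from this page), one common way to confirm GPUDirect RDMA is usable:

```shell
# Check that the peer-memory kernel module is loaded (older stacks use
# nv_peer_mem instead of nvidia_peermem; both names appear in the wild).
lsmod | grep nvidia_peermem

# Bandwidth test with GPU memory as the data buffer, using perftest's
# ib_write_bw. Start the same command on the server side first, then run
# the client pointed at the server's address. "mlx5_0" and the GPU index
# are placeholders for your actual device and GPU.
ib_write_bw -d mlx5_0 --use_cuda=0 <server-address>
```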

Advanced Network Offloads

Accelerate data plane, networking, storage, and security, enabling in-network computing and in-network memory capabilities. Offloading CPU-intensive I/O operations enhances host efficiency.

Accelerating Network Performance

By employing accelerated switching and packet processing technologies, network performance can be enhanced while reducing CPU overhead in the transmission of Internet Protocol (IP) packets, thereby freeing up more processor cycles to run applications.

Questions & Answers
Ask a Question
Q:
Does the IB card in Ethernet mode not support RDMA?
A:
In Ethernet mode the card does support RDMA, via RoCE (RDMA over Converged Ethernet). For large-scale Ethernet RDMA networking, the NVIDIA Spectrum-X solution is recommended.
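As a hedged sketch of how a VPI port is switched between InfiniBand and Ethernet (the MST device path below is an assumption; list yours with `mst status`), NVIDIA's mlxconfig firmware-configuration tool is the usual mechanism:

```shell
# Firmware configuration sketch -- run as root on a host with NVIDIA MFT
# (or MLNX_OFED) installed. The device path is a placeholder.
mst start
mlxconfig -d /dev/mst/mt4123_pciconf0 query | grep LINK_TYPE
# LINK_TYPE_P1: 1 = InfiniBand, 2 = Ethernet
mlxconfig -d /dev/mst/mt4123_pciconf0 set LINK_TYPE_P1=2
# The new port mode takes effect after a reboot (or mlxfwreset).
```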
Q:
Are there mentions of simplex or duplex for IB cards?
A:
All IB cards are full duplex: the receive and transmit physical channels are separate, so the simplex/duplex distinction from older serial devices does not really apply.
Q:
Can a server use two types of cards (encrypted and non-encrypted) simultaneously?
A:
Yes, it is possible.
Customer Reviews
Quality Certification
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE