NVIDIA Mellanox MCX516A-CCAT ConnectX®-5 EN Network Interface Card, 100GbE, Dual-port QSFP28, PCIe Gen 3.0 x16, Tall bracket

#101935
Model: MCX516A-CCAT
Sold: 2579
Stock: 95
Brand: NVIDIA/Mellanox (Ethernet)

Item Spotlight

  • Up to 100Gb/s connectivity per port
  • Industry-leading throughput, low latency, low CPU utilization, and high message rate
  • Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
  • Advanced storage capabilities, including NVMe-oF offloads
  • Efficient I/O consolidation, lowering data center costs and complexity
Description

NVIDIA Mellanox MCX516A-CCAT ConnectX®-5 EN Network Interface Card, 100GbE, Dual-port QSFP28, PCIe Gen 3.0 x16, Tall bracket

ConnectX-5 Ethernet adapter cards provide high performance and flexible solutions with up to two ports of 100GbE connectivity and 750ns latency. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe Switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers to drive extremely high packet rates and throughput with reduced CPU resource consumption, thus boosting data center infrastructure efficiency.
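
After installation, the negotiated link rate can be verified from the host. Below is a minimal C sketch, assuming a Linux host; the port name "enp1s0f0" is an assumption, so substitute whatever name the ConnectX-5 port receives on your system. It reads the link speed through the legacy ethtool ioctl:

    /* Minimal sketch, not vendor code: query the negotiated link speed of
     * a NIC on Linux through the legacy ethtool ioctl. The port name
     * "enp1s0f0" is an assumption. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        const char *ifname = "enp1s0f0";   /* assumed interface name */
        struct ifreq ifr;
        struct ethtool_cmd ecmd;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&ifr, 0, sizeof(ifr));
        memset(&ecmd, 0, sizeof(ecmd));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ecmd.cmd = ETHTOOL_GSET;           /* "get settings" request */
        ifr.ifr_data = (char *)&ecmd;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("SIOCETHTOOL");
            close(fd);
            return 1;
        }

        /* ethtool_cmd_speed() merges the low/high speed fields; a port
         * linked at 100GbE reports 100000 (the unit is Mb/s). */
        printf("%s link speed: %u Mb/s\n", ifname, ethtool_cmd_speed(&ecmd));
        close(fd);
        return 0;
    }

The other supported rates from the table below (10/25/40/50 GbE) would appear as 10000, 25000, 40000, or 50000 Mb/s.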

Specifications
Part Number: MCX516A-CCAT
Data Transmission Rate: Ethernet: 10/25/40/50/100 Gb/s
Network Connector Type: Dual-port QSFP28
Application: Ethernet
Host Interface: PCIe x16 Gen 3.0 @ SERDES 8 GT/s
Technology: RoCE
Adapter Card Size: 2.71 in. x 5.6 in. (68.90 mm x 142.24 mm)
RoHS: RoHS Compliant
Temperature: Operational: 0°C to 55°C; Storage: -40°C to 70°C
Supported Operating Systems: Linux, Windows, VMware
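
One sizing note on the host interface: PCIe Gen 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so a x16 link carries roughly 126 Gb/s of payload per direction (about 15.75 GB/s). That comfortably feeds one 100GbE port at line rate, but it becomes the shared ceiling when both ports are driven at full speed simultaneously. The arithmetic, as a small C check using only the standard PCIe Gen 3.0 figures, nothing card-specific:

    /* Back-of-the-envelope bandwidth check for the PCIe Gen 3.0 x16 host
     * interface: 8 GT/s per lane, 128b/130b line coding, 16 lanes. */
    #include <stdio.h>

    int main(void)
    {
        const double gts_per_lane = 8.0;            /* Gen 3.0 signaling rate */
        const double encoding     = 128.0 / 130.0;  /* 128b/130b line coding */
        const int    lanes        = 16;

        double gbps = gts_per_lane * encoding * lanes;
        printf("PCIe Gen3 x16 payload bandwidth: ~%.1f Gb/s per direction\n", gbps);
        printf("                                 ~%.2f GB/s per direction\n", gbps / 8.0);
        return 0;
    }

This prints roughly 126.0 Gb/s and 15.75 GB/s per direction.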
Product Highlights
GPUDirect RDMA

GPUDirect RDMA allows data to move directly between the memories of GPUs, including GPUs on remote hosts, without staging through host memory. This greatly improves the efficiency of GPU cluster operations, with significant gains in both bandwidth and latency.
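
As an illustration of the setup step involved, here is a minimal C sketch, assuming a Linux host with CUDA, the rdma-core verbs library, and the nvidia-peermem kernel module loaded: a buffer allocated with cudaMalloc() is registered directly with the RDMA NIC via ibv_reg_mr(), so subsequent RDMA operations target GPU memory without bouncing through host DRAM. Device index 0 and the 64 MiB buffer size are illustrative assumptions, not part of the product documentation.

    /* Minimal GPUDirect RDMA setup sketch, assuming CUDA, libibverbs, and
     * nvidia-peermem are installed. Build (paths are system-dependent):
     *   gcc gdr_reg.c -libverbs -lcudart */
    #include <stdio.h>
    #include <infiniband/verbs.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA device found\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);  /* first HCA */
        if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Allocate the buffer in GPU memory rather than host memory. */
        void *gpu_buf = NULL;
        size_t len = (size_t)64 << 20;                       /* 64 MiB */
        if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
            fprintf(stderr, "cudaMalloc failed\n");
            return 1;
        }

        /* With nvidia-peermem loaded, ibv_reg_mr() accepts the GPU pointer
         * and pins it for the NIC, so RDMA reads and writes go straight to
         * GPU memory. */
        struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        printf("registered %zu bytes of GPU memory, rkey=0x%x\n", len, mr->rkey);

        ibv_dereg_mr(mr);
        cudaFree(gpu_buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }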

Advanced Network Offloads

ConnectX-5 accelerates data-plane, networking, storage, and security operations, enabling in-network computing and in-network memory capabilities. Offloading CPU-intensive I/O operations to the adapter improves host efficiency.

Accelerating Network Performance

Accelerated switching and packet-processing technologies raise network performance while reducing the CPU overhead of handling Internet Protocol (IP) traffic, leaving more processor cycles free to run applications.

Questions & Answers

No questions have been asked yet.
Customer Reviews
Quality Certifications
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE
What We Supply