NVIDIA Mellanox MCX713106AC-CEAT ConnectX®-7 Adapter Card, 100GbE, Dual-port QSFP112, PCIe 4.0/5.0 x16, Crypto Enabled, Tall Bracket

#101941
Model: MCX713106AC-CEAT
  Sold: 1479
In Stock
Brand:
NVIDIA/Mellanox (Ethernet)

Item Spotlight

  • PCIe 4.0/5.0 x16 host interface and dual-port 100Gb/s transmission
  • RDMA/RoCE delivering low latency and high performance
  • ASAP² technology accelerating software-defined networking
  • Edge-to-core security with inline encryption/decryption of TLS, IPsec, and MACsec
  • End-to-end QoS and congestion control to anticipate and eliminate congestion
  • Smart interconnect for x86, Power, Arm, GPU, and FPGA-based compute and storage platforms
Description


The NVIDIA® ConnectX®-7 family of Remote Direct Memory Access (RDMA) network adapters supports InfiniBand and Ethernet protocols at speeds of up to 400Gb/s. It enables a wide range of smart, scalable, and feature-rich networking solutions, addressing everything from traditional enterprise needs to the world's most demanding AI, scientific computing, and hyperscale cloud data center workloads.

Specifications
Part Number: MCX713106AC-CEAT
Data Transmission Rate: Ethernet 1/10/25/40/50/100 Gb/s
Network Connector Type: Dual-port QSFP112
Application: Ethernet
Host Interface: PCIe x16 Gen 4.0/5.0 @ SERDES 16/32 GT/s
Technology: RoCE
Adapter Card Size: 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
RoHS: RoHS Compliant
Temperature: Operational 0°C to 55°C; Storage -40°C to 70°C
Supported Operating Systems: Linux, Windows, VMware
Applications
Product Highlights
GPU Direct RDMA

GPUDirect RDMA allows data to be transferred directly from the memory of one GPU to that of another, enabling direct remote access between GPU memories. This greatly improves the efficiency of GPU cluster operations, delivering significant gains in both bandwidth and latency.

Advanced Network Offloads

Accelerates data-plane, networking, storage, and security processing, enabling in-network computing and in-network memory capabilities. Offloading CPU-intensive I/O operations improves host efficiency.

Accelerating Network Performance

By employing accelerated switching and packet processing technologies, network performance can be enhanced while reducing CPU overhead in the transmission of Internet Protocol (IP) packets, thereby freeing up more processor cycles to run applications.

Questions & Answers
Customer Reviews
Quality Certifications
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE
What We Offer