Item Spotlights
NVIDIA/Mellanox® MFS1S50-H010V Compatible 10m (33ft) 200G InfiniBand HPC Active Optical Cable (QSFP56 to 2xQSFP56, OM3 MMF, Black Pulltab)
NADDOD is an Elite Partner of NVIDIA network products and works closely with NVIDIA to combine optical network products and solutions, especially in InfiniBand high-performance network construction and application acceleration, backed by deep technical collaboration and extensive project implementation experience. We offer the Mellanox MFS1S50-H010V Compatible 10m (33ft) 200G QSFP56 to 2x100G QSFP56 Breakout HDR AOC, which is fully adapted to NVIDIA switches and NICs. It delivers high transmission efficiency in supercomputers and other large-scale systems with strict requirements, guarantees 100% compatibility between connectors and devices, and ensures stable transmission and high network reliability.
Using all series of NVIDIA/Mellanox InfiniBand NDR/HDR/EDR switches and NICs, the NADDOD Test Center has extensively tested every part with live GPUs and NVIDIA/Mellanox switching systems to verify performance and ensure 100% compatibility of our InfiniBand networking products.
NADDOD's 200G InfiniBand HDR AOC meets the same technology and performance standards as NVIDIA/Mellanox, including low power consumption, high bandwidth, high density, low latency, and low insertion loss, delivering 120% of the performance at 50% of the price of the original. Bandwidth and latency tests in an HDR network environment have been verified over NVIDIA/Mellanox equipment connections, and practical supercomputing deployments built on NADDOD's InfiniBand products have been verified to be fully comparable to the original in performance and quality.
NADDOD continues to integrate efficiently with NVIDIA products and solutions in cloud computing, artificial intelligence, HPC, and other areas. For data-intensive applications, NADDOD's high-performance networking products have been deployed at the core of well-known national supercomputing centers since January 2022. These solutions include high-performance switches, optical connectors, and NICs, supporting renowned supercomputing centers in carrying out large-scale scientific and engineering computing tasks over high-performance networks.