2Q2Q56-200G-A3H Compatible AOC 3m (10ft) 2x200Gb/s QSFP56 to 2x200Gb/s QSFP56 IB HDR Active Optical Cross Connect Splitter H-Cable (850nm, MMF, LSZH)
NADDOD 2Q2Q56-200G-A3H is a QSFP56 VCSEL-based (Vertical Cavity Surface-Emitting Laser), cost-effective 2x 200Gb/s to 2x 200Gb/s active optical splitter cable (AOC), designed for use in 200Gb/s InfiniBand (IB) HDR (High Data Rate) systems. It provides cross-connect capability between ToR (Top of Rack) and Spine switches. The cable enables an HDR InfiniBand QSFP56 switch port to operate as 2x HDR100 ports, doubling the effective port count and making a non-blocking fat-tree topology possible in two switching tiers. This eliminates the third switching tier that a traditional cluster design requires to reach 1600 to 3200 HDR100 ports, resulting in significant CAPEX and OPEX savings. Removing a switching layer also lowers cluster latency, which improves application performance.
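The port math behind the "1600 to 3200 HDR100 ports" claim can be sketched quickly. The snippet below assumes a 40-port HDR switch radix (typical for QSFP56 HDR switches; treat the exact radix as an illustrative assumption) and uses the standard result that a non-blocking two-tier fat-tree built from radix-r switches supports r²/2 end ports.

```python
# Back-of-the-envelope port math for a non-blocking two-tier fat-tree.
# The 40-port HDR radix below is an illustrative assumption, not a
# specification of any particular switch model.

def max_end_ports(radix: int) -> int:
    """Max end ports in a non-blocking two-tier fat-tree: radix^2 / 2."""
    return radix * radix // 2

hdr_radix = 40                 # native 200G HDR ports per switch
hdr100_radix = 2 * hdr_radix   # each HDR port split into 2x HDR100 via the splitter cable

print(max_end_ports(hdr_radix))     # -> 800 HDR ports (1600 HDR100-equivalent)
print(max_end_ports(hdr100_radix))  # -> 3200 HDR100 ports in two tiers
```

Under these assumptions, splitting each HDR port into two HDR100 ports doubles the effective switch radix, which quadruples the two-tier fat-tree capacity (800 HDR ports become 3200 HDR100 ports) without adding a third switching tier.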
Using NVIDIA/Mellanox InfiniBand NDR/HDR/EDR switches and NICs across all series, the NADDOD Test Center extensively tests every part on live GPU servers and NVIDIA/Mellanox switching systems to guarantee its performance and ensure 100% compatibility of our InfiniBand networking products.
NADDOD's 200G InfiniBand HDR AOC matches NVIDIA/Mellanox technology and performance standards, including low power consumption, high bandwidth, high density, low latency, and low insertion loss, for full compatibility with InfiniBand systems. Bandwidth and latency tests in an HDR network environment are verified on NVIDIA/Mellanox equipment, and in practical supercomputing deployments NADDOD InfiniBand products have been verified to be fully comparable to the original in performance and quality.
NADDOD continuously achieves efficient integration with NVIDIA products and solutions in cloud computing, artificial intelligence, HPC, and other areas. For data-intensive applications, NADDOD's high-performance networking products were deployed at the core of well-known national supercomputing centers in January 2022. These solutions include high-performance switches, optical connectors, and NICs, supporting the centers' large-scale scientific and engineering computing workloads with high-performance networking.