NADDOD 1.6T InfiniBand XDR transceivers deliver stable single-mode transmission over 500m and 2km link distances, and are specifically designed for efficient artificial intelligence training, HPC, and data centers. These 1.6T transceivers are rigorously tested for ultra-low bit error rates, low latency, and cost-effectiveness, delivering strong interoperability with NVIDIA XDR systems.
NADDOD 1.6T OSFP224 to OSFP224 DAC cable is designed for high-performance InfiniBand XDR networks, supporting data rates up to 1.6Tbps (224Gbps per lane, PAM4). This 1.6T DAC cable offers ultra-low latency and power-efficient connectivity between OSFP224 ports. It is primarily used for direct interconnection between 1.6T OSFP switches and is fully compatible with NVIDIA Quantum-X800 Q3400-RA switches. It is ideal for short-reach, high-speed links in AI, HPC, and data center environments requiring 1.6T throughput.
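As a rough sanity check on the lane arithmetic, the sketch below works through how eight 224Gbps PAM4 lanes add up to a 1.6T port. The 8-lane count and the ~200 Gb/s usable payload per 224 Gb/s signaling lane are assumptions based on common OSFP224/XDR configurations, not figures from this catalog.

```python
# Aggregate throughput of one OSFP224 XDR port, assuming 8 electrical lanes
# (assumption: ~224 Gb/s signaling per lane carries ~200 Gb/s of usable data).
LANES = 8
PAYLOAD_PER_LANE_GBPS = 200
SIGNALING_PER_LANE_GBPS = 224

payload_tbps = LANES * PAYLOAD_PER_LANE_GBPS / 1000      # usable data rate
signaling_tbps = LANES * SIGNALING_PER_LANE_GBPS / 1000  # raw rate on the wire

print(f"payload:   {payload_tbps} Tb/s")    # 1.6 Tb/s, the headline figure
print(f"signaling: {signaling_tbps} Tb/s")  # 1.792 Tb/s including overhead
```

The same lane math explains the breakout cables that follow: splitting the eight lanes 4+4 yields two 800G ports, and 2+2+2+2 yields four 400G ports.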
NADDOD 1.6T OSFP224 to 2x800G OSFP224 DAC splitter cable is designed for InfiniBand XDR networks, enabling high-speed breakout connectivity from a single 1.6T OSFP224 port to two 800G OSFP224 ports. This 1.6T breakout DAC is mainly used for 1.6T switch to 800G NIC interconnects. It is fully compatible with the NVIDIA Quantum-X800 Q3400-RA switch and the NVIDIA ConnectX-8 C8180 SuperNIC. NADDOD 1.6T breakout DAC cables offer a cost-effective and scalable solution for intensive networks within AI, HPC, and data center infrastructures.
NADDOD’s 1.6T OSFP224 to 4x400G OSFP224 DAC splitter cable is purpose-built for InfiniBand XDR networks, providing high-speed breakout connectivity from a single 1.6T OSFP224 port to four 400G OSFP224 ports. Primarily deployed for linking 1.6T switches to 400G NICs, this breakout DAC ensures seamless interoperability with NVIDIA Quantum-X800 Q3400-RA switches and NVIDIA ConnectX-8 C8180 SuperNICs. Offering a cost-efficient and scalable interconnect solution, it is ideal for demanding AI, HPC, and modern data center environments.
NADDOD 800G OSFP224 InfiniBand XDR transceivers come in the OSFP form factor and operate over single-mode fiber. The product portfolio comprises 800G DR4 (500m). They are used to link Quantum-X800 QM3x00 switches, via twin-port OSFP 2x800Gb/s transceivers, to the dual 800Gb/s ConnectX-8 mezzanine cards located inside high-performance GB200-based, liquid-cooled systems.
NADDOD delivers 1.6T/800G InfiniBand XDR solutions for high-speed networking in HPC and AIDC, combining low-power optical transceivers with high-performance copper connectivity. The optical transceiver portfolio comprises the OSFP-1.6T-2xDR4H and OSFP-800G-DR4H, both rigorously tested for ultra-low bit error rates, low latency, and best out-of-the-box installation experience, performance, and durability. The active copper cable OSFP-1.6T-AC3H, active copper cable splitters O2O224-1.6T-ACCH, and the passive copper cable OSFP-1.6T-CU0-9H offer cost-effective, low-latency, and power-efficient connectivity with high bandwidth density for seamless 1.6T performance in dense data center environments.
NADDOD 800G InfiniBand NDR modules come in the OSFP form factor and operate over single-mode and multimode fiber. The product portfolio comprises 800G 2×SR4, 2×DR4, and 2×FR4, with interconnection distances ranging from 30m to 2km. NADDOD 800G 2×SR4 modules embed Broadcom VCSELs and Broadcom DSPs to deliver high-quality signal transmission. NADDOD 800G 2×DR4 and 2×FR4 modules can interoperate with low-cost DACs, given an appropriate data center layout, providing unparalleled bandwidth and efficiency.
NADDOD 800G Twin-Port OSFP to OSFP InfiniBand NDR Direct Attach Cables (DAC) feature an advanced twinax construction with eight high-speed electrical copper pairs, enabling an aggregate bandwidth of 800Gbps. These 800G DAC cables deliver high-performance, low-latency connectivity for short-distance transmission in HPC and AI data center environments. NADDOD 800G OSFP DAC cables are available in both passive and active copper cables, and are fully tested on NVIDIA QM9700/9790 switches, making them ideal for interconnections between Quantum-2 InfiniBand switches.
NADDOD 800G breakout InfiniBand NDR DAC includes 800G Twin-port 2x400Gb/s OSFP to 2x400Gb/s OSFP InfiniBand NDR passive and active copper splitter cables. It provides connectivity between system units with an 800Gb/s OSFP port on one side and two 400Gb/s OSFP ports on the other. It is a high-quality, cost-effective alternative to fiber optics in 800Gb/s to 400Gb/s applications for high-speed, efficient, and sustainable network connectivity.
NADDOD 800G breakout InfiniBand NDR DAC comprises Twin-port 2x400G OSFP to 4x200Gb/s OSFP InfiniBand NDR passive and active copper splitter cables. It provides connectivity between system units with an 800Gb/s OSFP port on one side and four 200Gb/s OSFP ports on the other. It is designed to provide ultra-low latency, ensuring real-time performance for the most demanding applications.
NADDOD 400G InfiniBand NDR modules come in OSFP and QSFP112 form factors, operating over multimode and single-mode fiber. The product portfolio comprises 400G OSFP/QSFP112 SR4 and 400G OSFP DR4, with transmission distances from 50 to 100 meters. 400G IB modules are often paired with 800G transceivers, with compatible fiber cables ensuring seamless connectivity and interoperability between 400G and 800G modules. NADDOD 400G IB transceivers offer higher speeds and greater efficiency for bandwidth-intensive applications.
NADDOD InfiniBand NDR 400G OSFP to OSFP DAC cables are designed for 400G InfiniBand networking, AI clusters, and high-performance computing (HPC) interconnects, supporting data rates up to 400Gbps. This InfiniBand 400G DAC cable provides low-latency, power-efficient connectivity between OSFP ports. It is primarily used for interconnection between 400G NICs and is fully compatible with NVIDIA NDR HCAs.
NADDOD 400G breakout InfiniBand NDR direct attach copper cables (DAC) have an 8-channel twin-port OSFP end with a finned-top form factor for use in Quantum-2 and Spectrum-4 switch cages. They provide connectivity between system units with an OSFP 400Gb/s connector on one side and two separate QSFP56 200Gb/s connectors on the other. NADDOD breakout DACs deliver lower latency over short-distance transmission and suit a variety of application scenarios such as data centers, high-performance computing, and storage networks, especially environments that require rapid deployment and flexible configuration.
NADDOD's 400G breakout InfiniBand HDR DAC includes 400G OSFP to 4x100Gb/s QSFP56 InfiniBand HDR passive copper splitter cable. It provides connectivity between system units with an OSFP twin-port 2x200Gb/s port on one side and four 100Gb/s QSFP56 ports on the other side, supporting InfiniBand and Ethernet networking.
NADDOD 400G breakout InfiniBand NDR active optical cables (AOC) provide connectivity between system units with an OSFP 400Gb/s connector on one side and two separate QSFP56 200Gb/s connectors on the other, such as a switch and two servers. NADDOD 400G AOC supports high-speed data transmission at rates up to 400Gbps in a smaller, lighter form, meeting the demand for high bandwidth and cabling in data centers and high-performance computing environments.
NADDOD provides four types of MPO-12 APC female single-mode and multimode elite trunk cables. NADDOD IB NDR APC fiber patch cords minimize reflections at the fiber connection and meet the stringent optical-surface requirements of IB NDR transceivers. Paired with NADDOD APC patch cords, NDR transceivers reduce signal distortion and achieve a raw physical BER better than 1E-8, making them fully compatible with InfiniBand systems. NADDOD provides reliable solutions for high-quality signal transmission.
NADDOD's cutting-edge InfiniBand NDR connectivity product line includes 800Gb/s OSFP, 400Gb/s OSFP and QSFP112 transceivers, 800G passive and active direct attach copper cables (DAC), 800G and 400G breakout DAC, 400Gb/s OSFP to 2x QSFP56 breakout NDR MMF active optical cables (AOC), and four types of fiber cables. 800G/400G NDR cables and transceivers are commonly used to empower next-generation data centers and high-performance computing environments, offering unparalleled speed and efficiency for the most demanding network applications.
NADDOD's 200G InfiniBand HDR QSFP56 AOC is a QSFP56 VCSEL-based (Vertical-Cavity Surface-Emitting Laser) active optical cable (AOC) designed for use in 200Gb/s InfiniBand HDR systems. The 200G AOC offers high port density and configurability, achieving far longer reach than passive copper cables within data centers. Rigorous production testing ensures the best out-of-the-box installation experience, performance, and durability.
NADDOD's 200G InfiniBand HDR QSFP56 DAC primarily enables 200G high-bandwidth links. It supports 200G Ethernet rates and InfiniBand HDR, providing a QSFP56-to-QSFP56 direct-attach copper cabling solution. Suited to very short links, it offers an affordable way to establish 200-gigabit links between adjacent racks.
NADDOD's 200G InfiniBand HDR QSFP56 breakout AOC is a QSFP56 to 2x100G QSFP56 breakout active optical cable operating over multimode fiber. It complies with IEEE 802.3, the QSFP56 MSA, SFF-8024, SFF-8679, SFF-8665, SFF-8636, and InfiniBand HDR. Connecting a 200G QSFP56 port on one end to two 100G QSFP56 ports on the other, it is well suited for quick, easy connections within a rack or between adjacent racks.
NADDOD's InfiniBand HDR 200G 2xQSFP56 to 2xQSFP56 active optical splitter cable (AOC), compatible with Mellanox® MFS1S90-HxxxE, is a QSFP56 VCSEL-based (Vertical-Cavity Surface-Emitting Laser) cable designed for use in 200Gb/s InfiniBand HDR (High Data Rate) systems. It provides cross-connectivity between a 200G top-of-rack (ToR) switch port configured as 2x100G ports and two 200G spine switch ports also configured as 2x100G ports. This offers substantial CAPEX savings by reducing the required number of spine ports and cables. For data centers with more than 1600 servers, it also eliminates the need for third-layer switches.
NADDOD's 200G InfiniBand HDR QSFP56 breakout cable (DAC) is a high-speed, cost-effective alternative to fiber optics for 200Gb/s InfiniBand HDR applications. It provides connectivity between system units with a 200Gb/s HDR QSFP56 port on one side and two 100Gb/s QSFP56 ports on the other. Rigorous cable production testing ensures the best out-of-the-box installation experience, performance, and durability.
NADDOD offers HDR-capable QSFP56 multimode and single-mode transceivers reaching up to 100m and 2km, respectively. NADDOD 200G IB HDR transceivers include 200G QSFP56 SR4 and DR4 modules, designed for use in high-performance 200Gb/s InfiniBand HDR systems.
The InfiniBand HDR connectivity product line offers 200Gb/s QSFP56 IB HDR MMF AOCs, active optical splitter cables, passive direct-attach copper cables (DAC), passive copper hybrid cables, and optical transceivers. 200Gb/s QSFP56 InfiniBand HDR cables and transceivers are commonly used throughout InfiniBand network infrastructure to connect top-of-rack switches (such as the QM8700/QM8790) to NVIDIA GPUs (such as the A100/H100/A30), CPU servers, and storage network adapters (such as ConnectX-5/6/7 VPI), as well as for switch-to-switch connections. They deliver up to 50% cost savings on InfiniBand HDR networks for GPU-accelerated high-performance computing (HPC) cluster applications such as model rendering, artificial intelligence (AI), deep learning (DL), and NVIDIA Omniverse applications.
NADDOD's 100G InfiniBand EDR QSFP28 AOC is an active optical fiber assembly with QSFP28 connectors, operating over multimode fiber (MMF). The AOC complies with the QSFP28 MSA and RoHS-6 standards. It offers a more cost-effective solution than using separate optical transceivers and fiber patch cables. NADDOD 100G InfiniBand EDR QSFP28 AOCs are well suited for 100Gbps connections within a rack and between adjacent racks.
NADDOD's 100G InfiniBand EDR QSFP28 DAC primarily enables 100G high-bandwidth links. It supports 100G Ethernet rates and InfiniBand EDR, providing a QSFP28-to-QSFP28 direct-attach copper cabling solution. Suited to very short links, it offers an affordable way to establish 100-gigabit links between adjacent racks.
NADDOD 100G InfiniBand EDR modules come in the QSFP28 form factor and use single-mode and multimode fiber as the media. The product portfolio comprises 100G QSFP28 SR4, PSM4, LR4, and CWDM4. 100G QSFP28 optical modules support transmission distances from 70 meters to 10 kilometers, delivering low-latency, high-throughput communication, and are used by a broad range of customers.
The InfiniBand EDR line includes InfiniBand 100Gb QSFP28 to QSFP28 EDR AOCs, EDR DACs, and EDR transceivers, making it ideal for connecting to Mellanox EDR 100Gb switches such as the SB7800/SB7890. It delivers up to 50% savings on GPU-accelerated computing for high-performance connectivity when running HPC, cloud, model rendering, storage, and NVIDIA Omniverse applications over InfiniBand 100Gb networks.
The NVIDIA MMS4X00-NS and MMA4Z00-NS are among the InfiniBand transceivers delivering high performance, low latency, and low BER for AIDC. NADDOD offers highly cost-effective InfiniBand transceivers that are 100% compatible with NVIDIA systems; click 800G Transceivers to learn more.
NVIDIA LinkX InfiniBand transceivers comprise the MMS4X00-NS and MMA4Z00-NS, well suited to high-quality 50m and 100m data transmission in data centers with the best ROI and lowest TCO. NADDOD offers highly cost-effective NDR products that are 100% compatible with NVIDIA systems; click 800G Transceivers to learn more.
NVIDIA ConnectX-8 SuperNICs include the ConnectX-8 800G options C8180 (900-9X81E-00EX-DT0) and C8180 (900-9X81E-00EX-ST0), the ConnectX-8 400G SuperNIC C8240 (900-9X81Q-00CN-ST0), and the companion PCIe Auxiliary Card Kit C8180X (930-9XAX6-0025-000), designed for pairing with the C8180 (900-9X81E-00EX-ST0) and C8240 (900-9X81Q-00CN-ST0) adapters. These products enable advanced routing and telemetry-based congestion control capabilities, delivering maximum network performance and peak AI workload efficiency, while supporting both InfiniBand and Ethernet networking.
The ConnectX-7 smart host channel adapter (HCA) offers ultra-low latency, 400Gb/s throughput, and advanced NVIDIA In-Network Computing acceleration engines for enhanced performance. It provides the scalability and robust features essential for supercomputers, artificial intelligence, and hyperscale cloud data centers.
The ConnectX-6 smart host channel adapter (HCA) utilizes the NVIDIA Quantum InfiniBand architecture to deliver exceptional performance and NVIDIA In-Network Computing acceleration engines, enhancing efficiency in HPC, artificial intelligence, cloud, hyperscale, and storage environments.
NVIDIA® ConnectX® InfiniBand smart adapters achieve outstanding performance and scalability through faster speeds and innovative In-Network Computing. NVIDIA ConnectX effectively lowers operational costs, boosting ROI for high-performance computing (HPC), machine learning (ML), advanced storage, clustered databases, and low-latency embedded I/O applications.
The NVIDIA Quantum-X800 Q3200-RA and Q3400-RA switches deliver high performance for AI workloads. The Q3400-RA leverages 200Gb/s-per-lane serializer/deserializer (SerDes) technology to significantly enhance network performance and bandwidth. NADDOD SiPh-based OSFP-1.6T-2xDR4H modules typically connect to the Q3400-RA, delivering high bandwidth and low power consumption in hyperscale data centers. The NVIDIA Quantum-X800 Q3400-RA features 144 ports at 800Gb/s distributed across 72 Octal Small Form-factor Pluggable (OSFP) cages, with ultra-low latency and high bandwidth for next-generation AI.
The NVIDIA Quantum-2-based QM9700 and QM9790 switch systems deliver an unprecedented 64 ports of 400Gb/s NDR InfiniBand in a 1U standard chassis design. A single switch carries an aggregate bidirectional throughput of 51.2 terabits per second (Tb/s), with a landmark capacity of more than 66.5 billion packets per second (BPPS).
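The headline throughput figure follows directly from the port count. A minimal arithmetic sketch, assuming all 64 ports run at the full 400 Gb/s in both directions simultaneously:

```python
# Aggregate bidirectional throughput of a 64-port 400G switch.
PORTS = 64
PORT_RATE_GBPS = 400
DIRECTIONS = 2  # "bidirectional" counts traffic in both directions

aggregate_tbps = PORTS * PORT_RATE_GBPS * DIRECTIONS / 1000
print(f"{aggregate_tbps} Tb/s")  # matches the quoted 51.2 Tb/s
```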
The NVIDIA InfiniBand QM8700 Series delivers high-performance networking solutions with low latency and exceptional scalability. Designed for supercomputing and data center environments, this series enhances connectivity and efficiency for demanding applications, ensuring reliable performance in modern computing landscapes.
NVIDIA's InfiniBand Switching solutions are designed to optimize high-performance computing and cloud-native environments. With features like adaptive routing, self-healing capabilities, and enhanced quality of service (QoS), these solutions ensure reliable, scalable connectivity for supercomputing applications, driving efficiency and performance in your data center.
The BlueField-3 DPU is a cloud infrastructure processor that empowers organizations to build software-defined, hardware-accelerated data centers from the cloud to the edge. BlueField-3 DPUs offload, accelerate, and isolate software-defined networking, storage, security, and management functions, significantly enhancing data center performance, efficiency, and security.
The BlueField-3 SuperNIC is a novel class of network accelerator that’s purpose-built for supercharging hyperscale AI workloads. For modern AI clouds, the BlueField-3 SuperNIC enables secure multi-tenancy while ensuring deterministic performance and performance isolation between tenant jobs.
The NVIDIA DPUs and SuperNICs provide specialized hardware accelerators for modern data centers. BlueField DPUs handle infrastructure offload tasks like networking, storage, and security, freeing up critical CPU resources. SuperNICs deliver ultra-low latency connectivity optimized for large-scale AI and HPC environments.