Purpose-Built for High Performance AI Workloads

Adaptive routing, in-network computing, and a robust congestion-control architecture empower InfiniBand to meet the rigorous demands of HPC and AI clusters. These optimizations ensure seamless data flow, eliminate bottlenecks, and enable efficient resource utilization, driving superior performance and operational efficiency for complex infrastructures.

High Performance, Low Latency

Lossless Transmission with Credit-Based Flow Control

Adaptive Routing for Optimal Load Distribution

In-Network Computing with SHARP Protocol

Scalability with Flexible Topologies

Stability and Resilience with Self-Healing Technology

Scalable Architecture for Peak AI Performance with InfiniBand

For large-scale InfiniBand AI cluster deployments with NVIDIA H100 and H200 GPUs or DGX-class systems (GB200, GB300), a fat-tree network topology is paired with these platforms, making them well suited to intensive AI workloads.
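To illustrate how a fat-tree fabric scales, the sketch below (a hypothetical helper, not NADDOD or NVIDIA tooling) computes the maximum non-blocking end-port count of a two-tier spine-leaf fat-tree for a given switch radix:

```python
def two_tier_fat_tree_capacity(radix: int) -> int:
    """Maximum non-blocking end ports in a two-tier (spine-leaf)
    fat-tree built from switches with `radix` ports each.

    Each leaf dedicates half its ports to servers and half to
    spine uplinks; each spine connects to every leaf, so the
    fabric supports at most `radix` leaves.
    """
    down_per_leaf = radix // 2   # leaf ports facing servers
    max_leaves = radix           # limited by spine port count
    return max_leaves * down_per_leaf

# Example: a 64-port switch yields a 2048-port two-tier fabric
print(two_tier_fat_tree_capacity(64))  # → 2048
```

Adding a third tier multiplies capacity again at the cost of extra hops, which is why large AI clusters trade off tier count against latency.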

Immersion Cooling Solution

NADDOD InfiniBand Network Solutions for Small-to-Large AI Clusters

Flexible solutions tailored to varying AI cluster sizes, data center layouts, and connection distances.

Small-Scale InfiniBand AI Data Center Solutions

High performance and cost-effective way to upgrade or build more scalable AI clusters.

NDR Multimode Transceivers Deployment

InfiniBand NDR multimode transceivers deliver cost-effective, reliable performance over short distances.

Typical Use Case
Spine-to-Leaf and Leaf-to-Server connections under 50 meters.
Recommendations

Mid-to-Large InfiniBand AI Data Center Solutions

High performance and cost-effective way to upgrade or build more scalable AI clusters.

NDR Single-Mode Transceivers and DAC/ACC Cables

InfiniBand NDR single-mode transceivers enable stable, long-distance connections, while DAC cables lower cost and power consumption. Together they provide an efficient solution for mid-to-large clusters.

Single-Mode Transceivers + 800G DAC/ACC Cables
Typical Use Case
Spine and Leaf switches colocated or in adjacent racks for short-distance DAC connections. Single-mode transceivers handle longer Server-to-Leaf distances with high-speed, low-latency performance.
Recommendations
Spine-to-Leaf: 800G OSFP DAC/ACC cables (support up to 5 meters)
Leaf-to-Server: 800G OSFP 2xDR4 and 400G OSFP DR4 (both support up to 100 meters)
Single-Mode Transceivers + 800G Breakout DAC/ACC Cables
Typical Use Case
Breakout DAC cables connect servers to leaf switches in adjacent racks. For Leaf-to-Spine distances exceeding 50 meters and up to 2 kilometers, single-mode transceivers provide reliable, high-performance connectivity.
Recommendations
Spine-to-Leaf: 800G OSFP 2xFR4 - 800G OSFP 2xFR4 (supports up to 2 kilometers; suitable for inter-building connections) or 800G OSFP 2xDR4 - 800G OSFP 2xDR4 (optimized for distances under 500 meters with high port density) transceivers.
Leaf-to-Server: 800G OSFP Breakout DAC/ACC cables (support up to 5 meters)
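The distance thresholds in the recommendations above can be condensed into a small selection helper. This is an illustrative sketch only; the part names and reach figures mirror the recommendations quoted in this guide, and the function itself is hypothetical:

```python
def pick_media(link: str, distance_m: float) -> str:
    """Suggest a transceiver or cable for a link role and distance,
    following the reach figures listed in the recommendations above.
    Illustrative only; not an official product selector.
    """
    if distance_m <= 5:
        return "800G OSFP DAC/ACC (breakout variant for Leaf-to-Server)"
    if link == "leaf-to-server" and distance_m <= 100:
        return "800G OSFP 2xDR4 or 400G OSFP DR4"
    if link == "spine-to-leaf" and distance_m <= 500:
        return "800G OSFP 2xDR4 pair"
    if link == "spine-to-leaf" and distance_m <= 2000:
        return "800G OSFP 2xFR4 pair"
    raise ValueError("distance exceeds the options covered in this guide")

print(pick_media("spine-to-leaf", 300))  # → 800G OSFP 2xDR4 pair
```

In practice the choice also depends on port density, power budget, and fiber plant, so such a rule of thumb is only a starting point.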

Large-scale InfiniBand AI Data Center Solutions

Meeting the extreme bandwidth and latency demands of trillion-parameter ultra-large models.

XDR/NDR Transceivers and DAC/ACC Cables

For large AI clusters, a hybrid approach combining DAC cables with XDR or NDR transceivers is commonly adopted. It combines ultra-high speeds for demanding AI workloads with cost-efficient infrastructure.

XDR Transceivers + XDR Breakout DAC/ACC Cables
Typical Use Case
When compute-node connectivity runs at 800 Gb/s, short-distance in-rack and inter-rack connections use 1.6T copper cables, while long-distance links adopt 1.6T XDR single-mode modules, reducing cost compared with using single-mode modules for all connections.
Recommendations
Spine-to-Leaf: 1.6T OSFP224 2xDR4 (supports up to 500 meters)

XDR and NDR Transceivers + NDR Breakout DAC/ACC Cables

Typical Use Case
During the shift from NDR to XDR networks, hybrid setups balance performance upgrades with efficiency. For compute nodes operating with 400 Gb/s NICs, the server side continues to operate with NDR optical modules and cables, while Spine-to-Leaf connections adopt XDR transceivers for maximum throughput.
Recommendations
Spine-to-Leaf: 1.6T OSFP224 2xDR4 (supports up to 500 meters)
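The cost argument for these hybrid copper-plus-optics designs can be made concrete with a back-of-the-envelope calculation. The sketch below uses placeholder prices (not real quotes) to show how keeping short links on DAC while reserving optics for long links reduces total interconnect cost:

```python
def hybrid_savings(short_links: int, long_links: int,
                   dac_cost: float, optic_cost: float) -> float:
    """Cost saved by running short links over copper (DAC) instead of
    using optical modules everywhere. All prices are placeholders.
    """
    all_optics = (short_links + long_links) * optic_cost
    hybrid = short_links * dac_cost + long_links * optic_cost
    return all_optics - hybrid

# Example with hypothetical per-link prices:
# 100 short links, 50 long links, $300 per DAC, $1500 per optic pair
print(hybrid_savings(100, 50, 300, 1500))  # → 120000.0
```

The savings scale linearly with the number of short links, which is why in-rack and adjacent-rack cabling is the first place large clusters substitute copper.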

Common Network Issues Affecting AI Training Efficiency

80% of AI Training Interruptions Stem from Network-Side Issues
95% of Network Problems Are Often Linked to Faulty Optical Interconnects

NADDOD - Safeguarding AI Clusters from Training Interruptions

Broadcom DSP and VCSEL Deliver Ultra-Low BER and Stability

Rigorous Compatibility Across NVIDIA Ecosystems

Industry-Leading Manufacturing Ensures Consistent Quality and Fast Delivery

Comprehensive Product Portfolio with Customizable Solutions

Extensive Expertise and Dedicated Support for InfiniBand Cluster Deployments

NADDOD InfiniBand Product Portfolio for AI Workloads

InfiniBand Transceivers and Cables

NVIDIA Quantum-X800 and Quantum-2 connectivity options enable flexible topologies with a variety of transceivers, MPO connectors, ACCs, and DACs. Backward compatibility connects 800Gb/s and 400Gb/s clusters to existing 200Gb/s or 100Gb/s infrastructures, ensuring seamless scalability and integration.

InfiniBand XDR Optics and Cables
InfiniBand NDR Optics and Cables

InfiniBand Switches

The NVIDIA Quantum‑X800 and Quantum‑2 are designed to meet the demands of high-performance AI and HPC networks, delivering 800 Gb/s and 400 Gb/s per port, respectively, and are available in air-cooled or liquid-cooled configurations to suit diverse data center needs.

Quantum-X800 Switches
Quantum-2 Switches

InfiniBand Adapters/NICs

The NVIDIA ConnectX-8 and ConnectX-7 InfiniBand adapters deliver unmatched performance for AI and HPC workloads, offering single or dual network ports with speeds of up to 800Gb/s and multiple form factors to meet diverse deployment needs.

ConnectX-8 Adapters
ConnectX-7 Adapters

What Customers Say

Our InfiniBand setup is using NADDOD's transceivers and cables—rock-solid performance!
We use NADDOD optics for our InfiniBand setup. Great quality and performance.
Perfect!👍Can’t imagine an easier solution for our infrastructure.
Reliable products and great support
NADDOD truly understands our needs! Best choice for AI network!

Contact us

Partner with NADDOD to Accelerate Your InfiniBand Network for Next-Gen AI Innovation
