Item Spotlights
N9570-128QC, 128-Port Ethernet L3 4U Data Center Switch, 128x 400Gb QSFP112, RoCEv2 Support, NVIDIA Spectrum-4
N9570-128QC is a next-generation, high-density 400G switch built on open principles, introduced by Naddod specifically for Generative AI applications. It offers robust forwarding performance and extensive AI networking capabilities. Featuring 128 x 400G QSFP112 ports and an independent management plane, the N9570-128QC runs the Naddod Network Operating System (NOS), based on SONiC. It is suitable for deployment as both an AIGC network switch and a data center switch, addressing the need for high-performance AI Fabric and data center networking.
Optimized for high-density AI computing and next-gen data centers, the N9570-128QC provides 128x 400G ports in a compact 4U design, ensuring efficient, high-performance networking.
Powered by the NVIDIA Spectrum-4 high-performance chip, the N9570-128QC supports RoCEv2 lossless networking and full L2/L3 forwarding, delivering 1.6x higher AI networking performance, ultra-low latency, and reliability. Ideal for data centers, AI/ML clusters, HPC, and distributed storage, it combines standard Ethernet connectivity with performance isolation for mission-critical workloads.
Adaptive Routing (AR) intelligently balances traffic at the packet level across multipath networks, reducing latency, maximizing bandwidth utilization, and significantly boosting throughput. Its end-to-end optimization eliminates congestion bottlenecks, making it ideal for HPC, AI workloads, and other bandwidth-intensive environments.
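Conceptually, per-packet adaptive routing steers each packet to the least-congested of the available equal-cost uplinks instead of hash-pinning whole flows to a single path. The sketch below is a minimal, simplified illustration of that idea, not the Spectrum-4 implementation; the class names and the queue-depth congestion signal are assumptions.

```python
# Minimal sketch of per-packet adaptive routing (illustrative only; not the
# Spectrum-4 implementation). Each packet is steered to the egress port with
# the shallowest queue instead of being hash-pinned to one path.
from dataclasses import dataclass

@dataclass
class EgressPort:
    name: str
    queue_depth_bytes: int = 0  # assumed congestion signal

@dataclass
class AdaptiveRouter:
    ports: list  # equal-cost uplinks toward the destination

    def forward(self, packet_len: int) -> EgressPort:
        # Pick the least-congested uplink for this individual packet.
        best = min(self.ports, key=lambda p: p.queue_depth_bytes)
        best.queue_depth_bytes += packet_len
        return best

    def drain(self, bytes_per_port: int) -> None:
        # Model the links draining their queues over time.
        for p in self.ports:
            p.queue_depth_bytes = max(0, p.queue_depth_bytes - bytes_per_port)

# Example: 4 equal-cost uplinks; packets spread across whichever is least loaded.
router = AdaptiveRouter(ports=[EgressPort(f"uplink{i}") for i in range(4)])
for _ in range(8):
    chosen = router.forward(packet_len=4096)
    print("sent 4 KB via", chosen.name)
    router.drain(bytes_per_port=2048)
```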
The open-source SONiC network OS features a hardware-agnostic design, enabling cross-vendor compatibility and seamless third-party integration via SNMP and RESTful APIs to meet diverse network needs. Backed by an active community, it helps resolve technical issues efficiently, supporting flexible management and stable operation.
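As one illustration of that kind of third-party integration, the minimal sketch below polls standard MIB-2 objects over SNMP using the pysnmp library (4.x-style synchronous API). The management IP, community string, and SNMP version are placeholder assumptions, not Naddod defaults.

```python
# Minimal sketch: query standard MIB-2 objects from a SONiC-based switch over
# SNMP using pysnmp (4.x-style synchronous hlapi). IP and community string are
# placeholders; adjust to the actual management-plane configuration.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),       # SNMPv2c community (assumed)
        UdpTransportTarget(("192.0.2.10", 161)),  # switch mgmt IP (placeholder)
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),  # system description
        ObjectType(ObjectIdentity("IF-MIB", "ifNumber", 0)),      # number of interfaces
    )
)

if error_indication:
    print("SNMP error:", error_indication)
elif error_status:
    print("SNMP error status:", error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```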
By combining end-to-end adaptive RDMA routing with lossless networking technologies, the N9570-128QC significantly raises the effective bandwidth of the network, increasing utilization from 60% to 95% of line rate.
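As a back-of-the-envelope check, the snippet below works out what the 60% and 95% figures mean on a single 400G port. The only inputs are the port line rate and the two utilization figures quoted above; the resulting ~1.58x gain is roughly in line with the 1.6x performance figure cited earlier.

```python
# Back-of-the-envelope view of the effective-bandwidth claim on one 400G port.
LINE_RATE_GBPS = 400          # per-port line rate of the N9570-128QC

baseline_util = 0.60          # effective bandwidth without adaptive routing
optimized_util = 0.95         # claimed effective bandwidth with adaptive RDMA routing

baseline_gbps = LINE_RATE_GBPS * baseline_util    # 240 Gb/s usable
optimized_gbps = LINE_RATE_GBPS * optimized_util  # 380 Gb/s usable

print(f"baseline:    {baseline_gbps:.0f} Gb/s effective")
print(f"optimized:   {optimized_gbps:.0f} Gb/s effective")
print(f"improvement: {optimized_gbps / baseline_gbps:.2f}x")  # ~1.58x
```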
N9570-128QC enhances NCCL performance in deep learning and HPC through high-bandwidth, low-latency connections with topology awareness and dynamic load balancing, minimizing data transmission time and eliminating multi-GPU/multi-node bottlenecks. This has been validated in NCCL testing across two scenarios:
Scenario 1: Node2-Node31 (30 servers) run All Reduce/All to All normally, while 8 GPUs on Node32 generate ib_write_bw noise traffic toward 1 GPU on Node1.
Scenario 2: Group 1 (Node1, Node9, Node17, Node25; 4 servers) runs All Reduce/All to All normally, while Groups 2-8 run All Reduce in the moderate-noise case and All to All in the severe-noise case to simulate intensive workloads.
N9570-128QC AI Fabric senses NCCL traffic, optimizes NCCL performance, and provides consistent, predictable network performance across different background-noise scenarios: NCCL All Reduce performance improves by up to 210% (severe-noise scenario), and NCCL All to All performance improves by up to 45% (mild-noise scenario).
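For reference, collective benchmarks of the kind described above are commonly driven with the nccl-tests binaries (all_reduce_perf, alltoall_perf) launched under MPI, with ib_write_bw from the perftest suite generating RDMA noise traffic. The sketch below is a hypothetical launcher; hostnames, rank counts, message sizes, and device names are assumptions, not the exact configuration behind the published results.

```python
# Hypothetical launcher for NCCL collective benchmarks similar to the test
# scenarios above. Hostnames, slot counts, paths, and sizes are placeholders,
# not the exact configuration used in the published results.
import subprocess

# Node2-Node31, 8 GPU slots per server (assumed naming and slot spec).
HOSTS = ",".join(f"node{i:02d}:8" for i in range(2, 32))

def run_nccl_test(binary: str) -> None:
    """Run an nccl-tests binary (e.g. all_reduce_perf or alltoall_perf) via mpirun."""
    cmd = [
        "mpirun", "-np", "240", "-H", HOSTS,  # 30 servers x 8 GPUs (assumed)
        binary,
        "-b", "8M", "-e", "8G", "-f", "2",    # sweep message sizes 8 MB -> 8 GB
        "-g", "1",                            # one GPU per MPI rank
    ]
    subprocess.run(cmd, check=True)

def start_noise_traffic() -> subprocess.Popen:
    """Start ib_write_bw (perftest) as background RDMA noise toward Node1."""
    return subprocess.Popen(
        ["ib_write_bw", "-d", "mlx5_0", "--run_infinitely", "node01"]
    )

if __name__ == "__main__":
    noise = start_noise_traffic()
    try:
        run_nccl_test("all_reduce_perf")
        run_nccl_test("alltoall_perf")
    finally:
        noise.terminate()
```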