Resources

NADDOD 1.6T OSFP224 Transceivers Achieve Full Interoperability in Connectivity Tests with NVIDIA MMS4A00

NADDOD 1.6T OSFP224 optical modules are fully compatible with NVIDIA Quantum-X800 switches and enable seamless interconnection with NVIDIA's 1.6T optical modules.
Claire
May 16, 2025
Introduction to Open-source SONiC: A Cost-Efficient and Flexible Choice for Data Center Switching

Discover how SONiC, the Linux-based open-source network OS, transforms data centers with hardware independence and automation, and explore NADDOD's SONiC-powered 51.2T/25.6T Ethernet AI switches.
Neo
Apr 24, 2025
OFC 2025 Recap: Key Innovations Driving Optical Networking Forward

Discover the highlights from OFC 2025: large-scale 800G/1.6T deployments, innovations in Silicon Photonics (SiPh), Linear Drive Pluggable Optics (LPO), and Co-Packaged Optics (CPO), novel fiber and connectivity advances, and chip-level leaps driving the next generation of high-bandwidth, energy-efficient optical communications.
Mark
Apr 17, 2025
NADDOD 1.6T XDR InfiniBand Module: Proven Compatibility with NVIDIA Quantum-X800 Switch

Following the launch of NVIDIA Quantum-X800 InfiniBand switches, NADDOD's 1.6T OSFP224 optical modules quickly completed compatibility tests, demonstrating exceptional performance and a low BER and reinforcing their reliability in powering AI infrastructure.
Dylan
Apr 8, 2025
Vera Rubin Superchip - Transformative Force in Accelerated AI Compute

NVIDIA's next-gen GPU superchip, Vera Rubin, pairs the Vera CPU with the Rubin GPU and, configured in an NVL144 rack, delivers 50 petaflops of FP4 inference performance. Read this article to learn about its architecture, rack configuration, and wide applications across the AI industry.
Brandon
Apr 2, 2025
NVIDIA GB300 Deep Dive: Performance Breakthroughs vs GB200, Liquid Cooling Innovations, and Copper Interconnect Advancements

Explore NVIDIA's revolutionary GB300 GPU architecture: its 1.5x FP4 performance boost, 288GB HBM3E memory, 1.6T networking, and groundbreaking liquid cooling solutions. Learn how GB300 surpasses GB200 in AI workloads and reshapes data center efficiency.
Abel
Mar 27, 2025
Blackwell Ultra - Powering the AI Reasoning Revolution

NVIDIA introduced Blackwell Ultra, an accelerated computing platform built for the age of AI reasoning, which includes training, post-training, and test-time scaling.
Jason
Mar 26, 2025
Introduction to NVIDIA Dynamo Distributed LLM Inference Framework

Get an overview of NVIDIA Dynamo, the open-source distributed LLM inference framework for large-scale distributed reasoning models. Explore Dynamo's key features and architecture: disaggregated serving, the smart router, the distributed KV cache manager, and the NVIDIA inference transfer library.
Claire
Mar 25, 2025
How NADDOD 800G FR8 Modules & DACs Accelerate a 10K H100 AI Hyperscale Cluster

Learn how NADDOD’s 800G 2xFR4 optical module and DAC solution enabled stable, high-performance LLM training for a leading AI supercomputing cluster.
Dylan
Mar 25, 2025