Blogs

NADDOD N9570-128QC: 51.2T 128×400G Switch Powered by NVIDIA Spectrum-4 Chip

As AI training clusters scale, network latency, stability, and link utilization become critical. Learn how the NADDOD N9570-128QC leverages Spectrum-4 and ConnectX-8 to address RoCE performance challenges in large GPU environments.
Neo
Jan 27, 2026
A Comprehensive Guide to 400G OSFP Ethernet Optical Transceivers

Explore 400G OSFP Ethernet optical transceivers for modern data centers, AI and HPC networks. Learn OSFP advantages, use cases, and NADDOD’s 400G OSFP solutions for high-density, high-performance connectivity.
Nathan
Jan 23, 2026
From RoCE to Spectrum-X: The Evolution of Ethernet Networking for AI Data Centers

This article explores the evolution of RoCE networking and the technical challenges of Ethernet in large-scale AI clusters, and explains how NVIDIA Spectrum-X improves lossless transport, congestion control, and performance predictability for AI data centers.
Jason
Jan 23, 2026
NVIDIA MetroX-2: A Remote Interconnect System for Scale-Across

As scale-across becomes the inevitable path for expanding AI computing power, this article analyzes how NVIDIA MetroX-2, a metropolitan-scale AI networking system, enables cross-data-center training and inference through low-latency, highly predictable remote interconnects, facilitating genuine cross-domain compute coordination and resource pooling.
Dylan
Jan 22, 2026
Physical AI Analysis: From Information Intelligence to Real-World Intelligence

Physical AI is driving artificial intelligence from the digital space into the real world. This article systematically introduces the development stages, core working mechanisms, and typical applications of physical AI, along with NVIDIA's full-stack technology layout for it.
Jason
Jan 21, 2026
Analyzing DGX Spark and DGX Station: NVIDIA's Deskside AI Supercomputing

An in-depth analysis of how DGX Spark and DGX Station bring data center–class AI computing to the deskside, enabling local development, fine-tuning, and deployment of large models while reducing costs, enhancing security, and accelerating the adoption of AI in real-world business scenarios.
Jason
Jan 16, 2026
Analyzing the Impact of NVIDIA BlueField-4 on AI Context Inference

Learn how the BlueField-4 DPU optimizes context management for large language models in AI systems, achieving cross-node sharing, low-latency access, and system-level scalability through a dedicated context memory layer and high-speed key-value cache.
Abel
Jan 15, 2026
Introduction to the Three Key Processing Cores Inside NVIDIA GPUs

Starting from the streaming multiprocessor (SM) structure of NVIDIA GPUs, this article outlines the architectural positioning and capability focus of the internal compute cores (CUDA cores, Tensor cores, and RT cores) and explains how the different cores divide the work and which scenarios each is suited to.
Abel
Jan 9, 2026
Inference Chip Guide: The Foundation of Scalable AI Applications

AI inference is becoming a core driver of computing costs. This article systematically analyzes the key differences between AI training and inference, introduces the advantages of dedicated inference chips, and surveys the inference roadmaps of Amazon Inferentia2, Google TPU Ironwood, and NVIDIA, explaining why inference chips have become key infrastructure for the large-scale deployment of AI.
Adam
Jan 9, 2026