Resources

Focusing AI Frontiers: NADDOD Unveils 1.6T InfiniBand XDR Silicon Photonics Transceiver

Learn how NADDOD's latest 1.6T InfiniBand XDR silicon photonics transceiver performs and how it serves AI and HPC data centers with ultra-high bandwidth, low latency, and unmatched scalability.
Gavin
Jan 3, 2025
NADDOD Leads in Compatibility and Performance on Thor2 & CX7

Read NADDOD’s detailed test report on Thor2 and CX7, focusing on how compatibility and low BER affect clusters and how 8x50G applications benefit from NADDOD solutions to boost network efficiency.
Claire
Dec 5, 2024
4 Common Spectrum-X Product Solutions in Ethernet Networking

A comprehensive introduction to NVIDIA Spectrum-X, from components to advantages and product solutions.
Jason
Nov 29, 2024
Broadcom Thor 2 vs NVIDIA CX7: 400G Ethernet NIC for AI/ML Workloads

Explore how Broadcom Thor 2 and NVIDIA CX7 400G Ethernet NICs compare in powering AI/ML workloads. This article examines their performance, power efficiency, reliability, and scalability to help optimize your high-performance networking infrastructure.
Jason
Nov 22, 2024
Single-Phase vs. Two-Phase Immersion Cooling in Data Centers

Discover the two primary types of immersion cooling, their mechanisms, benefits, and ideal applications for high-efficiency data centers.
Jason
Nov 14, 2024
Nine Technologies for Lossless Networking in Distributed Supercomputing Data Centers

What technologies are essential for building a lossless, scalable, high-performance network in distributed AI and supercomputing data centers?
Jason
Nov 13, 2024
Introduction to NVIDIA DGX H100/H200 System

This article details the components and features of the NVIDIA DGX H100/H200 system.
Adam
Nov 7, 2024
Meta Trains Llama 4 on a 100,000+ H100 GPU Supercluster

Meta is setting a new standard in AI with Llama 4, trained on a supercluster of 100,000+ NVIDIA H100 GPUs. Announced by CEO Mark Zuckerberg, this initiative rivals AI efforts by Microsoft, Google, and xAI. Discover how Meta’s open-source strategy and major infrastructure investments aim to reshape AI’s future and drive new revenue.
Jason
Nov 5, 2024
Comparing NVIDIA’s Top AI GPUs: H100, A100, A6000, and L40S

Choosing the right GPU is key to optimizing AI model training and inference. NVIDIA’s H100, A100, A6000, and L40S each have unique strengths, from high-capacity training to efficient inference. This article compares their performance and applications, showcasing real-world examples where top companies use these GPUs to power advanced AI projects.
Jason
Nov 1, 2024