
Where to Buy InfiniBand Products: A Purchasing Guide for HPC Data Centers

NADDOD Abel InfiniBand Expert Jun 1, 2023

Recently, NVIDIA made significant news with the release of a series of new chips, supercomputing architectures, switches, and industry applications at Computex 2023: “NVIDIA is currently in the process of constructing its own large-scale AI supercomputer, known as NVIDIA Helios. This powerful system is set to go live later this year and will leverage four DGX GH200 systems, each interconnected with NVIDIA Quantum-2 InfiniBand networking, to deliver the data throughput necessary for training large AI models.”

According to Jensen Huang, more than 400 systems built on the latest NVIDIA Hopper, Grace, Ada Lovelace, and BlueField architectures are planned for release. These systems will be instrumental in addressing complex challenges in AI, data science, and high-performance computing.

It is clear that data centers are moving toward accelerated computing, with AIGC driving the shift. AI applications continue to catalyze the computing and optical communication markets, but delivery of NVIDIA's InfiniBand (IB) products, a key ingredient of that computing power, has fallen short of expectations, and where to buy InfiniBand products has become a focal point for the HPC industry.

What is InfiniBand and who makes it?

InfiniBand is a switched-fabric interconnect technology that supports multiple concurrent links. It can handle storage I/O, network I/O, and inter-process communication (IPC), which allows it to interconnect disk arrays, SANs, LANs, servers, and server clusters, as well as connect to external networks such as WANs, VPNs, and the internet. InfiniBand networks are mainly used in high-performance computing scenarios.

What features does InfiniBand have?

The multi-layer structure of the traditional TCP/IP protocol stack introduces complex buffer management, resulting in significant network latency and additional operating system overhead. As network technology has matured, clusters have come to require an open, high-bandwidth, low-latency, highly reliable, switch-based architecture that can scale almost without limit. It is in this context that InfiniBand (IB) emerged.

IB 400G solution
IB offers high bandwidth, low latency, high reliability, and excellent scalability for cluster and storage networks. IB uses Remote Direct Memory Access (RDMA), which allows a server to read from and write to another server's memory directly, without involving either operating system's kernel. This approach retains the high bandwidth and low latency of a local bus while offloading work from the CPU, making it well suited to workloads such as clustered storage.
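On Linux, applications reach the InfiniBand HCA through the user-space verbs API (libibverbs), which is what lets RDMA traffic bypass the kernel on the data path. The following is a minimal sketch that only enumerates local RDMA devices and prints their port state and rate fields; it is a starting point, not a full RDMA send/receive example, and it assumes an HCA is present and the program is linked with -libverbs.

/* Minimal libibverbs sketch: enumerate RDMA devices and report port status.
   Build by linking against libibverbs (-libverbs). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA-capable devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        /* Query port 1; multi-port HCAs would loop over ports. */
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: port 1 state=%s width=%u speed=%u\n",
                   ibv_get_device_name(devs[i]),
                   ibv_port_state_str(port.state),
                   port.active_width, port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}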

Compared with network protocols such as TCP/IP, IB transmits data more efficiently. Many network protocols can retransmit lost packets, but the constant acknowledgements and retransmissions slow communication and noticeably degrade performance. TCP is a widely used transport protocol found in everything from refrigerators to supercomputers, but that ubiquity comes at a cost: TCP is extremely complex, carries an enormous amount of code full of special cases, and is difficult to offload.

RDMA Connect solution
In contrast, IB uses a credit-based flow control mechanism to ensure connection integrity and avoid packet loss. With IB, data is transmitted only when the receiver has buffer space available, and the receiver signals the sender as buffer space is freed. This eliminates the retransmission delays caused by dropped packets, improving efficiency and overall performance.
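The following toy model illustrates the idea of credit-based flow control described above: the sender may transmit only while it holds credits advertised by the receiver, and a credit is returned each time the receiver drains a buffer slot. The buffer depth and packet counts are made-up values for illustration; this models the concept, not the actual InfiniBand link-layer protocol.

/* Toy model of credit-based (lossless) flow control.
   Illustrative only; not the InfiniBand wire protocol. */
#include <stdio.h>

#define RX_BUFFER_SLOTS 4   /* hypothetical receive buffer depth */

int main(void)
{
    int credits = RX_BUFFER_SLOTS;  /* credits advertised by the receiver */
    int sent = 0, drained = 0;
    const int to_send = 10;         /* hypothetical message count */

    while (drained < to_send) {
        /* Sender side: transmit only while credits remain, so the
           receive buffer can never be overrun and nothing is dropped. */
        while (credits > 0 && sent < to_send) {
            credits--;
            sent++;
            printf("sent packet %d (credits left: %d)\n", sent, credits);
        }
        /* Receiver side: drain one buffer slot and return a credit. */
        drained++;
        credits++;
        printf("receiver drained packet %d, credit returned\n", drained);
    }
    return 0;
}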

Who makes InfiniBand?

InfiniBand is a high-bandwidth, low-latency, and easily scalable technology developed under the supervision of the InfiniBand Trade Association (IBTA), and it has become one of the fastest-developing high-speed interconnect technologies. Founded in 1999, the IBTA has more than 300 members who maintain and promote the InfiniBand standard and run compliance and interoperability testing of commercial InfiniBand products. Through its roadmap, the IBTA drives performance forward more aggressively than any other interconnect body, keeping the architecture ahead of the curve.

Among the nine principal directors of the InfiniBand Trade Association, only two companies, Mellanox and Emulex, specialized in InfiniBand; the other members were primarily users of the technology. Emulex, whose business had faltered, was acquired by Avago in February 2015, and QLogic sold its InfiniBand business to Intel in 2012, which Intel subsequently used as the foundation for its Omni-Path Architecture (OPA).

Mellanox dominates the InfiniBand market, with far more cluster deployments using its products than those of any competitor: nearly four times as many clusters use Mellanox as use Intel's Omni-Path network, and five times as many as use Cray's interconnect.

What are the main benefits of InfiniBand?

InfiniBand is a high-performance interconnect technology that enables the most efficient data center operations. Its credit-based flow control mechanism allows for data transmission with extremely low latency and negligible CPU usage. This approach eliminates the need for packet loss mechanisms, such as TCP window algorithms, to determine the optimal number of data packets being transmitted. In comparison to other interconnect technologies, InfiniBand has several key advantages, including higher throughput, lower latency, improved scalability, increased CPU efficiency, reduced management overhead, and simplicity.

One of the most significant benefits of InfiniBand is its high throughput. InfiniBand has consistently supported the highest end-to-end throughput for server and storage connections, introducing 100Gb/s (EDR) in 2014 and 200Gb/s (HDR) in 2018, far exceeding Ethernet and Fibre Channel at the time. InfiniBand also offers excellent scalability, accommodating up to 40,000 nodes in a single subnet and an essentially unlimited number of nodes across a global network.

Spine-leaf Network
InfiniBand’s low latency and Remote Direct Memory Access (RDMA) technology reduce operating system overhead, allowing for rapid data movement within the network. This technology also enables data movement offloading, freeing up CPU cycles for other applications and reducing job processing times.
InfiniBand also carries far less management overhead: its switches are managed centrally in a software-defined networking (SDN) style, operating as part of the fabric without per-switch CPU-based management. In addition, InfiniBand is simple to install, making it an ideal choice for building straightforward fat-tree clusters.

The Importance of InfiniBand in HPC

InfiniBand is particularly important in high-performance computing (HPC), where high-speed interconnect (HSI) networks are crucial to progress. InfiniBand is one of the fastest-growing HSI technologies, supporting bandwidths of 200Gb/s and beyond with point-to-point latency below 0.6 µs. InfiniBand's high-speed network enables the construction of high-performance computing clusters in which multiple servers combine to deliver near-linear performance scaling. The technology has been instrumental in the development of HPC clusters, including supercomputers.

As a result, InfiniBand has become a critical component of data center operations, providing high reliability, availability, scalability, and performance for enterprise data centers and large or super-large data centers.

Its ability to support high-bandwidth, low-latency transmission over relatively short distances, with redundant I/O channels in single or multiple interconnect networks, makes it an ideal choice for HPC applications. By incorporating InfiniBand into their data center operations, businesses can achieve higher productivity and a better return on investment, with competitive pricing and improved CPU efficiency.

Where to buy InfiniBand products?

Since the end of 2022, ChatGPT-fueled AI hype has swept the world, and generative AI has sparked a new wave of technological revolution. However, the recent severe shortage of A800/H800 SXM GPUs, the parts of choice for training large AI models, has delayed many large-model projects, including their network builds. InfiniBand products have likewise seen shortages, price increases, and long delivery cycles of varying severity. This has become a major issue for high-performance computing data centers looking to upgrade or expand their infrastructure. In this article, we provide a guide on where to buy InfiniBand products.

1. NVIDIA Networking Store

As a leader in the IB field, Mellanox has been at the forefront of protocol standardization, software and hardware development, and ecosystem building. NVIDIA completed its acquisition of Mellanox in April 2020, and the Mellanox product lines have continued to develop rapidly since. Purchasing directly from the NVIDIA Networking Store is the most straightforward route, as it offers a wide range of products and is a reliable source. However, some products may not be sold directly through the official store; in such cases, you can buy from NVIDIA partners.

2. NVIDIA® Partner Network

NVIDIA's partners carry the latest solutions and stock, and IB cables and transceivers are sold globally through NVIDIA's network of authorized distributors and resellers, which you can locate via the NVIDIA website. However, because distributors depend on NVIDIA's supply, they can face the same product shortages, tight market supply, and long delivery cycles. It is therefore important to ensure that the distributor you choose is reliable and trustworthy.

3. NADDOD Network

NADDOD is a third-party brand whose products are fully InfiniBand-compatible worldwide. It provides end-to-end InfiniBand interconnect products that meet the technical and performance requirements of original Mellanox cables. NADDOD's InfiniBand cables have been validated in the AI market for low latency, low loss, ultra-low BER, and high performance, and they are fully compatible with NVIDIA switches and NICs, delivering excellent transmission efficiency for supercomputers. In addition, NADDOD can ship quickly, helping keep customers' AI projects on schedule, which is especially important in the current HPC market where supply falls short of demand. If you are looking for a reliable and efficient source of InfiniBand cables, NADDOD is a good option to consider.

4. E-commerce platforms

Although InfiniBand products are specialized networking hardware typically used in high-performance computing environments, the easiest and most common place to buy them is through e-commerce platforms such as Amazon, Newegg, CDW, and B&H. These retailers often carry a wide variety of IB products, from network adapters to cables and switches. Some vendors, such as Thinkmate and Supermicro, even specialize in InfiniBand-equipped systems.

IB products are essential for high-performance computing data centers, and finding a reliable, efficient source for purchasing them in volume is crucial to the success of projects and operations. The NVIDIA Networking Store, the NVIDIA Partner Network, and NADDOD are three options to consider when buying IB products. Whichever you choose, make sure the supplier is reliable, trustworthy, and able to deliver InfiniBand products quickly, so you avoid delays and disruptions to your operations.

How to choose the right InfiniBand product?

InfiniBand products are essential for high-performance computing data centers, and choosing the right ones can be crucial to the success of your operations and projects. A complete InfiniBand system includes InfiniBand switches, adapters, long-haul links, gateways to Ethernet, cables and transceivers, telemetry and software management, and acceleration software.

InfiniBand network bandwidth has progressed from SDR, DDR, QDR, FDR, EDR, and HDR through NDR toward XDR. InfiniBand cables differ from traditional Ethernet cables and fiber optic cables, and different connection scenarios call for specialized InfiniBand cables. In this section, we provide a guide to choosing the right InfiniBand product for your specific needs.
InfiniBand data rate trend

  1. InfiniBand Network Interconnect Products
    InfiniBand network interconnect products include DAC high-speed copper cables, AOC active optical cables, and optical modules. DAC cables are passive copper cables that provide a cost-effective solution for short-range, high-speed interconnects. AOC cables are active cables that use optical technology to transmit data over longer distances. Optical modules, paired with fiber, are commonly used for long-distance, high-speed interconnects.
  2. Speed Rates
    InfiniBand products are available at different speed rates, including QDR (40G), EDR (100G), HDR (200G), and NDR (400G). The speed rate you choose depends on your specific needs and the bandwidth requirements of your applications: if your applications need more bandwidth, choose a higher-speed InfiniBand product.
  3. Package Modules
    InfiniBand optical transceivers are available in different package modules, including QSFP+, QSFP28, QSFP56, and OSFP. QSFP+ is a high-speed pluggable module that supports data rates up to 40G; QSFP28 supports up to 100G; QSFP56 is a newer module that supports up to 200G; and OSFP is the latest form factor, supporting 400G. The package module you choose depends on the bandwidth and distance requirements of your applications; the sketch after this list summarizes the common pairings.
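To pull the speed rates and package modules above into one place, here is a small quick-reference sketch. The per-lane figures are rounded from the IBTA generations, and the listed form factors are the pairings most commonly seen in NVIDIA/Mellanox product lines, not the only possibilities.

/* Quick-reference table: InfiniBand generation, approximate per-lane rate,
   usual 4x link rate, and the pluggable form factor typically paired with it.
   Figures are rounded; form-factor pairings are the common ones, not exhaustive. */
#include <stdio.h>

struct ib_generation {
    const char *name;
    double lane_gbps;     /* approximate per-lane data rate */
    double link_4x_gbps;  /* usual 4-lane link rate */
    const char *module;   /* typical pluggable form factor */
};

int main(void)
{
    const struct ib_generation gens[] = {
        { "QDR",  10.0,  40.0, "QSFP+"  },
        { "FDR",  14.0,  56.0, "QSFP+"  },
        { "EDR",  25.0, 100.0, "QSFP28" },
        { "HDR",  50.0, 200.0, "QSFP56" },
        { "NDR", 100.0, 400.0, "OSFP"   },
    };

    printf("%-5s %-12s %-12s %-8s\n", "Gen", "Gb/s/lane", "4x link (G)", "Module");
    for (size_t i = 0; i < sizeof(gens) / sizeof(gens[0]); i++)
        printf("%-5s %-12.0f %-12.0f %-8s\n",
               gens[i].name, gens[i].lane_gbps, gens[i].link_4x_gbps,
               gens[i].module);
    return 0;
}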

Choosing the right InfiniBand product is crucial to the success of your high-performance computing data center. By weighing your bandwidth and distance requirements, connector types, budget, compatibility, reliability, and future-proofing needs, you can select the right IB product for your situation. Understanding the different product categories, speed rates, and package modules helps you make an informed decision. Choose a reliable and trustworthy supplier, such as the NVIDIA Networking Store, the NVIDIA Partner Network, or NADDOD, to ensure that you receive high-quality InfiniBand products that meet your performance and budget requirements.

InfiniBand vs Ethernet: Which Is Better for High-Performance Computing?

InfiniBand and Ethernet are two popular networking technologies used in modern data centers for high-performance computing (HPC) and other demanding workloads. While Ethernet has been widely used for many years, InfiniBand is gaining popularity due to its high speed, low latency, and other performance advantages.

Is InfiniBand better than Ethernet?

InfiniBand is a complete networking protocol that defines its own stack from the physical layer up through the transport layer (layers 1 through 4). In InfiniBand networks, message transmission is governed by end-to-end flow control: the receiving end controls whether the sending end may transmit, which ensures that messages never congest the network and produces a lossless fabric. Because there is no congestion, data does not accumulate in buffers, reducing delay and jitter.

InfiniBand vs Ethernet
Each InfiniBand layer 2 network has a subnet manager that assigns a local ID (LID) to every node in the subnet. The InfiniBand control plane calculates forwarding paths and distributes them to the InfiniBand switches, allowing large-scale networks to be deployed easily without flooding, VLAN configuration, or loop-breaking issues. This is a unique advantage of InfiniBand over Ethernet.
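A host can see the result of this subnet-manager assignment directly. The sketch below, again using the libibverbs API, reads the LID assigned to the first local HCA port and the LID of the subnet manager serving it; using device 0 and port 1 is an assumption for illustration, so adjust for your system.

/* Sketch: read a port's subnet-manager-assigned LID and the SM's own LID.
   Link with -libverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);  /* assumes device 0 */
    struct ibv_port_attr port;
    if (ctx && ibv_query_port(ctx, 1, &port) == 0) {      /* assumes port 1 */
        /* lid: this port's local identifier, assigned by the subnet manager;
           sm_lid: the LID of the subnet manager itself. */
        printf("port LID = %u, subnet manager LID = %u\n",
               port.lid, port.sm_lid);
    }

    if (ctx)
        ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}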

Why Is InfiniBand Better for High-Performance Computing?

InfiniBand is the ideal choice for HPC environments that require fast, reliable data transmission. First, it provides higher bandwidth, lower latency, and less jitter than Ethernet, making it suitable for demanding workloads such as scientific simulation, financial modeling, and data analysis. InfiniBand offers link rates ranging from 40G to 400G, while the Ethernet commonly deployed in these roles has topped out around 100G.

Second, InfiniBand is better suited for GPU workloads, which require high-speed data transfer between the CPU and the GPU. InfiniBand is also ideal for parallel computing, as it allows multiple processors to communicate with each other simultaneously, which is essential for tasks that require massive computational power.

Third, InfiniBand’s high-speed, low-latency, and lossless nature make it a popular choice for modern data centers. According to the latest global HPC TOP500 rankings, InfiniBand’s market share has been steadily increasing, and it currently dominates the TOP100, while Ethernet’s market share is declining.

Finally, InfiniBand is a reliable and future-proof technology that can support the growing demands of modern data centers. It is an efficient and cost-effective solution for large-scale networks, as it requires minimal configuration and has no flooding issues.

TOP500 HPC systems
According to NVIDIA’s calculations, if a company spends $500 million to build a switching network, the throughput difference between InfiniBand and Ethernet is 15%-20%. In addition, InfiniBand is more suitable for GPU workloads, and the unit price of optical modules that are compatible with InfiniBand protocol is higher, which will further increase the value of optical modules in supercomputing.
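As a rough back-of-envelope reading of those figures (and only of those figures, not an NVIDIA model): if the same $500 million network budget yields 15%-20% more deliverable throughput on InfiniBand, then matching that throughput with Ethernet would require roughly 15%-20% more spend, assuming costs scale linearly.

/* Back-of-envelope arithmetic on the quoted figures: equivalent Ethernet
   spend needed to match a $500M InfiniBand fabric, assuming linear scaling.
   Illustrative only. */
#include <stdio.h>

int main(void)
{
    const double budget_musd = 500.0;           /* quoted network budget, $M */
    const double advantage[] = { 0.15, 0.20 };  /* quoted throughput advantage */

    for (int i = 0; i < 2; i++) {
        double equivalent_eth_spend = budget_musd * (1.0 + advantage[i]);
        printf("At a %.0f%% throughput advantage, about $%.0fM of Ethernet "
               "would be needed to match a $%.0fM InfiniBand fabric\n",
               advantage[i] * 100.0, equivalent_eth_spend, budget_musd);
    }
    return 0;
}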

The Verdict

AIGC is now in full swing. Major platform companies like OpenAI, Microsoft, and Google, as well as application companies such as Midjourney and Character AI, are accelerating the development and iteration of AI applications and services. New companies and applications are also emerging at a rapid pace, creating a sense of urgency in the AI arms race.

It is clear that computing power is becoming synonymous with productivity. With NVIDIA's IB products in short supply, purchasing high-quality third-party InfiniBand cables or transceivers lets you build a high-performance computing data center quickly, one that meets the needs of your business and keeps it moving forward. Now, do you know where to buy InfiniBand products?

Related Resources:
What Is InfiniBand and HDR and Why Is IB Used for Supercomputers?
InfiniBand: Unlocking the Power of HPC Networking
InfiniBand NDR: The Future of High-Speed Data Transfer in HPC and AI