200G HDR InfiniBand Solutions Quick Guide
In a world of ever-growing digital information, the ability to analyze data in real time and extract its features has become a competitive advantage. A modern network must transmit rapidly growing volumes of data quickly and efficiently while that data is analyzed in real time.
Network convergence has transformed the industry and made clear that the traditional CPU-centric data center architecture, in which as many functions as possible are processed by the CPU, is outdated. Transitioning to a data-centric architecture requires fast, efficient networks: more functions must be offloaded from the CPU to the network itself, freeing the CPU to focus on general computing and control scheduling.
The urgent need for greater network throughput
As the demand for data analysis continues to grow, so does the need for higher data throughput. A few years ago, applications such as automotive structural analysis or weather simulation required 100Gb/s bandwidth. Today, high-performance computing, machine learning, storage, and other large-scale workloads demand even faster networks.
100Gb/s bandwidth is no longer sufficient for many advanced data centers today. Whether it's brain mapping or national security, the highest-performance supercomputers and data center applications require the generation and processing of analytical results within a specific timeframe.
Looking back over the past decade, no one has driven the networking industry forward more than Mellanox. From its first-generation products through 40Gb/s, 56Gb/s, and 100Gb/s bandwidth products, Mellanox has not only improved the performance of data centers and cloud computing but also exceeded return-on-investment expectations, outpacing both Moore's Law and its own roadmap.
In 2018, Mellanox accordingly became the first company to deliver end-to-end 200Gb/s data speeds, using Mellanox Quantum switches, ConnectX-6 adapters, and LinkX cables.
200G HDR InfiniBand Switch
Mellanox Quantum features 40 ports of 200Gb/s HDR InfiniBand, delivering an astounding bidirectional throughput of 16Tb/s and processing 15.6 billion messages per second, with a port-to-port switching latency of only 90ns.
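The headline throughput figure follows directly from the port count and per-port line rate quoted above; a quick sanity check:

```python
# Sanity check of the Quantum headline numbers quoted in this article.
ports = 40
port_speed_gbps = 200                            # HDR InfiniBand per port

aggregate_tbps = ports * port_speed_gbps / 1000  # one direction: 8.0 Tb/s
bidirectional_tbps = 2 * aggregate_tbps          # both directions: 16.0 Tb/s

print(aggregate_tbps)        # 8.0
print(bidirectional_tbps)    # 16.0, matching the quoted 16Tb/s figure
```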
Quantum provides industry-leading 160 SerDes integration capability, with each channel flexibly supporting speeds from 2.5Gb/s to 50Gb/s, making Quantum the most versatile switch in the world.
Furthermore, Quantum is the smartest switch available: it processes data as it traverses the network, eliminating the need to send the same data multiple times between endpoints and thereby improving performance. With communication accelerators such as the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) v2 for MPI collective operations, together with congestion control, Quantum meets the network bandwidth and latency requirements of high-performance computing, machine learning, and even the most demanding applications.
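The benefit of aggregating data inside the switch can be illustrated with a toy step-count model. This is an illustration only, not Mellanox's SHARP implementation; the ring algorithm and the radix-40 aggregation tree are assumptions made for the comparison:

```python
import math

def ring_allreduce_steps(n_nodes: int) -> int:
    # Classic host-based ring allreduce:
    # (n-1) reduce-scatter steps plus (n-1) allgather steps.
    return 2 * (n_nodes - 1)

def in_network_steps(n_nodes: int, switch_radix: int = 40) -> int:
    # Toy model of in-network reduction: each node sends its data once up
    # a switch aggregation tree and receives the result once back down,
    # so the step count scales with tree depth, not node count.
    depth = math.ceil(math.log(n_nodes, switch_radix))
    return 2 * depth

print(ring_allreduce_steps(1024))  # 2046 sequential steps host-side
print(in_network_steps(1024))      # 4 steps with radix-40 switches
```

The point of the sketch is the scaling: host-based collectives grow linearly with node count, while switch-side aggregation grows only with the depth of the switch tree.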
Introducing HDR100 for Ultimate Scalability
Mellanox Quantum also offers the HDR100 option, supporting ultimate scalability for data centers.
By splitting each four-lane 200Gb/s port into two two-lane 100Gb/s ports, Quantum can support up to 80 ports of HDR100, making it the densest and most efficient top-of-rack (ToR) switch on the market. The HDR100 capability allows a 400-node compute system to be connected with 1.6 times fewer switches and half as many cables as competing products. Quantum can also connect up to 128,000 nodes in a three-level fat-tree topology, 4.6 times more than mainstream proprietary network switches.
For customers, the choice comes down to doubling per-port throughput with 40 ports of 200G HDR, or connecting twice as many 100Gb/s nodes per switch, halving switch and cable counts, with 80 ports of HDR100. Either way, Quantum provides the lowest total cost of ownership for today's data centers and HPC clusters.
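The 128,000-node figure is consistent with the standard capacity formula for a three-level fat tree built from k-port switches, 2·(k/2)³ = k³/4. The formula and the comparison radix below are standard fat-tree math, not taken from the article:

```python
def fat_tree_max_nodes(radix: int, levels: int = 3) -> int:
    # Maximum end nodes in a non-blocking fat tree of k-port switches:
    # 2 * (k/2)^levels, i.e. k^3 / 4 for the common three-level case.
    return 2 * (radix // 2) ** levels

print(fat_tree_max_nodes(40))   # 16000  nodes with 40 HDR ports
print(fat_tree_max_nodes(80))   # 128000 nodes with 80 HDR100 ports
```

For comparison, a 48-port switch yields 2·24³ = 27,648 nodes by the same formula, roughly 4.6 times fewer, which lines up with the article's comparison against proprietary switches.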
200Gb/s InfiniBand and Ethernet Adapters
ConnectX-6 delivers unparalleled performance for both InfiniBand and Ethernet, with 200Gb/s throughput, 200 million messages per second, and a latency of 600 nanoseconds. Additionally, like all Mellanox standards-based products, ConnectX-6 is backward compatible, supporting HDR, HDR100, EDR, FDR, QDR, DDR, and SDR InfiniBand as well as 200, 100, 50, 40, 25, and 10 GbE.
ConnectX-6 enhances Mellanox's Multi-Host technology, allowing up to eight hosts to be connected to a single adapter by partitioning the PCIe interface into multiple independent interfaces. This has led to the development of various new rack design schemes, reducing capital expenditures (cable, network adapter, and switch port costs) and operational costs (reducing switch port management and overall power consumption) to lower the total cost of ownership of data centers.
Storage customers benefit from ConnectX-6's embedded 16-lane PCIe switch, which allows them to create disaggregated devices in which adapters connect directly to SSDs. Leveraging ConnectX-6's PCIe Gen3/Gen4 capabilities, customers can build large, efficient storage platforms.
In-Network Computing and Secure Offloading
ConnectX-6 and Quantum support the new generation of data center architecture, known as data-centric architecture, where the network becomes a distributed processor. By adding additional accelerators, ConnectX-6 and Quantum enable in-network computing and in-network storage capabilities, offloading further computational tasks to the network, thereby saving CPU scheduling cycles and improving network efficiency.
ConnectX-6 offers block-level encryption, providing an important innovation for data center security. Data is encrypted during transmission and decrypted by ConnectX-6 hardware during storage or retrieval, reducing latency and alleviating CPU load.
Furthermore, with the ability to use different encryption keys, ConnectX-6 block-level encryption offloads support protection between users sharing the same resources. ConnectX-6 complies with Federal Information Processing Standards (FIPS), reducing system requirements for self-encrypting drives.
LinkX InfiniBand and Ethernet Active Optical Cables
In addition, Mellanox provides LinkX cables for InfiniBand and Ethernet supporting 200Gb/s: direct-attach copper cables with reaches up to 3 meters, and active optical cables with reaches up to 100 meters. All LinkX cables for 200Gb/s connections use the standard QSFP56 form factor, and the optical cables feature the world's first silicon photonics engine supporting 50Gb/s channels.
Naddod provides a one-stop solution for InfiniBand optical modules and high-speed cables!
Delivery Time: We maintain abundant, stable inventory to ensure fast delivery. After an order is placed, we promise delivery within two weeks, so your project can move forward quickly without being held up by lead times, saving you time and resources.
① Our products undergo 100% real device testing to ensure quality and reliability, and we can provide you with professional test reports.
② Our testing scenarios involve the simultaneous application of tens of thousands of cables to ensure that the products can operate smoothly under real application requirements without packet loss or errors.
③ We have successfully cooperated with multiple enterprises and delivered products that have been running stably, earning our customers' trust.
④ We provide fast, responsive technical services to ensure after-sales support throughout your use of the product.
Multiple successful deliveries and real-world application cases are the best endorsement of our quality assurance. You don't need to worry about product quality and inventory issues as we always maintain sufficient stock to ensure your needs are met promptly.
In addition to providing third-party high-quality optical modules, we also have a large inventory of original NVIDIA products, offering you more choices at any time. Contact NADDOD now to learn more details!
200G HDR Empowers Next-generation Data Centers
With the increasing demand for intensive data analytics, there is a corresponding need for higher bandwidth. Even 100Gb/s is insufficient to meet the performance requirements of some of today's most demanding data centers and clusters. Furthermore, the traditional CPU-centric interconnect approach has proven to be inefficient for these complex applications.
Mellanox's 200Gb/s solution addresses these challenges by providing the world's first 200Gb/s switches, adapters, and cables, enabling in-network computing to replace CPU processing across the entire network. With this solution, Mellanox remains at the forefront of driving the industry toward exascale computing.