InfiniBand Switches for Future Expansion
Updated Jun 24th, 2024 · 1 min read
In HPC architectures, network performance is crucial to the efficiency and speed of the entire system. InfiniBand switches, with their ultra-low latency and high throughput, have become key to achieving fast data transfer and efficient resource utilisation. In this article, we delve into the fundamental principles of InfiniBand switches, their performance advantages, and common FAQs to show how they are driving a new era of computing and storage solutions.
What is InfiniBand Network?
An InfiniBand network (IB network) refers to a unified connection fabric established among storage, networking, and server devices through a set of central InfiniBand switches. These switches control traffic flow, reducing congestion between hardware devices and effectively addressing communication bottlenecks typical of traditional I/O architectures. IB networks also connect with remote storage and network devices.
End-to-end IB networks support data redundancy and error-correction mechanisms that ensure reliable data transmission. When handling large datasets, errors or data loss in transit can interrupt training runs or cause them to fail outright, which underscores the critical importance of the reliability that IB networks provide.
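The error-detection idea behind this reliability can be illustrated with a toy sketch: each packet carries a checksum that the receiver recomputes, so any corruption in transit is flagged before it can silently poison a computation. This example uses a generic CRC-32 purely as a stand-in; InfiniBand's actual ICRC/VCRC fields use different polynomials and coverage.

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    """Append a CRC-32 checksum (toy stand-in for InfiniBand's ICRC/VCRC)."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def verify_packet(packet: bytes) -> bool:
    """Recompute the checksum on receipt; a mismatch flags corruption."""
    payload = packet[:-4]
    received_crc = int.from_bytes(packet[-4:], "big")
    return zlib.crc32(payload) == received_crc

pkt = make_packet(b"gradient shard 42")
assert verify_packet(pkt)                      # intact packet passes

corrupted = bytes([pkt[0] ^ 0xFF]) + pkt[1:]   # flip bits in the first byte
assert not verify_packet(corrupted)            # corruption is detected
```

In a real IB fabric this check happens in hardware at every hop and end-to-end, which is why corrupted packets can be dropped and retransmitted instead of reaching the application.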

What are InfiniBand Switches?
InfiniBand switches are high-speed network switches built on InfiniBand technology and designed specifically for high-performance computing and data centres. The InfiniBand protocol targets high-performance, low-latency, high-bandwidth data transmission, and the switches accordingly deliver exceptional throughput with minimal latency when interconnecting computers.
InfiniBand switches connect servers to storage systems, as well as storage systems to one another. Per-port speeds typically range from 10Gb/s to 400Gb/s, significantly faster than traditional Ethernet and Fibre Channel. InfiniBand switches are widely used in high-performance computing (HPC), data centres, cloud computing, and large-scale storage environments to meet demands for high performance and fast data transfer.
What are the advantages of InfiniBand Switches?
InfiniBand switches combine high bandwidth, high speed, and low latency to enhance server performance and application efficiency, making them an ideal choice for many high-demand computing and storage environments.
High Bandwidth and Low Latency: Data transfer rates of up to 200Gb/s and 400Gb/s per port with extremely low network latency, suitable for high-performance computing and data centres.
Efficient Packet Forwarding and Routing: Advanced routing algorithms and protocols ensure efficient packet transmission within the network, optimising data flows and minimising latency.
Multipath Redundancy and Load Balancing: Support for multiple paths achieves network redundancy and load balancing, enhancing fault tolerance and resource utilisation to ensure high availability and performance.
Scalability and Flexibility: Provides scalability and flexibility to seamlessly expand from small-scale clusters to large-scale systems, meeting diverse network configuration needs.
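The multipath load-balancing idea above can be sketched in a few lines: hashing a flow's identifying tuple deterministically pins each flow to one of several equal-cost paths (preserving packet ordering within the flow) while spreading distinct flows across all available paths. This is a conceptual illustration, not the actual adaptive-routing logic inside an InfiniBand switch ASIC.

```python
import hashlib
from collections import Counter

def pick_path(src: str, dst: str, flow_id: int, num_paths: int) -> int:
    """Deterministically map a flow to one of several equal-cost paths.

    Hashing the flow tuple keeps every packet of a flow on the same path
    (preserving ordering) while spreading distinct flows across paths.
    """
    key = f"{src}|{dst}|{flow_id}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_paths

# A given flow always takes the same path...
assert pick_path("node-a", "node-b", 1, 4) == pick_path("node-a", "node-b", 1, 4)

# ...while many flows spread across all available paths.
load = Counter(pick_path("node-a", "node-b", f, 4) for f in range(1000))
assert len(load) == 4  # with 1000 flows, every path carries traffic
```

Real fabrics go further: adaptive routing can also shift flows away from congested paths at runtime, which a static hash alone cannot do.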
Explore FS InfiniBand Switches
FS offers a range of advanced NVIDIA InfiniBand switches that leverage NVIDIA's high-speed interconnect technology, providing high-speed, ultra-low latency, and scalable solutions. These switches incorporate advanced technologies such as Remote Direct Memory Access (RDMA), adaptive routing, and NVIDIA's Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™.
The product line spans a variety of scalability and performance requirements, giving users flexible choices. Notably, the NVIDIA Quantum™-2 NDR 400G and NVIDIA Quantum™ HDR 200G data centre switches feature core routing, forwarding, and data flow management capabilities, ensuring efficient data transmission and communication.
| Specification | Quantum-2 NDR 400G | Quantum-2 NDR 400G | Quantum HDR 200G | Quantum HDR 200G |
|---|---|---|---|---|
| Link Speed | 400Gb/s | 400Gb/s | 200Gb/s | 200Gb/s |
| Ports | 32 | 32 | 40 | 40 |
| Height | 1U | 1U | 1U | 1U |
| Switching Capacity | 51.2Tb/s | 51.2Tb/s | 16Tb/s | 16Tb/s |
| Cooling System | Air-cooled | Air-cooled | Air-cooled | Air-cooled |
| Spine Modules | 16 | 16 | 20 | 20 |
| Leaf Modules | 16 | 16 | 20 | 20 |
| Interface | OSFP | OSFP | QSFP56 | QSFP56 |
| Number of PSUs | 2 | 2 | 2 | 2 |
| Management | In-band/Out-of-band | In-band | In-band | In-band/Out-of-band |
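The switching-capacity figures follow directly from port count and link speed: a non-blocking switch forwards at full duplex on every port, so capacity = ports × speed × 2. A quick sketch checks this against the table (the 400G figure assumes the Quantum-2's 32 OSFP cages each carry two 400Gb/s ports, i.e. 64 logical NDR ports):

```python
def switching_capacity_tbps(ports: int, link_speed_gbps: int) -> float:
    """Non-blocking bidirectional capacity: every port sends and receives at line rate."""
    return ports * link_speed_gbps * 2 / 1000  # convert Gb/s to Tb/s

# 40 ports of 200Gb/s (Quantum HDR): 40 * 200 * 2 = 16 Tb/s, as in the table.
print(switching_capacity_tbps(40, 200))   # 16.0

# 64 logical 400Gb/s ports (32 twin-port OSFP cages): 64 * 400 * 2 = 51.2 Tb/s.
print(switching_capacity_tbps(64, 400))   # 51.2
```

The same arithmetic is a handy sanity check when comparing vendor datasheets, since "ports" can mean physical cages or logical ports depending on the transceiver form factor.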
Learn more about FS InfiniBand Switches: The Complete Guide to FS InfiniBand Switches.
FAQs about InfiniBand Switches
Q1: In which fields are InfiniBand switches commonly used?
A1: InfiniBand switches are widely applied in high-performance computing, data centre interconnects, storage networks, virtualised environments, cloud computing, financial services, and research and education sectors.
Q2: What is Virtual Protocol Interconnect (VPI)?
A2: Virtual Protocol Interconnect (VPI) is a technology that allows InfiniBand switches and NICs to automatically or explicitly adapt between InfiniBand and Ethernet protocols, facilitating seamless connectivity between the two.
Q3: How can InfiniBand networks be bridged to Ethernet?
A3: Modern InfiniBand switches with built-in Ethernet ports and gateways can be used, or InfiniBand Network Interface Cards (NICs) / Ethernet Converged Network Adaptors (CNAs) can bridge the connection. Alternatively, Mellanox InfiniBand switches and ConnectX series NICs support Virtual Protocol Interconnect (VPI) to seamlessly adapt between InfiniBand and Ethernet protocols.
Q4: What bandwidths are supported by InfiniBand switches?
A4: InfiniBand switches typically support bandwidths ranging from 10Gbps to 400Gbps, catering to various scales and requirements of high-performance computing and data centre applications.
Q5: What are some considerations when using InfiniBand switches?
A5: When using InfiniBand switches, it's important to consider network topology design, compatibility of cables and ports, firmware and driver updates, as well as network monitoring and management.
Q6: What are the main differences between InfiniBand switches and Ethernet switches?
A6: InfiniBand switches excel in higher bandwidths and lower latency, whereas Ethernet switches are favoured for compatibility and widespread use. InfiniBand is particularly suitable for applications requiring high performance and low latency.
Conclusion
As data centres continue to expand and the demand for computational performance grows, selecting the right network solution becomes increasingly important. With their outstanding low latency, high bandwidth, and scalability, InfiniBand switches are leading the trend in network technology. As the technology advances and its applications broaden, InfiniBand switches are expected to drive performance gains across an even wider range of high-performance computing and data-intensive workloads.