InfiniBand: What Exactly Is It?
Updated March 8, 2024
InfiniBand, a high-speed networking technology, has seen significant developments since its inception. Understanding its journey, technical principles, advantages, product offerings, and prospects is essential for those interested in cutting-edge networking solutions.
The Development Journey of InfiniBand
InfiniBand's inception dates back to the late 1990s when the InfiniBand Trade Association (IBTA) was formed to develop and promote the technology. Initially envisioned as a solution for high-performance computing clusters, InfiniBand has since expanded its reach into various other domains, including data centers and cloud computing.
Technical Principles of InfiniBand
With InfiniBand's evolution outlined, let's delve into how it works and why it outperforms traditional Ethernet.
RDMA: The Foundational Capability
Remote Direct Memory Access (RDMA) is a fundamental feature of InfiniBand, allowing data to be transferred directly between application memory spaces without involving the CPU. This capability significantly reduces latency and enhances overall system efficiency.
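To make this concrete, below is a minimal sketch of a one-sided RDMA write using the open-source libibverbs API. It assumes the usual setup has already been completed (a protection domain, a connected RC queue pair, and a completion queue), and that the peer has shared the address and rkey of a buffer it registered out of band; the function name is illustrative and error handling is trimmed.

```c
/* Minimal RDMA-write sketch using libibverbs (link with -libverbs).
 * Assumes pd, a connected RC queue pair qp, and completion queue cq were
 * set up earlier, and that the peer shared the address and rkey of a
 * registered buffer at least 4096 bytes long. Error handling trimmed. */
#include <infiniband/verbs.h>
#include <stdint.h>

int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp, struct ibv_cq *cq,
                       uint64_t remote_addr, uint32_t rkey)
{
    static char buf[4096] = "hello over RDMA";

    /* Register local memory so the HCA can DMA out of it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf), IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    /* Scatter/gather entry describing the local source buffer. */
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = sizeof(buf),
        .lkey   = mr->lkey,
    };

    /* RDMA WRITE: the HCA places the data straight into the remote
     * application's registered memory without involving the remote CPU. */
    struct ibv_send_wr wr = {0};
    struct ibv_send_wr *bad_wr = NULL;
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr)) {
        ibv_dereg_mr(mr);
        return -1;
    }

    /* Busy-poll the completion queue until the write has finished. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;
    int ok = (wc.status == IBV_WC_SUCCESS) ? 0 : -1;

    ibv_dereg_mr(mr);
    return ok;
}
```

Note that once the write is posted, the CPU's only remaining job is to check the completion queue; the data movement itself is handled entirely by the HCAs.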

InfiniBand Network and Architecture
InfiniBand employs a switched fabric architecture, where multiple nodes are interconnected through switches. This architecture provides high bandwidth and low latency communication between nodes, making it ideal for demanding applications.
InfiniBand uses a channel-based architecture, and its component units can be broadly categorized into four main groups:
HCA (Host Channel Adapter): This unit serves as the interface between the host system and the InfiniBand network. It facilitates the transmission of data between the host and other devices connected to the network.
TCA (Target Channel Adapter): Opposite to the HCA, the TCA operates on the target devices within the InfiniBand network. It manages data reception and processing on these target devices.
InfiniBand link: This forms the physical connection channel within the InfiniBand network. It can be established using various mediums such as cables, optical fibers, or even on-board links within devices.
InfiniBand switches and routers: These components play a crucial role in facilitating networking within the InfiniBand infrastructure. They manage the routing of data packets between different devices connected to the network, enabling seamless communication and data exchange.

For more detailed information, please refer to the InfiniBand Network and Architecture Overview.
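To make the HCA's role a little more tangible, the short sketch below enumerates the InfiniBand devices visible on a host with libibverbs, roughly what the standard ibv_devinfo utility reports; the output format here is purely illustrative.

```c
/* Sketch: list the HCAs visible on a host with libibverbs (link with -libverbs). */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%-16s ports=%d max_qp=%d\n",
                   ibv_get_device_name(list[i]),
                   (int)attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```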
InfiniBand Protocol Stack
InfiniBand utilizes a layered protocol stack, including physical, link, network, transport, and application layers. Each layer plays a crucial role in ensuring efficient and reliable communication between InfiniBand devices.
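Applications touch this stack mainly at the transport layer, through queue pairs: the QP type selects the transport service (reliable connection, unreliable connection, or unreliable datagram). The sketch below creates a reliable-connection QP with libibverbs, assuming a protection domain and completion queue already exist; the helper name and queue depths are illustrative.

```c
/* Sketch: selecting the reliable-connection transport service by creating
 * an RC queue pair. IBV_QPT_UC and IBV_QPT_UD would select the unreliable
 * connected/datagram services instead. pd and cq are assumed to exist. */
#include <infiniband/verbs.h>

struct ibv_qp *create_rc_qp(struct ibv_pd *pd, struct ibv_cq *cq)
{
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .qp_type = IBV_QPT_RC,        /* reliable connection transport */
        .cap     = {
            .max_send_wr  = 64,       /* outstanding send work requests    */
            .max_recv_wr  = 64,       /* outstanding receive work requests */
            .max_send_sge = 1,
            .max_recv_sge = 1,
        },
    };
    /* Returns NULL on failure; the QP still has to be driven through the
     * INIT -> RTR -> RTS state transitions before it can carry traffic. */
    return ibv_create_qp(pd, &attr);
}
```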

InfiniBand Link Rates
InfiniBand supports multiple link rates, including Single Data Rate (SDR), Double Data Rate (DDR), Quad Data Rate (QDR), Fourteen Data Rate (FDR), Enhanced Data Rate (EDR), High Data Rate (HDR), and Next Data Rate (NDR), with each successive generation offering higher bandwidth and improved performance.
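A port's nominal rate is simply its negotiated lane count multiplied by the per-lane speed; for example, 4x HDR lanes at 50 Gb/s each give 200 Gb/s. The sketch below queries those two values with libibverbs and combines them. The encodings follow the ibv_port_attr documentation, and newer generations such as NDR may be reported through extended attributes on some stacks, so treat it as illustrative.

```c
/* Sketch: read a port's negotiated width and per-lane speed via libibverbs
 * and combine them into a nominal link rate. ctx is an open device context;
 * InfiniBand port numbers start at 1. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

void print_link_rate(struct ibv_context *ctx, uint8_t port)
{
    struct ibv_port_attr pa;
    if (ibv_query_port(ctx, port, &pa))
        return;

    /* active_width: 1 = 1x, 2 = 4x, 4 = 8x, 8 = 12x lanes */
    int lanes = (pa.active_width == 1) ? 1 :
                (pa.active_width == 2) ? 4 :
                (pa.active_width == 4) ? 8 : 12;

    /* active_speed: per-lane rate in Gb/s */
    double per_lane =
        (pa.active_speed == 1)   ?   2.5 :  /* SDR   */
        (pa.active_speed == 2)   ?   5.0 :  /* DDR   */
        (pa.active_speed == 4)   ?  10.0 :  /* QDR   */
        (pa.active_speed == 8)   ?  10.0 :  /* FDR10 */
        (pa.active_speed == 16)  ?  14.0 :  /* FDR   */
        (pa.active_speed == 32)  ?  25.0 :  /* EDR   */
        (pa.active_speed == 64)  ?  50.0 :  /* HDR   */
        (pa.active_speed == 128) ? 100.0 :  /* NDR   */
                                     0.0;   /* unknown encoding */

    printf("port %d: %dx lanes @ %.1f Gb/s -> ~%.0f Gb/s\n",
           (int)port, lanes, per_lane, lanes * per_lane);
}
```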
Benefits of InfiniBand
InfiniBand stands out from traditional networking technologies due to several key advantages:
High-Speed Data Transfer: InfiniBand excels in delivering exceptionally high data transfer rates, facilitating swift and efficient communication between nodes within the network. This rapid exchange of data is instrumental in supporting demanding applications and workloads that require substantial throughput.
Low Latency: InfiniBand achieves low latency via RDMA and a switched fabric architecture. RDMA enables direct data transfers between memory locations, reducing processing overhead. The switched fabric architecture ensures efficient data routing, crucial for latency-sensitive applications like high-frequency trading and scientific computing.
Scalability: InfiniBand's scalable architecture meets the evolving needs of modern data centers. It seamlessly expands network capacity to accommodate increasing workloads and supports dynamic resource allocation. Whether scaling up for growing data volumes or out to support additional compute resources, InfiniBand offers the flexibility needed to adapt effectively.
InfiniBand Products
FS offers a comprehensive range of InfiniBand products, including switches, adapters, transceivers and cables, catering to various networking requirements. These products are designed to deliver high performance, reliability, and scalability, meeting the needs of modern data center environments.
InfiniBand Switches
Vital for directing data within InfiniBand networks, these switches ensure high-speed data transmission at the physical layer. FS offers HDR 200Gb/s and NDR 400Gb/s switches with latency under 130ns, ideal for data centers demanding exceptional bandwidth.
| Product | — | — | — | — |
|---|---|---|---|---|
| Link Speed | 200Gb/s | 200Gb/s | 400Gb/s | 400Gb/s |
| Ports | 40 | 40 | 32 | 32 |
| Switch Chip | NVIDIA QUANTUM | NVIDIA QUANTUM | NVIDIA QUANTUM-2 | NVIDIA QUANTUM-2 |
| Switching Capacity | 16Tb/s | 16Tb/s | 51.2Tb/s | 51.2Tb/s |
| Fan Number | 5+1 Hot-swappable | 5+1 Hot-swappable | 6+1 Hot-swappable | 6+1 Hot-swappable |
| Power Supply | 1+1 Hot-swappable | 1+1 Hot-swappable | 1+1 Hot-swappable | 1+1 Hot-swappable |
InfiniBand Adapters
Serving as network interface cards (NICs), InfiniBand adapters enable devices to connect to InfiniBand networks. FS offers ConnectX-6 and ConnectX-7 cards, providing top performance and flexibility to meet the evolving demands of data center applications.
| Product | Ports | PCIe Interface | Supported InfiniBand Data Rates | Supported Ethernet Data Rates |
|---|---|---|---|---|
| — | Single-Port OSFP | PCIe 5.0 x16 | NDR/NDR200/HDR/HDR100/EDR/FDR/SDR | 400/200/100/50/40/10/1 Gb/s |
| — | Single-Port QSFP112 | PCIe 5.0 x16 | NDR/NDR200/HDR/HDR100/EDR/FDR/SDR | 400/200/100/50/25/10 Gb/s |
| — | Single-Port QSFP56 | PCIe 4.0 x16 | HDR/EDR/FDR/QDR/DDR/SDR | 200/50/40/25/20/1 Gb/s |
| — | Dual-Port QSFP56 | PCIe 4.0 x16 | HDR/EDR/FDR/QDR/DDR/SDR | 200/50/40/25/20/1 Gb/s |
| — | Single-Port QSFP56 | PCIe 4.0 x16 | HDR100/EDR/FDR/QDR/DDR/SDR | 100/50/40/25/20/1 Gb/s |
| — | Dual-Port QSFP56 | PCIe 4.0 x16 | HDR100/EDR/FDR/QDR/DDR/SDR | 100/50/40/25/20/1 Gb/s |
| — | Single-Port OSFP | PCIe 5.0 x16 | NDR200/NDR/HDR/EDR/FDR/SDR | — |
InfiniBand Transceivers and Cables
High-speed data transfer across an InfiniBand network ultimately depends on its transceivers and cables. FS provides a range of transceivers and DAC/AOC cables, including 40G, 56G, 100G, 200G, 400G, and 800G options. Active copper technology helps maintain signal integrity and minimize loss over varying distances.
| Product | Modules | AOC | DAC |
|---|---|---|---|
| — | 850nm 50m, 1310nm 500m, 1310nm 2km | — | 0.5m, 1m, 1.5m, 2m |
| — | 850nm 50m, 1310nm 500m | 3m, 5m, 10m | 1m, 1.5m, 2m |
| — | 850nm 100m, 1310nm 2km | 1m, 2m, 3m, 5m, 7m, 10m, 15m, 20m, 30m, 50m, 100m | 0.5m, 1m, 1.5m, 2m |
| — | 850nm 100m, 1310nm 500m, 1310nm 2km, 1310nm 10km | 1m, 2m, 3m, 5m, 7m, 10m, 15m, 20m, 30m, 50m, 100m | 0.5m, 1m, 1.5m, 2m, 3m |
| — | — | 1m, 2m, 3m, 5m, 10m, 15m, 20m, 25m, 30m, 50m, 100m | 0.5m, 1m, 1.5m, 2m, 3m, 4m, 5m |
| — | 850nm 150m | 1m, 3m, 5m, 7m, 10m, 15m, 20m, 25m, 30m, 50m, 100m | 0.5m, 1m, 1.5m, 2m, 3m, 4m, 5m |
The Future of InfiniBand
InfiniBand is poised to continue its growth trajectory, fueled by advancements in high-performance computing and cloud computing. Emerging areas such as exascale computing and High-Performance Data Analytics (HPDA) will further drive its adoption, cementing its position as a critical component of next-generation data center architectures. InfiniBand remains at the forefront, enabling the high-performance, low-latency communication that today's demanding applications require.