A Primer on InfiniBand Cables
Mar 28, 2024 · 1 min read
InfiniBand cables are instrumental components in high-performance computing and data center networks, facilitating rapid data transfer and low-latency communication. As the volume of data processed and transmitted continues to grow exponentially, the importance of InfiniBand cables in ensuring seamless connectivity and optimal performance cannot be overstated. This article briefly introduces the structure, classification, and connection scenarios of InfiniBand cables.
Operation Principles of InfiniBand Cables
InfiniBand cables utilize a switched fabric architecture, where data packets are transmitted between nodes via channel adapters. Each processor node is equipped with a host channel adapter, while peripheral devices are equipped with target channel adapters. These adapters exchange information to ensure secure and efficient data transmission, operating at predefined quality of service (QoS) levels.
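As a quick, optional illustration (not part of the original article), the minimal Python sketch below reads the per-port link rate and state that a host channel adapter exposes through Linux sysfs. It assumes a Linux host with the InfiniBand/RDMA stack loaded; the exact strings returned vary by driver and hardware.

```python
# Minimal sketch: list HCA ports with their link rate and state via Linux sysfs.
# Assumes a Linux host with the InfiniBand/RDMA stack loaded; output strings
# (e.g. "400 Gb/sec (4X NDR)", "4: ACTIVE") vary by driver and hardware.
from pathlib import Path

def list_ib_ports(root: str = "/sys/class/infiniband") -> None:
    base = Path(root)
    if not base.exists():
        print("No InfiniBand devices found")
        return
    for dev in sorted(base.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            rate = (port / "rate").read_text().strip()
            state = (port / "state").read_text().strip()
            print(f"{dev.name} port {port.name}: {rate}, {state}")

if __name__ == "__main__":
    list_ib_ports()
```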
Cables play a pivotal role in ensuring the performance, scalability, and future adaptability of InfiniBand networks. In pursuit of scalable performance, InfiniBand adopts a multilane cable architecture, where a serial data stream is distributed across multiple parallel physical links operating at identical signaling rates. The figure shows three link widths of 1, 4, and 12 parallel lanes, referred to as 1X, 4X, and 12X.
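To make the lane arithmetic concrete, here is a minimal sketch (added for illustration) that multiplies an approximate per-lane signaling rate by the number of lanes in a 1X, 4X, or 12X link. The per-lane figures for HDR and NDR are rough, rounded values assumed for the example, not specifications quoted from this article.

```python
# Illustrative lane arithmetic: aggregate link rate = lanes x per-lane rate.
# Per-lane rates are approximate, rounded values for common generations.
LANE_RATE_GBPS = {
    "HDR": 50,   # ~50 Gb/s per lane (PAM4)
    "NDR": 100,  # ~100 Gb/s per lane (PAM4)
}
LINK_WIDTHS = {"1X": 1, "4X": 4, "12X": 12}

def aggregate_rate_gbps(generation: str, width: str) -> int:
    """Aggregate link rate in Gb/s for a given generation and link width."""
    return LINK_WIDTHS[width] * LANE_RATE_GBPS[generation]

for gen in LANE_RATE_GBPS:
    for width in LINK_WIDTHS:
        print(f"{gen} {width}: ~{aggregate_rate_gbps(gen, width)} Gb/s")
```

Run as-is, this prints, for example, that a 4X HDR link aggregates to roughly 200 Gb/s and a 4X NDR link to roughly 400 Gb/s, in line with the cable speeds discussed below.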

Common Types of InfiniBand Cables
InfiniBand cables come in various types to accommodate different connectivity requirements and environments. The most common types include active optical cable (AOC), direct attach copper cable (DAC), and active copper cable (ACC). Each type has its own characteristics, advantages, and use cases, allowing flexible deployment in diverse networking scenarios. Compared with traditional Category 5e and Category 6 UTP cabling, InfiniBand cables are thicker, bulkier, and heavier. Because they are sensitive to bend radius, they require careful handling during installation, with proper strain relief to maintain connection reliability over time.
The InfiniBand cables provided by FS cover a range of speeds, including 200G HDR, 400/800G NDR, and the latest 1.6T XDR, meeting the needs of different scenarios. FS is known for its manufacturing expertise and strict quality control; our InfiniBand cables are rigorously tested to ensure full compatibility, stable performance, and high reliability, meeting stringent requirements for network connection quality.
The table below shows the InfiniBand cables that FS can provide.
| Category | Type | Form Factor | Cable Length | Power Consumption | Modulation Format | Application |
| --- | --- | --- | --- | --- | --- | --- |
| 1.6T XDR InfiniBand | DAC | | 0.5m-1m | 0.1W | PAM4 | For HPC and Machine Learning (ML) |
| 800G NDR InfiniBand | DAC | | 0.5m-3m | 0.1W | PAM4 | Compatible with NVIDIA DGX H100 |
| 800G NDR InfiniBand | DAC | | 0.5m-3m | 0.1W | PAM4 | Compatible with NVIDIA Quantum-2 Switches |
| 800G NDR InfiniBand | Breakout DAC | | 0.5m-3m | 0.1W | PAM4 | Compatible with Quantum-2 Switches and ConnectX-7 HCA |
| 800G NDR InfiniBand | Breakout DAC | | 0.5m-2m | 0.1W | PAM4 | Compatible with Quantum-2 Switches, ConnectX-7 Adapters, and BlueField-3 DPUs |
| 800G NDR InfiniBand | Breakout DAC | | 0.5m-2m | 0.1W | PAM4 | |
| 800G NDR InfiniBand | Breakout DAC | | 0.5m-2m | 0.1W | PAM4 | Compatible with Quantum-2 Switches and ConnectX-7 HCA |
| 800G NDR InfiniBand | Breakout DAC | | 0.5m-2m | 0.1W | PAM4 | Compatible with NVIDIA Quantum-2 Switches |
| 400G NDR InfiniBand | DAC | | 0.5m-2m | 0.1W | PAM4 | Compatible with ConnectX-7 HCA |
| 400G NDR InfiniBand | Breakout DAC | | 0.5m-2m | 0.1W | PAM4 | Compatible with Quantum-2 Switches and ConnectX-6 HCA |
| 400G NDR InfiniBand | Breakout DAC | | 0.5m-3m | 0.1W | PAM4 | |
| 400G NDR InfiniBand | Breakout AOC | | 3m-30m | / | PAM4 | |
| 200G HDR InfiniBand | DAC | | 3m-7m | 1.5W | PAM4 | Compatible with Quantum Switches and ConnectX-6 HCA |
| 200G HDR InfiniBand | DAC | | 0.5m-2m | 0.1W | PAM4 | |
| 200G HDR InfiniBand | Breakout DAC | | 0.5m-2m | ≤0.5W | PAM4 | |
| 200G HDR InfiniBand | Breakout DAC | | 3m-7m | 1.5W | PAM4 | |
| 200G HDR InfiniBand | AOC | | 1m-100m | 4.5W | PAM4 | |
| 200G HDR InfiniBand | Breakout AOC | | 1m-30m | 4.5W | PAM4 | |
| 200G HDR InfiniBand | Breakout AOC | | 3m-30m | 4.35W | PAM4 | Compatible with Quantum Switches |

Connection Scenarios of InfiniBand Cables
When deploying InfiniBand cable solutions, various pairings can be configured depending on the devices and scenarios involved. The following takes 400G/800G NDR InfiniBand cable connections as an example.
800G-to-800G Links for NDR Switch to NDR Switch
It is worth noting that most current high-speed 800G and 1.6T switches come with OSFP ports. OSFP modules and cable ends come in two designs: finned top and flat top. Finned-top ends are intended for switches, while flat-top ends are intended for network adapters and GPU servers.

800G to 4x 200G Breakout Interconnection
800G breakout cables can be used to establish both 800G-to-2×400G links and 800G-to-4×200G links. In this example, we focus on the 800G-to-4×200G configuration.
Due to variations in cable packaging, the breakout cable for 800G-to-4×200G interconnections must be selected based on the specific device requirements. For instance, OSFP packaging is available in two designs: flat top and finned top. Flat top modules are typically used with network cards and servers, while finned top modules are designed for switches. Read the post Understanding 400G and 800G OSFP Transceivers: Finned-Top vs. Flat-Top to find more details.
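As a small aid to this rule of thumb, the hypothetical helper below (not an FS or NVIDIA tool) maps a device type to the connector style its cable end typically needs. The device names and the mapping are illustrative assumptions drawn from this section.

```python
# Hypothetical lookup encoding the rule of thumb above: finned-top OSFP ends
# face switches; flat-top OSFP or QSFP112 ends face adapters, DPUs, and servers.
DEVICE_TO_CONNECTOR = {
    "quantum-2_switch": "OSFP (finned top)",
    "connectx-7_adapter": "OSFP (flat top) or QSFP112",
    "bluefield-3_dpu": "OSFP (flat top) or QSFP112",
    "gpu_server": "OSFP (flat top) or QSFP112",
}

def connector_for(device: str) -> str:
    """Return the connector style this device end typically expects."""
    if device not in DEVICE_TO_CONNECTOR:
        raise ValueError(f"Unknown device type: {device}")
    return DEVICE_TO_CONNECTOR[device]

print(connector_for("quantum-2_switch"))    # OSFP (finned top)
print(connector_for("connectx-7_adapter"))  # OSFP (flat top) or QSFP112
```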
800G OSFP to Four 200G OSFP (finned top) Links for Switch to Switch
When both ends of the OSFP InfiniBand DAC/AOC cable are finned top, the breakout cable can connect one switch to another.

800G OSFP to Four 200G OSFP (flat top) Links to ConnectX-7
When the breakout (branch) ends of the OSFP InfiniBand DAC/AOC cable are flat top, the cable can connect an 800G switch to four 200G ConnectX-7 network cards.

800G OSFP to Four 200G QSFP112 Links to ConnectX-7
In addition to OSFP flat-top InfiniBand DAC/AOC cables, the QSFP112 form factor is also compatible with network cards. With an 800G OSFP (finned top) to 4x 200G QSFP112 InfiniBand NDR DAC cable, a Quantum-2 switch can link to 400G ConnectX-7 QSFP112 network adapters. Therefore, it is essential to verify your device's specific packaging type when selecting cables.

400G to 400G Links to ConnectX-7
A 400G direct attach cable is also suitable for direct interconnection between two 400G InfiniBand ConnectX-7 network interface cards, meeting the low-latency data transmission requirements of high-performance computing environments.

400G to 2x 200G Breakout Interconnection
A 400G-to-2x 200G breakout cable interconnects 200G and 400G networks. It can link a 400G NDR InfiniBand switch to two 200G HDR InfiniBand switches, or a 400G NDR InfiniBand switch to two 200G ConnectX-7 network adapters, making it ideal for hierarchical network architectures or hybrid deployments.
400G to Two 200G Links for 400G NDR Switch to 200G HDR Switch

400G to Two 200G Links to ConnectX-6

400G to 4x 100G Breakout Interconnection
With a 400G to 4x 100G breakout cable, the NVIDIA NDR InfiniBand Quantum™-2 switch can interconnect with the NVIDIA InfiniBand Quantum™ switch configured for 100G, or support up to four ConnectX-6 adapters, enabling more granular bandwidth allocation and flexible network topologies.
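As a back-of-the-envelope illustration of what such a split buys in port count, the sketch below computes the downstream link count and per-link bandwidth when a bank of 400G ports is broken out four ways. The 32-port figure is an assumed example, not a specification from this article.

```python
# Back-of-the-envelope breakout math (illustrative assumptions only):
# each 400G switch port split four ways yields four 100G downstream links.

def breakout_fanout(switch_ports: int, split: int) -> int:
    """Total downstream links when every switch port is split `split` ways."""
    return switch_ports * split

def per_link_gbps(port_rate_gbps: float, split: int) -> float:
    """Bandwidth available to each downstream device after the split."""
    return port_rate_gbps / split

# Assumed example: 32 x 400G NDR ports broken out toward 100G ConnectX-6 adapters.
ports, split, rate = 32, 4, 400
print(breakout_fanout(ports, split), "downstream links")  # 128 downstream links
print(per_link_gbps(rate, split), "Gb/s per link")        # 100.0 Gb/s per link
```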
400G to Four 100G Links for Quantum™-2 Switch to Quantum™ Switch

400G to Four 100G Links to ConnectX-6

Summary
InfiniBand cables represent a cornerstone of modern networking infrastructure, providing the high-speed, low-latency connectivity required for today's demanding applications. The landscape of InfiniBand cables is continuously evolving, driven by advancements in networking technology and emerging application demands. Current trends include the adoption of higher-speed interfaces, enhanced reliability features, and integration with emerging technologies such as artificial intelligence and machine learning. You can check out this article to learn more about InfiniBand: Key Advantages of InfiniBand Technology.