
FS NVIDIA ConnectX‑8 400G SuperNIC Accelerates Modern AI Infrastructure

Howard · Jan 16, 2026 · 1 min read

FS is proud to introduce the NVIDIA® ConnectX-8 900-9X81Q-00CN-ST0, a next-generation 400G SuperNIC that integrates a PCIe Gen6-capable switch with ultra-high-speed networking in a single device. Designed for modern AI, HPC, and cloud-scale infrastructures, the ConnectX-8 (CX8) delivers exceptional throughput, streamlines server architecture, and enhances both power and cost efficiency.
As an authorized NVIDIA and Mellanox partner since August 2022, FS provides complete networking solutions and expert technical support to help enterprises fully leverage the performance of their AI and HPC clusters.
Overview of ConnectX-8 SuperNIC 900-9X81Q-00CN-ST0
The NVIDIA ConnectX-8 SuperNIC 900-9X81Q-00CN-ST0 brings next-generation AI networking performance to modern accelerated compute infrastructures. As part of the ConnectX-8 portfolio, this model features dual QSFP112 ports delivering up to 400Gb/s per port, for a total of 800Gb/s of network capacity, enabling high-bandwidth connectivity for large-scale GPU clusters and latency-sensitive workloads. Designed for massive AI fabrics and next-generation data centers, it integrates seamlessly with NVIDIA’s advanced networking platforms, supporting the scalability and robustness needed for trillion-parameter model training and high-density GPU deployments.
Specifications
Form factor: PCIe Half Height, Half Length (HHHL)
Networking port: Dual-port QSFP112 InfiniBand and Ethernet
InfiniBand: NDR/HDR/HDR100/EDR/SDR
Ethernet (Default): 400/200/100Gb/s Ethernet
Host interface: PCIe Gen6 ×16 lanes (Gen5 compatible)
Network interface: Dual-port 400G QSFP112
With improved power efficiency and NVLink-optimized data paths, the ConnectX-8 900-9X81Q-00CN-ST0 helps build more sustainable, future-ready AI infrastructures operating at large scale. It also enables advanced routing and telemetry-driven congestion control to maintain predictable performance during peak AI workloads. For InfiniBand environments, ConnectX-8 SuperNICs extend the benefits of NVIDIA® SHARP™ In-Network Computing, accelerating collective operations and enhancing overall system efficiency in HPC and AI clusters.
Why Is the ConnectX-8 SuperNIC Solution Revolutionary for Server Architecture?
In traditional PCIe switch server designs, CPUs, GPUs, NICs, and DPUs are interconnected through multiple discrete PCIe switches, creating a complex, multi-hop topology. A typical configuration—for example, a dual-CPU server with eight GPUs (such as NVIDIA L40) and five NIC-class devices (four NVIDIA ConnectX-7 NICs and one NVIDIA BlueField-3 DPU)—requires two to four standalone PCIe switches just to provide basic GPU-to-GPU and GPU-to-NIC connectivity. This increases component count, board area, and cooling requirements, driving up both system complexity and cost. Architecturally, GPU-to-GPU traffic often traverses two CPU sockets, where host CPU resources and inter-socket links can become bottlenecks, limiting effective bandwidth to around 25 GB/s or less depending on link utilization. Similarly, in common 2:1 GPU-to-NIC topologies, GPU-to-NIC communication is constrained by the available PCIe bandwidth and tightly coupled to the underlying PCIe generation, making it harder to scale IO performance in line with rapidly growing GPU compute capabilities.
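The bandwidth ceilings described above follow from simple PCIe arithmetic. The sketch below is illustrative only: it estimates usable x16 bandwidth per PCIe generation by folding all encoding and protocol overhead into a single assumed efficiency factor (~85%, an assumption, not a measured or NVIDIA-published figure), which is enough to show why a Gen4/Gen5-era multi-hop path lands in the ~25 GB/s range while Gen6 roughly doubles Gen5.

```python
# Illustrative arithmetic only: rough usable bandwidth of a PCIe x16
# link per generation. The 0.85 efficiency factor is an assumption
# standing in for encoding/protocol overhead, not a measured value.
PCIE_GT_PER_LANE = {"Gen4": 16, "Gen5": 32, "Gen6": 64}  # GT/s per lane

def x16_bandwidth_gbps(gen: str, efficiency: float = 0.85) -> float:
    """Approximate one-direction bandwidth of an x16 link in GB/s."""
    raw_gbps = PCIE_GT_PER_LANE[gen] * 16 / 8  # GT/s x 16 lanes -> GB/s raw
    return raw_gbps * efficiency

for gen in PCIE_GT_PER_LANE:
    print(f"{gen} x16: ~{x16_bandwidth_gbps(gen):.0f} GB/s usable")
```

Under these assumptions a Gen4 x16 path tops out near the ~25 GB/s effective figure cited above once inter-socket hops are involved, while a Gen6 x16 link offers roughly four times that headroom.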
Advantages of NVIDIA ConnectX-8 SuperNIC in Server Architecture
1. Integrated PCIe Gen6 Switch
ConnectX-8 SuperNIC integrates a 48-lane PCIe Gen6 switch directly into the network card, removing the need for multiple discrete PCIe switches. This consolidation simplifies the motherboard layout, reduces component count, and lowers system-level costs, creating a cleaner, more scalable architecture for GPU-dense servers.
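One way to picture the consolidation is as a lane budget on the integrated switch. The split below is a hypothetical illustration consistent with an x16 host uplink and a 2:1 GPU-to-NIC topology; NVIDIA does not publish this exact allocation here, so treat the downstream assignments as assumptions.

```python
# Hypothetical lane budget for the 48-lane integrated Gen6 switch.
# The x16 host uplink matches the card's stated host interface; the
# two x16 GPU downlinks are an assumed split for a 2:1 GPU:NIC layout.
lane_budget = {
    "host_uplink_x16": 16,  # to the CPU root complex
    "gpu0_x16": 16,         # assumed GPU downlink
    "gpu1_x16": 16,         # assumed GPU downlink
}
total_lanes = sum(lane_budget.values())
print(total_lanes)  # all 48 lanes accounted for on one device
```

The point of the sketch is that connectivity which previously required discrete switch silicon fits entirely within the SuperNIC's own lane budget.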
2. Double Effective Network Bandwidth per GPU
With up to 800 Gb/s of networking throughput, ConnectX-8 provides each GPU with up to 400 Gb/s of effective bandwidth in a 2:1 GPU-to-NIC configuration. This alleviates I/O bottlenecks, accelerates data movement between GPUs, NICs, and storage, and enables up to twice the collective communication performance for large-scale AI training workloads.
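The per-GPU figure is straightforward arithmetic on the port counts given above; the small helper below is a back-of-envelope sketch, not an NVIDIA tool.

```python
# Back-of-envelope: per-GPU network bandwidth in an N:1 GPU-to-NIC
# topology. Port count and per-port speed come from the article.
def per_gpu_bandwidth_gbps(ports: int, gbps_per_port: int,
                           gpus_per_nic: int) -> float:
    total = ports * gbps_per_port   # aggregate NIC throughput, Gb/s
    return total / gpus_per_nic     # even share per attached GPU

aggregate = per_gpu_bandwidth_gbps(2, 400, 1)  # -> 800.0 Gb/s total
per_gpu = per_gpu_bandwidth_gbps(2, 400, 2)    # -> 400.0 Gb/s per GPU
print(aggregate, per_gpu)
```

Doubling the per-port rate relative to a 200G-per-port design is what lifts the 2:1 per-GPU share from 200 Gb/s to 400 Gb/s.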
3. Improves GPU Communication Efficiency
ConnectX-8 optimizes critical GPU communication paths, including GPU-to-GPU across CPU sockets, GPU-to-NIC communication, and GPU-to-GPU via the same PCIe switch. Compared to traditional architectures, these improvements deliver higher bandwidth, lower latency, and more consistent GPU utilization, accelerating distributed AI and HPC workloads.
4. Streamlined System Design, Improved Airflow and Serviceability
By combining networking and PCIe switching into a single SuperNIC, ConnectX-8 simplifies PCB layout, improves airflow, and reduces power consumption. This results in a more compact, service-friendly, and energy-efficient server platform, helping system builders deploy high-density AI and HPC infrastructures more effectively.
Bottom Line
Whether building today’s high-speed GPU-to-NIC connections or planning next-generation AI and HPC data centers, the FS NVIDIA® ConnectX-8 SuperNIC 900-9X81Q-00CN-ST0 delivers the performance, scalability, and efficiency needed for large-scale, latency-sensitive workloads. FS also offers a full range of complementary 400G, 800G, and 1.6T optical transceivers to support diverse data center architectures.
The FS ConnectX‑8 portfolio spans 400G to 800G, supporting a wide range of AI data center deployments. Explore more high-performance CX8 network card solutions: FS Launches NVIDIA ConnectX-8 800G SuperNIC for Massive-Scale AI