FS NVIDIA® 100G Network Adapters Selection Guide
Mar 21, 2025
Driven by hybrid cloud services, AI, virtualization, and scale-out storage requirements, IT professionals are challenged to deliver reliable, high-speed networking performance while maximizing return on investment. As data traffic continues to surge due to cloud computing, big data analytics, and high-definition video streaming, 100G Ethernet networks have become the standard for modern enterprises, offering high bandwidth, low latency, and scalability.
However, to fully harness the potential of 100G networking, choosing the right 100G network adapter is essential. 100G network adapters serve as the bridge between servers and high-speed networks, enabling efficient data transfer, hardware acceleration, and advanced security features. These adapters not only support faster data transmission over fiber optic infrastructure but also enhance network efficiency by reducing energy consumption per gigabit transferred—critical for large-scale operations.
With seamless integration into existing network architectures, NVIDIA® 100G network adapters provide the reliability and performance needed for next-generation workloads. In this article, we’ll explore how to select the right 100G network adapter to optimize network performance for your enterprise needs.
Key Considerations When Choosing a 100G Network Adapter
As businesses scale their infrastructures, selecting the right 100G network adapter is crucial to ensuring optimal performance, efficiency, and future-proofing investments. The decision primarily revolves around two key factors:
1. Network Architecture: Should you choose an InfiniBand or an Ethernet adapter?
2. Adapter Performance & Features: Which ConnectX series adapter best suits your workload—ConnectX®-5 or ConnectX®-6?
InfiniBand Adapter or Ethernet Adapter
When deploying 100G networking, businesses must decide between InfiniBand adapters and Ethernet adapters based on performance needs. 100G InfiniBand adapters, such as those in the ConnectX series, provide ultra-low latency and RDMA capabilities ideal for HPC and AI. Meanwhile, 100G Ethernet NICs offer broad compatibility, making them the preferred choice for scalable cloud environments.
InfiniBand and Ethernet take different approaches to data transmission. InfiniBand uses a switched fabric topology and supports Remote Direct Memory Access (RDMA), which moves data directly between the memory of communicating systems without involving the CPU, delivering lower latency and higher effective throughput. Standard Ethernet, by contrast, typically relies on the host's TCP/IP stack, which consumes CPU cycles and adds latency, although RDMA over Converged Ethernet (RoCE) allows Ethernet adapters to close much of that gap.
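To make the RDMA path concrete, below is a minimal sketch, assuming a Linux host with libibverbs installed, that simply enumerates the RDMA-capable devices an application would use for direct memory-to-memory transfers:

```c
/* Minimal sketch (assumes libibverbs is installed; compile with -libverbs).
 * Lists the RDMA-capable devices -- the same verbs interface an RDMA transfer
 * would use to move data between application buffers, bypassing the kernel
 * TCP/IP stack and the CPU copy path described above. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d RDMA device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++) {
        /* e.g. "mlx5_0" for a ConnectX-5/6 port */
        printf("  %s\n", ibv_get_device_name(devices[i]));
    }

    ibv_free_device_list(devices);
    return 0;
}
```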
FS networking solutions include both InfiniBand and Ethernet options that support modern workloads and storage requirements, enabling a new era of accelerated computing that maximizes AI investments.
When selecting 100G network adapters, FS provides both NVIDIA® Ethernet adapters and VPI adapters. A VPI adapter can be set to deliver either InfiniBand or Ethernet semantics per port, and a quick way to check each port's current mode from software is sketched after the list below. A dual-port VPI adapter can be configured to one of the following options:
An adapter (HCA) with two InfiniBand ports
A NIC with two Ethernet ports
An adapter with one InfiniBand port and one Ethernet port at the same time
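Here is a minimal sketch of that per-port flexibility, assuming a Linux host with libibverbs, which reports whether each port of the first RDMA device is currently running InfiniBand or Ethernet link-layer semantics. Persistent port-type changes are made with NVIDIA's configuration tools rather than from application code.

```c
/* Minimal sketch (assumes libibverbs; compile with -libverbs). For each port
 * on the first RDMA device found, report whether it currently operates with
 * InfiniBand or Ethernet link-layer semantics -- the per-port choice a VPI
 * adapter exposes. */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", ibv_get_device_name(devs[0]));
        ibv_free_device_list(devs);
        return 1;
    }

    struct ibv_device_attr dev_attr;
    ibv_query_device(ctx, &dev_attr);

    for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
        struct ibv_port_attr port_attr;
        if (ibv_query_port(ctx, port, &port_attr))
            continue;
        const char *layer =
            port_attr.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet"
                                                            : "InfiniBand";
        printf("%s port %u: %s\n", ibv_get_device_name(devs[0]), port, layer);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```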
Ethernet adapters offer best-in-class network performance for low-latency, high-throughput applications. 100G Ethernet adapters provide advanced hardware offloads and accelerations that enable the highest ROI and lowest total cost of ownership for hyperscale, public and private cloud, storage, machine learning, AI, big data, and telco platforms.
VPI adapters support both Ethernet and InfiniBand transmission, with InfiniBand data rates including SDR, DDR, QDR, FDR, EDR, and HDR100. These adapters combine high link speeds with innovative In-Network Computing to achieve extreme performance and scale. By lowering the cost per operation, VPI adapters increase ROI for high-performance computing (HPC), machine learning (ML), advanced storage, clustered databases, low-latency embedded I/O applications, and more.
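For reference, the generation names above correspond to the commonly quoted nominal link rates tabulated in the short sketch below; figures are nominal, and HDR100 denotes 100Gb/s carried over two HDR lanes.

```c
/* Quick reference sketch: commonly quoted nominal InfiniBand link rates for
 * the generations a VPI adapter can negotiate (4-lane links, except HDR100,
 * which carries 100 Gb/s over two HDR lanes). Per-lane figures are rounded. */
#include <stdio.h>

struct ib_rate { const char *name; int lanes; double gbps_per_lane; };

int main(void)
{
    const struct ib_rate rates[] = {
        { "SDR",    4,  2.5 },   /*  10 Gb/s */
        { "DDR",    4,  5.0 },   /*  20 Gb/s */
        { "QDR",    4, 10.0 },   /*  40 Gb/s */
        { "FDR",    4, 14.0 },   /* ~56 Gb/s */
        { "EDR",    4, 25.0 },   /* 100 Gb/s */
        { "HDR100", 2, 50.0 },   /* 100 Gb/s over two HDR lanes */
    };

    for (size_t i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
        printf("%-7s ~%3.0f Gb/s (%d lanes x %.1f Gb/s per lane)\n",
               rates[i].name,
               rates[i].lanes * rates[i].gbps_per_lane,
               rates[i].lanes, rates[i].gbps_per_lane);
    return 0;
}
```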
ConnectX®-5 vs. ConnectX®-6: Which 100G Adapter is Right for You?
When selecting a 100G network adapter, FS offers two options within the NVIDIA® ConnectX series: ConnectX®-5 and ConnectX®-6. Each model delivers distinct performance, acceleration technologies, and security enhancements, catering to different application needs.
ConnectX®-5 delivers a message rate of 148 million messages per second (DPDK), features a PCIe 3.0 x16 host interface, and offers advanced offload capabilities to meet the needs of the most demanding applications. This boosts data center infrastructure efficiency and provides a high-performance, flexible solution for Web 2.0, cloud, data analytics, and storage platforms.
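The message rate quoted here is a DPDK measurement. As a rough illustration of how that kind of testing starts, below is a minimal sketch, assuming a recent DPDK installation with the mlx5 poll-mode driver, that only initializes the environment and lists the Ethernet ports it can see; it is a starting point, not a benchmark.

```c
/* Minimal DPDK sketch (assumes a recent DPDK installation; build with the
 * flags from `pkg-config --cflags --libs libdpdk`). It initializes the EAL
 * and lists the Ethernet ports the poll-mode drivers (e.g. mlx5 for ConnectX
 * adapters) have bound -- the starting point of the packet-rate testing
 * behind DPDK message-rate figures. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    printf("%u DPDK port(s) available\n", rte_eth_dev_count_avail());

    uint16_t port_id;
    RTE_ETH_FOREACH_DEV(port_id) {
        struct rte_eth_dev_info info;
        if (rte_eth_dev_info_get(port_id, &info) != 0)
            continue;
        printf("port %u: driver %s\n", port_id, info.driver_name);
    }

    rte_eal_cleanup();
    return 0;
}
```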
ConnectX®-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s, delivers a message rate of 215 million messages per second (DPDK), and supports PCIe 4.0 x16 for high-speed data transfer, along with advanced encryption capabilities including IPSec, TLS, and AES-XTS. By incorporating new acceleration engines, it maximizes performance and scalability for cloud, Web 2.0, big data, storage, and machine learning.
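On Linux, TLS offload of this kind is typically consumed through kernel TLS (kTLS). The sketch below assumes a kernel built with TLS support and an already-connected TCP socket that has completed a TLS 1.2 handshake; it shows how an application hands session keys to the kernel, after which, with driver support, record encryption can be pushed down to a capable NIC. The key material shown is a dummy placeholder.

```c
/* Minimal Linux kTLS sketch (assumes kernel TLS support and a connected TCP
 * socket `fd` after a TLS 1.2 handshake). Once the keys are installed, plain
 * send() on fd produces encrypted TLS records; on adapters with inline TLS
 * offload, and with driver support enabled, that encryption can be performed
 * by the NIC. Key/IV values here are dummies for illustration only. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/tcp.h>
#include <linux/tls.h>

#ifndef SOL_TCP
#define SOL_TCP 6
#endif
#ifndef TCP_ULP
#define TCP_ULP 31
#endif
#ifndef SOL_TLS
#define SOL_TLS 282
#endif

int enable_ktls_tx(int fd)
{
    /* Attach the TLS upper-layer protocol to the TCP socket. */
    if (setsockopt(fd, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
        return -1;

    /* Hand the negotiated AES-128-GCM session keys to the kernel
     * (real values come from the TLS handshake, e.g. via a TLS library). */
    struct tls12_crypto_info_aes_gcm_128 crypto;
    memset(&crypto, 0, sizeof(crypto));
    crypto.info.version = TLS_1_2_VERSION;
    crypto.info.cipher_type = TLS_CIPHER_AES_GCM_128;
    memset(crypto.key,  0x11, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
    memset(crypto.iv,   0x22, TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memset(crypto.salt, 0x33, TLS_CIPHER_AES_GCM_128_SALT_SIZE);

    return setsockopt(fd, SOL_TLS, TLS_TX, &crypto, sizeof(crypto));
}
```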
ConnectX®-6 VPI adapters support both InfiniBand and Ethernet through Virtual Protocol Interconnect (VPI) technology. For networks with heavier InfiniBand demands, ConnectX®-6 VPI adapters offer ultra-low latency and RDMA capabilities, making them a strong choice for high-performance computing (HPC) and AI workloads. Support for both InfiniBand and Ethernet also allows businesses to adapt their infrastructure as needed, ensuring maximum efficiency across different applications.

FS NVIDIA® 100G Network Adapters Comparison Table
| | ConnectX-6 VPI | ConnectX-6 VPI | ConnectX-6 Dx | ConnectX-5 | ConnectX-5 |
|---|---|---|---|---|---|
| Part Number | | | | | |
| Technology | InfiniBand & Ethernet | InfiniBand & Ethernet | Ethernet | Ethernet | Ethernet |
| Ports | Dual-Port QSFP56 | Single-Port QSFP56 | Dual-Port QSFP56 | Dual-Port QSFP28 | Single-Port QSFP28 |
| Host Interface | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 3.0 x16 | PCIe 3.0 x16 |
| Data Transmission Rate | Ethernet: 100/50/40/25/10 GbE; InfiniBand: HDR100, EDR, FDR, QDR, DDR, SDR | Ethernet: 100/50/40/25/10 GbE; InfiniBand: HDR100, EDR, FDR, QDR, DDR, SDR | 100/50/40/25/10/1 GbE | 100/50/40/25/10/1 GbE | 100/50/40/25/10/1 GbE |
| IB RDMA / RoCE | IB RDMA, RoCE | IB RDMA, RoCE | RoCE | RoCE | RoCE |
| Security | Secure Boot | Secure Boot | Secure Boot | - | - |
| Encryption | - | - | IPSec/TLS/AES-XTS | - | - |
This table highlights the key differences between NVIDIA® 100G adapters, helping FAEs and IT managers make informed decisions based on their organization's requirements, applications, and budget. For detailed information on the features and protocol support of each model, refer to the product documentation.
Conclusion
Upgrading to a 100G network adapter is a smart decision that ensures your network can handle the data demands of today and the future.
For InfiniBand-centric infrastructures, the VPI adapters in the ConnectX®-6 series are the ideal choice. If security and offloading are priorities, ConnectX®-6 Dx offers advanced encryption. For general high-performance Ethernet applications, ConnectX®-5 provides a cost-effective solution.
At FS, we have the solution to fit your needs. Check out our wide selection of 100G network adapters today and get your network ready for the next generation of technology.