Data Center Network Architecture Needs 800G/400G Transceivers
Jan 24, 2024
As HPC continues to advance, the increasing scale of data processing and model training is accelerating the upgrade of data center network architecture. Traditional architectures can no longer meet the stringent requirements for bandwidth and latency in large-scale training and computation. Driven by these challenges, data centers are evolving toward high-bandwidth, low-latency topologies supported by 400G and 800G optical transceivers. This article explores how HPC drives the evolution of data center networks, examines the role of high-speed optical modules in enabling efficient connectivity, and presents typical 400G/800G transceiver solutions for modern data centers.
HPC Drives the Upgrade of Data Center Network Architecture
The Fat Tree Data Center Network Architecture
As HPC large-model training sees wider adoption across industries, traditional networks can no longer meet the bandwidth and latency requirements of large-model cluster training. Distributed training requires constant communication between GPUs, which drives up east-west traffic in ML data centers. However, conventional three-tier architectures, consisting of core, aggregation, and access layers, were originally optimized for north-south traffic. When exposed to the massive east-west traffic of HPC workloads, this design becomes inefficient, producing a high bandwidth convergence (oversubscription) ratio, high latency between access switches, and limited NIC bandwidth. These bottlenecks severely impede the efficiency of HPC applications and make adopting a new network architecture imperative.
To meet this demand, the fat-tree architecture has emerged as an ideal solution. In a traditional tree topology, bandwidth converges layer by layer, so the bandwidth near the root is far less than the total bandwidth of all the leaves. A fat tree, by contrast, resembles a real tree: links grow thicker toward the root, so aggregate bandwidth is preserved from leaf to root rather than converging. By configuring a 1:1 convergence ratio, that is, with no bandwidth oversubscription at any layer, the fat-tree architecture guarantees non-blocking transmission, eliminating the bottlenecks of traditional tree networks and accelerating training.
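To make the convergence-ratio arithmetic concrete, here is a minimal Python sketch comparing an oversubscribed legacy access layer with a non-blocking fat-tree leaf. The port counts and speeds are illustrative assumptions, not a specific product configuration.

```python
# Bandwidth convergence (oversubscription) at one switch layer.
# All port counts and speeds below are hypothetical examples.

def oversubscription(down_ports: int, down_gbps: float,
                     up_ports: int, up_gbps: float) -> float:
    """Ratio of total downlink to total uplink bandwidth (1.0 = non-blocking)."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Legacy access switch: 48 x 10G server ports, 4 x 40G uplinks -> 3:1 converged.
print(f"legacy three-tier access: {oversubscription(48, 10, 4, 40):.1f}:1")

# Fat-tree leaf: 32 x 400G down, 16 x 800G up -> 1:1, i.e. non-blocking.
print(f"fat-tree leaf:            {oversubscription(32, 400, 16, 800):.1f}:1")

# A classic k-ary fat tree built from identical k-port switches
# supports k**3 / 4 hosts at full bisection bandwidth.
k = 64
print(f"{k}-port fat tree supports {k**3 // 4} hosts")
```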
Furthermore, to minimize latency, the network is designed to form resource pools of switches, enabling single-hop communication for nodes within the same pool. When combined with RDMA (Remote Direct Memory Access) technology, this architecture can reduce latency by approximately tenfold compared to TCP/IP networks, thus enhancing communication efficiency for HPC data centers.
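As a rough illustration of the latency argument, the sketch below compares hypothetical end-to-end budgets for a multi-hop TCP/IP path and a single-hop RDMA path within a switch resource pool. Every figure here is an assumed placeholder chosen only to show the shape of the calculation, not a measured value.

```python
# Hypothetical latency budgets: fewer switch hops plus kernel bypass.
PER_HOP_US = 0.5      # assumed switch forwarding latency per hop (microseconds)
TCP_STACK_US = 30.0   # assumed kernel TCP/IP stack processing (microseconds)
RDMA_STACK_US = 3.0   # assumed RDMA kernel-bypass processing (microseconds)

def end_to_end_us(hops: int, stack_us: float) -> float:
    return hops * PER_HOP_US + stack_us

# Three-tier path: access -> aggregation -> core -> aggregation -> access.
print(f"TCP over 5 hops : {end_to_end_us(5, TCP_STACK_US):.1f} us")
# Single hop within a resource pool, with RDMA kernel bypass.
print(f"RDMA over 1 hop : {end_to_end_us(1, RDMA_STACK_US):.1f} us")
```

With these placeholder numbers the RDMA path comes out roughly nine times faster, in line with the order-of-magnitude improvement described above.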

Data Center Network Rate Upgrade Evolution
The pace of network speed evolution in data centers is driven by growing compute and storage demands. Earlier deployments widely adopted 10G and 25G server connections for their low cost and high density. To meet rising bandwidth needs, 40G and 100G solutions achieved higher speeds by aggregating four parallel lanes, but this approach increased transceiver size, power consumption, and overall system complexity. With the adoption of PCIe Gen4/Gen5 and the rising need for host connectivity, single-lane speeds have advanced to 50 Gbps, built on the SerDes technology introduced with 400GbE. These 50G lanes can be combined to form 100G, 200G, or 400G interfaces within a unified architecture. This approach not only improves scalability and efficiency but also lays the foundation for the ongoing transition toward 800G networks.
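The lane-aggregation arithmetic behind these interface rates can be summarized in a short sketch. The figures are nominal payload rates per Ethernet generation and ignore FEC and line-coding overhead.

```python
# Nominal Ethernet interface rates as lanes x per-lane payload rate.
GENERATIONS = {
    # interface : (lanes, Gbps per lane)
    "40GbE":  (4, 10),   # 4 x 10G NRZ
    "100GbE": (4, 25),   # 4 x 25G NRZ
    "200GbE": (4, 50),   # 4 x 50G PAM4
    "400GbE": (8, 50),   # 8 x 50G PAM4
    "800GbE": (8, 100),  # 8 x 100G PAM4
}

for name, (lanes, rate) in GENERATIONS.items():
    print(f"{name:>6}: {lanes} lanes x {rate}G = {lanes * rate}G")
```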

The Rise of 800G/400G Transceivers Driven by HPC Data Centers
Reasons Behind the Growing Demand for 800G/400G Transceivers
Large-Scale Data Processing Demands
Training and inference for HPC workloads depend on extensive datasets, so data centers must handle the transmission of substantial data efficiently. The advent of 800G transceivers provides the added bandwidth to address this challenge. The upgraded data center network architecture typically spans two tiers from switch to server, with 400G serving as the bottom layer; in practice, an 800G switch port is often broken out into two 400G links toward the servers. Upgrading to 800G will therefore also boost demand for 400G.
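A minimal sketch of that breakout math, assuming a hypothetical switch with 64 ports of 800G, each split into two 400G server-facing links:

```python
# Hypothetical two-tier breakout: each 800G switch port feeds 2 x 400G links.
switch_ports_800g = 64                 # assumed 800G ports on one switch
links_400g = switch_ports_800g * 2     # 2 x 400G breakout per 800G port
total_gbps = links_400g * 400
print(f"{switch_ports_800g} x 800G ports -> {links_400g} x 400G links "
      f"({total_gbps / 1000:.1f} Tbps toward the server layer)")
```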
Real-Time Requirements
In certain HPC application scenarios, real-time data processing is crucial. In autonomous driving systems, for example, the large volumes of data generated by sensors must be transmitted and processed promptly, making latency optimization a pivotal factor in ensuring timely responses. High-speed optical modules help meet these real-time demands by reducing latency in data transmission and processing, thereby enhancing the system's responsiveness.
Multitasking Concurrency
Modern data centers often need to process multiple tasks concurrently, such as image recognition and natural language processing. High-speed 800G/400G optical transceivers provide the bandwidth headroom needed to support such multitasking workloads.
Bright Prospects for the 800G/400G Optical Module Market
The demand for 400G and 800G optical modules continues to rise steadily in 2025, driven by the expanding scale of HPC workloads and the increasing need for high-bandwidth, low-latency data transmission within data centers. According to recent reports from LightCounting and Cignal AI, global deployments of high-speed optical modules have maintained strong momentum since 2024, with 800G transceivers becoming a key growth engine. The surge in computing and interconnect requirements is propelling continuous upgrades in network infrastructure and accelerating the transition toward 800G deployments. This trend underscores the robust growth outlook for the 400G/800G optical transceiver market as data centers evolve to meet the performance demands of next-generation HPC applications.

Typical 800G/400G Transceiver Solutions in Data Centers
In data center networks, achieving interconnects across different distances has traditionally required multiple transmission technologies, which increased both system complexity and deployment costs. With the advancement of high-density 400G and 800G pluggable optical transceivers, network design has become more streamlined. Based on InfiniBand and Ethernet protocols, modern HPC data centers adopt two primary interconnection architectures. FS provides optical connectivity solutions covering different distance requirements for both architectures. The following sections take short-reach solutions as typical examples, showcasing how FS InfiniBand and Ethernet products enable efficient, high-bandwidth interconnects in modern data center environments.
InfiniBand Solution
For high-performance computing environments, InfiniBand networks deliver ultra-low latency and high throughput, making them ideal for large-scale GPU clusters. FS offers the MMA4Z00-NS Compatible 800G OSFP finned top SR8 InfiniBand transceiver and compatible switches such as the MQM9790-NS2F to build high-bandwidth connections between the spine and leaf layers. At the server layer, the MMA4Z00-NS400 Compatible 400G OSFP flat top SR4 transceiver connects GPU nodes through MCX75510AAS-NEAT NICs, ensuring lossless transmission and high-speed east-west traffic within the cluster.

Ethernet Solution
In Ethernet-based data centers, 400G and 800G modules are widely deployed across different connection distances. FS provides a complete Ethernet product portfolio covering transmission distances from 50 m to 40 km. Among these, 400G QSFP-DD SR8 modules are mainly used for ToR-to-Leaf connections (<100 m), 400G QSFP-DD FR4 modules for Leaf-to-Spine links (<2 km), and 400G QSFP-DD LR4 modules for Spine-to-Core connections (<10 km) or even data center interconnect (DCI). This hierarchical deployment ensures high bandwidth and low latency at every tier.
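This tiered selection can be encoded in a small sketch that simply applies the reach thresholds listed above; the function name and the distance examples are illustrative.

```python
# Pick a 400G module class by link distance, following the reach tiers above.
def pick_400g_module(distance_m: float) -> str:
    if distance_m < 100:
        return "400G QSFP-DD SR8 (ToR-to-Leaf)"
    if distance_m < 2_000:
        return "400G QSFP-DD FR4 (Leaf-to-Spine)"
    if distance_m < 10_000:
        return "400G QSFP-DD LR4 (Spine-to-Core / DCI)"
    return "longer-reach optics required (e.g. ER/ZR class)"

for d in (30, 500, 8_000, 25_000):
    print(f"{d:>6} m -> {pick_400g_module(d)}")
```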

The Era of 800G/400G Transceivers is Here
The transition to 800G and 400G optical transceivers marks a significant evolution in data center capabilities, addressing the escalating bandwidth and latency demands of modern high-performance computing. With advanced technologies such as PAM4 modulation and LPO (linear-drive pluggable optics), these transceivers provide the foundational infrastructure required to support next-generation computational workloads.
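A brief sketch of the PAM4 arithmetic helps explain these speeds: PAM4 encodes log2(4) = 2 bits per symbol, doubling per-lane throughput versus NRZ at the same symbol rate. The symbol rates below are the nominal 400G/800G Ethernet values; the gap between line rate and payload rate reflects FEC and line-coding overhead.

```python
import math

def lane_gbps(symbol_rate_gbd: float, levels: int) -> float:
    """Per-lane line rate: symbol rate times bits per symbol (log2 of levels)."""
    return symbol_rate_gbd * math.log2(levels)

print(f"NRZ  @ 26.5625 GBd: {lane_gbps(26.5625, 2):.4f} Gb/s")  # ~26.6G line rate
print(f"PAM4 @ 26.5625 GBd: {lane_gbps(26.5625, 4):.3f} Gb/s")  # 53.125G -> 50G payload
print(f"PAM4 @ 53.125  GBd: {lane_gbps(53.125, 4):.2f} Gb/s")   # 106.25G -> 100G payload
# Eight such 100G-payload PAM4 lanes make up a nominal 800GbE interface.
```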
FS provides a comprehensive portfolio of 400G and 800G optical transceivers for both Ethernet networking and InfiniBand networking, supporting various transmission distances. Each module undergoes rigorous compatibility and performance testing to ensure reliability in diverse network environments. For specialized requirements, FS offers customized solution design and integration services to address specific application challenges.