Enhancing Data Center Performance with Spine-Leaf and EVPN-VXLAN
Updated on Jun 4, 2024
In response to current business transformation trends and increasing big data demands, most data center networks now use advanced Spine-Leaf architecture and EVPN-VXLAN technology for network virtualization. This approach ensures high-performance, high-bandwidth, low-latency traffic with excellent scalability and flexibility. The transition from traditional core-aggregation-access models to Spine-Leaf maximizes interconnection bandwidth and simplifies expansion. Additionally, forward-looking equipment selection is essential, anticipating technological advancements and industry trends to optimize data center resources and support core business operations efficiently.
Advanced Data Center Network Architecture
The high-performance data center network architecture has progressed from the conventional core-aggregation-access model to the Spine-Leaf architecture, maximizing interconnection bandwidth, reducing multi-layer convergence (oversubscription) ratios, and simplifying expansion. In this architecture, every interconnection link runs at 100G, with the convergence ratio tuned to the traffic within and among PODs (Points of Delivery) in the data center. The Spine-Leaf design's Layer 3 underlay network decouples the spine and leaf tiers: when a traffic bottleneck appears between tiers, the fabric scales horizontally by adding uplinks and lowering the convergence ratio, with minimal impact on existing bandwidth. The overlay network employs distributed gateways built on EVPN-VXLAN technology, allowing network deployments and resource allocation to adapt flexibly to business requirements.
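The convergence (oversubscription) ratio described above is easy to reason about numerically. The sketch below uses hypothetical port counts (48 x 25G host ports, 8 x 100G uplinks are assumptions, not figures from this design) to show how adding uplinks lowers the ratio:

```python
def oversubscription_ratio(uplinks, uplink_gbps, downlinks, downlink_gbps):
    """Ratio of downstream (host-facing) to upstream (fabric-facing) capacity.

    A ratio of 1.0 is non-blocking; higher values mean the uplinks are
    oversubscribed. Adding uplinks lowers the ratio, which is how a
    Spine-Leaf fabric scales out when a bottleneck appears.
    """
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf: 48 x 25G host ports over 8 x 100G uplinks -> 1.5:1
ratio = oversubscription_ratio(uplinks=8, uplink_gbps=100,
                               downlinks=48, downlink_gbps=25)
```

Doubling the uplinks to 16 in this example would bring the ratio down to 0.75:1, i.e. a non-blocking leaf.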

Leveraging the design and deployment expertise of Internet-scale data center networks, this solution adopts a spine-leaf network architecture and integrates EVPN-VXLAN technology for network virtualization, delivering a flexible and scalable network infrastructure for upper-layer services. The data center network is segregated into production and office networks, isolated and safeguarded by domain firewalls, and connected to office buildings, laboratories, and regional center exits via network firewalls.
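On the VXLAN side of EVPN-VXLAN, each virtual network is identified by a 24-bit VXLAN Network Identifier (VNI) carried in an 8-byte header defined in RFC 7348, while EVPN uses BGP to distribute MAC/IP reachability between tunnel endpoints. A minimal sketch of the header encoding, for illustration only:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header per RFC 7348: a flags byte with the
    I bit (0x08) set to mark the VNI as valid, three reserved bytes,
    a 24-bit VNI, and one final reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # "!B3xI": flags byte, 3 zero pad bytes, then VNI shifted into the
    # top 24 bits of the final 32-bit word (low byte is reserved).
    return struct.pack("!B3xI", 0x08, vni << 8)

header = vxlan_header(100)  # VNI 100 -> b'\x08\x00\x00\x00\x00\x00\x64\x00'
```

The 24-bit VNI space (about 16 million segments) is what lets the overlay carve out far more isolated virtual networks than the 4096 available with traditional VLANs.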

The core switches of the production and office networks interconnect the PODs and connect to the firewall devices, offering up to 1.6 Tb/s of inter-POD communication bandwidth and 160G of high-speed network egress capacity. Within each POD, the internal east-west network capacity is 24 Tb/s, providing robust support for high-performance computing clusters (CPU/GPU) and storage clusters and minimizing packet loss caused by network performance bottlenecks.
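The headline figures above are straightforward aggregates of 100G links. A small sketch, with hypothetical link counts chosen only to reproduce the quoted totals (the actual link counts are not stated in the text):

```python
def aggregate_tbps(link_count, link_gbps=100):
    """Aggregate capacity in Tb/s of a bundle of equal-speed links."""
    return link_count * link_gbps / 1000

# Hypothetical counts: 16 x 100G core links -> the 1.6 Tb/s inter-POD
# figure; 240 x 100G intra-POD links -> the 24 Tb/s east-west figure.
inter_pod_tbps = aggregate_tbps(16)
intra_pod_tbps = aggregate_tbps(240)
```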
Cabling infrastructure is designed around the Spine-Leaf architecture. Switches within each POD are interconnected over 100G links and deployed in TOR (Top of Rack) mode, with 2-3 cabinets consolidated into one TOR group. TORs connect to Leafs via 100G links. Each POD's Leaf switches are split into two groups deployed in separate dedicated network cabinets, improving cross-cabinet reliability within the POD. The result is a well-defined network structure that simplifies cable deployment and management.
Forward-Looking Equipment Selection for High-Performance Data Center
When devising and establishing a high-performance data center network, it's crucial to anticipate technological advancements and industry trends, along with operational costs, for at least the next half-decade. This strategic foresight aids in optimizing the utilization of existing data center resources to effectively support the enterprise's core operations.
The choice of network switches plays a pivotal role in the overall data center network design. Traditional approaches often opt for chassis-based devices to increase the network's overall capacity, but these offer only limited scalability and carry several constraints and risks:
Chassis-based devices have limited overall capacity and cannot keep pace with the growing scale demands of data centers.
Dual-homed connections to core chassis-based devices create a large fault radius, potentially compromising business continuity.
The multi-chip architecture of chassis-based devices creates severe bottlenecks in traffic processing capacity and network latency.
Deploying chassis-based devices is complex, and diagnosing and repairing failures takes long cycles, prolonging business interruption during upgrades and maintenance.
To accommodate future business growth, chassis-based devices require reserved slots, raising upfront investment costs.
Subsequent expansion faces constraints such as vendor lock-in and diminished bargaining power, substantially inflating the cost of future scaling.
Hence, for this project's network equipment selection, it is advisable to adopt a modular switch network architecture built from fixed-form-factor switches rather than chassis devices. Standardizing on the same switch model across tiers lets the maintenance team come up to speed quickly, and provides operational flexibility for future network architecture changes, device reuse, and repair replacements.
By pairing the Spine-Leaf (CLOS) architecture with modular switch networking, both the initial network investment and the total cost of ownership (TCO) are reduced significantly. The Spine-Leaf framework scales horizontally: in an eight-spine fabric, even a full spine switch outage affects only 1/8 of the fabric bandwidth, so business operations continue uninterrupted. For future expansion, additional switches and tiers can be added as the data center grows, increasing access capacity and backbone switching capability. The entire network can be procured and deployed on demand, based on service, application, and business requirements.
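The 1/8 figure follows from ECMP spreading traffic evenly across the spines: losing one spine removes only that spine's share of the fabric bandwidth. A minimal sketch, assuming an eight-spine fabric:

```python
def surviving_fraction(total_spines, failed_spines=1):
    """Fraction of fabric bandwidth left after spine failures, assuming
    ECMP distributes traffic evenly across all spine switches."""
    if failed_spines > total_spines:
        raise ValueError("cannot fail more spines than exist")
    return (total_spines - failed_spines) / total_spines

# With 8 spines, one outage leaves 7/8 of the bandwidth (1/8 lost),
# matching the figure quoted above.
remaining = surviving_fraction(8)
```

The same arithmetic shows why wider spine layers shrink the blast radius: with 16 spines, a single failure costs only 1/16 of the fabric capacity.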
Summary: FS Accelerate Data Center Interconnection
FS is at the forefront of data center interconnection, aligning with industry trends by adopting advanced Spine-Leaf architecture and EVPN-VXLAN technology for network virtualization. This approach ensures high-bandwidth, low-latency network traffic while offering scalability and flexibility.
Looking ahead, as the costs of high-speed optical modules and AOCs/DACs decrease, data center interconnect technologies will evolve further. FS, a leading provider of optical networking solutions, is dedicated to creating a connected and intelligent world through innovative computing and networking solutions. Their product lineup includes efficient and reliable switch & AOC/DAC/optical module solutions tailored for data centers, high-performance computing, edge computing, artificial intelligence, and other applications.

FS offers a wide range of optical modules, AOCs, and DACs supporting both InfiniBand and Ethernet, ranging from 100G to 800G, catering to diverse data center needs. These high-quality interconnect products ensure faster and more reliable data transmission solutions for data centers. With a professional technical team, extensive implementation experience, and outstanding products and solutions, FS has gained the trust and preference of many customers. Their solutions enable the construction of data center networks that can meet future technological demands, deliver efficient services, and reduce operational costs and energy consumption.