AI Everywhere Is Connected by Ethernet: How Ethernet Powers AI Development
Updated on Oct 9, 2024
As AI becomes increasingly pervasive, a natural question arises: how is data seamlessly moved everywhere? The answer is Ethernet. For over 50 years, Ethernet has been evolving to support modern workloads, including AI, by efficiently moving data across the globe.
Ethernet's widespread use can be attributed to several factors. First, it is an open industry standard with a wide range of interoperable products; with many vendors offering Ethernet solutions, competition keeps costs lower than those of proprietary, single-source alternatives. Additionally, Ethernet's long history makes it a well-understood technology; nearly every data center manager is familiar with its use and capabilities.
Bringing AI to the Edge with Ethernet
AI inference, a key aspect of artificial intelligence, is increasingly moving to the edge to minimize latency and enhance the customer experience. Traditionally, an edge device sent its data to a server, which ran the AI model and returned the results to the device. Running the model directly on the edge device instead cuts that round-trip latency dramatically. The device still communicates with a server for model fine-tuning and data storage, over either wireless or lower-speed wired Ethernet connections, making AI at the edge a reality.

This approach allows edge devices to process AI models and deliver real-time results, leading to improved customer experiences and more efficient operations.
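As a rough illustration of this pattern, here is a minimal Python sketch: inference runs locally on the device, and results are only batched back to a server over the network for storage and later fine-tuning. The model stub, the SERVER_URL endpoint, and the payload shape are all illustrative assumptions, not a real product API.

```python
# Minimal sketch of edge-side inference with background server sync.
# run_local_model, SERVER_URL, and the payload shape are hypothetical.
import json
import urllib.error
import urllib.request

SERVER_URL = "http://training-server.local/telemetry"  # hypothetical endpoint

def run_local_model(sensor_reading: float) -> dict:
    """Stand-in for an on-device model: inference happens locally,
    so no network round trip is needed to produce a result."""
    return {"reading": sensor_reading, "anomaly": sensor_reading > 0.9}

def sync_to_server(results: list) -> None:
    """Best-effort batch upload over the Ethernet link, used only for
    storage and later fine-tuning, never on the inference path."""
    body = json.dumps(results).encode()
    req = urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except urllib.error.URLError:
        pass  # inference keeps working even if the link is down

batch = [run_local_model(r) for r in (0.2, 0.95, 0.4)]  # real-time, local
sync_to_server(batch)  # lower-priority background sync
```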
Ethernet for AI Training on Small Clusters
AI training encompasses a wide range of use cases. While complex models like deep learning require large clusters and accelerators to train efficiently, other models can achieve good training times using smaller clusters. Additionally, some models are pre-trained and later fine-tuned with curated datasets on small clusters.
In these scenarios, Ethernet proves to be an ideal technology for AI training on small clusters, whether for fine-tuning or training smaller models. FS has introduced the N9600-64OD switch, an 800G Ethernet switch tailored for AI applications. This switch offers 64x 800GbE OSFP ports in a 4U form factor, with flexible configurations supporting 400/800GbE. It also accommodates breakout cables to support 2x 400GbE, 4x 200GbE, or 8x 100GbE, meeting various network requirements.
Built on the reliable Broadcom BCM78900 Tomahawk 5 chipset and equipped with redundant pluggable power supplies and fans, the N9600-64OD delivers a switching capacity of up to 102.4 Tbps and a forwarding rate of 20,695 Mpps for data center networks. It supports RoCEv2 networks with Priority-based Flow Control (PFC), Explicit Congestion Notification (ECN), Global Load Balancing (GLB), and Dynamic Load Balancing (DLB), providing the low-latency, lossless fabric that RDMA applications and high-speed computing services require.
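As a quick sanity check, the arithmetic below reconciles the per-port figures with the quoted switching capacity; the only inputs are the specifications above, and the script itself is purely illustrative.

```python
# Sanity check of the N9600-64OD port and capacity figures quoted above:
# 64 x 800GbE front-panel ports, breakouts to 2x400G / 4x200G / 8x100G.
PORTS = 64
PORT_GBPS = 800

breakouts = {"1x800G": 1, "2x400G": 2, "4x200G": 4, "8x100G": 8}
for name, lanes in breakouts.items():
    logical_ports = PORTS * lanes
    per_port = PORT_GBPS // lanes
    agg_tbps = logical_ports * per_port / 1000
    print(f"{name}: {logical_ports} ports at {per_port} GbE "
          f"-> {agg_tbps:.1f} Tbps aggregate")

# Every breakout carries the same 51.2 Tbps of front-panel bandwidth;
# the quoted 102.4 Tbps switching capacity counts both directions
# (full duplex: 51.2 Tbps in + 51.2 Tbps out).
```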
Overall, this next-generation, high-density fixed switch is designed for high-performance computing, distributed storage, big data, and other demanding scenarios, making it an ideal network foundation for AI and other data-intensive applications.
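To make the ECN mechanism mentioned above more concrete, here is a toy sketch of the sender-side reaction that RoCEv2 congestion control (in the spirit of DCQCN) performs when the fabric marks packets and the receiver returns congestion notifications. The constants and update rules are simplified assumptions, not the actual algorithm or its parameters.

```python
# Toy sketch of ECN-driven sender rate control on a RoCEv2 fabric.
# Backing off on congestion keeps buffers from overflowing, which is
# what lets PFC-based lossless operation avoid drops in the first place.
def update_rate(rate_gbps: float, got_cnp: bool,
                line_rate: float = 400.0) -> float:
    if got_cnp:                                 # congestion notification:
        return rate_gbps * 0.5                  # back off multiplicatively
    return min(line_rate, rate_gbps + 5.0)      # otherwise probe upward

rate = 400.0
for step, congested in enumerate([False, False, True, True, False, False]):
    rate = update_rate(rate, congested)
    marker = "CNP" if congested else "   "
    print(f"step {step} {marker}: rate = {rate:5.1f} Gbps")
```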
Ethernet for AI Training on Large Clusters
For complex AI training on large clusters, the current standard often involves closed, proprietary technologies such as InfiniBand. However, there is a growing need for an open, standards-based alternative, and the industry is increasingly rallying around Ethernet. In 2023, industry leaders formed the Ultra Ethernet Consortium to advance Ethernet as the preferred interconnect for large-scale clusters.
Ethernet Unifies the Various Applications of AI
Over the past 50 years, Ethernet has continuously evolved, with the industry striving to improve its performance, reduce its cost, and scale it more efficiently than proprietary technologies. As AI becomes integrated into every aspect of life and business, Ethernet is the natural networking technology to connect these diverse AI applications, extending AI's reach from the edge to the cloud.