25GbE–A New Trend for Future Ethernet Network



FS Official 2016-05-19

Nowadays, the demand for bandwidth in cloud data centers is growing rapidly. To meet it, the networking and Ethernet industry is moving in a new direction: discussions that once centered on 10GbE and 40GbE are now shifting to 25GbE. End users increasingly prefer 25GbE, which poses a challenge to both 10GbE and 40GbE. Why is this happening? This tutorial examines 25GbE from every angle.

The Emergence of 25GbE

Network engineers were once amazed by the idea of a 10GbE link. Then virtualization and cloud computing created new networking challenges, chief among them the growing demand for bandwidth. A few years ago, the IEEE ratified 40GbE and 100GbE standards to keep up with that demand. However, Top-of-Rack (ToR) switches, which typically carry the largest number of connections in a data center, are rapidly outgrowing 10GbE, and the next step up, 40GbE, is neither cost-effective nor power-efficient for ToR switching at cloud providers. Against this backdrop, 25GbE was proposed as a standard, developed by the IEEE 802.3 Task Force P802.3by, for Ethernet connectivity in cloud and enterprise data center environments. The specification uses single-lane 25 Gbps Ethernet links and is based on the IEEE 100GbE standard (802.3bj). Subsequently, the 25GbE Consortium was formed in June 2014, with founding members including Arista, Broadcom, Google, Mellanox and Microsoft, to promote 25GbE technology.

25GbE Optics & Cables

The 25GbE physical interface specification supports two main form factors—SFP28 (1x25 Gbps) and QSFP28 (4x25 Gbps).

The 25GbE PMDs (Physical Medium Dependent sublayers) specify low-cost twin-axial copper cables, requiring only two twin-axial pairs for 25 Gbps operation. Links based on copper twin-axial cables can connect servers to ToR switches and serve as intra-rack connections between switches and/or routers. Fan-out cables (cables that connect to a higher-speed port and "fan out" into multiple lower-speed links) can connect 10/25/40/50 Gbps speeds, and can now be built on MMF, SMF, and copper cables, matching reach to the specific application need.

The 25GBASE-SR SFP28 is an 850nm VCSEL 25GbE transceiver available on the market. It transmits and receives optical data over 50/125µm multimode fiber (MMF) and supports up to 70 m on OM3 and 100 m on OM4 (LC duplex). In practice, an SFP28 direct attach copper (DAC) cable is now more commonly used for direct connection to switches. A still more cost-effective option is a QSFP28 to 4xSFP28 breakout cable, which connects a 100GbE QSFP28 switch port to four SFP28 ports. Because DAC cable lengths are limited to about three meters at 25GbE, active optical cable (AOC) solutions are used for longer runs. The following table summarizes the key upcoming IEEE standard interfaces that specify 25GbE.
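The 25GBASE-SR reach figures above can be captured in a minimal lookup sketch; the fiber grades and distances come from the text, while the dictionary and function names are purely illustrative (always check the transceiver datasheet for real deployments):

```python
# Maximum reach of a 25GBASE-SR SFP28 transceiver by multimode
# fiber grade, per the figures quoted in the text (LC duplex).
SR_REACH_M = {
    "OM3": 70,   # up to 70 m on OM3 MMF
    "OM4": 100,  # up to 100 m on OM4 MMF
}

def max_reach_m(fiber_grade: str) -> int:
    """Return the maximum supported link length in meters."""
    return SR_REACH_M[fiber_grade]

print(max_reach_m("OM4"))  # 100
```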

Table 1: IEEE 802.3 Standard Interfaces that Specify 25GbE

Medium                 Physical Layer Name
MMF Optics             25GBASE-SR
Direct Attach Copper   25GBASE-CR
Direct Attach Copper   25GBASE-CR-S
Electrical Backplane   25GBASE-KR
Electrical Backplane   25GBASE-KR-S
Twisted Pair           25GBASE-T

Why Choose 25GbE?

While 10GbE is adequate for many existing deployments, it cannot efficiently deliver the bandwidth required by next-generation cloud and web-scale environments. Meeting that demand with 10GbE would require organizations to purchase and install twice as many switches (or more), along with additional cables, space, power and cooling: a significant increase in capital, operating and management expenses without the ability to meet future network demands. And 40GbE, as mentioned above, is neither cost-effective nor power-efficient for ToR switching at cloud providers. 25GbE was designed to resolve this dilemma.

Number of SerDes Lanes

SerDes (short for Serializer/Deserializer) is an integrated circuit or transceiver used in high-speed communications to convert parallel data to a serial stream and back again: the transmitter section is a parallel-to-serial converter, and the receiver section is a serial-to-parallel converter. SerDes lanes currently run at up to 25 Gbps, which means a single 25 Gbps SerDes lane can connect one 25GbE card to another. In contrast, 40GbE needs four 10 Gbps SerDes lanes, so the link between two 40GbE cards requires as many as four pairs of fiber. Furthermore, 25 Gbps Ethernet provides an easy upgrade path to 50GbE and 100GbE networks, which simply aggregate multiple 25 Gbps lanes.
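The lane arithmetic above can be sketched in a few lines; the lane configurations mirror the speeds discussed in the text, and the helper function is a hypothetical illustration rather than any real API:

```python
# SerDes lane configuration per Ethernet port speed, as discussed
# above: 25GbE uses a single 25 Gbps lane, while 40GbE bonds four
# 10 Gbps lanes (and thus needs four fiber or copper pairs).
LANE_CONFIG = {
    # port speed (Gbps): (lane speed in Gbps, number of SerDes lanes)
    10:  (10, 1),
    25:  (25, 1),
    40:  (10, 4),
    50:  (25, 2),
    100: (25, 4),
}

def fiber_pairs(port_speed_gbps: int) -> int:
    """One fiber (or copper) pair per SerDes lane, so the pair
    count equals the lane count."""
    _, lanes = LANE_CONFIG[port_speed_gbps]
    return lanes

print(fiber_pairs(25))  # 1
print(fiber_pairs(40))  # 4
```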

Figure 1: Numbers of lanes needed in different Gigabit Ethernet

25GbE Network Interface Cards Are More Efficient for PCIe Lanes

At present, a mainstream Intel Xeon CPU provides only 40 lanes of PCIe 3.0, each with about 8 Gbps of usable bandwidth. These PCIe lanes serve not only communication between the CPU and network cards but also RAID cards, GPUs, and all other peripheral cards, so it pays to use the limited lanes efficiently. A single 40GbE NIC needs at least one PCIe 3.0 x8 slot, so two 40GbE cards occupy two x8 slots (16 lanes). Even if both 40GbE ports run at full rate simultaneously, the lane bandwidth utilization is only (40G + 40G) / (8G x 16) = 62.5%. By contrast, a dual-port 25GbE card needs just one PCIe 3.0 x8 slot, for a utilization of (25G x 2) / (8G x 8) = 78%. 25GbE is therefore significantly more efficient and more flexible than 40GbE in its use of PCIe lanes.
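The utilization arithmetic above can be checked with a short sketch; it assumes roughly 8 Gbps of usable bandwidth per PCIe 3.0 lane, as the text states, and the function name is illustrative:

```python
# PCIe 3.0 bandwidth-utilization arithmetic from the text above.
# Assumes ~8 Gbps of usable bandwidth per PCIe 3.0 lane.
PCIE3_LANE_GBPS = 8

def pcie_utilization(port_speed_gbps, num_ports, pcie_lanes):
    """Fraction of PCIe slot bandwidth used when every network
    port runs at line rate."""
    nic_bandwidth = port_speed_gbps * num_ports
    slot_bandwidth = PCIE3_LANE_GBPS * pcie_lanes
    return nic_bandwidth / slot_bandwidth

# Two single-port 40GbE NICs in two x8 slots (16 lanes total):
print(pcie_utilization(40, 2, 16))  # 0.625 -> 62.5%

# One dual-port 25GbE NIC in a single x8 slot (8 lanes):
print(pcie_utilization(25, 2, 8))   # 0.78125 -> ~78%
```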

The Cost of 25GbE Wiring Is Lower

For short distances, 40GbE cards and switches use QSFP+ modules. For longer distances, QSFP+ must be paired with MPO fiber cable, which carries 12 fibers and costs more than the LC fiber used for 10GbE; worse, the two cabling types are not compatible. Upgrading from 10GbE to 40GbE therefore means abandoning and rewiring all the existing fiber, which consumes manpower, material and time. By comparison, 25GbE cards and switches use SFP28 modules and, thanks to the single-lane connection, remain compatible with the LC fiber used for 10GbE. Upgrading from 10GbE to 25GbE avoids rewiring entirely, saving both time and money.

10GbE vs 25GbE vs 40GbE

25GbE enables resellers and their customers to deliver 2.5x the performance of 10Gb Ethernet, making it a cost-effective upgrade to existing 10GbE infrastructure. Since 25GbE is delivered across a single lane, it provides greater switch port density and network scalability than 40GbE, which is effectively four 10GbE lanes; it thus costs less, consumes less power and provides more bandwidth. What's more, 25GbE (as well as 50GbE and 100GbE) can run over the existing fiber cable plant designed for 10GbE and 40GbE simply by changing the transceivers. That is a tremendous cost saving for large data centers that already use fiber cabling for 10GbE and 40GbE.

Table 2: Bandwidth comparison for 25GbE and other Ethernet speeds

Port Speed   Lane Speed (Gb/s)   Lanes Per Port   Usable Ports   Total BW (Gb/s)
10GbE        10                  1                128            1280
25GbE        25                  1                128            3200
40GbE        10                  4                32             1280
100GbE       25                  4                32             3200
Note: Ethernet switch with 3.2 Tbps capacity and 128 ports
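Table 2 follows directly from the switch's lane budget. A short sketch reproduces it, assuming a hypothetical ASIC with 128 SerDes lanes (3.2 Tbps when every lane runs at 25 Gbps), as in the note above:

```python
# Reproduces Table 2 for a hypothetical switch ASIC with 128 SerDes
# lanes (3.2 Tbps total when every lane runs at 25 Gbps).
SWITCH_LANES = 128

def ports_and_bw(lane_speed_gbps, lanes_per_port):
    """Usable ports and total bandwidth when the ASIC's lanes are
    grouped into ports of the given width and per-lane speed."""
    usable_ports = SWITCH_LANES // lanes_per_port
    total_bw_gbps = lane_speed_gbps * lanes_per_port * usable_ports
    return usable_ports, total_bw_gbps

for name, lane_speed, lanes in [("10GbE", 10, 1), ("25GbE", 25, 1),
                                ("40GbE", 10, 4), ("100GbE", 25, 4)]:
    ports, bw = ports_and_bw(lane_speed, lanes)
    print(f"{name}: {ports} ports, {bw} Gb/s total")
```

Note how 25GbE doubles the usable port count of 40GbE while also raising total bandwidth, which is the density argument made above.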

Key Benefits of 25GbE For Server I/O

• Maximum switch I/O performance and fabric capability: Web-scale and cloud organizations enjoy 2.5 times the network bandwidth of 10GbE. Since 25GbE is delivered across a single lane, it provides greater switch port density and network scalability than 40GbE, which consumes four lanes.
• Reduced capital expenditures (CAPEX) and operating expenses (OPEX): Deploying a 25GbE network lets organizations significantly reduce the number of switches and cables required, along with the facility costs for space, power and cooling, compared to 40GbE. Fewer physical network components also reduce ongoing management and maintenance costs.
• Leverage of the existing IEEE 100GbE standard: The Consortium's specification uses a single-lane 25 Gbps Ethernet link protocol that leverages existing 100GbE technology, which is implemented as four 25 Gbps lanes (IEEE 802.3bj) running on four fiber or copper pairs.

Forecast on Future Ethernet Network

In the past, we were confined to a few network speed options, but fast-forward to 2017 and there are numerous choices: 1, 10, 25, 40 and even 100GbE. As the following figure shows, 1GbE and 10GbE account for most of the Enterprise market's server Ethernet ports, while 25, 40 and 100GbE make up only a small share today.

Figure 2: Server 5-year forecast by Dell’Oro group

Much of the Enterprise market is still running on 1GbE and will continue migrating to 10GbE over the next 5-6 years. 25GbE is expected to reach a broader market in 2017 and to keep thriving thereafter. Since a 25GbE adapter can also run at 10GbE speeds, and cloud providers can migrate directly to 100GbE ToR switches and bypass the 40GbE upgrade, 25GbE looks like a future-proof choice for high-speed data center networks. Google and Microsoft have already committed to 25GbE, and other cloud providers and vendors are likely to follow suit. The 40GbE market will contract as 25GbE's advantages take hold, but 40GbE will retain a place in the Enterprise, where 10GbE and 40GbE switches remain widely used.

Summary

Whether judged by market research or by user sentiment, 25GbE appears to be the preferred option going forward. In reality, 100GbE and 25GbE port density will increase significantly over the next few years. Will 10GbE and 40GbE be replaced entirely? No one knows, but the trend will always be toward higher speeds, wider bandwidth and greater port density. As for 25GbE vs 40GbE, let's wait and see how things play out.



