Interconnect Trends in the Data Center

Posted by FS.COM

Changes are taking place in the data center to better satisfy the demands for high speed, wide bandwidth and high density. Meanwhile, driven by cloud services, hyperscale/cloud data centers are becoming larger, more modular and more homogeneous, with an architecture shift from the traditional 3-tier topology to a flattened 2-tier topology. To deal with these changes, what will we need for optical interconnection a few years down the road? Here are several key trends to watch in 2016 that offer some guidance on the roadmap.

Significant Increase in 100G & 25G Port Density

For easy understanding, we divide the data center into three parts: intra-rack, inter-rack and long span/inter-building. Within the data center rack, 10 GbE is widely deployed today, but 25 GbE will be deployed soon and 50 GbE to the server will follow. Between data center racks, 40 GbE is deployed now and 100 GbE or beyond will follow. For long spans/inter-data-center links and the WAN (Wide Area Network), 100 GbE has been deployed to date and 400 GbE is being standardized.

[Figure: Data center connection speeds]

The overall trend in data center connections is a move from 10/40G to 25/100G. Small form factors, power dissipation under 3.5 W, and active optical cables (AOCs) are becoming the trend. To satisfy the demand for high port density, the 100G optical module has gone through a revolution in form factor. QSFP28 is now becoming the 100G form factor of choice for new data center switches. QSFP28 is both a 100G and a high-density 25G form factor (4×25 Gbps), so it will ship in very high volumes because it supports both 100G and 25G links.

Compared to 40G cabling, we believe a 25G connection speed offers a more flexible and cost-efficient upgrade point from 10 GbE for cloud providers on the long-term path to 100G connections. As the 25G module form factor, SFP28 is the choice for new servers/NICs (Network Interface Cards).

[Figure: 25G link]
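To make the cost argument concrete, here is a minimal sketch (ours, not from the article) comparing the per-fiber throughput of the common short-reach options; the lane and fiber counts are the standard values for each SR variant. A 25G link keeps the duplex (2-fiber) cabling of 10G while a 40G SR4 link needs 8 fibers.

# A minimal sketch comparing per-fiber throughput of common short-reach
# Ethernet options. Lane/fiber counts are the standard values for the
# SR variants named below.

links = {
    # name: (total Gbps, lanes, fibers used by the SR variant)
    "10GBASE-SR":   (10,  1, 2),  # duplex LC
    "25GBASE-SR":   (25,  1, 2),  # duplex LC
    "40GBASE-SR4":  (40,  4, 8),  # MPO, 4 Tx + 4 Rx fibers
    "100GBASE-SR4": (100, 4, 8),  # MPO, 4 Tx + 4 Rx fibers
}

for name, (gbps, lanes, fibers) in links.items():
    print(f"{name:12s} {gbps:4d} Gbps over {fibers} fibers "
          f"-> {gbps / fibers:5.2f} Gbps per fiber")

Run it and 25GBASE-SR comes out at 12.5 Gbps per fiber versus 5 Gbps per fiber for 40GBASE-SR4, which is the efficiency gain behind the 25G upgrade argument.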

Extension of Optical Links Beyond the Standards

Duplex and parallel optics products for both 40 GbE and 100 GbE continue to proliferate, resulting in a proliferation of standards, de facto standards, MSAs (Multi-Source Agreements), and proprietary codes, each optimized for a particular use case. Meanwhile, various recent 25G and 100G Ethernet standards and MSAs require RS-FEC (Reed-Solomon Forward Error Correction) on the host to increase the overall link length. RS-FEC does not increase the total bit rate, but it introduces an additional latency of ~100 ns in the link. Since the fiber propagation time over 100 m of MMF is ~500 ns, the additional latency introduced by RS-FEC can noticeably impact the overall performance of short links. Thus, low-latency QSFP28 SR4 and SFP28 SR modules running without FEC will be a trending option for short-reach data center interconnection.

[Figure: Optical standard proliferation]
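The latency figures above are easy to sanity-check with a little arithmetic. The sketch below assumes a typical MMF group index of ~1.47 (roughly 5 ns of propagation delay per meter); that index is our assumption, while the ~100 ns RS-FEC figure is the one quoted above.

# Back-of-the-envelope check of RS-FEC latency relative to fiber
# propagation delay on short links.

C = 299_792_458          # speed of light in vacuum, m/s
GROUP_INDEX = 1.47       # assumed typical group index for MMF

def propagation_ns(length_m: float) -> float:
    """One-way propagation delay over `length_m` of fiber, in nanoseconds."""
    return length_m / (C / GROUP_INDEX) * 1e9

FEC_LATENCY_NS = 100.0   # approximate RS-FEC latency quoted above

for length in (30, 100, 500):
    prop = propagation_ns(length)
    overhead = FEC_LATENCY_NS / prop * 100
    print(f"{length:4d} m: propagation {prop:6.0f} ns, "
          f"FEC adds {overhead:4.0f}% on top")

For a 30 m intra-rack run, the 100 ns of FEC latency is roughly two-thirds of the fiber propagation time, which is why FEC-free SR optics are attractive for latency-sensitive short links.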

Reutilization of Existing 10G Fiber Plant on 40/100G

One of the main interconnect trends in the data center is that operators want to upgrade from 10G to 40/100G without touching the duplex MMF (multimode fiber) infrastructure. Today's data centers are architected around 10 GbE, primarily 10GBASE-SR over duplex MMF with LC connectors. If operators can keep the existing 10G fiber plant and migrate directly to 40/100 GbE, they save significant cost and time. In this case, the breakout cabling solution may be the preferred path for 40/100G migration (40G migration is shown below).

[Figure: 10G-to-40G migration with breakout cabling]
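As a rough illustration of the bookkeeping behind breakout, the sketch below splits one 40G QSFP+ port into four independent 10G links, which is what an MPO-to-4×duplex-LC harness does physically. The port naming is hypothetical and not any particular vendor's CLI syntax.

# Hypothetical sketch: one 40G QSFP+ port broken out into four 10G links,
# each carried on one fiber pair of the MPO trunk.

def breakout(parent_port: str, lanes: int = 4) -> list[str]:
    """Return the logical sub-interfaces created by breaking out one port."""
    return [f"{parent_port}/{lane}" for lane in range(1, lanes + 1)]

for sub in breakout("Ethernet1/1"):
    print(f"{sub}: one 10G link over a single fiber pair of the MPO trunk")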

Moving Beyond 100G, to 200G and 400G

There is no end to the pursuit of speed and bandwidth. Several more speeds are being considered beyond 100G, including 200 GbE and 400 GbE, with multiple speeds beyond those to follow.

Development of 400 GbE was launched by the IEEE 802.3 working group in March 2013, and the standard is expected to be ratified in December 2017. IEEE standardization of 50 GbE and 200 GbE has also started and should be completed in the near future, somewhat later than 400 GbE.

First-generation 400 GbE uses 16 lanes of 25 Gbps technology in the CDFP form factor, but the industry also wants to use eight lanes of 50 Gbps to create higher-density 400 GbE in CFP8, a form factor close to CFP2 in size. 50 Gbps lanes will also enable 50 GbE in an SFP-style form factor (SFP56) and 200 GbE in the QSFP56 form factor. Speeds based on 50 Gbps lanes should be available by 2020.
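The lane arithmetic behind these form factors is simple enough to tabulate; the short sketch below just multiplies lane count by lane rate for the combinations mentioned above.

# Lane-count x lane-rate combinations behind the form factors above.

configs = [
    ("400 GbE, 1st gen (CDFP)", 16, 25),
    ("400 GbE, 2nd gen (CFP8)",  8, 50),
    ("200 GbE (QSFP56)",         4, 50),
    ("50 GbE (SFP56)",           1, 50),
]

for name, lanes, lane_gbps in configs:
    print(f"{name:26s} {lanes:2d} lanes x {lane_gbps} Gbps"
          f" = {lanes * lane_gbps} Gbps")

Halving the lane count at double the lane rate is what turns a 16-lane CDFP module into an 8-lane CFP8 module at the same 400 Gbps aggregate, and hence into higher faceplate density.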

200G and 400G applications will be distinct. The former will mainly be used by data centers and enterprises as 200G uplinks, while 400 GbE targets service provider applications such as 400G router-to-router and router-to-transport client interfaces. With 4×100G fan-outs, 400 GbE can also serve data center and enterprise applications that demand high port count and density.

[Figure: Ethernet speed roadmap]

The increased demand for blazing connection speeds and ever-growing bandwidth is driving the data center to change. Interconnects in the data center market are trending toward 25/100G, with smaller module form factors for higher port density, lower power consumption and lower cost per bit. Meanwhile, data centers are expected to increase performance while leveraging the existing fiber infrastructure, achieving 40/100G migration from 10 GbE. Beyond 100G, new Ethernet speeds including 50G, 200G and 400G are being standardized.
