How Will Fiber Optic Transceivers Evolve for Future Data Centers

Posted by FS.COM

[Image: QSFP+ to 4 SFP+ Cable]

In the modern data center, fiber optic transceivers play an important role, and their importance will only grow in the coming years: server access and switch-to-switch interconnects require ever higher speeds to meet the rising bandwidth demands driven by streaming video, cloud computing and storage, and application virtualization. So what challenges do data center applications pose for fiber optic transceivers? And how will these transceivers evolve for the future data center?

Challenges in the Cost of Fiber Optic Transceivers

In a mega-scale data center, there are thousands of devices running. A single facility housing 100,000 servers, interconnected by a highly redundant horizontal mesh, requires a similarly large number of optical links. The number of fiber optic transceivers is at least twice the number of optical links, since each link must be terminated with a transceiver at both ends. In fact, the count can climb even higher when optical breakout configurations are used. There is no doubt that such large volumes of fiber optic transceivers are costly, and that cost hinders data center growth. Thus, operators are eager for a low-cost transceiver strategy. Achieving low prices is a big challenge for suppliers, however, because today's pricing remains 5 to 10 times too high, whether at different data rates or in different application spaces.
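The arithmetic above can be sketched in a few lines. The per-server link count and the breakout ratio here are illustrative assumptions, not figures from any specific deployment:

```python
# Back-of-the-envelope transceiver count for a mega-scale data center.
# Link count and breakout ratio are illustrative assumptions.

def transceiver_count(optical_links: int, breakout_factor: int = 1) -> int:
    """Every link is terminated at both ends; a 1-to-N breakout cable
    (e.g. one QSFP+ fanned out to four SFP+) puts one module on the
    near end and N modules on the far end."""
    return optical_links * (1 + breakout_factor)

links = 100_000  # assume roughly one optical uplink per server
print(transceiver_count(links))                     # point-to-point: 200,000 modules
print(transceiver_count(links, breakout_factor=4))  # 1:4 breakout: 500,000 modules
```

Even with these simplified assumptions, the module count runs into the hundreds of thousands, which is why per-unit cost dominates the discussion.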

To be honest, cost reductions are difficult to achieve through minor refinements of proven transceiver design and manufacturing approaches alone. But relaxing specifications, such as lowering the maximum operating temperature, narrowing the operating temperature range, shortening the product usage lifetime, or allowing the use of forward error correction (FEC), can help reduce the cost of fiber optic transceivers, since it lets vendors adopt lower-cost designs with higher levels of optical integration, non-hermetic packaging, uncooled operation, or simplified testing.

At present, the fiber optic transceiver market is fairly mature. And thanks to MSAs (Multi-Source Agreements), end users have more choice when selecting vendors: you no longer have to pay a premium to purchase transceivers directly from system vendors in order to get the same performance.

Fiber Optic Transceiver Transition From 40G to 100G for the Data Center

Modern mega-scale data centers typically have 10G access ports that interface to 40G switching fabrics, but the move to 25G access ports and 100G switching fabrics will accelerate in the near future. In the data center, an important factor determining a fiber optic transceiver's applications is its form factor. Today's data centers have consolidated around transceivers in the SFP (Small Form-factor Pluggable) form factor for server access and around QSFP (Quad Small Form-factor Pluggable) transceivers for switch-to-switch interconnects. In addition, direct attach copper cables are typically used when the distance to the access port is less than 5 m, while active optical cables (AOCs) can be used for longer reaches.


The SFP+ (Enhanced Small Form-factor Pluggable) transceiver plays a key role in 10G transmission thanks to its compactness, performance, and cost savings, and SFP+ modules have long been widely used in 10G access ports. However, the situation will change in the near future as access speeds increase to 25G and 10G access ports give way to SFP28. What's more, the ecosystem around 25G lanes is expected to be leveraged in applications such as next-generation enterprise networks, which will drive demand for SFP28 modules operating over single-mode fiber (SMF) at reaches of 10 km to 40 km.

The QSFP+ transceiver is a parallel transceiver that accepts four electrical input lanes and operates at 4 x 10 Gbps. Today, 40G QSFP+ is widely deployed in data center switching fabrics and ramping up quickly as data centers deploy 40GbE, particularly as a high-density 10G interface via breakout cables. However, research released by IHS Infonetics in May this year predicts that QSFP28 modules will be deployed in high volumes as data centers transition from 40G to 100G switching fabrics starting in 2016. What is QSFP28? First-generation QSFP transceivers are equipped with four Tx and four Rx channels, each running at 10 Gbps. Thanks to advances in technology, each channel can now transmit and receive data at up to 28 Gbps. This type of transceiver is called QSFP28, and it is the new trend for 100G applications.
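The lane structures described above can be summarized in a short sketch. Lane counts and per-lane rates are as given in the text (the QSFP28 electrical lane is clocked at up to 28 Gbps to carry a 25G payload plus overhead):

```python
# QSFP family lane structure as described in the text.
QSFP_FAMILY = {
    "QSFP+":  {"lanes": 4, "gbps_per_lane": 10},  # 40GbE, or 4 x 10G via breakout
    "QSFP28": {"lanes": 4, "gbps_per_lane": 25},  # 100GbE
}

for name, spec in QSFP_FAMILY.items():
    total = spec["lanes"] * spec["gbps_per_lane"]
    print(f"{name}: {spec['lanes']} x {spec['gbps_per_lane']}G = {total}G")
```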

Actually, the first path to 100G was "10GbE-40GbE-100GbE". The first 100G fiber optic transceivers shipped were CFP modules (100 Gbps, with a 10 x 10G lane electrical interface as defined in IEEE 802.3ba). But CFP2 soon came along, offering a 4 x 25G (or 10 x 10G) lane electrical interface in half the footprint of CFP. Even so, it was too expensive and its footprint was still too large to trigger mass deployment. After CFP2, CFP4, which is half the size of CFP2, was launched. Meanwhile, another form factor, the QSFP28 mentioned above, competes with it. At present, CFP4 and QSFP28 seem to be neck and neck: QSFP28 has the density advantage over CFP4, but CFP4's higher maximum power consumption gives it the advantage at longer optical reaches. In addition, there is another trend in the data center: nearly all link lengths are less than 2 km. Thus, for intra-data-center connections as well as aggregation switch designs, although some technical issues of QSFP28 remain to be solved, QSFP28 transceivers, which are almost the same size as QSFP+, seem to be the superior choice for data center applications.


Fiber Optic Transceiver Considerations in Data Centers Beyond 100G

Today the optical industry buzz is all about "beyond 100G" bit rates. The next data center developments will follow the 4x trend set by 40G and 100G, i.e. 200G, 400G, etc. Fiber optic transceivers beyond 100G must also be considered to satisfy these demands. One of the most important metrics for data center switches is front panel bandwidth: the aggregate bandwidth of all the transceivers that can fit in a 19-inch wide, 1RU tall switch. Generally, a common switch can accommodate 32 QSFP ports on the front panel. If the ports are QSFP+, the corresponding front panel bandwidth is 1.28 Tbps (32 x 40 Gbps). With the upgrade to QSFP28, this bandwidth increases to 3.2 Tbps (32 x 100 Gbps).
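The front panel arithmetic above is simple enough to check directly (the 32-port count is the common 1RU configuration cited in the text):

```python
def front_panel_bw_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate front panel bandwidth of a switch, in Tbps."""
    return ports * gbps_per_port / 1000

print(front_panel_bw_tbps(32, 40))   # QSFP+  -> 1.28 Tbps
print(front_panel_bw_tbps(32, 100))  # QSFP28 -> 3.2 Tbps
print(front_panel_bw_tbps(32, 200))  # QSFP56 -> 6.4 Tbps
```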

How about the path after QSFP28? Next-generation switching ASICs (Application-Specific Integrated Circuits) are expected to have native 50G port speeds and 128 ports, which corresponds to a net throughput of 6.4 Tbps. In other words, for 200G applications, 200G QSFP modules (QSFP56, 4 x 50 Gbps) would yield a front panel bandwidth of 6.4 Tbps (32 x 200 Gbps). However, no 200GbE standard exists so far, and its completion is expected to come later than 400GbE.

For 400G applications, formal standards should be completed in 2017, which is approximately when most carriers indicate they will start initial evaluation and deployment of 400G interfaces. Because the module must accommodate either 16 x 25G or 8 x 50G electrical input lanes, exceeding the 4 lanes defined for QSFP, 400G transceivers will be larger than QSFP. Additionally, meeting the 3.5 W power limit of QSFP modules appears infeasible for some 400G implementations. Thus, proposals for larger 400G form factors can be anticipated from the CFP MSA, which has had great success at 100G with CFP, CFP2, and CFP4. In this case, a key requirement will be a size that allows at least 16 ports on the front panel, to satisfy a net throughput of 6.4 Tbps (16 x 400 Gbps), and possibly more.
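The 16-port requirement above follows directly from the 6.4 Tbps target; a small sketch makes the relationship explicit:

```python
def ports_needed(target_tbps: float, gbps_per_port: int) -> int:
    """Minimum number of front panel ports to hit a target net throughput."""
    return -(-round(target_tbps * 1000) // gbps_per_port)  # ceiling division

print(ports_needed(6.4, 400))  # 16 x 400G ports
print(ports_needed(6.4, 200))  # 32 x 200G (QSFP56) ports
```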

Overall, given the current objectives, there are three potential solutions for 400G fiber optic transceivers, ranked from easiest to most difficult:

  • 400G-PSM16 (16 x 25 G): Parallel Fiber Only
  • 400G-PSM4 (4 x 100 G): Parallel Fiber + PAM4
  • 400G-LR4 (1 x 400 G): Duplex Fiber + PAM4 + WDM


                            100G-PSM4    400G-PSM16    400G-PSM4    400G-LR4
Time to Market              0            12-18 months  2-3 years    3-4 years
Optical Lanes               4            16            4            1
Electrical Lanes Supported  4            16            16, 8        16, 8
Power                       < 3.5 W      < 6 W         ~ 6 W        > 7.5 W
Link Budget Delta (dB)      0            < 7.0         ~ 9.5        > 15.3
Reach > 500 m               Yes          Yes           Yes          Yes
Reach > 2 km                No           No            Yes          Yes
Reach > 10 km               No           No            No           Yes
Module Cost (relative)      1            2.05          1.96         8.53
Link Cost @ 500 m           -            Low           Lowest       Highest
Link Cost @ 2 km            -            High          Lowest       Highest
Link Cost @ 10 km           -            Highest       Low          Lowest

Among them, the second solution, 400G-PSM4, seems the most favorable. Though its time to market will be longer than the first solution's, it is still reasonable. In addition, it has the lowest potential cost of the three solutions, as well as low potential power.

Prospects for Fiber Optic Transceivers

As a key component of the data center, the fiber optic transceiver has broad development prospects. The transition from 40G to 100G interconnects is imminent, and meanwhile, standards beyond 100G are in progress, with several possible evolution paths proposed. To ensure that solutions meet future data center requirements for cost and power per gigabit, new fiber optic transceiver concepts are needed. Moreover, transceiver vendors should keep pace with demand and work closely with networking equipment manufacturers and data center operators. We believe there will be more transceiver form factors and interface solutions in the future, and that transceivers will become ever more suitable for the future data center.

Fiberstore offers comprehensive fiber optic transceiver solutions that meet the needs of 1G to 100G applications. We highly recommend our whole series of 40GBASE QSFP+ transceivers and 100G CFP2 and CFP4 transceivers. Moreover, most basic transceiver types are in stock and sold at wholesale prices. For more information, please contact us.
