Large data centers interconnect bottlenecks.

Interconnect bottlenecks in large data centers are dominated by switch ASIC I/O bandwidth and by front-panel bandwidth, the latter a consequence of pluggable optical modules. One approach to overcoming both limitations is to move the optics onto the mid-plane or to integrate them into the switch ASIC package. Over the last four years, VCSEL-based optical engines have been integrated into the packages of large-scale HPC routers, moderate-size Ethernet switches, and even FPGAs. Competing solutions based on Silicon Photonics (SiP) have also been proposed for integration into HPC and Ethernet switch packages, offering a better integration path through the use of TSV (Through-Silicon Via) stacked dies. Integrating either VCSEL- or SiP-based optical engines into a complex ASIC package that operates at high temperature, where achieving the required reliability is not trivial, raises the question of what technical or economic advantage justifies embarking on such a complex integration. High-density Ethernet switches for data centers currently in development are based on 25G NRZ signaling and QSFP28 optical modules, and can support up to 3.6 Tb/s of front-panel bandwidth.
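The 3.6 Tb/s figure follows from simple arithmetic; a minimal sketch of that calculation, assuming a typical 1RU faceplate holding 36 QSFP28 cages (the port count is an assumption, not stated in the text):

```python
# Front-panel bandwidth estimate for a QSFP28-based switch.
# Assumption (not from the text): a 1RU faceplate fits 36 QSFP28 cages.
LANES_PER_QSFP28 = 4      # QSFP28 carries four lanes
GBPS_PER_LANE = 25        # 25G NRZ signaling per lane
PORTS_PER_FACEPLATE = 36  # assumed 1RU front-panel port count

module_bw_gbps = LANES_PER_QSFP28 * GBPS_PER_LANE      # 100 Gb/s per module
front_panel_tbps = PORTS_PER_FACEPLATE * module_bw_gbps / 1000

print(f"Per-module bandwidth: {module_bw_gbps} Gb/s")
print(f"Front-panel bandwidth: {front_panel_tbps} Tb/s")
```

Under these assumptions the result matches the 3.6 Tb/s cited above; denser faceplates or faster signaling would raise the ceiling proportionally.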
