Blocking probability in optical interconnects in data center networks

Research paper by Mohsin Fayyaz, Khurram Aziz, Ghulam Mujtaba

Indexed on: 19 May '15
Published on: 19 May '15
Published in: Photonic Network Communications


Cloud computing and Web-based applications are creating a need for powerful data centers. Data centers require high bandwidth, low latency, low blocking probability, and low bit-error rates to sustain the interaction between different applications. Current data center networks (DCNs) suffer from several problems, such as high energy consumption, high latency, fixed link throughput, and limited reconfigurability. Electronic switches are low radix and incur high latency due to large hop counts, since each hop employs a store-and-forward mechanism. Optical interconnects, on the other hand, offer several advantages: low energy consumption, high bandwidth, reconfigurability, adaptability to changing traffic, high-radix switch design, fast switching transition times, and wavelength multiplexing. These benefits provide the incentive to shift from electrical to optical interconnects in DCNs. Despite these advantages over their electrical counterparts, the performance of optical interconnects can be further improved by attending to certain key parameters. One such parameter, important for any communication network, is the blocking probability, i.e., the probability that a connection request cannot be established because the required resources are occupied. This paper presents a comprehensive investigation of the performance of optical interconnects in different DCN architectures on the basis of blocking probability and concludes by suggesting ways to reduce blocking.
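The abstract does not state which blocking model the paper uses, but a classic baseline for estimating blocking probability on a link with a fixed number of channels (e.g., wavelengths) is the Erlang B formula. The sketch below, assuming Poisson call arrivals and no queueing of blocked requests, computes it with the standard numerically stable recurrence; the function name and parameters are illustrative, not from the paper.

```python
def erlang_b(traffic_erlangs: float, num_channels: int) -> float:
    """Erlang B blocking probability for offered load A (Erlangs) on N channels.

    Uses the recurrence B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1)),
    which avoids the factorials of the closed-form expression.
    Assumes Poisson arrivals and blocked-calls-cleared (no queueing).
    """
    b = 1.0
    for k in range(1, num_channels + 1):
        b = (traffic_erlangs * b) / (k + traffic_erlangs * b)
    return b


# Example: 2 Erlangs of offered load on a 2-wavelength link.
print(erlang_b(2.0, 2))  # → 0.4
```

Under this model, blocking falls sharply as channels are added for a fixed load, which is one reason wavelength multiplexing (more usable channels per fiber) can reduce blocking in optical DCN interconnects.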