Weight-adaptive joint mixed-precision quantization and pruning for neural network-based equalization in short-reach direct detection links


Bibliographic Details
Published in: Optics Letters, 2024-06, Vol. 49 (12), p. 3500
Authors: Xu, Zhaopeng, Wu, Qi, Lu, Weiqi, Ji, Honglin, Chen, Hui, Ji, Tonghui, Yang, Yu, Qiao, Gang, Tang, Jianwei, Cheng, Chen, Liu, Lulu, Wang, Shangcheng, Liang, Junpeng, Wei, Jinlong, Hu, Weisheng, Shieh, William
Format: Article
Language: English
Online access: Full text
Description
Abstract: Neural network (NN)-based equalizers have been widely applied for dealing with nonlinear impairments in intensity-modulated direct detection (IM/DD) systems due to their excellent performance. However, the computational complexity (CC) is a major concern that limits the real-time application of NN-based receivers. In this Letter, we propose, to our knowledge, a novel weight-adaptive joint mixed-precision quantization and pruning approach to reduce the CC of NN-based equalizers, where only integer arithmetic is taken into account instead of floating-point operations. Through weight partitioning, the NN connections are either directly cut off or represented by a proper number of quantization bits, leading to a hybrid compressed sparse network that computes much faster and consumes fewer hardware resources. The proposed approach is verified in a 50-Gb/s 25-km pulse amplitude modulation (PAM)-4 IM/DD link using a directly modulated laser (DML) in the C-band. Compared with the traditional fully connected NN-based equalizer operated with standard floating-point arithmetic, about 80% of memory can be saved at a minimum network size without degrading the system performance. Quantization is also shown to be more suitable for over-parameterized NN-based equalizers than for NNs selected at a minimum size.
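The weight-partitioning idea described in the abstract can be illustrated with a minimal numpy sketch: the smallest-magnitude weights are cut off (pruned), and the surviving weights are partitioned into groups that each receive their own integer bit width. The function name, the magnitude-quantile partitioning rule, and the thresholds below are illustrative assumptions for exposition, not the authors' exact weight-adaptive scheme.

```python
import numpy as np

def joint_prune_quantize(w, prune_frac=0.5, bit_choices=(2, 4, 8)):
    """Sketch of joint pruning and mixed-precision quantization.

    Weights with magnitude below the prune_frac quantile are set to
    zero (connection cut off); the survivors are split by magnitude
    into len(bit_choices) groups, and each group is quantized on a
    symmetric integer grid with its own bit width (larger-magnitude
    weights get more bits here -- an illustrative heuristic only).
    Returns the quantized weights, per-weight bit widths, and the
    mask of surviving connections.
    """
    cutoff = np.quantile(np.abs(w), prune_frac)   # prune threshold
    mask = np.abs(w) > cutoff                     # surviving connections
    kept = np.abs(w[mask])
    # Group boundaries: magnitude quantiles of the surviving weights.
    edges = np.quantile(kept, np.linspace(0, 1, len(bit_choices) + 1))
    q = np.zeros_like(w)
    bits = np.zeros(w.shape, dtype=int)
    for b, lo, hi in zip(sorted(bit_choices), edges[:-1], edges[1:]):
        grp = mask & (np.abs(w) >= lo) & (np.abs(w) <= hi)
        scale = hi / (2 ** (b - 1) - 1)           # symmetric int grid
        q[grp] = np.round(w[grp] / scale) * scale
        bits[grp] = b
    return q, bits, mask
```

The memory saving reported in the Letter comes from exactly this kind of hybrid representation: pruned connections cost nothing, and the remaining ones are stored as low-bit integers plus a per-group scale instead of floating-point values.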
ISSN: 0146-9592
eISSN: 1539-4794
DOI: 10.1364/OL.527293