Mixed precision quantization of silicon optical neural network chip


Detailed Description

Bibliographic Details
Published in: Optics Communications, 2025-01, Vol. 574, p. 131231, Article 131231
Main Authors: Zhang, Ye; Wang, Ruiting; Zhang, Yejin; Pan, Jiaoqing
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In recent years, the field of neural network research has witnessed remarkable advancements across various domains. One emerging approach is the integration of photonic computing, which leverages the unique properties of light for ultra-fast information processing. In this article, we apply a mixed precision quantization model to silicon-based optical neural networks and evaluate their performance on the MNIST and Fashion-MNIST datasets. Through a genetic algorithm-based optimization process, we achieve significant parameter compression while maintaining competitive accuracy. Our findings demonstrate that with an average quantization bitwidth of 4.5 bits on the MNIST dataset, we achieve an 85.94% reduction in parameter size compared to traditional 32-bit networks, with only a marginal accuracy drop of 0.65%. Similarly, on the Fashion-MNIST dataset, we achieve an average quantization bitwidth of 5.67 bits, resulting in an 82.28% reduction in parameter size with a slight accuracy drop of 0.8%.
•A mixed precision quantization model varying bitwidths across and within layers is established for silicon ONN.
•Genetic algorithm is employed to optimize bitwidths for input, weight, and output.
•Significant parameter compression while maintaining competitive accuracy is achieved on the MNIST and Fashion-MNIST datasets.
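The genetic search over per-layer bitwidths described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the candidate bitwidths, the mutation rate, and in particular the `accuracy_proxy` surrogate (standing in for evaluating the quantized network on a validation set) are all assumptions made here for demonstration.

```python
import random

# Hypothetical layer parameter counts and candidate bitwidths -- illustrative only.
LAYER_SIZES = [784 * 128, 128 * 64, 64 * 10]
BIT_CHOICES = [2, 4, 6, 8]

def param_bits(bitwidths):
    """Total storage cost in bits for a per-layer bitwidth assignment."""
    return sum(b * n for b, n in zip(bitwidths, LAYER_SIZES))

def accuracy_proxy(bitwidths):
    """Stand-in for validation accuracy of the quantized network.

    In the real pipeline this would quantize the ONN weights and evaluate
    on MNIST/Fashion-MNIST; here lower bitwidths simply incur a penalty.
    """
    return 1.0 - sum(0.02 * (8 - b) for b in bitwidths) / len(bitwidths)

def fitness(bitwidths):
    # Reward compression relative to a 32-bit baseline, penalize accuracy loss.
    compression = 1.0 - param_bits(bitwidths) / (32 * sum(LAYER_SIZES))
    return compression + 5.0 * accuracy_proxy(bitwidths)

def genetic_search(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(BIT_CHOICES) for _ in LAYER_SIZES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # crossover of two parents
            cut = rng.randrange(1, len(LAYER_SIZES))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # occasional mutation
                child[rng.randrange(len(child))] = rng.choice(BIT_CHOICES)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search()
print("best bitwidths:", best, "average:", sum(best) / len(best))
```

The trade-off weight in `fitness` controls how much accuracy loss is tolerated per bit saved; tuning it shifts the resulting average bitwidth, analogous to the 4.5-bit and 5.67-bit averages reported for the two datasets.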
ISSN:0030-4018
DOI:10.1016/j.optcom.2024.131231