Hierarchical Mixture-of-Experts approach for neural compact modeling of MOSFETs

Bibliographic Details
Published in: Solid-State Electronics, 2023-01, Vol. 199, p. 108500, Article 108500
Main authors: Park, Chanwoo, Vincent, Premkumar, Chong, Soogine, Park, Junghwan, Cha, Ye Sle, Cho, Hyunbo
Format: Article
Language: English
Online access: Full text
Description
Summary: With scaling, physics-based analytical MOSFET compact models are becoming more complex. Parameter extraction from measured or simulated data consumes a significant amount of time in the compact model generation process. To tackle this problem, ANN-based approaches have shown promising improvements in accuracy and speed. However, most previous studies used a multilayer perceptron (MLP) architecture, which commonly requires a large number of parameters and a large amount of training data to guarantee accuracy. In this article, we present a Mixture-of-Experts approach to neural compact modeling. Compared to a conventional neural compact modeling approach, it is 78.43% more parameter-efficient and achieves higher accuracy with less data. It also requires 43.8% less training time, demonstrating its computational efficiency.

Highlights:
• Neural compact models offer an accurate and efficient way to generate device models.
• The MoE-based model is shown to be faster to develop, more accurate, and computationally less intensive.
• Our approach was 78.4% more parameter-efficient while using 56.7% less data.
ISSN: 0038-1101, 1879-2405
DOI: 10.1016/j.sse.2022.108500
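The summary above describes the Mixture-of-Experts idea only at a high level. As a rough, hypothetical sketch of how such a neural compact model might be structured, the following PyTorch snippet combines several small expert MLPs through a learned gating network to predict a drain current from bias and geometry inputs. The input layout (Vgs, Vds, W, L), network sizes, and all names here are illustrative assumptions, not details taken from the article (which proposes a hierarchical MoE).

import torch
import torch.nn as nn

class Expert(nn.Module):
    # Small MLP expert mapping bias/geometry features to a scalar output.
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

class MoECompactModel(nn.Module):
    # A gating network softly weights the experts' outputs (soft MoE).
    def __init__(self, in_dim=4, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([Expert(in_dim) for _ in range(n_experts)])
        self.gate = nn.Sequential(nn.Linear(in_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, x):
        # x: (batch, in_dim); columns assumed to be (Vgs, Vds, W, L).
        weights = self.gate(x)                                       # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, 1, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)          # (batch, 1)

# Usage: predict a (log-scaled) drain current for a batch of bias points.
model = MoECompactModel()
x = torch.randn(8, 4)   # placeholder bias/geometry inputs
i_d = model(x)          # (8, 1)

Because each expert only needs to fit part of the device's operating region, a model of this shape can plausibly reach a target accuracy with fewer total parameters than a single large MLP, which is consistent with the parameter-efficiency gains reported in the summary.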