Attention‐based model for dynamic IR drop prediction with multi‐view features
Published in: Electronics Letters, 2023-07, Vol. 59 (13), p. n/a
Format: Article
Language: English
Online access: Full text
Abstract: Dynamic IR drop prediction based on machine learning has been studied in recent years. However, most proposed models use either all features extracted from the circuit or manually selected subsets of the raw features as inputs, and thus fail to differentiate the priority of input features in a flexible manner. In this paper, QuantumForest is applied to vector-based dynamic IR drop prediction. With the sparse attention mechanism provided by QuantumForest, important attributes of the circuit are weighted more heavily than others. A new multi-view feature creation method is also proposed, and a novel regional distance feature is built on top of it. The performance is evaluated on two chip designs with real simulation vectors. The experimental results indicate that the method outperforms other prominent machine-learning-based IR drop analysis approaches, reaching an average MAE of only 1.457 mV on the two designs.

In this letter, the sparse attention mechanism of QuantumForest is applied to vector-based dynamic IR drop prediction, weighting important circuit attributes more heavily than others. A new multi-view feature creation method is proposed, and a novel regional distance feature is built on top of it. The evaluation shows that QuantumForest achieves the best accuracy on the two example chip designs when using the multi-view features.
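The sparse attention referred to in the abstract is typically realized with a sparse normalizer such as sparsemax, which, unlike softmax, assigns exactly zero weight to unimportant features. The following NumPy sketch is a generic illustration of that idea, not the letter's actual implementation; the per-feature relevance scores are hypothetical.

```python
import numpy as np

def sparsemax(scores):
    """Sparse attention weights (Martins & Astudillo, 2016).

    Projects a score vector onto the probability simplex; unlike
    softmax, low-scoring entries receive exactly zero weight.
    """
    z = np.asarray(scores, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted) - 1.0         # shifted cumulative sums
    support = z_sorted - cssv / k > 0        # entries kept in the support
    k_z = k[support][-1]                     # support size
    tau = cssv[support][-1] / k_z            # threshold
    return np.maximum(z - tau, 0.0)

# Hypothetical relevance scores for three circuit attributes
# (e.g. cell power, toggle rate, distance to power pad).
weights = sparsemax([2.0, 1.0, 0.1])
# The weights sum to 1, and the weakest feature is zeroed out entirely.
```

Because the resulting weight vector is exactly sparse, irrelevant circuit attributes are dropped rather than merely down-weighted, which is what lets such a model prioritize input features flexibly.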
ISSN: 0013-5194, 1350-911X
DOI: 10.1049/ell2.12855