Automation of Quantum Dot Measurement Analysis via Explainable Machine Learning
Mach. Learn.: Sci. Technol. 6, 015006 (2025)
Saved in:
Main authors: | , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | The rapid development of quantum dot (QD) devices for quantum computing has
necessitated more efficient and automated methods for device characterization
and tuning. This work demonstrates the feasibility and advantages of applying
explainable machine learning techniques to the analysis of quantum dot
measurements, paving the way for further advances in automated and transparent
QD device tuning. Many of the measurements acquired during the tuning process
come in the form of images that need to be properly analyzed to guide the
subsequent tuning steps. By design, features present in such images capture
certain behaviors or states of the measured QD devices. When considered
carefully, such features can aid the control and calibration of QD devices.
Important examples of such images are so-called $\textit{triangle plots}$, which
visually represent current flow and reveal characteristics important for QD
device calibration. While image-based classification tools, such as
convolutional neural networks (CNNs), can be used to verify whether a given
measurement is $\textit{good}$ and thus warrants the initiation of the next
phase of tuning, they do not provide any insights into how the device should be
adjusted in the case of $\textit{bad}$ images. This is because CNNs sacrifice
prediction and model intelligibility for high accuracy. To ameliorate this
trade-off, a recent study introduced an image vectorization approach that
relies on the Gabor wavelet transform (Schug $\textit{et al.}$ 2024
$\textit{Proc. XAI4Sci: Explainable Machine Learning for Sciences Workshop
(AAAI 2024) (Vancouver, Canada)}$ pp 1-6). Here we propose an alternative
vectorization method that involves mathematical modeling of synthetic triangles
to mimic the experimental data. Using explainable boosting machines, we show
that this new method offers superior explainability of model prediction without
sacrificing accuracy. |
DOI: | 10.48550/arxiv.2402.13699 |
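The abstract mentions mathematical modeling of synthetic triangles that mimic experimental triangle plots, followed by vectorization into interpretable features. The paper's actual triangle model and feature set are not given in this record, so the sketch below is purely illustrative: it renders a filled triangle on a grid as a stand-in synthetic measurement and reduces it to a few human-readable scalars (area fraction and centroid). The vertex coordinates, function names, and chosen features are assumptions, not the authors' method.

```python
import numpy as np

def synthetic_triangle(shape=(64, 64), v0=(15, 15), v1=(50, 20), v2=(30, 55)):
    """Render a filled triangle on a grid as a stand-in 'triangle plot'.

    The vertices v0..v2 are hypothetical gate-voltage coordinates; the real
    paper's synthetic-triangle model is not specified in this abstract.
    """
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

    def halfplane(p, a, b):
        # Signed area test: which side of edge a->b each point lies on.
        return (p[:, 0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[:, 1] - b[1])

    d0 = halfplane(pts, v0, v1)
    d1 = halfplane(pts, v1, v2)
    d2 = halfplane(pts, v2, v0)
    has_neg = (d0 < 0) | (d1 < 0) | (d2 < 0)
    has_pos = (d0 > 0) | (d1 > 0) | (d2 > 0)
    # A point is inside iff the three signs do not disagree.
    inside = ~(has_neg & has_pos)
    return inside.reshape(shape).astype(float)

def vectorize(img):
    """Reduce an image to a few interpretable scalars: area fraction and
    normalized centroid. (Illustrative features, not the paper's.)"""
    area_fraction = img.mean()
    ys, xs = np.nonzero(img)
    return np.array([area_fraction,
                     xs.mean() / img.shape[1],
                     ys.mean() / img.shape[0]])
```

Low-dimensional feature vectors like this are what make glassbox models such as explainable boosting machines attractive here: each feature's contribution to a good/bad classification can be inspected directly, unlike a CNN acting on raw pixels.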