Learning complexity gradually in quantum machine learning models
Saved in:
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Request full text |
Summary: | Quantum machine learning is an emergent field that continues to draw significant interest for its potential to offer improvements over classical algorithms in certain areas. However, training quantum models remains a challenging task, largely because of the difficulty in establishing an effective inductive bias when solving high-dimensional problems. In this work, we propose a training framework that prioritizes informative data points over the entire training set. This approach draws inspiration from classical techniques such as curriculum learning and hard example mining to introduce an additional inductive bias through the training data itself. By selectively focusing on informative samples, we aim to steer the optimization process toward more favorable regions of the parameter space. This data-centric approach complements existing strategies such as warm-start initialization methods, providing an additional pathway to address performance challenges in quantum machine learning. We provide theoretical insights into the benefits of prioritizing informative data for quantum models, and we validate our methodology with numerical experiments on selected recognition tasks of quantum phases of matter. Our findings indicate that this strategy could be a valuable approach for improving the performance of quantum machine learning models. |
DOI: | 10.48550/arxiv.2411.11954 |
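The core idea described in the summary, steering training by prioritizing informative (here, high-loss) samples in the spirit of hard example mining, can be illustrated with a short sketch. The following is a minimal illustration only, not the authors' implementation: the circuit architecture, the squared-error loss, the selection size `k`, and the names `circuit`, `sample_loss`, and `prioritized_step` are all assumptions chosen for brevity, built on the PennyLane library.

```python
# Minimal sketch of loss-based sample prioritization (hard example
# mining) for a small variational quantum classifier. Illustrative
# assumptions throughout; not the paper's actual method or code.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode the classical input, then apply a trainable ansatz.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def sample_loss(weights, x, y):
    # Squared error between the expectation value and a label in {-1, +1}.
    return (circuit(weights, x) - y) ** 2

def prioritized_step(weights, X, Y, opt, k=8):
    # Score every training point by its current loss and update the
    # parameters only on the k hardest ("most informative") samples.
    losses = [float(sample_loss(weights, x, y)) for x, y in zip(X, Y)]
    hardest = np.argsort(losses)[-k:]
    cost = lambda w: sum(sample_loss(w, X[i], Y[i]) for i in hardest) / k
    return opt.step(cost, weights)

# Toy usage: random data, a few prioritized optimization steps.
X = np.random.uniform(0, np.pi, (32, n_qubits), requires_grad=False)
Y = np.sign(np.random.uniform(-1, 1, 32))
weights = np.random.uniform(0, np.pi, (2, n_qubits), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(10):
    weights = prioritized_step(weights, X, Y, opt)
```

Selecting the currently hardest examples each step is just one possible instantiation of the data-prioritization idea; a curriculum-learning variant, also mentioned in the summary, would instead order samples from easy to hard over the course of training.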