Learning complexity gradually in quantum machine learning models

Quantum machine learning is an emergent field that continues to draw significant interest for its potential to offer improvements over classical algorithms in certain areas. However, training quantum models remains a challenging task, largely because of the difficulty in establishing an effective inductive bias when solving high-dimensional problems.

Full description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-11
Main Authors: Recio-Armengol, Erik, Schreiber, Franz J, Eisert, Jens, Bravo-Prieto, Carlos
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Recio-Armengol, Erik
Schreiber, Franz J
Eisert, Jens
Bravo-Prieto, Carlos
description Quantum machine learning is an emergent field that continues to draw significant interest for its potential to offer improvements over classical algorithms in certain areas. However, training quantum models remains a challenging task, largely because of the difficulty in establishing an effective inductive bias when solving high-dimensional problems. In this work, we propose a training framework that prioritizes informative data points over the entire training set. This approach draws inspiration from classical techniques such as curriculum learning and hard example mining to introduce an additional inductive bias through the training data itself. By selectively focusing on informative samples, we aim to steer the optimization process toward more favorable regions of the parameter space. This data-centric approach complements existing strategies such as warm-start initialization methods, providing an additional pathway to address performance challenges in quantum machine learning. We provide theoretical insights into the benefits of prioritizing informative data for quantum models, and we validate our methodology with numerical experiments on selected recognition tasks of quantum phases of matter. Our findings indicate that this strategy could be a valuable approach for improving the performance of quantum machine learning models.
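The description above mentions prioritizing informative data points, in the spirit of curriculum learning and hard example mining. As an illustrative aid only (the paper's actual selection rule is not reproduced here, and the function and parameter names below are hypothetical), a minimal sketch of loss-based sample prioritization might look like:

```python
def select_informative(losses, keep_fraction=0.5):
    """Hypothetical hard-example-mining step: given per-sample losses
    from the current model, return the indices of the highest-loss
    (most informative) fraction of the training set."""
    k = max(1, int(len(losses) * keep_fraction))
    # Sort indices by descending loss and keep the top k.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return ranked[:k]

# Example: with losses [0.1, 0.9, 0.4, 0.7] and keep_fraction=0.5,
# the two hardest samples (indices 1 and 3) are selected.
losses = [0.1, 0.9, 0.4, 0.7]
selected = select_informative(losses, keep_fraction=0.5)
```

A curriculum-learning variant would instead sort ascending (easiest samples first) and grow `keep_fraction` over the course of training; which ordering helps depends on the model and task.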
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-11
issn 2331-8422
language eng
recordid cdi_proquest_journals_3130966853
source Free E-Journals
subjects Algorithms
Bias
Data points
Machine learning
title Learning complexity gradually in quantum machine learning models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T14%3A36%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Learning%20complexity%20gradually%20in%20quantum%20machine%20learning%20models&rft.jtitle=arXiv.org&rft.au=Recio-Armengol,%20Erik&rft.date=2024-11-18&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3130966853%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3130966853&rft_id=info:pmid/&rfr_iscdi=true