Anomaly detection with variational quantum generative adversarial networks
Generative adversarial networks (GANs) are a machine learning framework comprising a generative model for sampling from a target distribution and a discriminative model for evaluating the proximity of a sample to the target distribution. GANs exhibit strong performance in imaging or anomaly detection. However, they suffer from training instabilities, and sampling efficiency may be limited by the classical sampling procedure. We introduce variational quantum-classical Wasserstein GANs to address these issues and embed this model in a classical machine learning framework for anomaly detection. Classical Wasserstein GANs improve training stability by using a cost function better suited for gradient descent. Our model replaces the generator of Wasserstein GANs with a hybrid quantum-classical neural net and leaves the classical discriminative model unchanged. This way, high-dimensional classical data only enters the classical model and need not be prepared in a quantum circuit. We demonstrate the effectiveness of this method on a credit card fraud dataset. For this dataset our method shows performance on par with classical methods in terms of the \(F_1\) score. We analyze the influence of the circuit ansatz, layer width and depth, neural net architecture, parameter initialization strategy, and sampling noise on convergence and performance.
Saved in:
Published in: | arXiv.org 2021-07 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Herr, Daniel; Obert, Benjamin; Rosenkranz, Matthias |
description | Generative adversarial networks (GANs) are a machine learning framework comprising a generative model for sampling from a target distribution and a discriminative model for evaluating the proximity of a sample to the target distribution. GANs exhibit strong performance in imaging or anomaly detection. However, they suffer from training instabilities, and sampling efficiency may be limited by the classical sampling procedure. We introduce variational quantum-classical Wasserstein GANs to address these issues and embed this model in a classical machine learning framework for anomaly detection. Classical Wasserstein GANs improve training stability by using a cost function better suited for gradient descent. Our model replaces the generator of Wasserstein GANs with a hybrid quantum-classical neural net and leaves the classical discriminative model unchanged. This way, high-dimensional classical data only enters the classical model and need not be prepared in a quantum circuit. We demonstrate the effectiveness of this method on a credit card fraud dataset. For this dataset our method shows performance on par with classical methods in terms of the \(F_1\) score. We analyze the influence of the circuit ansatz, layer width and depth, neural net architecture, parameter initialization strategy, and sampling noise on convergence and performance. |
doi_str_mv | 10.48550/arxiv.2010.10492 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2010_10492 |
source | arXiv.org; Free E-Journals |
subjects | Anomalies; Circuits; Cost function; Datasets; Dimensional stability; Fraud; Generative adversarial networks; Machine learning; Physics - Quantum Physics; Sampling; Training |
title | Anomaly detection with variational quantum generative adversarial networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T18%3A41%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Anomaly%20detection%20with%20variational%20quantum%20generative%20adversarial%20networks&rft.jtitle=arXiv.org&rft.au=Herr,%20Daniel&rft.date=2021-07-21&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2010.10492&rft_dat=%3Cproquest_arxiv%3E2452682762%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2452682762&rft_id=info:pmid/&rfr_iscdi=true |