Meta-Designing Quantum Experiments with Language Models

Artificial Intelligence (AI) has the potential to significantly advance scientific discovery by finding solutions beyond human capabilities. However, these super-human solutions are often unintuitive and require considerable effort to uncover underlying principles, if possible at all. Here, we show...

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Arlt, Sören, Duan, Haonan, Li, Felix, Xie, Sang Michael, Wu, Yuhuai, Krenn, Mario
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Arlt, Sören ; Duan, Haonan ; Li, Felix ; Xie, Sang Michael ; Wu, Yuhuai ; Krenn, Mario
description Artificial Intelligence (AI) has the potential to significantly advance scientific discovery by finding solutions beyond human capabilities. However, these super-human solutions are often unintuitive and require considerable effort to uncover underlying principles, if possible at all. Here, we show how a code-generating language model trained on synthetic data can not only find solutions to specific problems but can create meta-solutions, which solve an entire class of problems in one shot and simultaneously offer insight into the underlying design principles. Specifically, for the design of new quantum physics experiments, our sequence-to-sequence transformer architecture generates interpretable Python code that describes experimental blueprints for a whole class of quantum systems. We discover general and previously unknown design rules for infinitely large classes of quantum states. The ability to automatically generate generalized patterns in readable computer code is a crucial step toward machines that help discover new scientific understanding -- one of the central aims of physics.
doi_str_mv 10.48550/arxiv.2406.02470
format Article
identifier DOI: 10.48550/arxiv.2406.02470
language eng
recordid cdi_arxiv_primary_2406_02470
source arXiv.org
subjects Computer Science - Learning
Physics - Quantum Physics
title Meta-Designing Quantum Experiments with Language Models
url https://arxiv.org/abs/2406.02470
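The abstract's central claim is that one short, readable piece of Python code can act as a "meta-solution": a single program that describes experimental blueprints for an entire, arbitrarily large class of quantum states rather than one instance. The sketch below is purely illustrative of that idea; the blueprint representation (an edge list plus a NumPy state vector) and the function name are hypothetical assumptions, not the paper's actual code, output format, or discovered design rules.

```python
# Illustrative sketch only: a size-parameterized "meta-solution" in the spirit
# of the abstract -- one readable function covering a whole class of targets
# (here: GHZ states for any particle number n and dimension d). The blueprint
# format below is made up for illustration and is NOT the paper's representation.
import numpy as np

def ghz_blueprint(n: int, d: int = 2):
    """Return a toy 'blueprint' (edge list) and the n-party, d-level GHZ state."""
    # Hypothetical blueprint: connect neighbouring detectors in every mode,
    # closing the ring -- a pattern that generalizes to arbitrary n.
    edges = [(i, (i + 1) % n, mode) for i in range(n) for mode in range(d)]

    # Target state: |GHZ_{n,d}> = (|0...0> + |1...1> + ... ) / sqrt(d)
    state = np.zeros(d ** n, dtype=complex)
    for mode in range(d):
        index = sum(mode * d ** k for k in range(n))  # index of |mode, ..., mode>
        state[index] = 1.0
    return edges, state / np.sqrt(d)

# Usage: the same code yields blueprints for n = 3, 4, 5, ... without re-solving.
edges, psi = ghz_blueprint(n=4, d=2)
print(len(edges), np.round(np.abs(psi[[0, -1]]) ** 2, 3))  # 8 [0.5 0.5]
```

The point of the example is structural: instead of returning one optimized setup, the generated program exposes the pattern (here, a ring of edges) that the paper argues can be read off as a general design rule for an infinite family of states.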