MONGOOSE: Path-wise Smooth Bayesian Optimisation via Meta-learning
Saved in:

Main authors: | Yang, Adam X; Aitchison, Laurence; Moss, Henry B |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning |
Online access: | Order full text |
creator | Yang, Adam X ; Aitchison, Laurence ; Moss, Henry B |
description | In Bayesian optimisation, we often seek to minimise the black-box objective
functions that arise in real-world physical systems. A primary contributor to
the cost of evaluating such black-box objective functions is often the effort
required to prepare the system for measurement. We consider a common scenario
where preparation costs grow as the distance between successive evaluations
increases. In this setting, smooth optimisation trajectories are preferred and
the jumpy paths produced by the standard myopic (i.e. one-step-optimal)
Bayesian optimisation methods are sub-optimal. Our algorithm, MONGOOSE, uses a
meta-learnt parametric policy to generate smooth optimisation trajectories,
achieving performance gains over existing methods when optimising functions
with large movement costs. |
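The paper itself is not reproduced in this record, but the movement-cost setting described in the abstract can be illustrated with a small sketch. The points below and the cost function are purely hypothetical: it compares the total travel distance of a "jumpy" evaluation order, as a myopic one-step-optimal acquisition might produce, against a reordering of the same points in which successive evaluations stay close together.

```python
import math

def movement_cost(path):
    """Total Euclidean distance travelled between successive evaluations."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

# A "jumpy" trajectory: each step jumps to wherever the one-step-optimal
# point happens to be, regardless of the current position.
jumpy = [(0.0, 0.0), (0.9, 0.8), (0.1, 0.2), (0.8, 0.1), (0.2, 0.9)]

# The same evaluation points, reordered so that successive
# evaluations stay close together (a smooth trajectory).
smooth = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.9), (0.9, 0.8), (0.8, 0.1)]

print(f"jumpy path cost:  {movement_cost(jumpy):.3f}")
print(f"smooth path cost: {movement_cost(smooth):.3f}")
```

When preparation costs grow with the distance between successive evaluations, the smooth ordering is strictly cheaper even though both trajectories evaluate the objective at the same five points; this is the gap MONGOOSE's meta-learnt policy is designed to exploit.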
doi_str_mv | 10.48550/arxiv.2302.11533 |
format | Article |
creationdate | 2023-02-22 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
language | eng |
recordid | cdi_arxiv_primary_2302_11533 |
source | arXiv.org |
subjects | Computer Science - Learning |
title | MONGOOSE: Path-wise Smooth Bayesian Optimisation via Meta-learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T18%3A17%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MONGOOSE:%20Path-wise%20Smooth%20Bayesian%20Optimisation%20via%20Meta-learning&rft.au=Yang,%20Adam%20X&rft.date=2023-02-22&rft_id=info:doi/10.48550/arxiv.2302.11533&rft_dat=%3Carxiv_GOX%3E2302_11533%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |