Parallel-mentoring for Offline Model-based Optimization
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We study offline model-based optimization to maximize a black-box objective function with a static dataset of designs and scores. These designs encompass a variety of domains, including materials, robots, and DNA sequences. A common approach trains a proxy on the static dataset to approximate the black-box objective function and performs gradient ascent on the proxy to obtain new designs. However, this often yields poor designs because the proxy is inaccurate for out-of-distribution designs. Recent studies indicate that (a) gradient ascent with a mean ensemble of proxies generally outperforms simple gradient ascent, and (b) a trained proxy provides weak ranking supervision signals for design selection. Motivated by (a) and (b), we propose \textit{parallel-mentoring}, a novel and effective method that facilitates mentoring among parallel proxies, creating a more robust ensemble that mitigates the out-of-distribution issue. We focus on the three-proxy case, where our method consists of two modules. The first module, \textit{voting-based pairwise supervision}, operates on three parallel proxies and captures their ranking supervision signals as pairwise comparison labels. These labels are combined through majority voting to generate consensus labels, which incorporate ranking supervision signals from all proxies and enable mutual mentoring. However, label noise arises because the consensus can be incorrect. To alleviate this, we introduce an \textit{adaptive soft-labeling} module whose soft labels are initialized from the consensus labels. Based on bi-level optimization, this module fine-tunes the proxies at the inner level and learns more accurate labels at the outer level to adaptively mentor the proxies, resulting in a more robust ensemble. Experiments validate the effectiveness of our method. Our code is available here.
DOI: 10.48550/arxiv.2309.11592
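
To make the two modules described in the abstract concrete, the following is a minimal PyTorch sketch of the voting step (pairwise comparison labels from three proxies, combined by majority voting), of fine-tuning on the resulting soft labels, and of gradient ascent on the mean ensemble from point (a). The architecture, hyper-parameters, and single-step fine-tuning loop are illustrative assumptions, and the paper's outer bi-level update that refines the soft labels is deliberately omitted; the authors' released code should be treated as the authoritative implementation.

```python
import torch
import torch.nn as nn

def make_proxy(dim):
    # A small MLP stands in for a trained proxy; the paper's proxy
    # architecture and pre-training on the static dataset are not
    # reproduced here.
    return nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

dim, n = 8, 16
proxies = [make_proxy(dim) for _ in range(3)]    # three parallel proxies
x = torch.randn(n, dim)                          # candidate designs

# Voting-based pairwise supervision: each proxy compares every pair of
# candidates, and majority voting over the three proxies gives consensus
# pairwise labels.
i, j = torch.triu_indices(n, n, offset=1)        # all pairs with i < j
with torch.no_grad():
    scores = [p(x).squeeze(-1) for p in proxies]
    votes = torch.stack([(s[i] > s[j]).float() for s in scores])  # (3, pairs)
consensus = (votes.sum(dim=0) >= 2).float()      # majority of three proxies

# Mentoring step (simplified): fine-tune each proxy so the sigmoid of its
# score difference matches the soft labels, which are initialized from the
# consensus. The paper learns more accurate labels in an outer bi-level
# loop; that outer update is omitted for brevity.
soft_labels = consensus.clone()
bce = nn.BCEWithLogitsLoss()
for p in proxies:
    opt = torch.optim.Adam(p.parameters(), lr=1e-3)
    s = p(x).squeeze(-1)
    loss = bce(s[i] - s[j], soft_labels)         # P(design i beats design j)
    opt.zero_grad(); loss.backward(); opt.step()

# New designs via gradient ascent on the mean-ensemble prediction, per
# point (a) in the abstract.
x_new = x[:1].clone().requires_grad_(True)
opt_x = torch.optim.Adam([x_new], lr=1e-2)
for _ in range(100):
    pred = torch.stack([p(x_new) for p in proxies]).mean()
    opt_x.zero_grad(); (-pred).backward(); opt_x.step()   # maximize the mean
```

Treating the sigmoid of a score difference as the probability that one design outranks another is the standard Bradley-Terry-style reading of scalar scores, which is what makes the majority-voted comparisons usable as a supervision signal here; how closely this matches the paper's exact loss is an assumption of this sketch.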