Automated Dynamic Algorithm Configuration
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: The performance of an algorithm often critically depends on its parameter
configuration. While a variety of automated algorithm configuration methods have been
proposed to relieve users from the tedious and error-prone task of manually tuning
parameters, there is still a lot of untapped potential, as the learned configuration is
static, i.e., parameter settings remain fixed throughout the run. However, it has been
shown that some algorithm parameters are best adjusted dynamically during execution,
e.g., to adapt to the current part of the optimization landscape. Thus far, this is most
commonly achieved through hand-crafted heuristics. A promising recent alternative is to
automatically learn such dynamic parameter adaptation policies from data. In this
article, we give the first comprehensive account of this new field of automated dynamic
algorithm configuration (DAC), present a series of recent advances, and provide a solid
foundation for future research in this field. Specifically, we (i) situate DAC in the
broader historical context of AI research; (ii) formalize DAC as a computational problem;
(iii) identify the methods used in prior art to tackle this problem; and (iv) conduct
empirical case studies for using DAC in evolutionary optimization, AI planning, and
machine learning.
DOI: 10.48550/arxiv.2205.13881
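To make the abstract's central idea concrete, the following is a minimal, self-contained sketch of what a dynamic parameter adaptation policy can look like; it is an illustration under our own assumptions, not the method proposed in the article. A (1+1)-ES minimizes the sphere function, and the mutation step size sigma is chosen anew at every iteration by a policy that observes the recent success rate. The names `one_plus_one_es`, `static_policy`, and `dynamic_policy` are hypothetical, and the hand-crafted 1/5th-success-rule-style update stands in for what DAC would instead learn from data, e.g., with a reinforcement-learning agent mapping the observed algorithm state to the next parameter value.

```python
# Illustrative sketch only (not the article's method): dynamic configuration
# of the mutation step size in a (1+1)-ES on the sphere function. The policy
# interface -- state in, next parameter value out -- is the DAC view of the loop.
import random


def sphere(x):
    """Toy objective: sum of squares (minimization)."""
    return sum(v * v for v in x)


def one_plus_one_es(policy, dim=10, budget=2000, seed=0):
    """(1+1)-ES whose step size sigma is set each iteration by `policy`.

    `policy` maps an observed state (current sigma, recent success rate)
    to the sigma used for the next mutation.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    sigma, successes, window = 1.0, 0, 20
    for t in range(1, budget + 1):
        sigma = policy(sigma, successes / window)
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = sphere(y)
        if fy <= fx:          # accept improving (or equal) offspring
            x, fx = y, fy
            successes += 1
        if t % window == 0:   # reset the success counter each window
            successes = 0
    return fx


def static_policy(sigma, success_rate):
    """Static configuration: sigma is never adapted during the run."""
    return sigma


def dynamic_policy(sigma, success_rate):
    """Hand-crafted dynamic policy (1/5th-success-rule flavour):
    enlarge sigma when mutations succeed often, shrink it otherwise.
    A learned DAC policy would replace this rule with a model trained
    from data."""
    return sigma * (1.22 if success_rate > 0.2 else 0.82)


if __name__ == "__main__":
    print("static :", one_plus_one_es(static_policy))
    print("dynamic:", one_plus_one_es(dynamic_policy))
```

Running the script prints the final objective value reached under the static and the dynamic policy; on this toy problem the adaptive run typically ends at a much smaller value, which is the kind of gain DAC aims to obtain automatically rather than through hand-crafted rules.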