Undirected Training of Run Transferable Libraries
Main Authors: | , , , |
Format: | Conference Proceedings |
Language: | English |
Subjects: | |
Online Access: | Full text |
Abstract: | This paper investigates the robustness of Run Transferable Libraries (RTLs) on scaled problems. RTLs provide GP with a library of functions which replace the usual primitive functions provided when approaching a problem. The RTL evolves from run to run using feedback based on function usage, and has been shown to outperform GP by an order of magnitude on a variety of scalable problems.
RTLs can, however, also be applied across a domain of related problems, as well as across a range of scaled instances of a single problem. To do this successfully, the library will need to balance a range of functions. We introduce a problem that can deceive the system into converging to a sub-optimal set of functions, and demonstrate that this is a consequence of the greediness of the library update algorithm.
We demonstrate that a much simpler, truly evolutionary, update strategy does not suffer from this problem, and exhibits far better optimization properties than the original strategy. |
ISSN: | 0302-9743, 1611-3349 |
DOI: | 10.1007/978-3-540-31989-4_33 |
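The contrast drawn in the abstract between a greedy, usage-driven library update and a simpler, truly evolutionary one can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function names, the usage-count representation, the `keep_fraction` parameter, and the choice to model the evolutionary variant as fitness-proportional resampling on usage are all assumptions made for illustration only.

```python
import random

def greedy_update(library, usage_counts, keep_fraction=0.5):
    """Hypothetical greedy update: keep only the most-used functions,
    then refill the library by duplicating the survivors."""
    ranked = sorted(library, key=lambda f: usage_counts.get(f, 0), reverse=True)
    survivors = ranked[:max(1, int(len(library) * keep_fraction))]
    refill = random.choices(survivors, k=len(library) - len(survivors))
    return survivors + refill

def evolutionary_update(library, usage_counts):
    """Hypothetical evolutionary update: resample the next library
    fitness-proportionally on usage, so rarely used functions still
    have a nonzero chance of surviving."""
    weights = [usage_counts.get(f, 0) + 1 for f in library]  # +1 avoids zero weight
    return random.choices(library, weights=weights, k=len(library))

# Toy example with made-up library functions and usage counts.
lib = ["add", "mul", "parity2", "parity3"]
usage = {"add": 40, "mul": 35, "parity2": 3, "parity3": 1}
print(greedy_update(lib, usage))
print(evolutionary_update(lib, usage))
```

The point of the contrast is that the greedy rule discards rarely used functions outright, which is how a deceptive problem can lock the library into a sub-optimal set, whereas proportional resampling keeps low-usage functions alive from run to run.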