Multi-objective Reinforcement Learning for Responsive Grids

Bibliographic Details
Published in: Journal of Grid Computing, 2010-09, Vol. 8 (3), pp. 473-492
Authors: Perez, Julien; Germain-Renaud, Cécile; Kégl, Balazs; Loomis, Charles
Format: Article
Language: English
Online access: Full text
Description
Abstract: Grids organize resource sharing, a fundamental requirement of large scientific collaborations. Seamless integration of Grids into everyday use requires responsiveness, which can be provided by elastic Clouds, in the Infrastructure as a Service (IaaS) paradigm. This paper proposes a model-free resource provisioning strategy supporting both requirements. Provisioning is modeled as a continuous action-state space, multi-objective reinforcement learning (RL) problem, under realistic hypotheses; simple utility functions capture the high-level goals of users, administrators, and shareholders. The model-free approach falls under the general program of autonomic computing, where the incremental learning of the value function associated with the RL model provides the so-called feedback loop. The RL model includes an approximation of the value function through an Echo State Network. Experimental validation on a real dataset from the EGEE Grid shows that introducing a moderate level of elasticity is critical to ensure a high level of user satisfaction.
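
The abstract names the main technical ingredients (a continuous state-action RL formulation, per-stakeholder utility functions folded into the reward, and an Echo State Network approximating the value function) but gives no implementation detail. The sketch below is only a minimal illustration of how these pieces can fit together, assuming a weighted-sum scalarization of the utilities and a leaky ESN reservoir with a linear, TD-trained readout; every name, size, and hyperparameter here is hypothetical and is not taken from the paper.

    import numpy as np

    class ESNValue:
        """Sketch: Echo State Network as a value-function approximator
        for a continuous state-action RL problem (illustrative only)."""

        def __init__(self, n_inputs, n_reservoir=200,
                     spectral_radius=0.9, leak_rate=0.3, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
            W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
            # Rescale the recurrent weights to the chosen spectral radius
            # so the reservoir has the echo-state property.
            W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
            self.W = W
            self.W_out = np.zeros(n_reservoir)  # trainable linear readout
            self.x = np.zeros(n_reservoir)      # reservoir activation
            self.leak = leak_rate

        def step(self, u):
            """Advance the reservoir with state-action features u."""
            pre = np.tanh(self.W_in @ u + self.W @ self.x)
            self.x = (1.0 - self.leak) * self.x + self.leak * pre
            return self.x

        def value(self, x):
            """Estimated value of reservoir state x (linear readout)."""
            return float(self.W_out @ x)

        def td_update(self, x, reward, next_value, gamma=0.95, lr=1e-3):
            """One temporal-difference step on the readout weights only."""
            td_error = reward + gamma * next_value - self.value(x)
            self.W_out += lr * td_error * x
            return td_error

    def scalarized_reward(utilities, weights):
        """Fold per-objective utilities (e.g. users, administrators,
        shareholders) into one reward via a weighted sum."""
        return float(np.dot(np.asarray(utilities), np.asarray(weights)))

    # Illustrative use on a single synthetic transition.
    esn = ESNValue(n_inputs=4)
    u = np.array([0.2, 0.5, 0.1, 0.7])              # hypothetical features
    x = esn.step(u)
    r = scalarized_reward([0.8, 0.6, 0.4], [0.5, 0.3, 0.2])
    esn.td_update(x, r, next_value=0.0)

In such a setup, a provisioning agent would encode the current state and a candidate action into the feature vector, advance the reservoir, and choose the action with the highest estimated value; only the linear readout is trained, which is what makes Echo State Networks attractive for incremental, online value learning.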
ISSN: 1570-7873, 1572-9184
DOI: 10.1007/s10723-010-9161-0