Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies
Saved in:
Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Tags: | |
Summary: | Proceedings of the Third Learning on Graphs Conference (LoG 2024),
PMLR 269 Graph path search is a classic computer science problem that has been
recently approached with Reinforcement Learning (RL) due to its potential to
outperform prior methods. Existing RL techniques typically assume a global view
of the network, which is not suitable for large-scale, dynamic, and
privacy-sensitive settings. An area of particular interest is search in social
networks due to its numerous applications. Inspired by seminal work in
experimental sociology, which showed that decentralized yet efficient search is
possible in social networks, we frame the problem as a collaborative task
between multiple agents equipped with a limited local view of the network. We
propose a multi-agent approach for graph path search that successfully
leverages both homophily and structural heterogeneity. Our experiments, carried
out over synthetic and real-world social networks, demonstrate that our model
significantly outperforms learned and heuristic baselines. Furthermore, our
results show that meaningful embeddings for graph navigation can be constructed
using reward-driven learning. |
---|---|
DOI: | 10.48550/arxiv.2409.07932 |
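The abstract frames path search as a task where each agent sees only its current node's immediate neighbourhood and must exploit homophily (attribute similarity to the target) and structural heterogeneity (well-connected hub nodes) to navigate. As a point of reference for that setting only, below is a minimal, hypothetical Python sketch of a decentralized greedy navigator over a purely local view. The function name, adjacency dictionary, and attribute vectors are illustrative assumptions; this heuristic is a baseline-style stand-in for the problem setting, not the paper's learned multi-agent model.

```python
# Hypothetical illustration of decentralized path search with a local view:
# the navigator only ever inspects the current node's neighbours, preferring
# the neighbour most similar to the target (homophily) and breaking ties by
# degree so hubs act as shortcuts (structural heterogeneity).
def local_greedy_search(adj, attrs, source, target, max_hops=50):
    """Navigate from source to target seeing only the current node's neighbours.

    adj:   dict mapping node -> list of neighbouring nodes (local view only)
    attrs: dict mapping node -> attribute vector used for homophily distance
    """
    def dist(u, v):
        # Squared Euclidean distance between node attribute vectors.
        return sum((a - b) ** 2 for a, b in zip(attrs[u], attrs[v]))

    path = [source]
    current = source
    for _ in range(max_hops):
        if current == target:
            return path
        neighbours = adj[current]
        if not neighbours:
            break
        # Closest-to-target neighbour first; higher degree wins ties.
        current = min(neighbours, key=lambda n: (dist(n, target), -len(adj[n])))
        path.append(current)
    return path if current == target else None

# Toy example on a hand-built graph with 2-d node attributes.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
attrs = {0: (0.0, 0.0), 1: (0.3, 0.1), 2: (0.1, 0.4), 3: (0.6, 0.6), 4: (1.0, 1.0)}
print(local_greedy_search(adj, attrs, source=0, target=4))  # e.g. [0, 2, 3, 4]
```

In the paper's framing, the greedy attribute-distance rule above is the kind of decision that would instead be produced by reward-driven learned embeddings shared across cooperating agents.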