Non-Bayesian Social Learning with Observation Reuse and Soft Switching

Bibliographic Details
Published in: ACM Transactions on Sensor Networks, 2018-07, Vol. 14 (2), pp. 1-21
Authors: Bhotto, MD. Zulfiquar Ali; Tay, Wee Peng
Format: Article
Language: English
Online access: Full text
Abstract: We propose a non-Bayesian social learning update rule for agents in a network, which minimizes the sum of the Kullback-Leibler divergence between the true distribution generating the agents’ local observations and the agents’ beliefs (parameterized by a hypothesis set), and a weighted varentropy-related term. The varentropy-related term allows us to control the convergence rate of our update rule, which also reuses some of the most recent observations of each agent to speed up convergence. Under mild technical conditions, we show that the belief of each agent concentrates on the optimal hypothesis set, and we derive a bound on the convergence rate. Furthermore, to overcome the performance degradation caused by misinforming agents, who use corrupted likelihood functions in their belief updates, we propose using multiple social networks that update their beliefs independently, together with a convex combination mechanism over the beliefs of all the networks. Simulations with applications to location identification and group recommendation demonstrate that our proposed methods offer improvements over two other state-of-the-art non-Bayesian social learning algorithms.
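For orientation, below is a minimal sketch (in Python, assuming NumPy) of a generic log-linear non-Bayesian social learning step with reuse of each agent's recent observations, followed by a convex combination of beliefs across independently updated networks. It is not the paper's exact rule: the weighted varentropy-related term and the adaptive choice of combination weights are not specified in the abstract, so the names here (update_beliefs, combine_networks, combo_weights) are illustrative assumptions.

import numpy as np

def update_beliefs(beliefs, log_lik_recent, A):
    """One generic log-linear social learning step for all agents.

    beliefs        : (n_agents, n_hyp) current beliefs, rows sum to 1.
    log_lik_recent : (n_agents, n_hyp) summed log-likelihoods of each
                     agent's most recent (reused) observations.
    A              : (n_agents, n_agents) row-stochastic weight matrix.
    """
    # Geometric (log-linear) averaging of neighbors' beliefs, then a
    # Bayesian-style tilt by the reused observations.
    log_belief = A @ np.log(beliefs + 1e-300) + log_lik_recent
    log_belief -= log_belief.max(axis=1, keepdims=True)  # numerical stability
    new = np.exp(log_belief)
    return new / new.sum(axis=1, keepdims=True)

def combine_networks(network_beliefs, combo_weights):
    """Convex combination of the beliefs from several networks.

    network_beliefs : (n_nets, n_agents, n_hyp) beliefs per network.
    combo_weights   : (n_nets,) nonnegative weights summing to 1; the
                      paper would choose these to down-weight networks
                      polluted by misinforming agents (fixed weights
                      here, purely for illustration).
    """
    return np.tensordot(combo_weights, network_beliefs, axes=1)

# Toy usage: 3 agents, 2 hypotheses, hypothesis 0 is "true", so the
# reused observations slightly favor it on average.
rng = np.random.default_rng(0)
beliefs = np.full((3, 2), 0.5)
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
for _ in range(50):
    log_lik = rng.normal([0.1, -0.1], 0.05, size=(3, 2))
    beliefs = update_beliefs(beliefs, log_lik, A)
print(beliefs.round(3))  # beliefs concentrate on hypothesis 0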
ISSN: 1550-4859, 1550-4867
DOI: 10.1145/3199513