Selective Harvesting over Networks
Format: Article
Language: English
Abstract: Active search (AS) on graphs focuses on collecting certain labeled nodes (targets) given global knowledge of the network topology and its edge weights under a query budget. However, in most networks, nodes, topology, and edge weights are all initially unknown. We introduce selective harvesting, a variant of AS where the next node to be queried must be chosen among the neighbors of the current queried node set; the available training data for deciding which node to query is restricted to the subgraph induced by the queried set (and their node attributes) and their neighbors (without any node or edge attributes). Therefore, selective harvesting is a sequential decision problem, where we must decide which node to query at each step. A classifier trained in this scenario suffers from a tunnel vision effect: without recourse to independent sampling, the urge to query promising nodes forces classifiers to gather increasingly biased training data, which we show significantly hurts the performance of AS methods and standard classifiers. We find that it is possible to collect a much larger set of targets by using multiple classifiers, not by combining their predictions as an ensemble, but by switching between the classifiers used at each step, as a way to ease the tunnel vision effect. We discover that switching classifiers collects more targets by (a) diversifying the training data and (b) broadening the choices of nodes that can be queried next. This highlights an exploration, exploitation, and diversification trade-off in our problem that goes beyond the exploration and exploitation duality found in classic sequential decision problems. From these observations we propose D3TS, a method based on multi-armed bandits for non-stationary stochastic processes that enforces classifier diversity, matching or exceeding the performance of competing methods on seven real network datasets in our evaluation.
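As a rough illustration of the classifier-switching idea described in the abstract, the sketch below runs one selective-harvesting loop in Python. The graph interface (`neighbors`, `label`), the `scorers` list, and the plain Beta-Bernoulli Thompson sampling used to pick a scorer are all illustrative assumptions; this is not the authors' D3TS implementation, which relies on bandit algorithms designed for non-stationary payoffs.

```python
# A minimal sketch of selective harvesting with classifier switching.
# Assumes a hypothetical graph object exposing neighbors(v) and label(v),
# and a list of scorer callables standing in for the paper's classifiers.
import random

def selective_harvest(graph, seed_node, budget, scorers):
    """Query up to `budget` nodes, switching the scorer used at each step."""
    # Beta(1, 1) priors: one (successes, failures) pair per scorer (bandit arm).
    stats = [[1, 1] for _ in scorers]
    queried = {seed_node}                      # nodes whose labels we observed
    border = set(graph.neighbors(seed_node)) - queried
    targets_found = int(graph.label(seed_node) == 1)

    for _ in range(budget):
        if not border:
            break
        # Thompson sampling: draw from each arm's Beta posterior, pick the best.
        draws = [random.betavariate(s, f) for s, f in stats]
        k = max(range(len(scorers)), key=lambda i: draws[i])

        # The chosen scorer ranks the border nodes using only observed data.
        node = max(border, key=lambda v: scorers[k](graph, queried, v))

        # Querying the node reveals its label (1 = target).
        y = graph.label(node)
        targets_found += int(y == 1)
        stats[k][0] += int(y == 1)             # reward the arm on a hit
        stats[k][1] += int(y != 1)

        # The queried node's neighbors become eligible for the next query.
        queried.add(node)
        border |= set(graph.neighbors(node))
        border -= queried

    return targets_found
```

In this reading of the abstract, a payoff of 1 is recorded whenever the queried node turns out to be a target; the bandit then biases, but never fixes, which classifier ranks the border at each step, which is what diversifies both the training data and the pool of candidate nodes.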
DOI: 10.48550/arxiv.1703.05082