Reliable counting of weakly labeled concepts by a single spiking neuron model
Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Making an informed, correct and quick decision can be life-saving. It is crucial for animals during escape behaviour and for autonomous cars during driving. The decision can be complex and may involve an assessment of the number of threats present and of the nature of each threat. Thus, we should expect early sensory processing to supply classification information quickly and accurately, even before relaying the information to higher brain areas or to more complex system components downstream. Today, advanced convolutional artificial neural networks can successfully solve visual detection and classification tasks and are commonly used to build complex decision-making systems. However, in order to perform well on these tasks they require increasingly complex, "very deep" model structures, which are costly in inference run-time, energy consumption and number of training samples, and are often trainable only on cloud-computing clusters. A single spiking neuron has been shown to be able to solve recognition tasks for homogeneous Poisson input statistics, a commonly used model for spiking activity in the neocortex. When modeled as a leaky integrate-and-fire neuron with a gradient-descent learning algorithm, it was shown to possess a variety of complex computational capabilities. Here we improve its implementation. We also account for more natural stimulus-generated inputs that deviate from homogeneous Poisson spiking. The improved gradient-based local learning rule allows for significantly better and more stable generalization. We also show that, with its improved capabilities, the neuron can count weakly labeled concepts: we apply our model to a multiple instance learning (MIL) problem with counting, in which labels are available only for collections of concepts. In this counting MNIST task the neuron exploits the improved implementation and outperforms a conventional ConvNet architecture under similar conditions. |
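The single-neuron setting referenced in the abstract (homogeneous Poisson afferents driving a leaky integrate-and-fire unit) can be illustrated with a minimal simulation sketch. Everything below is an assumption for illustration only, including the parameter values, the Euler discretization and the random synaptic weights; it is not the paper's implementation and contains no learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the paper).
n_inputs = 100      # number of afferent spike trains
rate_hz = 10.0      # homogeneous Poisson rate of each afferent [Hz]
dt = 1e-3           # simulation time step [s]
T = 0.5             # trial duration [s]
steps = int(T / dt)

# Homogeneous Poisson inputs: each afferent spikes independently in every
# time bin with probability rate_hz * dt.
spikes = (rng.random((steps, n_inputs)) < rate_hz * dt).astype(float)

# Leaky integrate-and-fire neuron with fixed (untrained) synaptic weights.
tau_m = 20e-3       # membrane time constant [s]
v_thresh = 1.0      # firing threshold (arbitrary units)
v_reset = 0.0
w = rng.normal(0.05, 0.01, n_inputs)

v = 0.0
output_spike_times = []
for t in range(steps):
    # Euler step of dv/dt = -v / tau_m, plus weighted input spikes this bin.
    v += -v * dt / tau_m + spikes[t] @ w
    if v >= v_thresh:
        output_spike_times.append(t * dt)
        v = v_reset

print(f"output spikes in {T:.1f} s: {len(output_spike_times)}")
```

A gradient-based local learning rule of the kind the abstract refers to would adjust `w` from a trial-level error signal; that step is omitted here because the paper's rule is not reproduced in this record.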
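The counting variant of multiple instance learning mentioned in the summary can likewise be sketched as a data-construction step: instances are grouped into bags, instance-level labels are discarded, and only the per-bag count of a target concept is kept. The random 28x28 arrays below are placeholders standing in for MNIST digits, and `target_digit` and the bag size are arbitrary choices, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_instance():
    """Toy stand-in for an MNIST example: (digit label, 28x28 image)."""
    return int(rng.integers(0, 10)), rng.random((28, 28))

def make_bag(bag_size, target_digit):
    """Weakly labeled bag: instance images plus a single count label
    (how many instances show target_digit); instance labels are dropped."""
    instances = [make_instance() for _ in range(bag_size)]
    images = np.stack([img for _, img in instances])
    count = sum(lbl == target_digit for lbl, _ in instances)
    return images, count

bags = [make_bag(bag_size=9, target_digit=3) for _ in range(5)]
for i, (imgs, count) in enumerate(bags):
    print(f"bag {i}: {imgs.shape[0]} instances, count of target digit = {count}")
```

A learner in this setting only ever sees `(images, count)` pairs, so it must infer which individual instances carry the target concept while being supervised by bag-level counts alone.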
DOI: | 10.48550/arxiv.1805.07569 |