Improved lower bound for the mutual information between signal and neural spike count

Bibliographic Details
Published in: Biological Cybernetics 2018-12, Vol. 112 (6), p. 523-538
Authors: Voronenko, Sergej O.; Lindner, Benjamin
Format: Article
Language: English
Subjects:
Online access: Full text
Description

Abstract: The mutual information between a stimulus signal and the spike count of a stochastic neuron is in many cases difficult to determine. Therefore, it is often approximated by a lower bound formula that involves linear correlations between input and output only. Here, we improve the linear lower bound for the mutual information by incorporating nonlinear correlations. For the special case of a Gaussian output variable with nonlinear signal dependencies of mean and variance we also derive an exact integral formula for the full mutual information. In our numerical analysis, we first compare the linear and nonlinear lower bounds and the exact integral formula for two different Gaussian models and show under which conditions the nonlinear lower bound provides a significant improvement over the linear approximation. We then inspect two neuron models, the leaky integrate-and-fire model with white Gaussian noise and the Na–K model with channel noise. We show that for certain firing regimes and for intermediate signal strengths the nonlinear lower bound can provide a substantial improvement over the linear lower bound. Our results demonstrate the importance of nonlinear input–output correlations for neural information transmission and provide a simple nonlinear approximation for the mutual information that can be applied to more complicated neuron models as well as to experimental data.
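For a scalar signal and a scalar spike count, the linear lower bound referred to in the abstract reduces to the Gaussian-channel expression I_LB = -(1/2) log2(1 - rho^2), where rho is the Pearson correlation coefficient between signal and count (exact for jointly Gaussian variables, a lower-bound approximation otherwise). The following Python sketch estimates this bound from simulated trials; the exponential rate nonlinearity and the Poisson spike count are illustrative assumptions, not the integrate-and-fire or Na–K models analyzed in the paper, and the paper's nonlinear lower bound (which adds higher-order input-output correlations) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative toy model (an assumption, not the paper's neuron models):
    # Gaussian signal s, spike count N drawn from a Poisson distribution
    # whose rate depends nonlinearly (exponentially) on s.
    n_trials = 200_000
    s = rng.normal(0.0, 1.0, n_trials)   # stimulus signal, unit variance
    rate = 5.0 * np.exp(0.5 * s)         # signal-dependent firing rate
    N = rng.poisson(rate)                # spike count per trial

    # Linear lower bound: I_LB = -(1/2) * log2(1 - rho^2), with rho the
    # Pearson correlation between signal and spike count.
    rho = np.corrcoef(s, N)[0, 1]
    I_lb = -0.5 * np.log2(1.0 - rho**2)
    print(f"rho = {rho:.3f}, linear lower bound = {I_lb:.3f} bits")

Because the count here depends nonlinearly on the signal, the Pearson correlation captures only part of the dependence; that residual dependence is precisely what a nonlinear bound of the kind derived in the paper is designed to recover.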
ISSN: 0340-1200
eISSN: 1432-0770
DOI: 10.1007/s00422-018-0779-5