Biological neurons act as generalization filters in reservoir computing

Bibliographic details
Published in: Proceedings of the National Academy of Sciences (PNAS), 2023-06, Vol. 120 (25), p. e2217008120
Authors: Sumi, Takuma; Yamamoto, Hideaki; Katori, Yuichi; Ito, Koki; Moriya, Satoshi; Konno, Tomohiro; Sato, Shigeo; Hirano-Iwata, Ayumi
Format: Article
Language: English
Online access: Full text
Abstract: Reservoir computing is a machine learning paradigm that harnesses the transient dynamics of high-dimensional nonlinear systems to process time-series data. Although the paradigm was initially proposed to model information processing in the mammalian cortex, it remains unclear how nonrandom network architectures, such as the modular architecture of the cortex, integrate with the biophysics of living neurons to shape the function of biological neuronal networks (BNNs). Here, we used optogenetics and calcium imaging to record the multicellular responses of cultured BNNs and employed the reservoir computing framework to decode their computational capabilities. Micropatterned substrates were used to embed a modular architecture in the BNNs. We first show that the dynamics of modular BNNs in response to static inputs can be classified with a linear decoder and that the modularity of the BNNs correlates positively with classification accuracy. We then used a timer task to verify that BNNs possess a short-term memory of several hundred milliseconds, and finally show that this property can be exploited for spoken-digit classification. Interestingly, BNN-based reservoirs allow categorical learning, wherein a network trained on one dataset can be used to classify separate datasets of the same category. Such classification was not possible when the inputs were decoded directly by a linear decoder, suggesting that BNNs act as a generalization filter that improves reservoir computing performance. Our findings pave the way toward a mechanistic understanding of information representation within BNNs and raise expectations for the realization of physical reservoir computing systems based on BNNs.
ISSN: 0027-8424; 1091-6490
DOI: 10.1073/pnas.2217008120