Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld
| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract:

How do biological systems and machine learning algorithms compare in the number of samples required to show significant improvements in completing a task? We compared the learning efficiency of in vitro biological neural networks with state-of-the-art deep reinforcement learning (RL) algorithms in a simplified simulation of the game 'Pong'. Using DishBrain, a system that embodies in vitro neural networks with in silico computation via a high-density multi-electrode array, we contrasted the learning rate and performance of these biological systems against time-matched learning from three state-of-the-art deep RL algorithms (DQN, A2C, and PPO) in the same game environment. This allowed a meaningful comparison between biological neural systems and deep RL. We find that when samples are limited to a real-world time course, even these very simple biological cultures outperformed the deep RL algorithms across various game performance characteristics, implying a higher sample efficiency. Ultimately, even when tested across multiple types of information input to assess the impact of higher-dimensional data, the biological neurons showed faster learning than all of the deep RL agents.
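For a concrete sense of the time-matched comparison the abstract describes, below is a minimal sketch of imposing a fixed sample budget on the three deep RL baselines, using the stable-baselines3 implementations of DQN, A2C, and PPO. The environment (`CartPole-v1`, a stand-in, since the paper's simplified Pong simulation is not assumed to be available as a packaged Gym environment) and the 10,000-step budget are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch: training DQN, A2C, and PPO under a fixed sample budget,
# analogous to the time-matched comparison described in the abstract.
# Assumes stable-baselines3 and gymnasium are installed. The environment ID
# and budget below are illustrative placeholders, not the paper's setup.
import gymnasium as gym
from stable_baselines3 import DQN, A2C, PPO
from stable_baselines3.common.evaluation import evaluate_policy

SAMPLE_BUDGET = 10_000  # assumed cap standing in for a real-time-matched budget


def train_budgeted(algo_cls, env_id: str = "CartPole-v1"):
    """Train one agent for at most SAMPLE_BUDGET environment steps."""
    env = gym.make(env_id)
    model = algo_cls("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=SAMPLE_BUDGET)
    return model, env


if __name__ == "__main__":
    for algo in (DQN, A2C, PPO):
        model, env = train_budgeted(algo)
        # Mean episodic reward after the budgeted training run, as a rough
        # analogue of the paper's game performance characteristics.
        mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
        print(f"{algo.__name__}: {mean_reward:.1f} +/- {std_reward:.1f}")
```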
DOI: 10.48550/arxiv.2405.16946