Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld

How do biological systems and machine learning algorithms compare in the number of samples required to show significant improvements in completing a task? We compared the learning efficiency of in vitro biological neural networks to state-of-the-art deep reinforcement learning (RL) algorithms in a simplified simulation of the game 'Pong'.

Detailed Description

Bibliographic Details
Main authors: Khajehnejad, Moein; Habibollahi, Forough; Paul, Aswin; Razi, Adeel; Kagan, Brett J
Format: Article
Language: eng
Subjects:
creator Khajehnejad, Moein; Habibollahi, Forough; Paul, Aswin; Razi, Adeel; Kagan, Brett J
description How do biological systems and machine learning algorithms compare in the number of samples required to show significant improvements in completing a task? We compared the learning efficiency of in vitro biological neural networks to the state-of-the-art deep reinforcement learning (RL) algorithms in a simplified simulation of the game 'Pong'. Using DishBrain, a system that embodies in vitro neural networks with in silico computation using a high-density multi-electrode array, we contrasted the learning rate and the performance of these biological systems against time-matched learning from three state-of-the-art deep RL algorithms (i.e., DQN, A2C, and PPO) in the same game environment. This allowed a meaningful comparison between biological neural systems and deep RL. We find that when samples are limited to a real-world time course, even these very simple biological cultures outperformed deep RL algorithms across various game performance characteristics, implying a higher sample efficiency. Ultimately, even when tested across multiple types of information input to assess the impact of higher dimensional data input, biological neurons showcased faster learning than all deep reinforcement learning agents.
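The abstract's central quantity, "sample efficiency", can be made concrete as the number of environment interactions a learner consumes before its performance crosses a fixed criterion. The following is a minimal toy sketch of that measurement, not the paper's DishBrain setup or its DQN/A2C/PPO baselines: a tabular learner on a hypothetical three-lane paddle task, where we count samples until a criterion hit rate is reached.

```python
import random

# Toy illustration of "samples to criterion" as a sample-efficiency metric.
# The task, learner, and thresholds here are illustrative assumptions only.
random.seed(0)

def play_block(q, epsilon=0.1, n_steps=100):
    """Run one block of a trivial Pong-like task: the ball appears in one
    of three lanes, the agent picks a lane, reward 1 on a match."""
    hits = 0
    for _ in range(n_steps):
        ball = random.randrange(3)                 # state: ball lane
        if random.random() < epsilon:
            action = random.randrange(3)           # explore
        else:
            action = max(range(3), key=lambda a: q[ball][a])
        reward = 1 if action == ball else 0
        hits += reward
        # tabular update with learning rate 0.5
        q[ball][action] += 0.5 * (reward - q[ball][action])
    return hits / n_steps

def samples_to_criterion(criterion=0.9, block=100, max_blocks=50):
    """Total interaction samples consumed before a block's hit rate
    first reaches the criterion; None if never reached."""
    q = [[0.0] * 3 for _ in range(3)]
    for b in range(1, max_blocks + 1):
        if play_block(q, n_steps=block) >= criterion:
            return b * block
    return None

print(samples_to_criterion())
```

Comparing this count across learners on the same task, under the same real-time sample budget, is the shape of the comparison the paper reports between biological cultures and deep RL agents.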
doi_str_mv 10.48550/arxiv.2405.16946
format Article
creationdate 2024-05-27
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
backlink https://arxiv.org/abs/2405.16946
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2405.16946
language eng
recordid cdi_arxiv_primary_2405_16946
source arXiv.org
subjects Computer Science - Artificial Intelligence
Quantitative Biology - Neurons and Cognition
title Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T18%3A08%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Biological%20Neurons%20Compete%20with%20Deep%20Reinforcement%20Learning%20in%20Sample%20Efficiency%20in%20a%20Simulated%20Gameworld&rft.au=Khajehnejad,%20Moein&rft.date=2024-05-27&rft_id=info:doi/10.48550/arxiv.2405.16946&rft_dat=%3Carxiv_GOX%3E2405_16946%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true