A multi‐step finite‐state automaton for arbitrarily deterministic Tsetlin Machine learning
Due to the high arithmetic complexity and scalability challenges of deep learning, there is a critical need to shift research focus towards energy efficiency. Tsetlin Machines (TMs) are a recent approach to machine learning (ML) that has demonstrated significantly reduced energy consumption compared to neural networks, while providing comparable accuracy on several benchmarks.
Saved in:
Published in: | Expert systems 2023-05, Vol.40 (4), p.n/a |
---|---|
Main authors: | Abeyrathna, Kuruge Darshana; Granmo, Ole‐Christoffer; Shafik, Rishad; Jiao, Lei; Wheeldon, Adrian; Yakovlev, Alex; Lei, Jie; Goodwin, Morten |
Format: | Article |
Language: | eng |
Subjects: | Accuracy; Artificial intelligence; Deep learning; Determinism; Energy consumption; Energy conversion efficiency; learning automata; low‐power machine learning; Machine learning; Neural networks; Tsetlin machine |
Online access: | Full text |
container_end_page | n/a |
---|---|
container_issue | 4 |
container_start_page | |
container_title | Expert systems |
container_volume | 40 |
creator | Abeyrathna, Kuruge Darshana; Granmo, Ole‐Christoffer; Shafik, Rishad; Jiao, Lei; Wheeldon, Adrian; Yakovlev, Alex; Lei, Jie; Goodwin, Morten |
description | Due to the high arithmetic complexity and scalability challenges of deep learning, there is a critical need to shift research focus towards energy efficiency. Tsetlin Machines (TMs) are a recent approach to machine learning (ML) that has demonstrated significantly reduced energy consumption compared to neural networks, while providing comparable accuracy on several benchmarks. However, TMs rely heavily on energy‐costly random number generation to stochastically guide a team of Tsetlin Automata (TA) in TM learning. In this paper, we propose a novel finite‐state learning automaton that can replace the TA in the TM, for increased determinism. The new automaton uses multi‐step deterministic state jumps to reinforce sub‐patterns, without resorting to randomization. A determinism parameter d finely controls the trade‐off between the energy consumption of random number generation and the randomization needed for increased accuracy. Randomization is controlled by flipping a coin before every d'th state jump, ignoring the state jump on tails. For example, d=1 makes every update random and d=∞ makes the automaton completely deterministic. Both theoretically and empirically, we establish that the proposed automaton converges to the optimal action almost surely. Further, when used together with the TM, only substantial degrees of determinism reduce accuracy. Energy‐wise, random number generation constitutes a substantial part of the switching energy consumption of the TM, and reducing it saves up to 11 mW of power for larger datasets with high d values. Our new learning automaton approach thus facilitates low‐energy ML. |
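The d-gated update rule described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name, the number of states, the jump size `step`, and the starting state are all hypothetical; only the gating rule (a coin flip before every d'th state jump, with the jump skipped on tails) follows the abstract.

```python
import random


class DeterministicAutomaton:
    """Sketch of a two-action learning automaton whose state jumps are
    deterministic except every d'th one, which is gated by a coin flip
    (the jump is skipped on tails). d=1 gates every jump; d=inf never flips."""

    def __init__(self, n_states=100, d=10, step=1, seed=None):
        self.n_states = n_states       # states per action side (assumed value)
        self.d = d                     # determinism parameter from the abstract
        self.step = step               # multi-step jump size (assumed value)
        self.state = n_states          # start at the decision boundary (assumed)
        self.jumps = 0                 # counts attempted state jumps
        self.rng = random.Random(seed)

    def action(self):
        # Action 0 ("exclude") for states 1..n, action 1 ("include") for n+1..2n.
        return int(self.state > self.n_states)

    def _gate(self):
        # Flip a coin before every d'th state jump; skip the jump on tails.
        self.jumps += 1
        if self.d == float("inf"):
            return True                # d = inf: completely deterministic
        if self.jumps % self.d == 0:
            return self.rng.random() < 0.5
        return True

    def reward(self):
        # Reinforce the current action: jump deeper into its state region.
        if self._gate():
            if self.action() == 1:
                self.state = min(self.state + self.step, 2 * self.n_states)
            else:
                self.state = max(self.state - self.step, 1)

    def penalize(self):
        # Weaken the current action: jump toward the opposite region.
        if self._gate():
            if self.action() == 1:
                self.state = max(self.state - self.step, 1)
            else:
                self.state = min(self.state + self.step, 2 * self.n_states)
```

With d=1 every jump passes through the coin flip, matching the fully random case; with d=∞ the gate always opens and the automaton is fully deterministic, so no random numbers are ever drawn.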
doi_str_mv | 10.1111/exsy.12836 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0266-4720 |
ispartof | Expert systems, 2023-05, Vol.40 (4), p.n/a |
issn | 0266-4720 1468-0394 |
language | eng |
recordid | cdi_proquest_journals_2800399202 |
source | Wiley Online Library Journals Frontfile Complete; Business Source Complete |
subjects | Accuracy; Artificial intelligence; Deep learning; Determinism; Energy consumption; Energy conversion efficiency; learning automata; low‐power machine learning; Machine learning; Neural networks; Tsetlin machine |
title | A multi‐step finite‐state automaton for arbitrarily deterministic Tsetlin Machine learning |