Sequence Encoders Enable Large‐Scale Lexical Modeling: Reply to Bowers and Davis (2009)

Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large‐scale word‐reading models. The reasons for this success are explained and stand as counterarguments to claims made by Bowers and Davis.
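The core idea the abstract describes can be sketched in a few lines: an Elman-style recurrent network reads a variable-length string one symbol at a time and leaves behind a fixed-width hidden vector regardless of input length. This is an illustrative sketch only, with arbitrary untrained random weights and made-up sizes; the published model additionally trains these weights so the vector can reconstruct the input sequence.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
HIDDEN = 16  # width of the fixed-size representation (illustrative choice)

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(HIDDEN, len(ALPHABET)))  # input -> hidden
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))        # hidden -> hidden

def one_hot(ch):
    """One-hot vector for a single lowercase letter."""
    v = np.zeros(len(ALPHABET))
    v[ALPHABET.index(ch)] = 1.0
    return v

def encode(word):
    """Fold a variable-length word into one fixed-width hidden vector."""
    h = np.zeros(HIDDEN)
    for ch in word:
        h = np.tanh(W_in @ one_hot(ch) + W_rec @ h)
    return h

# Words of different lengths map to vectors of the same width.
print(encode("cat").shape, encode("psychology").shape)  # (16,) (16,)
```

The point of the fixed width is that downstream layers of a word-reading model can consume any word through the same interface, which is what lifts the monosyllabic restriction discussed in the abstract.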

Detailed Description

Saved in:
Bibliographic Details
Published in: Cognitive science, 2009-09, Vol.33 (7), p.1187-1191
Main authors: Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1111/j.1551-6709.2009.01064.x
ISSN: 0364-0213
EISSN: 1551-6709
PMID: 20046958
Publisher: Blackwell Publishing Ltd, Oxford, UK
Source: Wiley Online Library Journals Frontfile Complete; Wiley Online Library Free Content; Education Source (EBSCOhost); EZB-FREE-00999 freely available EZB journals
Subjects:
Biological and medical sciences
Fundamental and applied biological sciences. Psychology
Information processing
Language
Large‐scale modeling
Orthography
Phonetics
Phonology
Production and perception of written language
Psychology. Psychoanalysis. Psychiatry
Psychology. Psychophysiology
Recognition
Sequence encoder
Wordforms