Visually Grounded Speech Models have a Mutual Exclusivity Bias

When children learn new words, they employ constraints such as the mutual exclusivity (ME) bias: a novel word is mapped to a novel object rather than a familiar one. This bias has been studied computationally, but only in models that use discrete word representations as input, ignoring the high variability of spoken words. We investigate the ME bias in the context of visually grounded speech models that learn from natural images and continuous speech audio. Concretely, we train a model on familiar words and test its ME bias by asking it to select between a novel and a familiar object when queried with a novel word. To simulate prior acoustic and visual knowledge, we experiment with several initialisation strategies using pretrained speech and vision networks. Our findings reveal the ME bias across the different initialisation approaches, with a stronger bias in models with more prior (in particular, visual) knowledge. Additional tests confirm the robustness of our results, even when different loss functions are considered.
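
To make the shape of the test concrete, the sketch below mocks up one mutual exclusivity trial in Python: a spoken query for a novel word is embedded together with a novel and a familiar image, and the trial counts as showing the ME bias if the novel image scores higher. The `audio_encoder`, `image_encoder`, and cosine-similarity scoring are stand-in placeholders on random toy data, not the authors' model or code; a trained visually grounded model would be expected to exceed the roughly 50% chance level this dummy setup produces.

```python
# Minimal sketch of one mutual exclusivity (ME) trial as described in the abstract.
# The encoders are placeholders operating on random toy data; they are NOT the
# paper's visually grounded speech model and only illustrate the test protocol.
import numpy as np

rng = np.random.default_rng(0)

def audio_encoder(waveform: np.ndarray) -> np.ndarray:
    """Placeholder speech encoder: map a waveform to a 16-dim embedding."""
    return waveform[:16]

def image_encoder(image: np.ndarray) -> np.ndarray:
    """Placeholder vision encoder: map an image to a 16-dim embedding."""
    return image.flatten()[:16]

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between an audio embedding and an image embedding."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def me_trial(novel_word_audio, novel_image, familiar_image) -> bool:
    """One ME trial: is the novel spoken word matched to the novel object?"""
    query = audio_encoder(novel_word_audio)
    return (similarity(query, image_encoder(novel_image))
            > similarity(query, image_encoder(familiar_image)))

# Toy data standing in for continuous speech audio and natural images.
trials = [(rng.normal(size=16000), rng.normal(size=(32, 32)), rng.normal(size=(32, 32)))
          for _ in range(100)]
me_rate = np.mean([me_trial(*t) for t in trials])
print(f"Fraction of trials selecting the novel object: {me_rate:.2f}")
```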

Bibliographic Details
Published in: arXiv.org, 2024-03
Main Authors: Nortje, Leanne; Oneaţă, Dan; Matusevych, Yevgen; Kamper, Herman
Format: Article
Language: English
Subjects: Bias; Speech; Words (language)
Online Access: Full text
Publisher: Ithaca: Cornell University Library, arXiv.org
Date: 2024-03-20
Rights: 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
EISSN: 2331-8422
Source: Free E-Journals
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T17%3A50%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Visually%20Grounded%20Speech%20Models%20have%20a%20Mutual%20Exclusivity%20Bias&rft.jtitle=arXiv.org&rft.au=Nortje,%20Leanne&rft.date=2024-03-20&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2973290259%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2973290259&rft_id=info:pmid/&rfr_iscdi=true