ShapeGlot: Learning Language for Shape Differentiation

In this work we explore how fine-grained differences between the shapes of common objects are expressed in language, grounded on images and 3D models of the objects. We first build a large scale, carefully controlled dataset of human utterances that each refers to a 2D rendering of a 3D CAD model so as to distinguish it from a set of shape-wise similar alternatives. Using this dataset, we develop neural language understanding (listening) and production (speaking) models that vary in their grounding (pure 3D forms via point-clouds vs. rendered 2D images), the degree of pragmatic reasoning captured (e.g. speakers that reason about a listener or not), and the neural architecture (e.g. with or without attention). We find models that perform well with both synthetic and human partners, and with held out utterances and objects. We also find that these models are amenable to zero-shot transfer learning to novel object classes (e.g. transfer from training on chairs to testing on lamps), as well as to real-world images drawn from furniture catalogs. Lesion studies indicate that the neural listeners depend heavily on part-related words and associate these words correctly with visual parts of objects (without any explicit network training on object parts), and that transfer to novel classes is most successful when known part-words are available. This work illustrates a practical approach to language grounding, and provides a case study in the relationship between object shape and linguistic structure when it comes to object differentiation.
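
The speaker-listener framing in the abstract can be made concrete with a small sketch. The snippet below is not the authors' implementation; it is a minimal illustration of a pragmatic speaker that reranks candidate utterances by how well a simulated literal listener would resolve them to the target object. All names, embedding sizes, and the dot-product listener are hypothetical stand-ins for the paper's learned image/point-cloud and language encoders.

    import numpy as np

    rng = np.random.default_rng(0)

    def listener_scores(utterance_emb, object_embs):
        # Hypothetical literal listener: rate each candidate object's
        # compatibility with the utterance via a dot product, then
        # softmax-normalize into selection probabilities.
        logits = object_embs @ utterance_emb
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    def pragmatic_speaker(candidate_utts, object_embs, target_idx):
        # Pragmatic speaker in the spirit of the paper: among candidate
        # utterances, choose the one the simulated listener is most
        # likely to resolve to the intended target object.
        best_text, best_p = None, -1.0
        for utt_emb, text in candidate_utts:
            p_target = listener_scores(utt_emb, object_embs)[target_idx]
            if p_target > best_p:
                best_text, best_p = text, p_target
        return best_text, best_p

    # Toy setup: three objects (e.g. shape-wise similar chairs) and two
    # candidate utterances, all as random 8-d vectors standing in for
    # learned embeddings.
    objects = rng.normal(size=(3, 8))
    candidates = [(rng.normal(size=8), "the one with thin legs"),
                  (rng.normal(size=8), "the wide curved back")]
    utt, p = pragmatic_speaker(candidates, objects, target_idx=0)
    print(f"chosen: {utt!r} (listener P(target) = {p:.2f})")

Reranking candidate utterances with a listener model is one standard way to realize the pragmatic reasoning the abstract mentions; the degree of such reasoning is one of the axes along which the paper's speaker models vary.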

Bibliographic Details

Published in: arXiv.org, 2019-05
Main Authors: Achlioptas, Panos; Fan, Judy; Hawkins, Robert X D; Goodman, Noah D; Guibas, Leonidas J
Format: Article
Language: English
Subjects: Differentiation; Furniture; Learning; Three dimensional models; Training; Two dimensional models
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Rights: 2019. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the "License").
Online Access: Full text