Ensemble-of-Concept Models for Unsupervised Formation of Multiple Categories
Recent studies have shown that robots can form concepts and understand the meanings of words through inference. The key idea underlying these studies is the "multimodal categorization" of a robot's experiences. Despite this success in robotic concept formation, a major drawback of previous studies is that they have focused mainly on object concepts. Human concepts, however, are not limited to object concepts; they also include other kinds, such as those connected to the tactile sense and to color. In this paper, we propose a novel model, the ensemble-of-concept models (EoCMs), to form various kinds of concepts. In EoCMs, we introduce weights that represent the strength of the connections between modalities and concepts. By changing these weights, many concepts tied to particular modalities can be formed; however, some of these concepts are meaningless to humans. To communicate with humans, robots need to form concepts that are meaningful to us. We therefore utilize utterances taught by human users as the robot observes objects: the robot connects the words included in these teaching utterances with the formed concepts and selects meaningful concepts for communicating with users. The experimental results show that the robot can form not only object concepts but also others, such as color-related and haptic concepts. Furthermore, using word2vec, we compare the meanings of the words acquired by the robot in connecting them to the formed concepts.
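As an illustration of the final point above, the comparison of acquired word meanings with word2vec, here is a minimal sketch. It is not code from the paper: it assumes the gensim library and a pretrained word2vec file (the filename below is a common placeholder), and it simply computes cosine similarities between pairs of taught words, which is the kind of comparison the abstract describes.

```python
# Minimal sketch (not from the paper): comparing word meanings with a pretrained
# word2vec model via gensim. The model filename is a placeholder assumption.
from gensim.models import KeyedVectors

# Load pretrained embeddings in binary word2vec format.
wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Cosine similarity between taught words: words tied to the same kind of concept
# (e.g., two color words) should score higher than words from different concepts.
print(wv.similarity("red", "blue"))    # two color-related words
print(wv.similarity("red", "bottle"))  # a color word vs. an object word
```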
Published in: | IEEE Transactions on Cognitive and Developmental Systems, 2018-12, Vol. 10 (4), p. 1043-1057 |
---|---|
Main authors: | Nakamura, Tomoaki; Nagai, Takayuki |
Format: | Article |
Language: | English |
Subjects: | Color; Color imaging; Concept formation; Haptic interfaces; language acquisition; multimodal object dataset; Pragmatics; Robot sensing systems; Robots; Visualization |
Online access: | Order full text |
DOI: | 10.1109/TCDS.2017.2745502 |
ISSN: | 2379-8920 |
EISSN: | 2379-8939 |
Source: | IEEE Electronic Library (IEL) |