Simultaneous learning of spatial visual attention and physical actions


Bibliographic details
Main authors: Borji, A; Ahmadabadi, M N; Araabi, B N
Format: Conference proceeding
Language: English
Subjects: Biology; Feature extraction; History; Learning; Navigation; Object recognition; Visualization
Online access: Order full text
container_end_page 1276
container_start_page 1270
container_title 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems
creator Borji, A
Ahmadabadi, M N
Araabi, B N
description This paper introduces a new method for learning top-down, task-driven visual attention control together with physical actions in interactive environments. Our method is based on the Reinforcement Learning of Visual Classes (RLVC) algorithm and adapts it to learn spatial visual selection in order to reduce computational complexity. The proposed algorithm also addresses the perceptual aliasing that arises when previous actions and perceptions are unknown. Continued learning shows that our method is robust to perturbations in perceptual information. Our method also supports object recognition when class labels are used in place of physical actions. We aim for maximum generalization while performing only local processing. Experiments on visual navigation and object recognition tasks show that our method is more efficient in terms of computational complexity and biologically more plausible.
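To make the idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of joint attention-and-action learning with tabular Q-learning: at each step the agent picks both a fixation region and a physical action, and the state pairs the feature seen at the attended region with the previous joint action, one simple way to mitigate the aliasing the abstract mentions. The toy environment, constants, and all function names below are hypothetical.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
N_REGIONS = 4                                  # candidate fixation points per frame
PHYSICAL_ACTIONS = ("left", "right", "forward")
# A joint action = (region to attend next, physical action to emit now).
ACTIONS = [(r, a) for r in range(N_REGIONS) for a in PHYSICAL_ACTIONS]
Q = defaultdict(float)                         # Q[(state, joint_action)] -> value

def observe(frame, region, prev_action):
    # State = local feature at the attended region plus the previous joint
    # action; conditioning on history reduces perceptual aliasing.
    return (frame[region], prev_action)

def choose(state):
    # Epsilon-greedy over the joint attention/action space.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard one-step Q-learning backup.
    target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# Toy task: region 2 of every frame shows a cue naming the rewarded physical
# action; the other regions hold distractors, so the agent must learn to
# keep fixating region 2 to act correctly.
def make_frame(cue):
    frame = [random.choice(PHYSICAL_ACTIONS) for _ in range(N_REGIONS)]
    frame[2] = cue
    return frame

cue = random.choice(PHYSICAL_ACTIONS)
frame, region, prev_action = make_frame(cue), 0, ACTIONS[0]
for t in range(20000):
    state = observe(frame, region, prev_action)
    action = choose(state)                     # fixation for next frame + motor act
    reward = 1.0 if action[1] == cue else -0.1
    cue = random.choice(PHYSICAL_ACTIONS)
    next_frame = make_frame(cue)
    region = action[0]
    next_state = observe(next_frame, region, action)
    update(state, action, reward, next_state)
    frame, prev_action = next_frame, action

Swapping PHYSICAL_ACTIONS for class labels turns the same loop into the attention-driven recognizer the abstract describes.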
doi_str_mv 10.1109/IROS.2010.5650749
format Conference Proceeding
identifier ISSN: 2153-0858; EISSN: 2153-0866; ISBN: 9781424466740; EISBN: 9781424466764
ispartof 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010, p.1270-1276
issn 2153-0858
2153-0866
language eng
recordid cdi_ieee_primary_5650749
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Biology
Feature extraction
History
Learning
Navigation
Object recognition
Visualization
title Simultaneous learning of spatial visual attention and physical actions
url https://ieeexplore.ieee.org/document/5650749