Correlation Net: Spatiotemporal multimodal deep learning for action recognition
This paper describes a network that captures multimodal correlations over arbitrary timestamps. The proposed scheme operates as a complementary, extended network over a multimodal convolutional neural network (CNN). Spatial and temporal streams are required for action recognition by a deep CNN, but...
Saved in:
Published in: | Signal processing. Image communication 2020-03, Vol.82, p.115731, Article 115731 |
---|---|
Main authors: | Yudistira, Novanto ; Kurita, Takio |
Format: | Article |
Language: | eng |
Keywords: | Activity recognition ; Artificial neural networks ; CNN ; Correlation ; Correlation Net ; Deep learning ; Fusion ; Recognition ; Streams |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | 115731 |
container_title | Signal processing. Image communication |
container_volume | 82 |
creator | Yudistira, Novanto ; Kurita, Takio |
description | This paper describes a network that captures multimodal correlations over arbitrary timestamps. The proposed scheme operates as a complementary, extended network over a multimodal convolutional neural network (CNN). Spatial and temporal streams are required for action recognition by a deep CNN, but overfitting reduction and fusing these two streams remain open problems. The existing fusion approach averages the two streams. Here we propose a correlation network with a Shannon fusion for learning a pre-trained CNN. A long-range video may contain spatiotemporal correlations over arbitrary times, which can be captured by forming the correlation network from simple fully connected layers. This approach was found to complement the existing network fusion methods. The importance of multimodal correlation is validated in comparison experiments on the UCF-101 and HMDB-51 datasets. The multimodal correlation enhanced the accuracy of the video recognition results.
•The proposed model captures spatiotemporal correlation without time correspondence. •Shannon fusion is introduced to select features based on distribution entropy. •The proposed network provides complementary information for long video recognition. (A hedged code sketch of the correlation network and the entropy-based fusion follows the record fields below.) |
doi_str_mv | 10.1016/j.image.2019.115731 |
format | Article |
publisher | Amsterdam: Elsevier B.V |
fulltext | fulltext |
identifier | ISSN: 0923-5965 |
ispartof | Signal processing. Image communication, 2020-03, Vol.82, p.115731, Article 115731 |
issn | 0923-5965 ; 1879-2677 |
language | eng |
recordid | cdi_proquest_journals_2369326251 |
source | Elsevier ScienceDirect Journals |
subjects | Activity recognition ; Artificial neural networks ; CNN ; Correlation ; Correlation Net ; Deep learning ; Fusion ; Recognition ; Streams |
title | Correlation Net: Spatiotemporal multimodal deep learning for action recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T20%3A46%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Correlation%20Net:%20Spatiotemporal%20multimodal%20deep%20learning%20for%20action%20recognition&rft.jtitle=Signal%20processing.%20Image%20communication&rft.au=Yudistira,%20Novanto&rft.date=2020-03&rft.volume=82&rft.spage=115731&rft.pages=115731-&rft.artnum=115731&rft.issn=0923-5965&rft.eissn=1879-2677&rft_id=info:doi/10.1016/j.image.2019.115731&rft_dat=%3Cproquest_cross%3E2369326251%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2369326251&rft_id=info:pmid/&rft_els_id=S0923596519304163&rfr_iscdi=true |
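The abstract above only outlines the method: a correlation network built from simple fully connected layers over two-stream (spatial and temporal) CNN features, combined with the per-stream scores through a Shannon (entropy-based) fusion. The sketch below is a minimal PyTorch illustration of that idea under stated assumptions, not the authors' implementation: the 2048-dimensional pooled features, the concatenation of the two streams, the layer widths, and the inverse-entropy weighting used for fusion are all choices made for the example.

```python
# Hypothetical sketch (not the authors' released code): a correlation head made of
# plain fully connected layers on top of two-stream CNN features, plus an
# entropy-based ("Shannon") weighting applied when fusing per-stream class scores.
# Feature sizes, layer widths, and the softmax/entropy details are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrelationNet(nn.Module):
    """Fully connected layers over concatenated spatial/temporal features."""

    def __init__(self, spatial_dim=2048, temporal_dim=2048,
                 hidden_dim=1024, num_classes=101):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(spatial_dim + temporal_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, spatial_feat, temporal_feat):
        # Features may come from arbitrary (non-corresponding) timestamps;
        # the FC layers learn their joint correlation statistics.
        x = torch.cat([spatial_feat, temporal_feat], dim=1)
        return self.fc(x)


def shannon_entropy(logits):
    """Shannon entropy of the softmax distribution for each sample (in nats)."""
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)


def entropy_weighted_fusion(score_list):
    """Fuse per-stream class scores, down-weighting streams whose score
    distribution has high entropy (i.e. is less confident). This is only one
    plausible reading of 'Shannon fusion'."""
    weights = torch.stack([1.0 / (1.0 + shannon_entropy(s)) for s in score_list])
    weights = weights / weights.sum(dim=0, keepdim=True)  # normalize per sample
    fused = sum(w.unsqueeze(1) * F.softmax(s, dim=1)
                for w, s in zip(weights, score_list))
    return fused


if __name__ == "__main__":
    B = 4
    spatial = torch.randn(B, 2048)   # e.g. pooled RGB-stream CNN features
    temporal = torch.randn(B, 2048)  # e.g. pooled optical-flow-stream features
    corr_net = CorrelationNet()
    corr_scores = corr_net(spatial, temporal)

    # Combine the correlation head with (stand-in) per-stream classifier scores.
    rgb_scores, flow_scores = torch.randn(B, 101), torch.randn(B, 101)
    fused = entropy_weighted_fusion([rgb_scores, flow_scores, corr_scores])
    print(fused.shape)  # torch.Size([4, 101])
```

Weighting each stream by the inverse Shannon entropy of its softmax output is only one way to "select features based on distribution entropy"; the published Shannon fusion may instead operate at the feature level, and the paper should be consulted for the exact formulation.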