DAMUN: A Domain Adaptive Human Activity Recognition Network Based on Multimodal Feature Fusion
There is a rapidly increasing demand for Human Activity Recognition (HAR) due to its extensive applications in various fields such as smart homes, healthcare, nursing, and sports. A more stable and powerful system that can adapt to various complex actual environments with affordable cost of data acquisition is needed.
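The "domain discriminator" the abstract mentions is typically implemented with the domain-adversarial (DANN-style) trick: a gradient reversal layer sits between the shared feature extractor and the discriminator, passing features through unchanged on the forward pass and negating the gradient on the backward pass. A minimal sketch of that mechanism, assuming nothing about the paper's actual architecture (all names and the lambda value are illustrative):

```python
# Sketch of a gradient reversal layer, the standard mechanism behind a
# domain discriminator (DANN-style training). Illustrative only -- this is
# not the paper's code, and the names/lambda schedule are assumptions.

def grl_forward(features):
    # Forward pass: identity. Features reach the domain discriminator
    # unchanged, so it can still learn to tell domains apart.
    return features

def grl_backward(grads, lam=1.0):
    # Backward pass: negate and scale the discriminator's gradient before it
    # reaches the feature extractor, pushing the extractor toward
    # domain-INvariant features.
    return [-lam * g for g in grads]

# Toy check: a gradient of [1.0, 2.0] with lam=0.5 reaches the
# feature extractor as [-0.5, -1.0].
reversed_grads = grl_backward([1.0, 2.0], lam=0.5)
```

In a framework like PyTorch this would be a custom autograd `Function` whose `backward` negates the incoming gradient; the pure-Python pair above only illustrates the forward/backward contract.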
Saved in:
Published in: | IEEE sensors journal 2023-09, Vol.23 (18), p.1-1 |
---|---|
Main authors: | Feng, Xinxin; Weng, Yuxin; Li, Wenlong; Chen, Pengcheng; Zheng, Haifeng |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 1 |
---|---|
container_issue | 18 |
container_start_page | 1 |
container_title | IEEE sensors journal |
container_volume | 23 |
creator | Feng, Xinxin; Weng, Yuxin; Li, Wenlong; Chen, Pengcheng; Zheng, Haifeng |
description | There is a rapidly increasing demand for Human Activity Recognition (HAR) due to its extensive applications in various fields such as smart homes, healthcare, nursing, and sports. A more stable and powerful system that can adapt to various complex actual environments with affordable cost of data acquisition is needed. In this paper, we propose a domain adaptive human activity recognition network based on multimodal feature fusion (DAMUN) to capture information from FMCW radar and USB camera data. In the network, we add a domain discriminator to reduce data differences caused by changes in environments and user habits. To reduce the workload of radar data acquisition and processing, we also design a data augmentation model based on a generative adversarial network, which can generate radar data directly from image data. Finally, we implement a real-time application based on DAMUN on edge computing platforms. The experimental results show that the proposed network achieves clear advantages over existing methods and can effectively adapt to different environments. In addition, the network meets the real-time requirement in the prediction stage, with an average running time of about 0.17 s. |
doi_str_mv | 10.1109/JSEN.2023.3300357 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2023-09, Vol.23 (18), p.1-1 |
issn | 1530-437X; 1558-1748 |
language | eng |
recordid | cdi_ieee_primary_10209421 |
source | IEEE Electronic Library (IEL) |
subjects | Cameras; Continuous radiation; Data acquisition; Data augmentation; Edge computing; Feature extraction; feature fusion; FMCW Radar; Generative adversarial networks; Human activity recognition; Radar; Radar data; Radar imaging; Real time; real-time application; Run time (computers); Sensors; Smart buildings |
title | DAMUN: A Domain Adaptive Human Activity Recognition Network Based on Multimodal Feature Fusion |