MarNASNets: Towards CNN model architectures specific to sensor-based human activity recognition
Deep learning (DL) models for sensor-based human activity recognition (HAR) are still in their nascent stages compared with those for image recognition. HAR inference is generally performed on edge devices such as smartphones to preserve privacy. However, lightweight DL models for HAR while...
Saved in:
Published in: | IEEE Sensors Journal, 2023-07, p. 1-1 |
---|---|
Main authors: | Kobayashi, Satoshi; Hasegawa, Tatsuhito; Miyoshi, Takeru; Koshino, Makoto |
Format: | Article |
Language: | eng |
Keywords: | Computational modeling; Computer architecture; Convolutional Neural Network; Convolutional neural networks; Deep Learning; Human activity recognition; Image recognition; Neural Architecture Search; Sensors; Smart phones |
Online access: | Order full text |
container_end_page | 1 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE sensors journal |
container_volume | |
creator | Kobayashi, Satoshi; Hasegawa, Tatsuhito; Miyoshi, Takeru; Koshino, Makoto |
description | Deep learning (DL) models for sensor-based human activity recognition (HAR) are still in their nascent stages compared with those for image recognition. HAR inference is generally performed on edge devices such as smartphones to preserve privacy. However, lightweight DL models for HAR that meet these hardware limitations are lacking. In this study, using neural architecture search (NAS), we investigated effective DL model architectures that can be used for inference on smartphones. We designed multiple search spaces covering the type of convolution, the kernel size of the convolution, the type of skip operation, the number of layers, and the number of output filters, and explored them by Bayesian optimization. We propose mobile-aware convolutional neural network (CNN) models for sensor-based HAR obtained by NAS, called MarNASNets. We constructed four MarNASNet networks, MarNASNet-A to D, each with a different model size and one of four parameter search-space patterns. Experimental results show that MarNASNets achieve the same accuracy as existing CNN architectures with fewer parameters and are effective model architectures for on-device, sensor-based HAR. We also developed Activitybench, an iOS app for measuring model performance on smartphones, and evaluated the on-device performance of each model. The explored MarNASNets achieved accuracy comparable to that of existing CNN models with smaller model sizes. MarNASNet-C achieved accuracies of 92.60%, 94.52%, and 88.92% for HASC, UCI, and WISDM, respectively. For HASC and UCI in particular, MarNASNet-C achieved the highest accuracies despite its small model size. Their latency was also comparable to that of existing CNN models, enabling real-time on-device inference. |
doi_str_mv | 10.1109/JSEN.2023.3292380 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2023-07, p.1-1 |
issn | 1530-437X |
language | eng |
recordid | cdi_ieee_primary_10179199 |
source | IEEE Xplore |
subjects | Computational modeling; Computer architecture; Convolutional Neural Network; Convolutional neural networks; Deep Learning; Human activity recognition; Image recognition; Neural Architecture Search; Sensors; Smart phones |
title | MarNASNets: Towards CNN model architectures specific to sensor-based human activity recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T19%3A00%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MarNASNets:%20Towards%20CNN%20model%20architectures%20specific%20to%20sensor-based%20human%20activity%20recognition&rft.jtitle=IEEE%20sensors%20journal&rft.au=Kobayashi,%20Satoshi&rft.date=2023-07-11&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1530-437X&rft.coden=ISJEAZ&rft_id=info:doi/10.1109/JSEN.2023.3292380&rft_dat=%3Cieee_RIE%3E10179199%3C/ieee_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10179199&rfr_iscdi=true |
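The description field above outlines the core of the method: Bayesian optimization over search spaces for the convolution type, kernel size, skip operation, number of layers, and number of output filters. As a purely illustrative companion to that summary, the sketch below shows how such a search space could be encoded with keras-tuner's Bayesian optimizer. It is not the authors' MarNASNet implementation; the class count, input window shape, layer choices, and value ranges are all assumptions.

```python
# Minimal sketch of a Bayesian-optimization NAS loop over a MarNASNet-style search
# space (convolution type, kernel size, skip connections, layer count, filter count).
# NOT the paper's implementation; shapes, ranges, and layer choices are assumptions.
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 6                # assumed number of activity classes
WINDOW_LEN, CHANNELS = 256, 3  # assumed accelerometer window: 256 samples x 3 axes


def build_model(hp):
    inputs = tf.keras.Input(shape=(WINDOW_LEN, CHANNELS))
    x = inputs
    # The number of convolutional blocks is itself a searched hyperparameter.
    for i in range(hp.Int("num_layers", 2, 6)):
        filters = hp.Int(f"filters_{i}", 16, 128, step=16)
        kernel = hp.Choice(f"kernel_{i}", [3, 5, 7])
        conv_type = hp.Choice(f"conv_type_{i}", ["standard", "separable"])
        conv_cls = layers.Conv1D if conv_type == "standard" else layers.SeparableConv1D
        y = conv_cls(filters, kernel, padding="same", activation="relu")(x)
        y = layers.BatchNormalization()(y)
        if hp.Boolean(f"skip_{i}"):
            # Project the shortcut with a 1x1 convolution so channel counts match.
            shortcut = layers.Conv1D(filters, 1, padding="same")(x)
            y = layers.Add()([y, shortcut])
        x = layers.MaxPooling1D(2)(y)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Bayesian optimization over the hyperparameters declared in build_model.
tuner = kt.BayesianOptimization(
    build_model,
    objective="val_accuracy",
    max_trials=20,
    directory="nas_results",
    project_name="har_search",
)
# x_train, y_train, x_val, y_val stand in for windowed sensor data and labels:
# tuner.search(x_train, y_train, epochs=20, validation_data=(x_val, y_val))
# best_model = tuner.get_best_models(num_models=1)[0]
```

In this style of search, each trial samples one concrete architecture from the declared space and trains it briefly, and the tuner's probabilistic surrogate steers later trials toward configurations with higher validation accuracy; model size can then be traded off against accuracy when selecting the final network, which is the trade-off the MarNASNet-A to D variants reflect.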