Efficient segmentation and 1D-CNN model optimization for recognizing human actions with mobile sensors
This research proposes a unique approach to human action recognition using mobile sensor data and a computationally efficient 1D Convolutional Neural Network (1D-CNN). A 1D-CNN model is constructed to recognize human actions using data from an accelerometer sensor. To improve model performance, the study investigates the optimal number of layers and training epochs.
Saved in:
Main authors: | Thyagharajan, K. K.; Kalaiarasi, G.; Saravanan, P.; Balaji, L.; Vignesh, T. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Accelerometers; Annotations; Artificial neural networks; False alarms; Human activity recognition; Power consumption; Segmentation; Sensors; Wearable technology |
Online access: | Full text |
container_issue | 1 |
---|---|
container_volume | 3122 |
creator | Thyagharajan, K. K.; Kalaiarasi, G.; Saravanan, P.; Balaji, L.; Vignesh, T. |
description | This research proposes a unique approach to human action recognition using mobile sensor data and a computationally efficient 1D Convolutional Neural Network (1D-CNN). A 1D-CNN model is constructed to recognize human actions using data from an accelerometer sensor. To improve model performance, the study investigates the optimal number of layers and training epochs. The proposed method also automates data annotation to simplify the training process. This research highlights the significance of the chosen model and of the segment size in action recognition, and investigates the ideal segmentation size for precisely identifying actions. Experimental analysis confirms the effectiveness of the chosen segmentation length in recognizing human actions by reducing false alarms. The paper demonstrates that increasing the number of fully connected layers does not increase precision or accuracy. It concludes by proposing subject-independent methods for action recognition and optimizing power consumption for wearable devices, and highlights the potential of mobile sensor data and 1D-CNNs for future research in human action recognition. The method and model presented in this paper achieve a recognition accuracy of 95%. |
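The emphasis on segmentation length in the description above implies a sliding-window preprocessing step that splits the accelerometer stream into fixed-length segments, each of which becomes one input example for the 1D-CNN. The sketch below is a minimal illustration of such windowing, not the paper's actual implementation; the sampling rate, window length, and overlap are assumed values for demonstration only:

```python
import numpy as np

def segment_signal(samples, window_size, stride):
    """Split a 1D sensor stream into fixed-length, possibly overlapping windows.

    Each window becomes one training example for a 1D-CNN. The window_size
    controls the trade-off the paper investigates: too short and a window may
    miss an action, too long and it may mix several actions together.
    """
    windows = []
    for start in range(0, len(samples) - window_size + 1, stride):
        windows.append(samples[start:start + window_size])
    return np.stack(windows) if windows else np.empty((0, window_size))

# Illustrative example: 10 s of a 50 Hz signal, 2 s windows with 50% overlap
signal = np.arange(500, dtype=float)
X = segment_signal(signal, window_size=100, stride=50)
print(X.shape)  # (9, 100)
```

With 50% overlap each sample (except near the edges) appears in two windows, which increases the number of training examples without collecting more data; a real pipeline would pair each window with an activity label before training.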
doi_str_mv | 10.1063/5.0216521 |
format | Conference Proceeding |
contributor | P, Thangaraj; H, Shankar; K, Mohana Sundaram |
publisher | Melville: American Institute of Physics |
date | 2024-06-18 |
rights | 2024 Author(s). Published under an exclusive license by AIP Publishing. |
coden | APCPCS |
eissn | 1551-7616 |
tpages | 14 |
fulltext | fulltext |
identifier | ISSN: 0094-243X |
ispartof | AIP conference proceedings, 2024, Vol.3122 (1) |
issn | 0094-243X; 1551-7616 |
language | eng |
recordid | cdi_proquest_journals_3069308516 |
source | AIP Journals Complete |
subjects | Accelerometers; Annotations; Artificial neural networks; False alarms; Human activity recognition; Power consumption; Segmentation; Sensors; Wearable technology |
title | Efficient segmentation and 1D-CNN model optimization for recognizing human actions with mobile sensors |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T17%3A40%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_scita&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Efficient%20segmentation%20and%201D-CNN%20model%20optimization%20for%20recognizing%20human%20actions%20with%20mobile%20sensors&rft.btitle=AIP%20conference%20proceedings&rft.au=Thyagharajan,%20K.%20K.&rft.date=2024-06-18&rft.volume=3122&rft.issue=1&rft.issn=0094-243X&rft.eissn=1551-7616&rft.coden=APCPCS&rft_id=info:doi/10.1063/5.0216521&rft_dat=%3Cproquest_scita%3E3069308516%3C/proquest_scita%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3069308516&rft_id=info:pmid/&rfr_iscdi=true |