Modeling transformer architecture with attention layer for human activity recognition
Human activity recognition (HAR) is essential in numerous fields, including medicine, sports, and security. Traditional HAR methods often rely on complex feature extraction from raw input data, while convolutional neural networks (CNNs) are primarily designed for 2D data. The proposed approach seeks...
Saved in:
Published in: | Neural computing & applications 2024-04, Vol.36 (10), p.5515-5528 |
---|---|
Main authors: | Pareek, Gunjan ; Nigam, Swati ; Singh, Rajiv |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 5528 |
---|---|
container_issue | 10 |
container_start_page | 5515 |
container_title | Neural computing & applications |
container_volume | 36 |
creator | Pareek, Gunjan ; Nigam, Swati ; Singh, Rajiv |
description | Human activity recognition (HAR) is essential in numerous fields, including medicine, sports, and security. Traditional HAR methods often rely on complex feature extraction from raw input data, while convolutional neural networks (CNNs) are primarily designed for 2D data. The proposed approach seeks to overcome these limitations by leveraging both spatial and temporal attributes for improved action detection and enhancing the understanding of human movements across adjacent frames. This research aims to address the challenges of HAR by introducing a new model that combines a 3D CNN architecture with an attention layer. A 3D convolution transformer is employed to capture intricate spatial and temporal features, generate multiple data channels from input frames, and optimize performance through regularization and model ensemble techniques. The main findings reveal outstanding results on benchmark datasets, with an accuracy of 98.09% and 99.09% on the Weizmann and UCF101 datasets, respectively. These results underscore the model's effectiveness in accurately identifying human activities in movie-based natural environments. |
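The abstract describes, at a high level, 3D convolutions that extract spatio-temporal features from a stack of frames, followed by an attention layer over those features. The paper's full text is not part of this record, so the NumPy sketch below is only a toy illustration of that general idea: the clip size, kernel size, and single-head self-attention over per-frame tokens are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv3d_valid(clip, kernel):
    # clip: (T, H, W) grayscale frames; kernel: (t, h, w) single 3D filter.
    # Valid (no-padding) 3D cross-correlation over time, height, and width.
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out

def temporal_attention(tokens):
    # tokens: (T, d) per-frame feature vectors.
    # Single-head scaled dot-product self-attention across the time axis.
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ tokens

rng = np.random.default_rng(0)
clip = rng.standard_normal((8, 16, 16))   # 8 toy frames of 16x16 pixels
kernel = rng.standard_normal((3, 3, 3))   # one 3x3x3 spatio-temporal filter
feat = conv3d_valid(clip, kernel)         # (6, 14, 14) spatio-temporal features
tokens = feat.reshape(feat.shape[0], -1)  # one flattened token per output frame
attended = temporal_attention(tokens)     # (6, 196), frames re-weighted by attention
print(attended.shape)                     # → (6, 196)
```

A real implementation along these lines would use many learned 3D filters (the "multiple data channels" the abstract mentions) and stack such blocks, but the two operations above are the core of a 3D-conv-plus-attention design.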
doi_str_mv | 10.1007/s00521-023-09362-7 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0941-0643 |
ispartof | Neural computing & applications, 2024-04, Vol.36 (10), p.5515-5528 |
issn | 0941-0643 1433-3058 |
language | eng |
recordid | cdi_proquest_journals_2937178578 |
source | SpringerLink Journals - AutoHoldings |
subjects | Accuracy ; Artificial Intelligence ; Artificial neural networks ; Computational Biology/Bioinformatics ; Computational Science and Engineering ; Computer Science ; Data Mining and Knowledge Discovery ; Datasets ; Deep learning ; Feature extraction ; Human activity recognition ; Human motion ; Image Processing and Computer Vision ; Motion perception ; Neural networks ; Original Article ; Probability and Statistics in Computer Science ; Regularization |
title | Modeling transformer architecture with attention layer for human activity recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T10%3A57%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Modeling%20transformer%20architecture%20with%20attention%20layer%20for%20human%20activity%20recognition&rft.jtitle=Neural%20computing%20&%20applications&rft.au=Pareek,%20Gunjan&rft.date=2024-04-01&rft.volume=36&rft.issue=10&rft.spage=5515&rft.epage=5528&rft.pages=5515-5528&rft.issn=0941-0643&rft.eissn=1433-3058&rft_id=info:doi/10.1007/s00521-023-09362-7&rft_dat=%3Cproquest_cross%3E2937178578%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2937178578&rft_id=info:pmid/&rfr_iscdi=true |