Attention-based encoder-decoder networks for workflow recognition
Behavior recognition is a fundamental yet challenging task in intelligent surveillance systems and plays an increasingly important role in “Industry 4.0”. However, monitoring the workflow of both workers and machines during production is quite difficult in complex industrial environments. In this paper, we propose a novel workflow recognition framework, termed attention-based workflow recognition (AWR), which recognizes the behavior of working subjects with a well-designed encoder-decoder structure. To improve recognition accuracy, a temporal attention cell (AttCell) is introduced to draw a dynamic attention distribution in the last stage of the framework. In addition, a Rough-to-Refine phase localization model is exploited to improve localization accuracy; it can effectively identify the boundaries of a specific phase instance in long untrimmed videos. Comprehensive experiments show a 1.4% mAP@IoU=0.4 improvement on the THUMOS’14 dataset and a 3.4% mAP@IoU=0.4 improvement on a hand-crafted workflow detection dataset, compared with the advanced GTAN pipeline. More remarkably, the effectiveness of the workflow recognition system is validated in a real-world production scenario.
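
The abstract names two technical ingredients: a temporal attention cell (AttCell) that weights frame-level encoder features before the final classification stage, and phase localization evaluated at a temporal-IoU threshold (mAP@IoU=0.4). The sketch below is a minimal NumPy illustration of both ideas; the shapes, parameter names, and additive attention form are assumptions for exposition, not the paper's actual AttCell, Rough-to-Refine model, or evaluation code.

```python
# Minimal sketch (assumptions, not the paper's implementation): additive temporal
# attention over per-frame features, plus the temporal IoU used by mAP@IoU metrics.
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temporal_attention_pool(features, w_proj, v_score):
    """Pool T frame-level features into one clip descriptor with attention.

    features : (T, D) frame features from the encoder (hypothetical shapes).
    w_proj   : (D, H) projection matrix; v_score : (H,) scoring vector.
    Returns the (D,) attention-weighted descriptor and the (T,) weights.
    """
    scores = np.tanh(features @ w_proj) @ v_score   # (T,) unnormalized frame scores
    alpha = softmax(scores)                         # dynamic attention distribution over time
    context = alpha @ features                      # weighted sum of frame features
    return context, alpha

def temporal_iou(pred, gt):
    """IoU of two temporal segments given as (start, end) pairs in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D, H = 16, 128, 64                           # toy sizes, not taken from the paper
    feats = rng.standard_normal((T, D))
    ctx, alpha = temporal_attention_pool(
        feats, rng.standard_normal((D, H)), rng.standard_normal(H))
    print(ctx.shape, round(alpha.sum(), 6))         # (128,) 1.0 -- weights sum to one
    # A predicted phase counts as correct at IoU=0.4 only if it overlaps the
    # ground-truth segment enough, e.g. (3 s, 9 s) vs. (5 s, 12 s) gives IoU ~ 0.44:
    print(temporal_iou((3.0, 9.0), (5.0, 12.0)) >= 0.4)   # True
```

In this reading, the pooled descriptor would feed the decoder's last-stage classifier, and a localized phase contributes to mAP@IoU=0.4 only when its temporal IoU with a ground-truth instance reaches 0.4; both points should be checked against the full paper.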
Published in: | Multimedia tools and applications 2021-11, Vol.80 (28-29), p.34973-34995 |
---|---|
Main authors: | Zhang, Min; Hu, Haiyang; Li, Zhongjin; Chen, Jie |
Format: | Article |
Language: | English |
Subjects: | 1166- Advances of machine learning in data analytics and visual information processing; Coders; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Datasets; Encoders-Decoders; Localization; Multimedia Information Systems; Recognition; Special Purpose and Application-Based Systems; Workflow |
Publisher: | New York: Springer US |
DOI: | 10.1007/s11042-021-10633-5 |
ISSN: | 1380-7501 |
EISSN: | 1573-7721 |
Source: | SpringerLink Journals |
Online access: | Full text |