Challenges of the Creation of a Dataset for Vision Based Human Hand Action Recognition in Industrial Assembly
This work presents the Industrial Hand Action Dataset V1, an industrial assembly dataset consisting of 12 classes with 459,180 images in the basic version and 2,295,900 images after spatial augmentation. Compared to other freely available datasets tested, it has an above-average duration and, in addition, meets the technical and legal requirements for industrial assembly lines. Furthermore, the dataset contains occlusions, hand-object interaction, and various fine-grained human hand actions for industrial assembly tasks that were not found in combination in the examined datasets. The recorded ground truth assembly classes were selected after extensive observation of real-world use cases. A Gated Transformer Network, a state-of-the-art transformer model, was adapted; with 18,269,959 trainable parameters it achieved a test accuracy of 86.25% before hyperparameter tuning, demonstrating that sequential deep learning models can be trained on this dataset.
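The abstract's counts imply that spatial augmentation expands the 459,180 base images by a factor of five to 2,295,900. A minimal sketch of how such a fivefold expansion could arise; the specific transform names below are illustrative assumptions, not the authors' actual augmentation pipeline:

```python
# Hypothetical set of spatial transforms: the original frame plus four
# augmented variants. The record does not specify which transforms were used.
TRANSFORMS = ["identity", "horizontal_flip", "rotation", "translation", "scaling"]

BASE_IMAGES = 459_180  # basic version of the dataset

def augmented_count(n_images: int, n_transforms: int) -> int:
    """Each source image yields one output per spatial transform."""
    return n_images * n_transforms

print(augmented_count(BASE_IMAGES, len(TRANSFORMS)))  # 2295900
```

Any per-image set of five spatial variants (including the identity) reproduces the published count exactly.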
Saved in:
Published in: | arXiv.org 2023-03 |
---|---|
Main authors: | Sturm, Fabian; Hergenroether, Elke; Reinhardt, Julian; Vojnovikj, Petar Smilevski; Siegel, Melanie |
Format: | Article |
Language: | eng |
Keywords: | Assembly lines; Datasets; Machine learning |
Online access: | Full text |
container_title | arXiv.org |
creator | Sturm, Fabian; Hergenroether, Elke; Reinhardt, Julian; Vojnovikj, Petar Smilevski; Siegel, Melanie |
description | This work presents the Industrial Hand Action Dataset V1, an industrial assembly dataset consisting of 12 classes with 459,180 images in the basic version and 2,295,900 images after spatial augmentation. Compared to other freely available datasets tested, it has an above-average duration and, in addition, meets the technical and legal requirements for industrial assembly lines. Furthermore, the dataset contains occlusions, hand-object interaction, and various fine-grained human hand actions for industrial assembly tasks that were not found in combination in the examined datasets. The recorded ground truth assembly classes were selected after extensive observation of real-world use cases. A Gated Transformer Network, a state-of-the-art transformer model, was adapted; with 18,269,959 trainable parameters it achieved a test accuracy of 86.25% before hyperparameter tuning, demonstrating that sequential deep learning models can be trained on this dataset. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2784692394 |
source | Free E-Journals |
subjects | Assembly lines; Datasets; Machine learning |
title | Challenges of the Creation of a Dataset for Vision Based Human Hand Action Recognition in Industrial Assembly |