Event analysis and interpretation of human activity for augmented reality-based assistant systems
In this paper, a concept and its implementation of an ergonomic cognitive assistant system for supporting human workers in complex assembly tasks in industrial environments are introduced. Depending on the level of the user's product knowledge, this mixed-initiative system follows and gains knowledge from the human worker's construction steps, while it is also able to automatically give hints and instruct the worker whenever needed. The presented agent is based on a closed human-machine interaction loop consisting of the multimodal perception of the worker's actions, the comparison with the system's knowledge about the production task, and the display of the appropriate next assembly instruction step. First experimental results of the assistant system are demonstrated on a simplified use case: the construction of a small toy car using augmented reality display techniques.
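The abstract describes a closed human-machine interaction loop: perceive the worker's action, compare it against the system's knowledge of the production task, and display the next assembly instruction. A minimal sketch of that loop, assuming a fixed linear assembly plan (the names `ASSEMBLY_PLAN` and `next_instruction` are hypothetical, not from the paper):

```python
# Illustrative sketch of the closed interaction loop from the abstract:
# compare the observed worker actions with the system's task knowledge
# and return the next instruction to display. All names are hypothetical.

ASSEMBLY_PLAN = ["mount chassis", "attach wheels", "fit body shell"]

def next_instruction(observed_steps):
    """Return the first plan step the worker has not yet performed,
    or None once the assembly is complete."""
    for step in ASSEMBLY_PLAN:
        if step not in observed_steps:
            return step
    return None

# One pass through the loop: the worker has mounted the chassis,
# so the system prompts for the wheels next.
print(next_instruction(["mount chassis"]))  # -> attach wheels
```

A real system would replace the membership test with multimodal perception (cameras, data gloves) and could branch on the user's product knowledge, but the perceive-compare-display cycle is the same.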
Saved in:
Main Authors: | Bannat, A.; Gast, J.; Rigoll, G.; Wallhoff, F. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Keywords: | Assembly; Cameras; Data gloves; Fingers; Humans; Image color analysis; Pixel |
Online Access: | Order full text |
container_end_page | 8 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | |
container_volume | |
creator | Bannat, A.; Gast, J.; Rigoll, G.; Wallhoff, F. |
description | In this paper, a concept and its implementation of an ergonomic cognitive assistant system for supporting human workers in complex assembly tasks in industrial environments are introduced. Depending on the level of the user's product knowledge, this mixed-initiative system follows and gains knowledge from the human worker's construction steps, while it is also able to automatically give hints and instruct the worker whenever needed. The presented agent is based on a closed human-machine interaction loop consisting of the multimodal perception of the worker's actions, the comparison with the system's knowledge about the production task, and the display of the appropriate next assembly instruction step. First experimental results of the assistant system are demonstrated on a simplified use case: the construction of a small toy car using augmented reality display techniques. |
doi_str_mv | 10.1109/ICCP.2008.4648347 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISBN: 9781424426737; ISBN: 1424426731; LCCN: 2008905553 |
ispartof | 2008 4th International Conference on Intelligent Computer Communication and Processing, 2008, p.1-8 |
issn | |
language | eng |
recordid | cdi_ieee_primary_4648347 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Assembly; Cameras; Data gloves; Fingers; Humans; Image color analysis; Pixel |
title | Event analysis and interpretation of human activity for augmented reality-based assistant systems |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T17%3A45%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Event%20analysis%20and%20interpretation%20of%20human%20activity%20for%20augmented%20reality-based%20assistant%20systems&rft.btitle=2008%204th%20International%20Conference%20on%20Intelligent%20Computer%20Communication%20and%20Processing&rft.au=Bannat,%20A.&rft.date=2008-01-01&rft.spage=1&rft.epage=8&rft.pages=1-8&rft.isbn=9781424426737&rft.isbn_list=1424426731&rft_id=info:doi/10.1109/ICCP.2008.4648347&rft_dat=%3Cieee_6IE%3E4648347%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=4648347&rfr_iscdi=true |