Design and Implementation of a Behavioral Sequence Framework for Human–Robot Interaction Utilizing Brain-Computer Interface and Haptic Feedback

Bibliographic Details
Published in: Journal of Engineering and Science in Medical Diagnostics and Therapy, 2023-11, Vol. 6 (4)
Main Authors: Hazra, Sudip; Whitaker, Shane; Shiakolas, Panos S.
Format: Article
Language: English
Online Access: Full text

Description:
In assistive robotics, research in Brain-Computer Interfaces aims to understand human intent to enhance Human–Robot interaction and augment human performance. In this research, a framework to enable a person with an upper limb disability to use an assistive system toward maintaining self-reliance is introduced, and its implementation and evaluation are discussed. The framework interlinks functional components and establishes a behavioral sequence to operate the assistive system in three stages: action classification, verification, and execution (a minimal sketch of this sequence appears after the record fields below). An action is classified based on identified human intent and verified through haptic and/or visual feedback before execution. The human intent is conveyed through facial expressions and verified through head movements. The interlinked functional components are an electroencephalogram (EEG) sensing device, a head movement recorder, a dual-purpose glove, a visual feedback environment, and a robotic arm. Five volunteers evaluated the ability of the system to recognize a facial expression, the time required to respond using head movements, the ability of vibrotactile feedback effects to convey information, and the ability to follow the established behavioral sequence. Based on the evaluation, a personalized training dataset should be used to calibrate facial expression recognition and to define the time required to respond during verification. Custom vibrotactile effects were effective in conveying system information to the user. The volunteers were able to follow the behavioral sequence and control the system with a success rate of 80.00%, providing confidence to recruit more volunteers to identify and address improvements and expand the operational capability of the framework.
DOI: 10.1115/1.4062341
ISSN: 2572-7958
EISSN: 2572-7966
Publisher: ASME
Source: ASME Transactions Journals (Current)