Action Recognition Framework in Traffic Scene for Autonomous Driving System


Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2022-11, Vol. 23 (11), pp. 22301-22311
Main Authors: Xu, Feiyi; Xu, Feng; Xie, Jiucheng; Pun, Chi-Man; Lu, Huimin; Gao, Hao
Format: Article
Language: English
Online Access: Order full text
Description: For an autonomous driving system, accurately recognizing the actions of the different roles in a traffic scene is a prerequisite for human-vehicle information interaction. In this paper, we propose a complete framework based on 3D human pose estimation to recognize the actions of different roles on the road. The main recognized objects include traffic police, cyclists, and passersby in need of assistance. Action recognition is performed with a dynamic adaptive graph convolutional network, which classifies actions from 3D human poses. In addition to the action recognition module, we have optimized both the object detection module and the human pose estimation module so that the framework can handle multiple objects at the same time, bringing it closer to real traffic scenes. To cover complex and changeable human actions, we built a multi-view camera system to collect 3D human pose datasets containing traffic police gestures, cyclist gestures, and pedestrians' body movements. In the experiments, the proposed framework achieves results comparable to other state-of-the-art methods on the same dataset. Satisfactory performance has also been obtained on the real data we collected, where the framework handles a variety of different action recognition tasks at the same time.
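The description outlines a pipeline of person detection, 3D pose estimation, and a dynamic adaptive graph convolutional network that classifies actions from skeleton sequences. The following is a minimal PyTorch sketch of that last stage only, an adaptive graph convolution over 3D joint coordinates. It is not the authors' implementation: the joint count, class count, identity adjacency placeholder, and layer sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of skeleton-based action recognition
# with an adaptive graph convolution over 3D joint coordinates, in PyTorch.
# Hypothetical assumptions: 17 joints, 3 action classes, clips of T frames.
import torch
import torch.nn as nn

NUM_JOINTS = 17   # hypothetical skeleton layout
NUM_CLASSES = 3   # e.g. traffic-police gesture / cyclist gesture / pedestrian movement


class AdaptiveGraphConv(nn.Module):
    """Graph convolution whose adjacency is a fixed skeleton graph plus a
    learned offset (a simplified stand-in for an 'adaptive' graph)."""

    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)                 # fixed skeleton graph (V x V)
        self.B = nn.Parameter(torch.zeros_like(adjacency))   # learned adjustment
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):                        # x: (batch, channels, frames, joints)
        A = self.A + self.B                      # adaptive adjacency
        x = torch.einsum("nctv,vw->nctw", x, A)  # aggregate features over connected joints
        return self.proj(x)


class PoseActionClassifier(nn.Module):
    """Two adaptive graph conv layers, global pooling over time and joints, then a classifier."""

    def __init__(self, adjacency, num_classes=NUM_CLASSES):
        super().__init__()
        self.gcn1 = AdaptiveGraphConv(3, 64, adjacency)   # input channels = (x, y, z)
        self.gcn2 = AdaptiveGraphConv(64, 128, adjacency)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(128, num_classes)

    def forward(self, pose_seq):                 # pose_seq: (batch, 3, frames, joints)
        h = self.relu(self.gcn1(pose_seq))
        h = self.relu(self.gcn2(h))
        h = h.mean(dim=(2, 3))                   # average over frames and joints
        return self.fc(h)


if __name__ == "__main__":
    # An identity matrix stands in for a real skeleton adjacency here.
    A = torch.eye(NUM_JOINTS)
    model = PoseActionClassifier(A)
    fake_poses = torch.randn(2, 3, 30, NUM_JOINTS)   # 2 clips, 30 frames of 3D joints
    print(model(fake_poses).shape)                   # -> torch.Size([2, 3])
```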
DOI: 10.1109/TITS.2021.3135251
ISSN: 1524-9050
EISSN: 1558-0016
Source: IEEE Electronic Library (IEL)
Subjects:
3D pose estimation
Autonomous driving
Autonomous vehicles
Datasets
Detectors
graph convolutional network
Human activity recognition
Human motion
Joints
Law enforcement
Modules
Moving object recognition
Pedestrians
Police
Pose estimation
Roads
skeleton-based action recognition
Three-dimensional displays
Traffic police