Research on detection and classification of traffic signs with data augmentation

Bibliographic details

Published in: Multimedia Tools and Applications, 2023-10, Vol. 82 (25), pp. 38875-38899
Main authors: Yao, Jiana; Chu, Yinze; Xiang, Xinjian; Huang, Bingqiang; Xiaoli, Wu
Format: Article
Language: English
Publisher: Springer US (New York)
Online access: Full text
Description:
Traffic Sign Detection and Recognition (TSDR) systems are an important part of advanced driver-assistance systems (ADAS) and a hot topic in computer vision research. With the introduction of instance segmentation frameworks, deep learning has entered a new stage. However, current traffic sign datasets can only evaluate the performance of object detection frameworks. In this paper, a new large-scale ZUST Chinese traffic sign dataset benchmark (ZCTSDB) is created to assess the performance of both object detection and instance segmentation frameworks. ZCTSDB adopts seven different image augmentation strategies to enhance the data, which improves the balance of traffic sign categories in the training set. The results show that the average accuracy of ZCTSDB-augmentation object detection and instance segmentation increased by 1.963% and 1.4218%, respectively, especially for large traffic signs. Mask R-CNN has better detection and anti-interference performance than Faster R-CNN; its mAP is as high as 74.0580.
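The abstract describes balancing under-represented traffic-sign categories by applying several image augmentation strategies before training. The sketch below illustrates this general idea with torchvision transforms; the specific transforms, their parameters, and the balance_dataset helper are illustrative assumptions, not the seven strategies actually used for ZCTSDB.

# Minimal sketch of class-balancing image augmentation for traffic-sign
# crops (assumes PIL images). The seven strategies used for ZCTSDB are not
# listed in the abstract, so the transforms chosen here are illustrative only.
import random
from collections import Counter

from torchvision import transforms

AUGMENTATIONS = [
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=5),
    transforms.RandomResizedCrop(size=64, scale=(0.8, 1.0)),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.RandomPerspective(distortion_scale=0.3, p=1.0),
    transforms.RandomAdjustSharpness(sharpness_factor=2, p=1.0),
]

def balance_dataset(samples, target_per_class=None):
    """samples: list of (PIL.Image, label) pairs. Over-samples rare classes
    with randomly chosen transforms until every class reaches the size of the
    largest class (or the given target), and returns the enlarged list."""
    counts = Counter(label for _, label in samples)
    target = target_per_class or max(counts.values())
    by_class = {}
    for img, label in samples:
        by_class.setdefault(label, []).append(img)
    augmented = list(samples)
    for label, imgs in by_class.items():
        while counts[label] < target:
            aug = random.choice(AUGMENTATIONS)
            augmented.append((aug(random.choice(imgs)), label))
            counts[label] += 1
    return augmented

For the detection and instance segmentation setting described in the paper, the same idea would be applied at the full-image level, with bounding boxes and masks transformed consistently with the image.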
DOI: 10.1007/s11042-023-14895-z
ISSN: 1380-7501
EISSN: 1573-7721
Source: Springer Nature - Complete Springer Journals
Subjects:
Accuracy
Advanced driver assistance systems
Classification
Computer Communication Networks
Computer Science
Computer vision
Data augmentation
Data Structures and Information Theory
Datasets
Deep learning
Image enhancement
Image segmentation
Machine learning
Multimedia
Multimedia Information Systems
Neural networks
Object recognition
Performance evaluation
Signs
Special Purpose and Application-Based Systems
Traffic control
Traffic signs