Integrating GAN and Texture Synthesis for Enhanced Road Damage Detection
In the domain of traffic safety and road maintenance, precise detection of road damage is crucial for ensuring safe driving and prolonging road durability. However, current methods often fall short due to limited data. Prior attempts have used Generative Adversarial Networks to generate damage with diverse shapes and manually integrate it into appropriate positions. However, the problem has not been well explored and faces two challenges. First, these attempts only enrich the location and shape of damage while neglecting the diversity of severity levels, and their realism still needs improvement. Second, they require a significant amount of manual effort. To address these challenges, we propose an innovative approach. In addition to using a GAN to generate damage with various shapes, we employ texture synthesis techniques to extract road textures. These two elements are mixed with different weights, allowing us to control the severity of the synthesized damage, which is then embedded back into the original images via Poisson blending. Our method ensures both a rich range of damage severities and better alignment with the background. To save labor costs, we leverage structural similarity for automated sample selection during embedding. Each augmented version of an original image contains variants with varying severity levels, and we apply a straightforward screening strategy to mitigate distribution drift. Experiments on a public road damage dataset show that the proposed method not only eliminates the need for manual labor but also achieves remarkable improvements, raising mAP by 4.1% and F1-score by 4.5%.
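The augmentation pipeline described in the abstract — weighted mixing of a GAN-generated damage patch with extracted road texture to control severity, embedding the result into the image, and structural-similarity-based automated position selection — can be sketched roughly as below. All function names, the linear weighting, the plain paste (standing in for the paper's Poisson blending, e.g. OpenCV's `seamlessClone`), and the single-window SSIM criterion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mix_severity(damage, texture, severity):
    """Linearly blend a GAN-generated damage patch with an extracted
    road-texture patch; severity in [0, 1] weights toward damage.
    (The linear weighting is an illustrative assumption.)"""
    mixed = severity * damage + (1.0 - severity) * texture
    return np.clip(mixed, 0.0, 1.0)

def embed_patch(image, patch, top, left):
    """Place the mixed patch into the road image. The paper uses
    Poisson blending (cf. cv2.seamlessClone); a plain paste stands
    in here to keep the sketch dependency-free."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

def ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global structural similarity between two equal-sized grayscale
    images with values in [0, 1] (single-window variant, without the
    sliding Gaussian window of full SSIM)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def pick_position(image, texture, candidates, patch_shape):
    """Automated sample selection: among candidate (top, left) spots,
    keep the one whose local background is most similar to the road
    texture (the exact SSIM-based criterion is an assumption)."""
    h, w = patch_shape
    def score(pos):
        t, l = pos
        return ssim(image[t:t + h, l:l + w], texture)
    return max(candidates, key=score)
```

A full augmentation pass would presumably call `mix_severity` with several severity weights per image, embed each result at the selected position, and keep only the variants that pass the screening step mentioned in the abstract.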
Saved in:
Published in: | arXiv.org 2024-03 |
---|---|
Main authors: | Chen, Tengyang; Ren, Jiangtao |
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Damage detection; Data augmentation; Embedding; Generative adversarial networks; Labor; Physical work; Road maintenance; Synthesis; Texture |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Chen, Tengyang; Ren, Jiangtao |
description | In the domain of traffic safety and road maintenance, precise detection of road damage is crucial for ensuring safe driving and prolonging road durability. However, current methods often fall short due to limited data. Prior attempts have used Generative Adversarial Networks to generate damage with diverse shapes and manually integrate it into appropriate positions. However, the problem has not been well explored and faces two challenges. First, these attempts only enrich the location and shape of damage while neglecting the diversity of severity levels, and their realism still needs improvement. Second, they require a significant amount of manual effort. To address these challenges, we propose an innovative approach. In addition to using a GAN to generate damage with various shapes, we employ texture synthesis techniques to extract road textures. These two elements are mixed with different weights, allowing us to control the severity of the synthesized damage, which is then embedded back into the original images via Poisson blending. Our method ensures both a rich range of damage severities and better alignment with the background. To save labor costs, we leverage structural similarity for automated sample selection during embedding. Each augmented version of an original image contains variants with varying severity levels, and we apply a straightforward screening strategy to mitigate distribution drift. Experiments on a public road damage dataset show that the proposed method not only eliminates the need for manual labor but also achieves remarkable improvements, raising mAP by 4.1% and F1-score by 4.5%. |
doi_str_mv | 10.48550/arxiv.2309.06747 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2309_06747 |
source | arXiv.org; Free E-Journals |
subjects | Computer Science - Computer Vision and Pattern Recognition; Damage detection; Data augmentation; Embedding; Generative adversarial networks; Labor; Physical work; Road maintenance; Synthesis; Texture |
title | Integrating GAN and Texture Synthesis for Enhanced Road Damage Detection |