Semi-supervised learning with GAN for automatic defect detection from images

Toward automatic defect detection from images, this research develops a semi-supervised generative adversarial network (SSGAN) with two sub-networks that deliver more precise segmentation results at the pixel level. The first is a segmentation network, built on a dual attention mechanism, that segments defects in both labeled and unlabeled images; the attention mechanism extracts rich, global representations of pixels along both the spatial and channel dimensions for better feature representation. The second is a fully convolutional discriminator (FCD) network, which employs two loss functions (an adversarial loss and a cross-entropy loss) to generate confidence density maps for unlabeled images in a semi-supervised learning manner. In contrast to most existing methods, which rely heavily on labeled or weakly labeled images, the developed SSGAN can leverage unlabeled images to enhance segmentation performance and reduce the data-labeling effort. Its effectiveness is demonstrated on a public dataset with four classes of steel defects: in comparison with other state-of-the-art methods, the model reaches promising mean Intersection over Union (IoU) scores of 79.0% and 81.8% using 1/8 and 1/4 of the labeled data, respectively. Moreover, the proposed SSGAN is robust and flexible in segmentation under various scenarios.

•A semi-supervised generative adversarial network with two sub-networks is developed.
•It leverages unlabeled images to enhance segmentation performance and reduce the labeling task.
•Its effectiveness is verified on a public dataset with four classes of steel defects.
•With 1/8 and 1/4 labeled data it reaches mean Intersection over Union of 79.0% and 81.8%, respectively.
•The developed approach is robust and flexible in the segmentation of various scenarios.
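The abstract describes the discriminator training with two combined terms, an adversarial loss and a cross-entropy loss. As a rough illustration of how such a combined objective can be formed (a generic NumPy sketch, not the authors' implementation; the weight `lambda_adv` and the function names are hypothetical):

```python
import numpy as np

def softmax_ce(logits, labels):
    # Supervised term: pixel-wise cross-entropy over class scores.
    # logits: (N, C, H, W) raw scores; labels: (N, H, W) int class indices.
    z = logits - logits.max(axis=1, keepdims=True)           # stabilize softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log-probabilities
    n, _, h, w = logits.shape
    picked = logp[np.arange(n)[:, None, None],
                  labels,
                  np.arange(h)[None, :, None],
                  np.arange(w)[None, None, :]]               # logp of true class
    return -picked.mean()

def adversarial_bce(disc_logits):
    # Adversarial term: push discriminator scores on predicted maps
    # toward the "real" label (1), as in standard GAN generator updates.
    p = 1.0 / (1.0 + np.exp(-disc_logits))
    return -np.log(p + 1e-12).mean()

def semi_supervised_loss(seg_logits, labels, disc_logits, lambda_adv=0.01):
    # Weighted sum of the two losses the abstract mentions.
    return softmax_ce(seg_logits, labels) + lambda_adv * adversarial_bce(disc_logits)
```

With uniform (all-zero) logits for two classes and a neutral discriminator score, both terms reduce to log 2, so the total is (1 + lambda_adv) * log 2.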
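The reported metric, mean Intersection over Union, is computed per class and then averaged. A minimal sketch, assuming integer label maps (classes absent from both prediction and ground truth are skipped, a common convention):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # pred, target: same-shaped arrays of class indices.
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent everywhere: skip
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

For example, a prediction that gets three of four pixels right in a two-class map yields per-class IoUs of 1/2 and 2/3, so a mean IoU of 7/12.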

Published in: Automation in Construction, 2021-08, Vol. 128, p. 103764, Article 103764
Authors: Zhang, Gaowei; Pan, Yue; Zhang, Limao
Format: Article
Language: English
Online access: Full text
DOI: 10.1016/j.autcon.2021.103764
ISSN: 0926-5805
EISSN: 1872-7891
Source: Elsevier ScienceDirect Journals
Subjects: Defect detection; Fully convolutional network; Generative adversarial network; Generative adversarial networks; Image contrast; Image enhancement; Image segmentation; Pixels; Representations; Semi-supervised learning