Fabric defect detection via a spatial cloze strategy

Deep-learning models have achieved state-of-the-art performances in a wide range of defect detection tasks. However, an inescapable criticism of one-stage fully supervised models is the lack of interpretability, which not only reduces the reliability of fabric defect detection systems but also limits the scope of their applications in production environments. To tackle the data imbalance and low interpretability of defect samples, we propose a spatial cloze strategy for fabric defect detection, which reconstructs a local normal image and then feeds it into the detection model with the original image simultaneously. Specifically, we formulate the defect detection task as a novel image completion problem. First, an end-to-end deep neural network is trained to finely restore the defect image by completing each image slice removed in sequence. Next, a progressive attention mechanism fuses the repaired normal image with the raw image, replacing the input layer of the cascade region-based convolutional neural network. Eventually, accurate instance-level defect segmentation can be obtained by comparing the repaired defect-free image with the raw image. On the Tianchi dataset, the proposed method displays superior accuracy in 92% of defect classes, with a breakthrough in various categories that have hardly ever been detected. Extensive experiments on various complex fabric defect samples demonstrate that our strategy outperforms existing advanced methods.
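To make the pipeline described in the abstract concrete, the following is a minimal sketch (PyTorch assumed) of the spatial cloze idea: each image slice is removed in turn, an inpainting network completes it, and the assembled reconstruction is compared with the raw image. This is not the authors' implementation; `completion_net`, `slice_size`, and the residual threshold are hypothetical placeholders, and the paper replaces the simple residual comparison shown here with a progressive attention fusion module feeding a cascade region-based CNN.

```python
import torch

def spatial_cloze_reconstruction(image, completion_net, slice_size=64):
    """Reconstruct a 'normal' (defect-free) estimate of `image` by removing
    one spatial slice at a time and letting `completion_net` fill it in.

    image: tensor of shape (1, C, H, W).
    completion_net: any inpainting model taking (masked_image, mask) and
    returning a full-size prediction (assumed interface, not the paper's).
    """
    _, _, h, w = image.shape
    restored = image.clone()
    for top in range(0, h, slice_size):
        for left in range(0, w, slice_size):
            mask = torch.zeros_like(image[:, :1])            # 1 = pixels to complete
            mask[:, :, top:top + slice_size, left:left + slice_size] = 1.0
            masked = image * (1.0 - mask)                     # remove the current slice
            with torch.no_grad():
                pred = completion_net(masked, mask)           # inpaint the missing slice
            # keep only the completed slice from the prediction
            restored = restored * (1.0 - mask) + pred * mask
    return restored

def defect_map(image, restored, threshold=0.1):
    """Coarse per-pixel defect score: where the raw image deviates from its
    reconstruction, a defect is likely. A fixed threshold is used here only
    for illustration."""
    residual = (image - restored).abs().mean(dim=1, keepdim=True)
    return (residual > threshold).float()
```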

Bibliographic details
Published in: Textile research journal, 2023-04, Vol. 93 (7-8), p. 1612-1627
Authors: Lu, Zhengyang; Zhang, Yudian; Xu, Han; Chen, Han
Format: Article
Language: English
Subjects: Artificial neural networks; Defects; Image processing; Image restoration; Image segmentation; Machine learning; Neural networks; Spatial discrimination learning
Online access: Full text
DOI: 10.1177/00405175221135205
ISSN: 0040-5175
EISSN: 1746-7748
Publisher: SAGE Publications, London, England
Source: SAGE Complete