Vision‐based automated bridge component recognition with high‐level scene consistency

This research investigates vision‐based automated bridge component recognition, which is critical for automating visual inspection of bridges during initial response after earthquakes. Semantic segmentation algorithms with up to 45 convolutional layers are applied to recognize bridge components from images of complex scenes. One of the challenges in such scenarios is to get the recognition results consistent with high‐level scene structure using limited amount of training data. To impose the high‐level scene consistency, this research combines 10‐class scene classification and 5‐class bridge component classification. Three approaches are investigated to combine scene classification results into bridge component classification: (a) naïve configuration, (b) parallel configuration, and (c) sequential configuration of classifiers. The proposed approaches, sequential configuration in particular, are demonstrated to be effective in recognizing bridge components in complex scenes, showing less than 1% of accuracy loss from the naïve/parallel configuration for bridge images, and less than 1% false positives for the nonbridge images.
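
As a rough illustration of the combination strategies described in the abstract, the sketch below shows one possible reading of the sequential configuration, in which an image-level scene decision gates the per-pixel component predictions. This is a minimal Python/NumPy sketch, not the authors' implementation; the class indices (BRIDGE_SCENE_IDS, NON_BRIDGE_COMPONENT) and the image-level/per-pixel interfaces are assumptions made for illustration only.

```python
import numpy as np

# Class counts come from the abstract (10 scene classes, 5 bridge component
# classes). The specific indices below are illustrative assumptions.
NUM_SCENE_CLASSES = 10
NUM_COMPONENT_CLASSES = 5
BRIDGE_SCENE_IDS = {3, 7}      # assumed indices of bridge-related scene classes
NON_BRIDGE_COMPONENT = 0       # assumed index of the "no bridge component" label


def sequential_configuration(scene_probs, component_probs):
    """Gate per-pixel component predictions with an image-level scene decision.

    scene_probs:     (NUM_SCENE_CLASSES,) softmax output of a scene classifier.
    component_probs: (H, W, NUM_COMPONENT_CLASSES) per-pixel softmax output of
                     a component segmentation network.

    If the predicted scene class is not bridge-related, every pixel is forced
    to the non-bridge component label, suppressing false positives on
    non-bridge images.
    """
    scene_label = int(np.argmax(scene_probs))
    component_labels = np.argmax(component_probs, axis=-1)  # (H, W) label map

    if scene_label not in BRIDGE_SCENE_IDS:
        return np.full_like(component_labels, NON_BRIDGE_COMPONENT)
    return component_labels


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    scene = rng.random(NUM_SCENE_CLASSES)                   # dummy scene softmax
    components = rng.random((4, 4, NUM_COMPONENT_CLASSES))  # dummy segmentation output
    print(sequential_configuration(scene, components))
```

In this reading, scene classification acts as a gate: an image judged to be a non-bridge scene contributes no component labels, which is consistent with the abstract's report of fewer than 1% false positives on nonbridge images.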

Bibliographic Details
Published in: Computer-Aided Civil and Infrastructure Engineering, 2020-05, Vol. 35 (5), pp. 465-482
Main authors: Narazaki, Yasutaka; Hoskere, Vedhus; Hoang, Tu A.; Fujino, Yozo; Sakurai, Akito; Spencer, Billie F.
Format: Article
Language: English (eng)
Subjects: Algorithms; Automation; Bridge inspection; Classification; Configurations; Consistency; Image segmentation; Recognition; Seismic response; Vision
Online access: Full text
container_end_page 482
container_issue 5
container_start_page 465
container_title Computer-aided civil and infrastructure engineering
container_volume 35
creator Narazaki, Yasutaka
Hoskere, Vedhus
Hoang, Tu A.
Fujino, Yozo
Sakurai, Akito
Spencer, Billie F.
description This research investigates vision‐based automated bridge component recognition, which is critical for automating visual inspection of bridges during initial response after earthquakes. Semantic segmentation algorithms with up to 45 convolutional layers are applied to recognize bridge components from images of complex scenes. One of the challenges in such scenarios is to get the recognition results consistent with high‐level scene structure using limited amount of training data. To impose the high‐level scene consistency, this research combines 10‐class scene classification and 5‐class bridge component classification. Three approaches are investigated to combine scene classification results into bridge component classification: (a) naïve configuration, (b) parallel configuration, and (c) sequential configuration of classifiers. The proposed approaches, sequential configuration in particular, are demonstrated to be effective in recognizing bridge components in complex scenes, showing less than 1% of accuracy loss from the naïve/parallel configuration for bridge images, and less than 1% false positives for the nonbridge images.
doi_str_mv 10.1111/mice.12505
format Article
publisher Hoboken: Wiley Subscription Services, Inc
fulltext fulltext
identifier ISSN: 1093-9687
ispartof Computer-aided civil and infrastructure engineering, 2020-05, Vol.35 (5), p.465-482
issn 1093-9687
1467-8667
language eng
recordid cdi_proquest_journals_2386847585
source Access via Wiley Online Library
subjects Algorithms
Automation
Bridge inspection
Classification
Configurations
Consistency
Image segmentation
Recognition
Seismic response
Vision
title Vision‐based automated bridge component recognition with high‐level scene consistency
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T18%3A21%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Vision%E2%80%90based%20automated%20bridge%20component%20recognition%20with%20high%E2%80%90level%20scene%20consistency&rft.jtitle=Computer-aided%20civil%20and%20infrastructure%20engineering&rft.au=Narazaki,%20Yasutaka&rft.date=2020-05&rft.volume=35&rft.issue=5&rft.spage=465&rft.epage=482&rft.pages=465-482&rft.issn=1093-9687&rft.eissn=1467-8667&rft_id=info:doi/10.1111/mice.12505&rft_dat=%3Cproquest_cross%3E2386847585%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2386847585&rft_id=info:pmid/&rfr_iscdi=true