Deep multimodal learning for municipal solid waste sorting


Bibliographic Details
Published in: Science China Technological Sciences, 2022, Vol. 65 (2), pp. 324-335
Authors: Lu, Gang; Wang, YuanBin; Xu, HuXiu; Yang, HuaYong; Zou, Jun
Format: Article
Language: English
Online access: Full text
Abstract: Automated waste sorting can dramatically increase waste-sorting efficiency and reduce regulation costs. Most current methods use only a single modality, such as image data or acoustic data, for waste classification, which makes it difficult to classify mixed and easily confused wastes. In these complex situations, multiple modalities become necessary to achieve high classification accuracy. Traditionally, the fusion of multiple modalities has been limited by fixed handcrafted features. In this study, a deep-learning approach was applied to multimodal fusion at the feature level for municipal solid-waste sorting. Specifically, a pre-trained VGG16 network and one-dimensional convolutional neural networks (1D CNNs) were used to extract features from visual and acoustic data, respectively. These deeply learned features were then fused in the fully connected layers for classification. Comparative experiments showed that the proposed method was superior to the single-modality methods, and that the feature-based fusion strategy performed better than the decision-based strategy with deeply learned features.
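To make the fusion pipeline described in the abstract concrete, below is a minimal sketch of feature-level fusion in TensorFlow/Keras: an image branch built on a pre-trained VGG16 used as a feature extractor and an acoustic branch built as a small 1D CNN, with both feature vectors concatenated and classified by fully connected layers. The input shapes, layer sizes, and number of waste classes are illustrative assumptions, not values from the paper, and this is not the authors' exact architecture or training setup.

```python
# Minimal sketch of feature-level visual/acoustic fusion (assumed shapes and sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4              # assumed number of waste categories
IMG_SHAPE = (224, 224, 3)    # VGG16's standard input size
ACOUSTIC_SHAPE = (8000, 1)   # assumed length of the 1-D acoustic signal

# Visual branch: pre-trained VGG16 used as a fixed feature extractor.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_shape=IMG_SHAPE, pooling="avg")
vgg.trainable = False

img_in = layers.Input(shape=IMG_SHAPE, name="image")
visual_feat = vgg(img_in)                       # (batch, 512) feature vector

# Acoustic branch: a small 1-D CNN learns features from the raw signal.
aud_in = layers.Input(shape=ACOUSTIC_SHAPE, name="acoustic")
x = layers.Conv1D(32, kernel_size=9, strides=2, activation="relu")(aud_in)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(64, kernel_size=9, strides=2, activation="relu")(x)
acoustic_feat = layers.GlobalAveragePooling1D()(x)   # (batch, 64) feature vector

# Feature-level fusion: concatenate the two feature vectors and classify
# with fully connected layers.
fused = layers.Concatenate()([visual_feat, acoustic_feat])
fused = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = models.Model(inputs=[img_in, aud_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A decision-based baseline would instead give each branch its own classifier and combine the two sets of predictions; the study reports that this performs worse than fusing the deeply learned features directly.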
DOI: 10.1007/s11431-021-1927-9
ISSN: 1674-7321
EISSN: 1869-1900
Subjects:
Artificial neural networks
Deep learning
Engineering
Engineering, Multidisciplinary
Feature extraction
Image classification
Machine learning
Materials Science
Materials Science, Multidisciplinary
Municipal waste management
Science & Technology
Solid waste management
Technology