Classification of underwater soundscapes using raw hydroacoustic signals
Automatic classification of underwater soundscapes remains an open problem, mainly due to the challenges posed by multi-source interference. This study is concerned with accurately classifying anthropogenic sounds in the marine environment, specifically those emanating from surface ships. To achieve...
Saved in:
Published in: | The Journal of the Acoustical Society of America 2023-10, Vol.154 (4_supplement), p.A304-A304 |
---|---|
Main authors: | Nie, Leixin; Zhang, Yonglin; Wang, HaiBin |
Format: | Article |
Language: | eng |
Online access: | Full text |
creator | Nie, Leixin; Zhang, Yonglin; Wang, HaiBin |
description | Automatic classification of underwater soundscapes remains an open problem, mainly due to the challenges posed by multi-source interference. This study is concerned with accurately classifying anthropogenic sounds in the marine environment, specifically those emanating from surface ships. To achieve this, a convolutional neural network (CNN) with sine-cardinal-like constrained convolutional kernels is proposed, where the kernels represent the unknown filter coefficients to be learned. Raw sound-pressure signals passively received by the hydrophone are used directly in this method, without the routine time-frequency analysis (TFA) beforehand. The convolutional layer with constrained kernels plays a crucial role in extracting features from hydroacoustic signals, effectively acting as a learnable extension of a bandpass filter bank with flexible bandwidths. One significant advantage of the proposed approach is its adaptive filtering capability based on real-world data, which allows it to effectively filter out irrelevant interference sounds. Such heterogeneous interference often exhibits diverse, unknown spectral ranges, making it challenging for conventional TFA with fixed parameters. By leveraging this adaptability, the output of the convolutional layer serves as a task-specific spectrogram, customized to the classifier after being trained on hydroacoustic data. Experiments on the ShipsEar dataset demonstrate that our solution yields more promising results than TFA and a vanilla CNN. |
doi | 10.1121/10.0023608 |
format | Article |
identifier | ISSN: 0001-4966 |
ispartof | The Journal of the Acoustical Society of America, 2023-10, Vol.154 (4_supplement), p.A304-A304 |
issn | 0001-4966; 1520-8524 |
language | eng |
source | Acoustical Society of America (AIP); AIP Journals Complete; Alma/SFX Local Collection |
title | Classification of underwater soundscapes using raw hydroacoustic signals |
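The abstract above describes a convolutional front end whose kernels are constrained to sine-cardinal (sinc) bandpass shapes and applied directly to raw hydrophone waveforms, so that only filter cutoffs are learned rather than free kernel coefficients. The PyTorch sketch below shows one plausible realization of such a constrained layer in the style of SincNet-type filters; the class name, parameterization, initialization values, and the ShipsEar sample rate are assumptions made for illustration, not the authors' implementation.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class SincConstrainedConv1d(nn.Module):
    """1-D convolution whose kernels are constrained to sinc (sine-cardinal)
    bandpass filters: only a lower cutoff and a bandwidth are learned per
    output channel, instead of free kernel coefficients (illustrative sketch)."""

    def __init__(self, out_channels: int, kernel_size: int, sample_rate: float):
        super().__init__()
        if kernel_size % 2 == 0:
            kernel_size += 1  # odd length keeps the filters symmetric around t = 0
        self.kernel_size = kernel_size
        self.sample_rate = sample_rate

        # Learnable cutoffs (Hz): lower edges spread across the band, narrow initial bandwidths.
        low_hz = torch.linspace(30.0, sample_rate / 2 - 300.0, out_channels).unsqueeze(1)
        band_hz = torch.full((out_channels, 1), 200.0)
        self.low_hz = nn.Parameter(low_hz)
        self.band_hz = nn.Parameter(band_hz)

        # Fixed (non-learnable) pieces: symmetric time axis in seconds and a Hamming window.
        n = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        self.register_buffer("t", (n / sample_rate).unsqueeze(0))                  # (1, K)
        self.register_buffer("window", torch.hamming_window(kernel_size, periodic=False))

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples) of raw hydrophone pressure samples.
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz), max=self.sample_rate / 2)

        # Ideal bandpass = difference of two sinc low-pass filters,
        # (sin(2*pi*f_hi*t) - sin(2*pi*f_lo*t)) / (pi*t); the centre sample
        # (t = 0) is handled with a tiny epsilon so the limit 2*(f_hi - f_lo) is reproduced.
        t = torch.where(self.t == 0, torch.full_like(self.t, 1e-12), self.t)
        band_pass = (torch.sin(2 * math.pi * high * t) - torch.sin(2 * math.pi * low * t)) / (math.pi * t)
        filters = band_pass * self.window                                          # (C, K)
        filters = filters / (filters.abs().max(dim=1, keepdim=True).values + 1e-8)

        return F.conv1d(waveform, filters.unsqueeze(1), padding=self.kernel_size // 2)


if __name__ == "__main__":
    # The layer output plays the role of a task-specific, learnable "spectrogram"
    # that a downstream CNN classifier consumes instead of a fixed TFA front end.
    frontend = SincConstrainedConv1d(out_channels=64, kernel_size=251, sample_rate=52734)  # ShipsEar rate (assumed)
    x = torch.randn(8, 1, 52734)      # roughly one second of raw signal per example
    print(frontend(x).shape)          # torch.Size([8, 64, 52734])
```

Because each kernel is fully determined by two scalars (a lower cutoff and a bandwidth), such a layer behaves as a learnable bandpass filter bank whose pass-bands adapt to the training data, which is consistent with the abstract's description of the layer output as a task-specific spectrogram fed to the downstream classifier.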