Hear-and-avoid for unmanned air vehicles using convolutional neural networks
To investigate how an unmanned air vehicle can detect manned aircraft with a single microphone, an audio data set is created in which unmanned air vehicle ego-sound and recorded aircraft sound are mixed together. A convolutional neural network is used to perform air traffic detection. Due to restrictions on flying unmanned air vehicles close to aircraft, the data set has to be artificially produced, so the unmanned air vehicle sound is captured separately from the aircraft sound. They are then mixed with unmanned air vehicle recordings, during which labels are given indicating whether the mixed recording contains aircraft audio or not. The model is a convolutional neural network that uses the features Mel frequency cepstral coefficient, spectrogram or Mel spectrogram as input. For each feature, the effect of unmanned air vehicle/aircraft amplitude ratio, the type of labeling, the window length and the addition of third party aircraft sound database recordings are explored. The results show that the best performance is achieved using the Mel spectrogram feature. The performance increases when the unmanned air vehicle/aircraft amplitude ratio is decreased, when the time window is increased or when the data set is extended with aircraft audio recordings from a third party sound database. Although the currently presented approach has a number of false positives and false negatives that is still too high for real-world application, this study indicates multiple paths forward that can lead to an interesting performance. Finally, the data set is provided as open access.
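The mixing step described in the abstract — scaling aircraft audio against UAV ego-sound at a chosen amplitude ratio, then summing the two signals — can be sketched as below. This is an illustrative reconstruction, not the authors' code: the sample rate, FFT parameters, dB-based ratio convention, and the synthetic stand-in signals are all assumptions made for the example.

```python
import numpy as np

def mix_at_ratio(uav, aircraft, ratio_db):
    """Scale the aircraft signal so the UAV/aircraft RMS amplitude
    ratio equals `ratio_db` (in dB), then mix the two signals.
    A positive ratio means the UAV ego-sound dominates."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(uav) / (rms(aircraft) * 10 ** (ratio_db / 20))
    return uav + gain * aircraft

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time FFT,
    shaped (frequency bins, time frames) for use as a CNN input."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic stand-ins: broadband noise for the UAV ego-sound and a
# 110 Hz tone for the aircraft (real recordings would be loaded here).
fs = 16000
t = np.arange(fs) / fs                      # one second of audio
rng = np.random.default_rng(0)
uav = rng.standard_normal(fs)
aircraft = np.sin(2 * np.pi * 110 * t)
mixed = mix_at_ratio(uav, aircraft, ratio_db=6.0)
spec = spectrogram(mixed)                   # (257 freq bins, 61 frames)
```

A label would then be attached to each such window indicating whether aircraft audio was mixed in, yielding the training pairs the abstract describes.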
Saved in:
Published in: | International journal of micro air vehicles 2021, Vol.13 |
---|---|
Main authors: | Wijnker, Dirk; van Dijk, Tom; Snellen, Mirjam; de Croon, Guido; De Wagter, Christophe |
Format: | Article |
Language: | English |
Subjects: | Aircraft; Amplitudes; Artificial neural networks; Audio data; Datasets; Labels; Neural networks; Sound; Unmanned aerial vehicles; Unmanned aircraft; Vehicles; Windows (intervals) |
Online access: | Full text |
container_title | International journal of micro air vehicles |
container_volume | 13 |
creator | Wijnker, Dirk; van Dijk, Tom; Snellen, Mirjam; de Croon, Guido; De Wagter, Christophe |
description | To investigate how an unmanned air vehicle can detect manned aircraft with a single microphone, an audio data set is created in which unmanned air vehicle ego-sound and recorded aircraft sound are mixed together. A convolutional neural network is used to perform air traffic detection. Due to restrictions on flying unmanned air vehicles close to aircraft, the data set has to be artificially produced, so the unmanned air vehicle sound is captured separately from the aircraft sound. They are then mixed with unmanned air vehicle recordings, during which labels are given indicating whether the mixed recording contains aircraft audio or not. The model is a convolutional neural network that uses the features Mel frequency cepstral coefficient, spectrogram or Mel spectrogram as input. For each feature, the effect of unmanned air vehicle/aircraft amplitude ratio, the type of labeling, the window length and the addition of third party aircraft sound database recordings are explored. The results show that the best performance is achieved using the Mel spectrogram feature. The performance increases when the unmanned air vehicle/aircraft amplitude ratio is decreased, when the time window is increased or when the data set is extended with aircraft audio recordings from a third party sound database. Although the currently presented approach has a number of false positives and false negatives that is still too high for real-world application, this study indicates multiple paths forward that can lead to an interesting performance. Finally, the data set is provided as open access. |
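Since the Mel spectrogram was the best-performing input feature, a minimal sketch of how such a feature is typically built may be useful. The triangular-filterbank construction and the O'Shaughnessy mel-scale formula below are the common convention, assumed here rather than taken from the paper; `n_mels`, `n_fft`, and `fs` are illustrative parameter choices.

```python
import numpy as np

def hz_to_mel(f):
    # O'Shaughnessy's formula, the usual convention for the mel scale.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=40, n_fft=512, fs=16000):
    """Triangular filters mapping an FFT magnitude spectrum
    (n_fft // 2 + 1 bins) onto n_mels mel-spaced bands."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, ctr, hi = bins[i], bins[i + 1], bins[i + 2]
        fb[i, lo:ctr] = (np.arange(lo, ctr) - lo) / max(ctr - lo, 1)
        fb[i, ctr:hi] = (hi - np.arange(ctr, hi)) / max(hi - ctr, 1)
    return fb

fb = mel_filterbank()
# A mel spectrogram is then: mel_spec = fb @ (np.abs(stft) ** 2)
```

Applying this filterbank to each power spectrogram window compresses the 257 linear frequency bins into 40 perceptually spaced bands before they reach the CNN.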
doi_str_mv | 10.1177/1756829321992137 |
format | Article |
publisher | London, England: SAGE Publications |
rights | The Author(s) 2021. This work is licensed under the Creative Commons Attribution License https://creativecommons.org/licenses/by/4.0/ |
orcidid | https://orcid.org/0000-0002-6795-8454 |
fulltext | fulltext |
identifier | ISSN: 1756-8293 |
ispartof | International journal of micro air vehicles, 2021, Vol.13 |
issn | 1756-8293; 1756-8307 |
language | eng |
recordid | cdi_proquest_journals_2613228176 |
source | DOAJ Directory of Open Access Journals; Sage Journals GOLD Open Access 2024; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Aircraft; Amplitudes; Artificial neural networks; Audio data; Datasets; Labels; Neural networks; Sound; Unmanned aerial vehicles; Unmanned aircraft; Vehicles; Windows (intervals) |
title | Hear-and-avoid for unmanned air vehicles using convolutional neural networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T05%3A17%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hear-and-avoid%20for%20unmanned%20air%20vehicles%20using%20convolutional%20neural%20networks&rft.jtitle=International%20journal%20of%20micro%20air%20vehicles&rft.au=Wijnker,%20Dirk&rft.date=2021&rft.volume=13&rft.issn=1756-8293&rft.eissn=1756-8307&rft_id=info:doi/10.1177/1756829321992137&rft_dat=%3Cproquest_cross%3E2613228176%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2613228176&rft_id=info:pmid/&rft_sage_id=10.1177_1756829321992137&rfr_iscdi=true |