LinkedFormer: Radar Communication and Multiscale Imaging for Object Detection under Complex Sea Background


Bibliographic Details
Published in: Sensors and Materials, 2024-08, Vol. 36 (8), p. 3351
Main authors: Pu, Xing; Xu, Xisheng; Yu, Yi
Format: Article
Language: English
Online access: Full text
Description
Summary: The advent of deep learning has propelled significant advances in object detection, enhancing the intelligence of underwater autonomous driving systems. In this paper, we explore cutting-edge applications of autonomous driving technology in underwater exploration, addressing the pivotal role of target detection in navigating and executing tasks within challenging marine environments. We enhance the object detection capability of such systems by integrating deep learning with multisensor fusion, in particular by combining high-precision sensor data with multitask learning models to achieve efficient and robust detection. Our study makes three principal contributions. First, we introduce a novel light perception detection system that combines monocular camera technology with 4D radar; by weaving radar signals into the visual pipeline, it enriches environmental perception and significantly improves the accuracy and stability of target detection. Second, we develop a dual-modal detection framework, named Radar-Picture Detection, that uses a parallel sequence prediction method; this approach prioritizes radar signal processing, which improves target detection accuracy in intricate underwater environments. Third, we conduct a comprehensive evaluation of the model on the FloW Dataset, which is specifically curated for identifying floating waste in inland waters from unmanned vessel footage. This work not only advances target detection for underwater autonomous systems but also lays a solid foundation for deploying deep learning and multisensor fusion in marine environmental perception. The insights and methodologies presented here are poised to spearhead further developments in autonomous marine exploration, enhancing safety, efficiency, and our understanding of underwater environments.
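The abstract does not detail how the Radar-Picture Detection framework combines the two modalities internally. As a rough illustration of one common camera–radar fusion step, the sketch below projects 3-D radar returns into the image plane with a pinhole camera model and gates them against camera detection boxes. All names, the intrinsics matrix, and the sample data are hypothetical and are not taken from the paper.

```python
import numpy as np

def project_radar_points(points_xyz, K):
    """Project 3-D radar returns (N, 3) into the image plane using
    pinhole intrinsics K (3, 3). Returns (N, 2) pixel coordinates."""
    uvw = K @ points_xyz.T            # (3, N) homogeneous image coords
    uv = uvw[:2] / uvw[2:3]           # perspective divide by depth
    return uv.T

def associate_radar_to_boxes(uv, boxes):
    """Assign each projected radar point to the first detection box
    containing it. boxes: (M, 4) as [x1, y1, x2, y2].
    Returns a list of box indices, -1 meaning no match."""
    assignments = []
    for u, v in uv:
        idx = -1
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= u <= x2 and y1 <= v <= y2:
                idx = i
                break
        assignments.append(idx)
    return assignments

# Hypothetical intrinsics and sensor data, purely for illustration.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
radar_points = np.array([[0.0, 0.0, 10.0],    # straight ahead, 10 m
                         [2.0, 0.0, 10.0]])   # 2 m to the side, 10 m
boxes = np.array([[300.0, 220.0, 340.0, 260.0]])  # one camera detection

uv = project_radar_points(radar_points, K)
matches = associate_radar_to_boxes(uv, boxes)
# The first return lands inside the box (match 0); the second falls outside (-1).
```

A real pipeline would also apply the radar-to-camera extrinsic transform before projection and would use the matched radar range and velocity as extra features for the detector, but the gating logic follows the same pattern.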
ISSN: 0914-4935; 2435-0869
DOI: 10.18494/SAM5062