SAU-Net: Monocular Depth Estimation Combining Multi-Scale Features and Attention Mechanisms

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 137734-137746
Authors: Zhao, Wei; Song, Yunqing; Wang, Tingting
Format: Article
Language: English
Online access: Full text
Description
Abstract: Monocular depth estimation is widely used in autonomous driving for perception and obstacle avoidance, and recent advances in deep learning have brought significant progress to the task. Most methods, however, are optimized mainly for the photometric error of individual pixels and largely disregard the resulting problems of ambiguous predictions and boundary artifacts. To address these issues, we developed an improved network model called SAU-Net. Stacking excessive convolutional layers in a conventional convolutional network slows inference and loses primary information, so we propose a convolution-free stratified transformer as the image feature extractor at the network's encoder: it restricts self-attention to local, non-overlapping windows and leverages sliding windows for characterization, reducing network latency. To counter the loss of critical information, each feature map is connected directly to a feature map at a different scale, and an attention module is introduced to focus on effective features, increasing the amount of target information in the depth map. During training we employ a gradient loss function to improve the network's accuracy along object boundaries and the smoothness of the output depth map. Training and testing were conducted on the KITTI dataset; to verify the algorithm's robustness in practical applications, we also validated it on a campus dataset that we collected ourselves. The experimental results show accuracies of 89.1%, 96.4%, and 98.5% under the three proportional thresholds, and the estimated depth maps have clear details and edges with fewer artifacts.
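
To illustrate the encoder idea above, the following is a minimal PyTorch sketch of self-attention restricted to non-overlapping local windows, which keeps the cost linear in the number of windows rather than quadratic in image size. It is a generic Swin-style block written for this description, not SAU-Net's actual stratified transformer; the class name, window size, and head count are illustrative assumptions.

import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    # Self-attention restricted to non-overlapping local windows.
    # Generic sketch; window_size=7 and num_heads=4 are assumptions.
    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C); H and W must be divisible by the window size.
        B, H, W, C = x.shape
        ws = self.window_size
        # Partition the feature map into (ws x ws) windows: (B*nW, ws*ws, C).
        x = x.view(B, H // ws, ws, W // ws, ws, C)
        windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)
        # Attention is computed only inside each window, so cost grows
        # with the number of windows instead of with (H*W)^2.
        out, _ = self.attn(windows, windows, windows)
        # Reverse the partition back to (B, H, W, C).
        out = out.reshape(B, H // ws, W // ws, ws, ws, C)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)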
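
The cross-scale skip connections and the attention module can be sketched in the same spirit: an upsampled decoder feature is concatenated with an encoder feature from another scale, then re-weighted channel-wise. The squeeze-and-excitation-style attention used here is an assumption standing in for the paper's unspecified module, and all names and channel counts are hypothetical.

import torch
import torch.nn as nn

class AttentiveSkipBlock(nn.Module):
    # Hypothetical decoder block: direct cross-scale skip connection
    # followed by SE-style channel attention (an assumed module).
    def __init__(self, dec_ch, enc_ch, out_ch, reduction=8):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.fuse = nn.Conv2d(dec_ch + enc_ch, out_ch, 3, padding=1)
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, dec_feat, enc_feat):
        # Upsample the decoder feature and attach the encoder skip directly,
        # so information from the other scale reaches this stage unchanged.
        x = torch.cat([self.up(dec_feat), enc_feat], dim=1)
        x = torch.relu(self.fuse(x))
        # Channel attention: emphasize informative feature channels.
        return x * self.se(x)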
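
The record does not spell out the gradient loss; a common edge-aware formulation, assumed here rather than taken from the paper, penalizes depth gradients except where the input image itself has strong edges, encouraging smooth surfaces while keeping object boundaries sharp.

import torch

def gradient_smoothness_loss(depth, image):
    # depth: (B, 1, H, W) predicted depth; image: (B, 3, H, W) input RGB.
    # Depth gradients along x and y.
    dd_x = torch.abs(depth[:, :, :, 1:] - depth[:, :, :, :-1])
    dd_y = torch.abs(depth[:, :, 1:, :] - depth[:, :, :-1, :])
    # Image gradients, averaged over color channels.
    di_x = torch.mean(torch.abs(image[:, :, :, 1:] - image[:, :, :, :-1]), 1, keepdim=True)
    di_y = torch.mean(torch.abs(image[:, :, 1:, :] - image[:, :, :-1, :]), 1, keepdim=True)
    # Down-weight the penalty at image edges so depth boundaries stay sharp.
    return (dd_x * torch.exp(-di_x)).mean() + (dd_y * torch.exp(-di_y)).mean()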
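
On KITTI, the "three proportional thresholds" behind the 89.1%, 96.4%, and 98.5% figures are conventionally the delta accuracies: the fraction of valid pixels whose prediction-to-ground-truth ratio, taken in whichever direction is larger, stays below 1.25, 1.25^2, and 1.25^3.

import torch

def threshold_accuracy(pred, gt, eps=1e-6):
    # pred, gt: depth tensors over valid ground-truth pixels only.
    # Returns the standard (delta1, delta2, delta3) accuracies.
    ratio = torch.max(pred / (gt + eps), gt / (pred + eps))
    return tuple((ratio < 1.25 ** k).float().mean().item() for k in (1, 2, 3))
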
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3339152