Unmanned aerial vehicle visual scene understanding method based on multi-task learning network
Main authors: , ,
Format: Patent
Language: Chinese; English
Keywords:
Online access: Order full text
Abstract: The invention discloses an unmanned aerial vehicle visual scene understanding method based on a multi-task learning network. The method comprises the following steps: firstly, selecting the efficient classification network VoVNet as the shared feature-extraction backbone to obtain multi-scale encoded features; secondly, on the basis of a single-stage anchor-free target detection network, designing an attention-based feature screening supplement unit to enhance the detection capability for potential targets; then, for the semantic segmentation and depth estimation tasks, using cascaded CRP modules as a parameter-sharing decoder, and ensuring the running speed through the shared-decoder structure; and finally, in order to improve the generalization ability of the model, constructing a universal-special paired dataset for network training. According to the method, target detection, semantic segmentation and depth estimation in the flight scene of the unmanned aerial vehicle...
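The abstract does not disclose the internals of the attention-based feature screening supplement unit, so the following is only a minimal PyTorch sketch of one plausible reading: a channel attention gate followed by a spatial attention gate that re-weights a backbone feature map before it enters the anchor-free detection head. The class name `FeatureScreeningSupplement`, the reduction ratio, and the gate design are illustrative assumptions, not the patented structure.

```python
import torch
import torch.nn as nn


class FeatureScreeningSupplement(nn.Module):
    """Illustrative attention gate (assumed design, not the patent's):
    channel attention (squeeze-and-excite style) followed by a spatial
    attention mask, applied to a backbone feature map before the
    anchor-free detection head."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel screening: global pooling -> bottleneck MLP -> sigmoid weights
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial supplement: 7x7 conv over pooled channel statistics
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                   # re-weight channels
        avg_map = x.mean(dim=1, keepdim=True)          # per-pixel channel mean
        max_map, _ = x.max(dim=1, keepdim=True)        # per-pixel channel max
        x = x * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x


if __name__ == "__main__":
    feat = torch.randn(1, 256, 64, 64)                 # one multi-scale backbone feature
    enhanced = FeatureScreeningSupplement(256)(feat)
    print(enhanced.shape)                              # torch.Size([1, 256, 64, 64])
```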
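For the shared decoder, the abstract names cascaded CRP modules. Reading CRP as the chained residual pooling block from (Light-Weight) RefineNet, the sketch below shows one way a parameter-shared CRP trunk could feed two light task-specific heads for semantic segmentation and depth estimation. The block depth, channel width, and head layout (`SharedCRPDecoder`, `seg_head`, `depth_head`) are assumptions for illustration; only the idea of a CRP-based decoder shared by both dense-prediction tasks comes from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CRPBlock(nn.Module):
    """Chained residual pooling: a cascade of (max-pool -> 1x1 conv) stages
    whose outputs are summed back onto the input feature map."""

    def __init__(self, channels: int, n_stages: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1, bias=False)
             for _ in range(n_stages)]
        )
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, path = x, x
        for conv in self.convs:
            path = conv(self.pool(path))
            out = out + path
        return out


class SharedCRPDecoder(nn.Module):
    """Parameter-shared decoder trunk (cascaded CRP blocks) with light
    task-specific heads for semantic segmentation and depth estimation."""

    def __init__(self, channels: int, num_classes: int, n_blocks: int = 2):
        super().__init__()
        self.trunk = nn.Sequential(*[CRPBlock(channels) for _ in range(n_blocks)])
        self.seg_head = nn.Conv2d(channels, num_classes, kernel_size=3, padding=1)
        self.depth_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, out_size):
        shared = self.trunk(feat)                      # parameters shared by both tasks
        seg = F.interpolate(self.seg_head(shared), size=out_size,
                            mode="bilinear", align_corners=False)
        depth = F.interpolate(self.depth_head(shared), size=out_size,
                              mode="bilinear", align_corners=False)
        return seg, depth


if __name__ == "__main__":
    feat = torch.randn(1, 256, 60, 80)                 # fused multi-scale backbone feature
    seg, depth = SharedCRPDecoder(256, num_classes=19)(feat, out_size=(480, 640))
    print(seg.shape, depth.shape)                      # (1, 19, 480, 640) (1, 1, 480, 640)
```

Under this reading, sharing the trunk means the pooling/convolution cascade runs once per frame regardless of how many dense-prediction tasks hang off it, which is consistent with the abstract's claim that the parameter-sharing decoder structure preserves running speed.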