Color–depth multi-task learning for object detection in haze
Published in: Neural Computing & Applications, 2020-06, Vol. 32 (11), p. 6591-6599
Format: Article
Language: English
Online access: Full text
Abstract: Haze environments pose serious challenges for object detection, making it difficult for existing methods to generate satisfactory results. However, haze environments are unavoidable in real-world applications, especially underwater and in bad weather. Hence, it is necessary to enable object detection methods to overcome the difficulties caused by the haze effect. Despite the diversity of conditions, haze environments share a common characteristic: the haze concentration varies with the scene depth. Hence, the haze concentration can be used as a representation of the scene depth. This provides a novel cue for object detection in haze: the object-background depth contrast can be identified. In this paper, we propose a multi-task learning-based object detection method that jointly uses color and depth features. A pair of background models is built separately from the color and depth features, forming the two streams of our multi-task learning framework. The final detection results are generated by fusing the results given by the color and depth features. In contrast to existing object detection methods, the novelty of our method lies in the combination of color and depth features under a unified multi-task learning mechanism, which is experimentally demonstrated to be robust against challenging haze environments.
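The abstract gives no implementation details, so the following is only a minimal sketch of the general two-stream idea it describes, under common assumptions rather than the authors' actual method: the depth cue is approximated from haze concentration via the standard atmospheric scattering model (transmission t = exp(-beta * d), estimated with a dark-channel heuristic), each stream maintains a simple running-average background model, and the two foreground masks are fused by a logical OR. Names such as depth_proxy_from_haze and parameters such as patch, alpha, and thresh are illustrative and do not come from the paper.

```python
# Sketch of a color + depth-proxy two-stream background-subtraction pipeline.
# NOT the paper's implementation; see the hedging note above.

import numpy as np
from scipy.ndimage import minimum_filter


def depth_proxy_from_haze(rgb, patch=15, eps=1e-3):
    """Coarse scene-depth proxy: d is proportional to -log(t), with the
    transmission t estimated from the dark channel of the hazy image
    (atmospheric light assumed to be roughly 1)."""
    dark = minimum_filter(rgb.min(axis=2), size=patch)   # dark channel
    t = np.clip(1.0 - 0.95 * dark, eps, 1.0)             # rough transmission
    return -np.log(t)                                     # relative depth (up to 1/beta)


class RunningBackground:
    """Per-pixel running-average background model with a fixed threshold."""

    def __init__(self, alpha=0.05, thresh=0.1):
        self.alpha, self.thresh, self.bg = alpha, thresh, None

    def apply(self, frame):
        if self.bg is None:
            self.bg = frame.astype(np.float64)
        fg = np.abs(frame - self.bg) > self.thresh        # foreground mask
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return fg


def detect(frames):
    """Two streams (color and depth proxy), fused by a logical OR."""
    color_bg = RunningBackground()
    depth_bg = RunningBackground(thresh=0.2)
    for rgb in frames:                                    # rgb in [0, 1], H x W x 3
        fg_color = color_bg.apply(rgb.mean(axis=2))       # color (intensity) stream
        fg_depth = depth_bg.apply(depth_proxy_from_haze(rgb))  # depth-proxy stream
        yield fg_color | fg_depth                         # simple late fusion
```

The OR fusion and running-average models stand in for whatever fusion rule and background models the paper actually uses; the point of the sketch is only that the depth proxy derived from haze concentration gives a second stream whose object-background contrast can differ from the color stream's.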
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-018-3732-6