Surround-view Fisheye Camera Viewpoint Augmentation for Image Semantic Segmentation


Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Cho, Jieun; Lee, Jonghyun; Ha, Jiunsu; Resende, Paulo; Bradai, Benazouz; Jo, Kichun
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In autonomous vehicles, perception information about the surrounding road environment can be obtained through image semantic segmentation. The fisheye camera commonly used in autonomous vehicle surround view systems offers a wide field of view (FoV), providing comprehensive perception information about the surrounding environment and assisting in understanding complex scenes. However, model training is challenging due to the limited availability of fisheye semantic image datasets, resulting in reduced generalization performance and unreliable results in various test environments. In particular, changes in the position and orientation of the camera result in changes in the camera viewpoint, which can impair the model's segmentation performance. Data scarcity problems are generally solved using augmentation methods, but existing methods have difficulty reflecting the distortion characteristics of fisheye images. To solve this problem, we propose viewpoint augmentation considering the spatially variant distortion characteristic of fisheye images. First, we use the fisheye camera projection model in reverse to map the captured 2D fisheye image to a point on the surface of a unit sphere in 3D. Then, we change the camera's orientation and position by applying rotation and translation operations to the point. Finally, we re-project the transformed point to the fisheye image to generate a fisheye image with a changed viewpoint. The experimental results show that the proposed augmentation method increases the generalization performance of the model and effectively reduces model performance degradation under changing camera viewpoints, making it suitable for practical applications.
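The three-step pipeline in the abstract (back-project to the unit sphere, transform, re-project) can be sketched in a few lines of NumPy. The sketch below is illustrative only: it assumes an equidistant fisheye projection model (r = f·θ) and nearest-neighbor sampling, and it implements only the rotation part of the augmentation; the paper's actual camera model, its translation handling, and its interpolation scheme may differ. The function name and the focal-length default are hypothetical.

```python
import numpy as np

def fisheye_viewpoint_augment(img, R, f=300.0):
    """Rotate the viewpoint of a fisheye image (illustrative sketch).

    Assumes an equidistant projection model, r = f * theta, where r is
    the pixel distance from the image center and theta is the incidence
    angle. img: HxW(x3) array; R: 3x3 rotation matrix; f: focal length
    in pixels (assumed value).
    """
    h, w = img.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0

    # Step 1 (in reverse): for each output pixel, recover the incidence
    # angle theta and azimuth phi of its viewing ray.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / f                       # equidistant model: r = f * theta
    phi = np.arctan2(dy, dx)

    # Lift each output pixel onto the unit sphere in 3D.
    pts = np.stack([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], axis=-1)        # shape (h, w, 3)

    # Step 2: inverse-rotate the rays. For row vectors p, p @ R equals
    # R^T applied to p, i.e. we look up which source ray maps to each
    # output ray under the viewpoint rotation R.
    pts_src = pts @ R

    # Step 3: re-project the transformed rays to fisheye pixel
    # coordinates and sample the source image (nearest neighbor).
    theta_s = np.arccos(np.clip(pts_src[..., 2], -1.0, 1.0))
    phi_s = np.arctan2(pts_src[..., 1], pts_src[..., 0])
    r_s = f * theta_s
    u_s = np.clip(np.round(cx + r_s * np.cos(phi_s)).astype(int), 0, w - 1)
    v_s = np.clip(np.round(cy + r_s * np.sin(phi_s)).astype(int), 0, h - 1)
    return img[v_s, u_s]
```

With the identity rotation the mapping is the identity, so the input image is reproduced exactly; a small rotation about the vertical axis yields a fisheye image as seen from a yawed camera, preserving the spatially variant distortion that ordinary affine augmentations cannot reproduce.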
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3276985