Assembling three one‐camera images for three‐camera intersection classification

Full Description

Bibliographic Details
Published in: ETRI Journal, 2023-10, Vol. 45 (5), pp. 862-873
Main authors: Astrid, Marcella; Lee, Seung‐Ik
Format: Article
Language: English
Online access: Full text
Description
Abstract: Determining whether an autonomous self‐driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned in the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three‐camera model, which would enable us to more easily compile a variety of training data to endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) of combining the information from the three cameras. Extensive pedestrian‐view intersection classification experiments show that our feature fusion model provides an area under the curve and F1‐score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three‐ and one‐camera models.
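
The abstract names three fusion strategies (early, feature, and late) without detailing the architecture. The following is a minimal PyTorch sketch of what each strategy typically looks like for a front/left/right camera setup; the encoder, layer sizes, weight sharing, and class names (make_encoder, EarlyFusion, FeatureFusion, LateFusion) are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


def make_encoder(in_ch: int, feat_dim: int = 128) -> nn.Sequential:
    # Small CNN encoder shared by all variants (illustrative; the paper's
    # backbone is not specified in the abstract).
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, feat_dim),
    )


class EarlyFusion(nn.Module):
    """Concatenate the three RGB views channel-wise (9 channels) before encoding."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder(in_ch=9)
        self.head = nn.Linear(128, 1)

    def forward(self, front, left, right):
        x = torch.cat([front, left, right], dim=1)  # (B, 9, H, W)
        return self.head(self.encoder(x))           # intersection logit


class FeatureFusion(nn.Module):
    """Encode each view separately, then fuse the per-view feature vectors."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder(in_ch=3)  # shared across views (assumption)
        self.head = nn.Linear(3 * 128, 1)

    def forward(self, front, left, right):
        feats = [self.encoder(v) for v in (front, left, right)]
        return self.head(torch.cat(feats, dim=1))


class LateFusion(nn.Module):
    """Score each view independently, then combine the per-view decisions."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder(in_ch=3)
        self.head = nn.Linear(128, 1)  # one logit per view

    def forward(self, front, left, right):
        logits = [self.head(self.encoder(v)) for v in (front, left, right)]
        return torch.stack(logits, dim=0).mean(dim=0)  # average the logits


# Usage: each view is a batch of RGB images; the output is one logit per image.
views = [torch.randn(2, 3, 224, 224) for _ in range(3)]
for model in (EarlyFusion(), FeatureFusion(), LateFusion()):
    print(model.__class__.__name__, model(*views).shape)  # -> (2, 1)
```

Feature fusion sits between the two extremes: each view is encoded independently (so an encoder trained on one-camera data can plausibly be reused), yet the classifier still sees all three views jointly. That middle ground is consistent with the abstract's report that the feature fusion model performed best.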
ISSN: 1225-6463
eISSN: 2233-7326
DOI: 10.4218/etrij.2023-0100