Eyes on the Road: State-of-the-Art Video Question Answering Models Assessment for Traffic Monitoring Tasks
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Recent advances in video question answering (VideoQA) offer promising
applications, especially in traffic monitoring, where efficient video
interpretation is critical. Within intelligent transportation systems (ITS),
answering complex, real-time queries such as "How many red cars passed in the
last 10 minutes?" or "Was there an incident between 3:00 PM and 3:05 PM?"
enhances situational awareness and decision-making. Despite progress in
vision-language models, VideoQA remains challenging, especially in dynamic
environments involving multiple objects and intricate spatiotemporal
relationships. This study evaluates state-of-the-art VideoQA models on
non-benchmark synthetic and real-world traffic sequences. The evaluation
framework uses GPT-4o to score answers for accuracy, relevance, and
consistency across basic detection, temporal reasoning, and decomposition
queries. VideoLLaMA-2 achieved the best results, with 57% accuracy,
particularly in compositional reasoning and answer consistency. However, all
models, including VideoLLaMA-2, showed limitations in multi-object tracking,
temporal coherence, and complex scene interpretation, highlighting gaps in
current architectures. These findings underscore VideoQA's potential in
traffic monitoring but also emphasize the need for improvements in
multi-object tracking, temporal reasoning, and compositional capabilities.
Enhancing these areas could make VideoQA indispensable for incident detection,
traffic flow management, and responsive urban planning. The study's code and
framework are open-sourced for further exploration:
https://github.com/joe-rabbit/VideoQA_Pilot_Study |
---|---|
DOI: | 10.48550/arxiv.2412.01132 |
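
To make the evaluation setup concrete, below is a minimal sketch of a GPT-4o
LLM-as-judge scoring loop in the spirit of the framework the abstract
describes. This is not the authors' implementation (see the linked repository
for that): the rubric wording, the 0-5 score scale, the `judge` helper, and
the example question/answer strings are all illustrative assumptions. It uses
the OpenAI Python client and assumes an API key in the environment.

```python
# Minimal sketch of an LLM-as-judge loop for grading VideoQA answers.
# Not the paper's actual code; rubric, scale, and data are assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical rubric covering the three criteria named in the abstract.
RUBRIC = (
    "You are grading a video question answering model. Given the question, "
    "a reference answer, and the model's answer, return JSON with integer "
    "fields 'accuracy', 'relevance', and 'consistency', each scored 0-5."
)

def judge(question: str, reference: str, candidate: str) -> dict:
    """Ask GPT-4o to score one candidate answer against the reference."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force parseable JSON
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": (
                f"Question: {question}\n"
                f"Reference answer: {reference}\n"
                f"Model answer: {candidate}"
            )},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example query from the abstract's basic-detection category (hypothetical
# reference and candidate answers).
scores = judge(
    question="How many red cars passed in the last 10 minutes?",
    reference="Three red cars passed.",
    candidate="I counted three red vehicles, all sedans.",
)
print(scores)  # e.g. {"accuracy": 5, "relevance": 5, "consistency": 4}
```

Averaging such per-question scores within each query category (basic
detection, temporal reasoning, decomposition) would yield the kind of
model-level accuracy and consistency comparison the abstract reports.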