Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies?
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Recent advances in vision-and-language modeling have seen the development of Transformer architectures that achieve remarkable performance on multimodal reasoning tasks. Yet the exact capabilities of these black-box models are still poorly understood. While much previous work has focused on studying their ability to learn meaning at the word level, their ability to track syntactic dependencies between words has received less attention. We take a first step towards closing this gap by creating a new multimodal task targeted at evaluating understanding of predicate-noun dependencies in a controlled setup. We evaluate a range of state-of-the-art models and find that their performance on the task varies considerably, with some models performing relatively well and others at chance level. Our analyses suggest that this variability is largely explained by the quality (and not only the sheer quantity) of the pretraining data. Additionally, the best-performing models leverage fine-grained multimodal pretraining objectives in addition to the standard image-text matching objectives. This study highlights that targeted and controlled evaluations are a crucial step towards a precise and rigorous test of the multimodal knowledge of vision-and-language models. |
DOI: | 10.48550/arxiv.2210.12079 |
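The abstract describes a forced-choice, image-text matching style evaluation in which captions differ in their predicate-noun combination. As a rough illustration of how such a pair could be scored with an off-the-shelf vision-and-language Transformer, the sketch below uses a public ViLT checkpoint from Hugging Face `transformers`. The checkpoint, image URL, and captions are illustrative assumptions for the sketch, not the paper's actual benchmark, data, or models.

```python
# Minimal sketch (not the paper's code): score an image against two captions
# that differ only in the predicate-noun combination, using ViLT's
# image-text matching head. Checkpoint, image, and captions are illustrative.
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForImageAndTextRetrieval

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model.eval()

# A standard COCO validation image (two cats on a couch), used here only as an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

captions = [
    "two cats are sleeping on a couch",  # grounded predicate-noun pair
    "two cats are driving on a couch",   # mismatched predicate-noun pair
]

scores = {}
with torch.no_grad():
    for caption in captions:
        encoding = processor(image, caption, return_tensors="pt")
        outputs = model(**encoding)
        # The matching head returns one logit per image-caption pair.
        scores[caption] = outputs.logits[0, 0].item()

best = max(scores, key=scores.get)
print(scores)
print(f"Model prefers: {best!r}")
```

A model that tracks the predicate-noun dependency should assign the higher matching score to the plausible caption; a model that only matches individual words to image regions may score both captions similarly.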