Leveraging VLM-Based Pipelines to Annotate 3D Objects
Saved in:

Main Authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Pretrained vision language models (VLMs) present an opportunity to caption unlabeled 3D objects at scale. The leading approach to summarize VLM descriptions from different views of an object (Luo et al., 2023) relies on a language model (GPT4) to produce the final output. This text-based aggregation is susceptible to hallucinations as it merges potentially contradictory descriptions. We propose an alternative algorithm to marginalize over factors such as the viewpoint that affect the VLM's response. Instead of merging text-only responses, we utilize the VLM's joint image-text likelihoods. We show our probabilistic aggregation is not only more reliable and efficient, but also sets the SoTA on inferring object types with respect to human-verified labels. The aggregated annotations are also useful for conditional inference; they improve downstream predictions (e.g., of object material) when the object's type is specified as an auxiliary text-based input. Such auxiliary inputs allow ablating the contribution of visual reasoning over visionless reasoning in an unsupervised setting. With these supervised and unsupervised evaluations, we show how a VLM-based pipeline can be leveraged to produce reliable annotations for 764K objects from the Objaverse dataset.
DOI: 10.48550/arxiv.2311.17851
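The abstract's core idea, marginalizing over viewpoints using the VLM's joint image-text likelihoods rather than merging per-view text descriptions, can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a CLIP-style VLM (here `openai/clip-vit-base-patch32` via Hugging Face `transformers`), and the uniform averaging over views, the candidate labels, and the file names are all illustrative.

```python
# Minimal sketch: aggregate a VLM's image-text scores across rendered views
# of a 3D object to infer its type. Assumes a CLIP-style scorer; the model,
# labels, and averaging scheme are illustrative, not the paper's exact recipe.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def aggregate_label_scores(view_paths, candidate_labels):
    """Score each candidate label against every view, then marginalize over
    views by averaging the per-view label posteriors."""
    images = [Image.open(p).convert("RGB") for p in view_paths]
    inputs = processor(text=candidate_labels, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits_per_image = model(**inputs).logits_per_image  # (views, labels)
    per_view_probs = logits_per_image.softmax(dim=-1)        # p(label | view)
    # Treat each view as an equally likely rendering of the object and
    # marginalize it out: p(label) = mean over views of p(label | view).
    marginal = per_view_probs.mean(dim=0)
    return dict(zip(candidate_labels, marginal.tolist()))

# Example: infer the object type for one asset from four hypothetical renders.
scores = aggregate_label_scores(
    ["render_0.png", "render_1.png", "render_2.png", "render_3.png"],
    ["a chair", "a table", "a lamp", "a potted plant"],
)
print(max(scores, key=scores.get))
```

Summing per-view log-likelihoods instead of averaging probabilities is an equally plausible aggregation choice; the paper's exact marginalization, scoring, and conditioning on auxiliary text inputs are described in the full text at the DOI above.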