Cross-BERT for Point Cloud Pretraining
Format: Article
Language: English
Abstract: Introducing BERT into cross-modal settings raises difficulties in optimizing it to handle multiple modalities. Both the BERT architecture and the training objective need to be adapted to incorporate and model information from different modalities. In this paper, we address these challenges by exploring the implicit semantic and geometric correlations between 2D and 3D data of the same objects/scenes. We propose a new cross-modal BERT-style self-supervised learning paradigm, called Cross-BERT. To facilitate pretraining for irregular and sparse point clouds, we design two self-supervised tasks that boost cross-modal interaction. The first task, Point-Image Alignment, aligns features between unimodal and cross-modal representations to capture the correspondences between the 2D and 3D modalities. The second task, Masked Cross-modal Modeling, further improves BERT's mask modeling by incorporating high-dimensional semantic information obtained through cross-modal interaction. By performing cross-modal interaction, Cross-BERT can smoothly reconstruct the masked tokens during pretraining, leading to notable performance gains on downstream tasks. Through empirical evaluation, we demonstrate that Cross-BERT outperforms existing state-of-the-art methods in 3D downstream applications. Our work highlights the effectiveness of leveraging cross-modal 2D knowledge to strengthen 3D point cloud representations and the transferability of BERT across modalities.
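The abstract does not give the concrete loss formulations for the two pretraining tasks. As a rough illustration only, the sketch below shows one plausible reading: an InfoNCE-style contrastive loss for Point-Image Alignment and a masked-token reconstruction loss for Masked Cross-modal Modeling. All names, tensor shapes, the temperature value, and the choice of MSE reconstruction are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only -- the exact Cross-BERT objectives are not
# specified in the abstract; shapes, losses, and hyperparameters are assumed.
import torch
import torch.nn.functional as F

def point_image_alignment_loss(point_feat, image_feat, temperature=0.07):
    """Contrastive (InfoNCE-style) loss aligning point-cloud and image
    features of the same objects/scenes (hypothetical formulation).

    point_feat, image_feat: (B, D) pooled features for B paired samples.
    """
    p = F.normalize(point_feat, dim=-1)
    i = F.normalize(image_feat, dim=-1)
    logits = p @ i.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)
    # Symmetric cross-entropy: matched 2D/3D pairs lie on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def masked_cross_modal_loss(pred_tokens, target_tokens, mask):
    """Reconstruction loss over masked point tokens, where predictions come
    from a BERT-style backbone conditioned on cross-modal features.

    pred_tokens, target_tokens: (B, N, D) token features; mask: (B, N) bool.
    """
    # Only masked positions contribute to the loss.
    return F.mse_loss(pred_tokens[mask], target_tokens[mask])
```

In a pretraining loop of this kind, the two losses would typically be summed with a weighting coefficient; the weighting used by Cross-BERT is not stated in the abstract.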
DOI: 10.48550/arxiv.2312.04891