Bridging Knowledge Graphs to Generate Scene Graphs
Format: Article
Language: English
Summary: Scene graphs are powerful representations that parse images into their
abstract semantic elements, i.e., objects and their interactions, which
facilitates visual comprehension and explainable reasoning. On the other hand,
commonsense knowledge graphs are rich repositories that encode how the world is
structured, and how general concepts interact. In this paper, we present a
unified formulation of these two constructs, where a scene graph is seen as an
image-conditioned instantiation of a commonsense knowledge graph. Based on this
new perspective, we re-formulate scene graph generation as the inference of a
bridge between the scene and commonsense graphs, where each entity or predicate
instance in the scene graph has to be linked to its corresponding entity or
predicate class in the commonsense graph. To this end, we propose a novel
graph-based neural network that iteratively propagates information between the
two graphs, as well as within each of them, while gradually refining their
bridge in each iteration. Our Graph Bridging Network, GB-Net, successively
infers edges and nodes, allowing it to simultaneously exploit and refine the rich,
heterogeneous structure of the interconnected scene and commonsense graphs.
Through extensive experimentation, we showcase the superior accuracy of GB-Net
compared to the most recent methods, resulting in a new state of the art. We
publicly release the source code of our method.
DOI: 10.48550/arxiv.2001.02314
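
The abstract describes GB-Net only at a high level: iterative message passing within and between the scene graph and the commonsense graph, with the bridge re-estimated at each step. The sketch below illustrates that propagate-and-refine loop in PyTorch. All names, dimensions, and update rules here (the `BridgeSketch` class, GRU-cell updates, dot-product bridge scores with a softmax) are illustrative assumptions, not the authors' actual GB-Net architecture; consult their released source code for the real implementation.

```python
# Hypothetical sketch of the iterative graph-bridging idea: propagate
# messages within each graph and across a soft bridge, then refine node
# states so the next iteration can re-estimate a better bridge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BridgeSketch(nn.Module):
    def __init__(self, dim, num_steps=3):
        super().__init__()
        self.num_steps = num_steps
        self.scene_update = nn.GRUCell(dim, dim)    # refines scene-node states
        self.concept_update = nn.GRUCell(dim, dim)  # refines commonsense-node states
        self.msg = nn.Linear(dim, dim)              # shared message transform

    def forward(self, scene_x, concept_x, scene_adj, concept_adj):
        # scene_x:   (Ns, dim) features of detected entity/predicate instances
        # concept_x: (Nc, dim) embeddings of entity/predicate classes
        # *_adj:     (N, N) adjacency matrices within each graph
        for _ in range(self.num_steps):
            # 1) Infer soft bridge edges from instance-class similarity.
            bridge = F.softmax(scene_x @ concept_x.t(), dim=-1)  # (Ns, Nc)
            # 2) Propagate within each graph and across the bridge.
            scene_msg = scene_adj @ self.msg(scene_x) + bridge @ self.msg(concept_x)
            concept_msg = concept_adj @ self.msg(concept_x) + bridge.t() @ self.msg(scene_x)
            # 3) Refine node states; the next iteration recomputes the bridge.
            scene_x = self.scene_update(scene_msg, scene_x)
            concept_x = self.concept_update(concept_msg, concept_x)
        # Final bridge: a soft class assignment for each scene node.
        return F.softmax(scene_x @ concept_x.t(), dim=-1)
```

A toy run with random features and identity adjacencies (5 scene nodes, 10 classes) shows the shape of the output bridge:

```python
net = BridgeSketch(dim=64)
bridge = net(torch.randn(5, 64), torch.randn(10, 64),
             torch.eye(5), torch.eye(10))
print(bridge.shape)  # torch.Size([5, 10]) -- one class distribution per scene node
```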