BI-MDRG: Bridging Image History in Multimodal Dialogue Response Generation
Main Author: | , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Abstract: | Multimodal Dialogue Response Generation (MDRG) is a recently proposed task
where the model needs to generate responses in texts, images, or a blend of
both based on the dialogue context. Due to the lack of a large-scale dataset
specifically for this task and the benefits of leveraging powerful pre-trained
models, previous work relies on the text modality as an intermediary step for
both the image input and output of the model rather than adopting an end-to-end
approach. However, this approach can overlook crucial information in the
image, hindering 1) image-grounded text responses and 2) the consistency of
objects across image responses. In this paper, we propose BI-MDRG, which bridges the
response generation path such that the image history information is utilized
for enhanced relevance of text responses to the image content and the
consistency of objects in sequential image responses. Through extensive
experiments on the multimodal dialogue benchmark dataset, we show that BI-MDRG
can effectively increase the quality of multimodal dialogue. Additionally,
recognizing the gap in benchmark datasets for evaluating the image consistency
in multimodal dialogue, we have created a curated set of 300 dialogues
annotated to track object consistency across conversations. |
---|---|
DOI: | 10.48550/arxiv.2408.05926 |