Fine-tune the Entire RAG Architecture (including DPR retriever) for Question-Answering
Main Authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | In this paper, we illustrate how to fine-tune the entire Retrieval Augmented Generation (RAG) architecture in an end-to-end manner. We highlight the main engineering challenges that need to be addressed to achieve this objective. We also compare how the end-to-end RAG architecture outperforms the original RAG architecture on the task of question answering. We have open-sourced our implementation in the HuggingFace Transformers library. |
DOI: | 10.48550/arxiv.2106.11517 |
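The abstract notes that the implementation is open-sourced in the HuggingFace Transformers library. For orientation only, below is a minimal sketch of how a RAG model's DPR retriever and generator are wired together for question-answering inference using the stock `transformers` RAG classes; the `facebook/rag-token-nq` checkpoint and the dummy retrieval index are illustrative assumptions, not the paper's end-to-end fine-tuned model or training script.

```python
# Minimal inference sketch with the RAG classes shipped in HuggingFace
# Transformers. This is NOT the paper's end-to-end fine-tuning code; it only
# shows how the DPR-based retriever and the generator are composed.
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

model_name = "facebook/rag-token-nq"  # illustrative public checkpoint

tokenizer = RagTokenizer.from_pretrained(model_name)

# The retriever wraps the DPR question encoder plus a dense passage index.
# use_dummy_dataset=True loads a tiny toy index instead of the full
# Wikipedia dump, so the sketch runs quickly.
retriever = RagRetriever.from_pretrained(
    model_name, index_name="exact", use_dummy_dataset=True
)

# The generator is conditioned on the passages returned by the retriever.
model = RagTokenForGeneration.from_pretrained(model_name, retriever=retriever)

inputs = tokenizer("who wrote the book origin of species", return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

In the setting described by the paper, the entire architecture, including the DPR retriever, is fine-tuned end-to-end, rather than keeping the retriever and its index fixed as in the sketch above.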