NinjaLLM: Fast, Scalable and Cost-effective RAG using Amazon SageMaker and AWS Trainium and Inferentia2
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Abstract: | Retrieval-augmented generation (RAG) techniques are widely used today to retrieve and present information in a conversational format. This paper presents a set of enhancements to traditional RAG techniques, focusing on large language models (LLMs) fine-tuned and hosted on AWS Trainium and Inferentia2 AI chips via SageMaker. These chips are characterized by their elasticity, affordability, and efficient performance for AI compute tasks. Besides enabling deployment on these chips, this work aims to improve tool usage, add citation capabilities, and mitigate the risks of hallucinations and unsafe responses due to context bias. We benchmark our RAG system's performance on the Natural Questions and HotPotQA datasets, achieving accuracies of 62% and 59% respectively, outperforming models such as DBRX and Mixtral Instruct. |
---|---|
DOI: | 10.48550/arxiv.2407.12057 |
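
As an illustration of the serving pattern the abstract describes (a fine-tuned LLM hosted behind a SageMaker real-time endpoint on Inferentia2, queried with retrieved passages prepended to the prompt), here is a minimal Python sketch using boto3. The endpoint name and the JSON payload schema are assumptions for illustration, not details taken from the paper.

```python
import json

import boto3

# Hypothetical endpoint name: in the setup the abstract describes, the
# fine-tuned LLM would be served from Inferentia2 instances behind a
# SageMaker real-time endpoint. Name and payload schema are assumptions.
ENDPOINT_NAME = "ninja-llm-inf2-endpoint"

runtime = boto3.client("sagemaker-runtime")


def rag_answer(question: str, retrieved_passages: list[str]) -> dict:
    """Prepend retrieved context to the question and query the hosted LLM."""
    context = "\n\n".join(retrieved_passages)
    prompt = (
        "Answer the question using only the context below, and cite the "
        "passage you used.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt,
                         "parameters": {"max_new_tokens": 256}}),
    )
    # The endpoint returns a JSON body; its exact shape depends on the
    # serving container, so we return the parsed payload as-is.
    return json.loads(response["Body"].read())


if __name__ == "__main__":
    print(rag_answer(
        "Which datasets are used for benchmarking?",
        ["The system is evaluated on Natural Questions and HotPotQA."],
    ))
```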