Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures
Saved in:
Main authors: | , , , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Large Language Models (LLMs) deployed on edge devices learn through
fine-tuning, updating a portion of their parameters. Although such learning
methods can be optimized to reduce resource utilization, the overall resource
requirements remain a heavy burden on edge devices. Retrieval-Augmented
Generation (RAG), by contrast, is a resource-efficient LLM learning method that
can improve the quality of LLM-generated content without updating model
parameters. However, a RAG-based LLM may repeatedly search the profile data in
every user-LLM interaction. This search can incur significant latency as user
data accumulates. Conventional efforts to decrease latency restrict the size of
the saved user data, which limits the scalability of RAG as user data
continuously grows. It remains an open question how to free RAG from the
constraints of latency and scalability on edge devices. In this paper, we
propose a novel framework to accelerate RAG via Computing-in-Memory (CiM)
architectures. It accelerates matrix multiplications by performing in-situ
computation inside the memory, avoiding the expensive data transfer between the
computing unit and memory. Our framework, Robust CiM-backed RAG (RoCR), uses a
novel contrastive learning-based training method together with noise-aware
training to enable RAG to efficiently search profile data with CiM. To the best
of our knowledge, this is the first work utilizing CiM to accelerate RAG. |
---|---|
DOI: | 10.48550/arxiv.2405.04700 |
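The retrieval step that the abstract describes, searching stored profile data on every user-LLM interaction, reduces at its core to a matrix-vector similarity computation, which is exactly the operation a CiM array performs in-situ. The following is a minimal sketch of that step in plain NumPy; the embedding dimension, corpus size, and `retrieve` helper are illustrative assumptions, not part of the paper's RoCR framework.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the paper): embedding size d,
# number of stored profile-data entries n.
rng = np.random.default_rng(0)
d, n = 64, 1000
profile = rng.standard_normal((n, d))
profile /= np.linalg.norm(profile, axis=1, keepdims=True)  # unit-normalize rows

def retrieve(query: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored entries most similar to the query
    (cosine similarity over unit-normalized embeddings)."""
    q = query / np.linalg.norm(query)
    # This matrix-vector product is the computation a CiM architecture
    # would perform inside the memory array, avoiding data movement.
    scores = profile @ q
    return np.argsort(scores)[::-1][:k]

top = retrieve(rng.standard_normal(d))
print(top.shape)  # (3,)
```

Because the rows are unit-normalized, querying with a stored embedding returns that entry first; on a CiM substrate the same dot products are computed in analog inside the memory crossbar rather than by shuttling the profile matrix to a compute unit.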