LLM-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Large Language Models (LLMs), such as GPT-3.5, have exhibited
remarkable proficiency in comprehending and generating natural language.
Medical assistants, in turn, hold the potential to offer substantial benefits
to individuals. However, LLM-based personalized medical assistants remain
relatively unexplored. Patients typically converse differently depending on
their background and preferences, which necessitates user-oriented medical
assistants. While one could fully train an LLM for this objective, the
resource cost is prohibitive. Prior research has explored memory-based
methods that improve responses to new queries within a dialogue session by
recalling previous mistakes. We contend that a mere memory module is
inadequate and that fully training an LLM is excessively costly. In this
study, we propose a novel computational bionic memory mechanism, equipped
with a parameter-efficient fine-tuning (PEFT) schema, to personalize medical
assistants. |
DOI: | 10.48550/arxiv.2309.11696 |
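The short- and long-term memory coordination named in the title can be illustrated with a minimal, generic sketch. This is not the paper's actual mechanism: the class, its method names, and the `remember:` tagging convention are all invented here for illustration, assuming a simple design where a sliding window holds the current session and durable per-user facts are consolidated separately.

```python
from collections import deque


class MemoryCoordinator:
    """Toy short-/long-term memory coordination for a personalized assistant.

    Hypothetical illustration only; names and behavior are not taken from
    the paper summarized above.
    """

    def __init__(self, short_term_capacity=4):
        # Short-term memory: a sliding window over the current session's turns.
        self.short_term = deque(maxlen=short_term_capacity)
        # Long-term memory: durable per-user facts kept across sessions.
        self.long_term = {}

    def observe(self, user_id, utterance):
        """Record a turn; promote 'remember:'-tagged facts to long-term memory."""
        self.short_term.append(utterance)
        if utterance.lower().startswith("remember:"):
            fact = utterance.split(":", 1)[1].strip()
            self.long_term.setdefault(user_id, []).append(fact)

    def build_context(self, user_id, query):
        """Assemble a prompt from long-term facts plus the recent window."""
        facts = "; ".join(self.long_term.get(user_id, [])) or "none"
        recent = " | ".join(self.short_term) or "none"
        return f"User facts: {facts}\nRecent turns: {recent}\nQuery: {query}"
```

In this sketch, long-term facts survive even after the short-term window evicts the turn that produced them, which is the coordination the title alludes to; a real system would pair such retrieval with a PEFT-adapted LLM rather than raw string assembly.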