LLMGuard: Guarding Against Unsafe LLM Behavior
Format: Article
Language: English
Online Access: Order full text
Abstract: Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and raises legal concerns. To alleviate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content that falls under specific behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
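The record gives no implementation details, but the abstract's core mechanism, running an ensemble of independent detectors over each user interaction and flagging anything that any detector trips on, can be sketched in a few lines. The sketch below is a hypothetical illustration of that idea, not LLMGuard's actual API: the names `Detection`, `keyword_detector`, and `guard`, and the toy keyword-based detectors, are all assumptions.

```python
# Hypothetical sketch of an ensemble-of-detectors guard. The detector
# construction, names, and flagging rule are illustrative assumptions,
# not LLMGuard's actual implementation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    detector: str  # which detector produced this verdict
    flagged: bool  # whether the text was flagged
    reason: str    # short human-readable explanation


# Each detector maps a piece of text to a Detection verdict.
Detector = Callable[[str], Detection]


def keyword_detector(name: str, banned: List[str]) -> Detector:
    """Toy detector that flags text containing any banned term
    (a real detector could be a fine-tuned classifier)."""
    def detect(text: str) -> Detection:
        hits = [term for term in banned if term.lower() in text.lower()]
        reason = f"matched {hits}" if hits else "clean"
        return Detection(name, bool(hits), reason)
    return detect


def guard(text: str, detectors: List[Detector]) -> List[Detection]:
    """Run the whole ensemble; the caller blocks or redacts the
    interaction if any detector flags it."""
    return [detect(text) for detect in detectors]


if __name__ == "__main__":
    ensemble = [
        keyword_detector("toxicity", ["idiot", "moron"]),
        keyword_detector("restricted-topics", ["insider trading"]),
    ]
    for verdict in guard("Any tips on insider trading?", ensemble):
        print(f"{verdict.detector}: flagged={verdict.flagged} ({verdict.reason})")
```

Using several detectors in parallel and flagging on any hit trades precision for recall: a single weak detector cannot silently miss a whole category of unsafe content, which is presumably why an ensemble is described as the robust choice.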
DOI: 10.48550/arxiv.2403.00826