Rule Based Rewards for Language Model Safety
| Main Authors: | |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Summary:

Reinforcement learning based fine-tuning of large language models (LLMs) on human preferences has been shown to enhance both their capabilities and safety behavior. However, in cases related to safety, without precise instructions to human annotators, the data collected may cause the model to become overly cautious, or to respond in an undesirable style, such as being judgmental. Additionally, as model capabilities and usage patterns evolve, there may be a costly need to add or relabel data to modify safety behavior. We propose a novel preference modeling approach that utilizes AI feedback and only requires a small amount of human data. Our method, Rule Based Rewards (RBR), uses a collection of rules for desired or undesired behaviors (e.g., refusals should not be judgmental) along with an LLM grader. In contrast to prior methods using AI feedback, our method uses fine-grained, composable, LLM-graded few-shot prompts as rewards directly in RL training, resulting in greater control, accuracy, and ease of updating. We show that RBRs are an effective training method, achieving an F1 score of 97.1, compared to a human-feedback baseline of 91.7, resulting in much higher safety-behavior accuracy through better balancing of usefulness and safety.
DOI: 10.48550/arxiv.2411.01111
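
The summary describes scoring model outputs against a set of behavior rules with an LLM grader and feeding the combined score directly into RL training as a reward. Below is a minimal sketch of that idea, assuming a hypothetical `grade_with_llm` callable; the rule names, weights, and linear combination are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a rule-based reward combiner, assuming a hypothetical
# grader callable that asks an LLM to score a completion's compliance with
# one rule on a 0-1 scale. Rule names, weights, and the linear combination
# below are illustrative assumptions, not details from the paper.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    name: str          # short identifier, e.g. "refusal_not_judgmental"
    description: str   # natural-language statement of the desired behavior
    weight: float      # contribution of this rule to the total reward


def rule_based_reward(
    prompt: str,
    completion: str,
    rules: List[Rule],
    grade_with_llm: Callable[[str, str, Rule], float],
) -> float:
    """Combine per-rule LLM grades into one scalar reward for RL training."""
    total = 0.0
    for rule in rules:
        score = grade_with_llm(prompt, completion, rule)  # expected in [0, 1]
        total += rule.weight * score
    return total


# Illustrative rule set: encourage polite, non-judgmental refusals and
# discourage refusing requests that are actually safe to answer.
rules = [
    Rule("refusal_not_judgmental", "A refusal should not shame or judge the user.", 1.0),
    Rule("no_unnecessary_refusal", "Safe requests should be answered, not refused.", 1.0),
]

if __name__ == "__main__":
    # Stand-in grader for demonstration; a real setup would query an LLM
    # with a few-shot grading prompt here.
    dummy_grader = lambda prompt, completion, rule: 1.0
    print(rule_based_reward("Can you help me reset my password?",
                            "Sure, here are the steps...",
                            rules, dummy_grader))
```

In a full training setup, such a term would presumably be combined with other reward signals (e.g., a helpfulness reward model) during RL fine-tuning; the sketch only shows how fine-grained, composable rule grades can be reduced to a single scalar.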