Aegis2.0: A Diverse AI Safety Dataset and Risks Taxonomy for Alignment of LLM Guardrails
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: As Large Language Models (LLMs) and generative AI become increasingly widespread, concerns about content safety have grown in parallel. Currently, there is a clear lack of high-quality, human-annotated datasets that address the full spectrum of LLM-related safety risks and are usable for commercial applications. To bridge this gap, we propose a comprehensive and adaptable taxonomy for categorizing safety risks, structured into 12 top-level hazard categories with an extension to 9 fine-grained subcategories. This taxonomy is designed to meet the diverse requirements of downstream users, offering more granular and flexible tools for managing various risk types. Using a hybrid data generation pipeline that combines human annotations with a multi-LLM "jury" system to assess the safety of responses, we obtain Aegis 2.0, a carefully curated collection of 34,248 samples of human-LLM interactions, annotated according to our proposed taxonomy. To validate its effectiveness, we demonstrate that several lightweight models, trained using parameter-efficient techniques on Aegis 2.0, achieve performance competitive with leading safety models fully fine-tuned on much larger, non-commercial datasets. In addition, we introduce a novel training blend that combines safety with topic-following data. This approach enhances the adaptability of guard models, enabling them to generalize to new risk categories defined during inference. We plan to open-source Aegis 2.0 data and models to the research community to aid in the safety guardrailing of LLMs.
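
The abstract's multi-LLM "jury" step lends itself to a compact illustration: several independent judge models each label a human-LLM interaction, and the majority vote becomes the safety label. The sketch below is one minimal reading of that idea; the judge functions, the "safe"/"unsafe" label set, and the conservative tie-breaking rule are hypothetical stand-ins, not the paper's actual models, prompts, or aggregation scheme.

```python
# Minimal sketch of a multi-LLM "jury" for safety labeling. All judges here
# are stubs; in a real pipeline each judge would be a separate LLM queried
# with a safety rubric (an assumption about the setup, not the paper's spec).
from collections import Counter
from typing import Callable, List

# A judge maps a (prompt, response) pair to a verdict: "safe" or "unsafe".
Judge = Callable[[str, str], str]

def jury_verdict(judges: List[Judge], prompt: str, response: str) -> str:
    """Return the majority label across all judges.

    Ties are broken toward "unsafe", a conservative convention assumed
    here rather than taken from the paper.
    """
    votes = Counter(judge(prompt, response) for judge in judges)
    return "safe" if votes["safe"] > votes["unsafe"] else "unsafe"

# Stub judges standing in for calls to distinct LLM-based classifiers.
def keyword_judge(prompt: str, response: str) -> str:
    flagged = ("weapon", "exploit")
    return "unsafe" if any(w in response.lower() for w in flagged) else "safe"

def lenient_judge(prompt: str, response: str) -> str:
    return "safe"

def strict_judge(prompt: str, response: str) -> str:
    return "unsafe" if "attack" in response.lower() else "safe"

if __name__ == "__main__":
    label = jury_verdict(
        [keyword_judge, lenient_judge, strict_judge],
        prompt="How do I secure my home network?",
        response="Use strong passwords and keep your router firmware updated.",
    )
    print(label)  # -> safe (all three stub judges vote "safe")
```
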
DOI: 10.48550/arxiv.2501.09004