WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models
WalledEval is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35 safety benchmarks covering areas such as multilingual safety, exaggerated safety,...
Gespeichert in:
Hauptverfasser: | , , , , , , , , , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | WalledEval is a comprehensive AI safety testing toolkit designed to evaluate
large language models (LLMs). It accommodates a diverse range of models,
including both open-weight and API-based ones, and features over 35 safety
benchmarks covering areas such as multilingual safety, exaggerated safety, and
prompt injections. The framework supports both LLM and judge benchmarking and
incorporates custom mutators to test safety against various text-style
mutations, such as future tense and paraphrasing. Additionally, WalledEval
introduces WalledGuard, a new, small, and performant content moderation tool,
and two datasets: SGXSTest and HIXSTest, which serve as benchmarks for
assessing the exaggerated safety of LLMs and judges in cultural contexts. We
make WalledEval publicly available at https://github.com/walledai/walledeval. |
---|---|
DOI: | 10.48550/arxiv.2408.03837 |