Chinese SafetyQA: A Safety Short-form Factuality Benchmark for Large Language Models
Saved in:
| Main authors: | , , , , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Summary: | With the rapid advancement of Large Language Models (LLMs), significant safety concerns have emerged. Fundamentally, the safety of large language models is closely linked to the accuracy, comprehensiveness, and clarity of their understanding of safety knowledge, particularly in domains such as law, policy, and ethics. This factuality ability is crucial in determining whether these models can be deployed and applied safely and compliantly within specific regions. To address these challenges and better evaluate the factuality ability of LLMs to answer short questions, we introduce the Chinese SafetyQA benchmark. Chinese SafetyQA has several properties (i.e., Chinese, Diverse, High-quality, Static, Easy-to-evaluate, Safety-related, Harmless). Based on Chinese SafetyQA, we perform a comprehensive evaluation of the factuality abilities of existing LLMs and analyze how these capabilities relate to other LLM abilities, e.g., RAG ability and robustness against attacks. |
| DOI: | 10.48550/arxiv.2412.15265 |