ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist Examination
Format: Article
Language: English
Abstract: As ChatGPT and GPT-4 spearhead the development of Large Language Models
(LLMs), more researchers are investigating their performance across various
tasks. However, more research is needed on the interpretability of LLMs, that
is, their ability to generate a rationale for an answer they have given.
Existing explanation datasets consist mostly of English-language
general-knowledge questions, which limits their thematic and linguistic
diversity. To address this language bias and the scarcity of medical QA
datasets with free-text rationales, we present ExplainCPE (over 7k instances),
a challenging medical benchmark in Simplified Chinese. We analyze the errors
of ChatGPT and GPT-4, pointing out the limitations of current LLMs in text
comprehension and computational reasoning. In our experiments, we also found
that different LLMs have different preferences for in-context learning.
ExplainCPE presents a significant challenge, but its potential for further
investigation is promising, and it can be used to evaluate a model's ability
to generate explanations. AI safety and trustworthiness need more attention,
and this work takes a first step toward exploring the medical interpretability
of LLMs. The dataset is available at https://github.com/HITsz-TMG/ExplainCPE.
DOI: 10.48550/arxiv.2305.12945