CLOMO: Counterfactual Logical Modification with Large Language Models
Saved in:

| Main authors: |  |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: |  |
| Online access: | Order full text |
| Summary: | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (ACL 2024). In this study, we delve into the realm of counterfactual reasoning capabilities of large language models (LLMs). Our primary objective is to cultivate the counterfactual thought processes within LLMs and rigorously assess these processes for their validity. Specifically, we introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark. In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship. To effectively evaluate a generation model's counterfactual capabilities, we propose an innovative evaluation metric, the decomposed Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs instead of modeling the task as a multiple-choice problem. Analysis shows that the proposed automatic metric aligns well with human preference. Our experimental results show that while LLMs demonstrate a notable capacity for logical counterfactual thinking, there remains a discernible gap between their current abilities and human performance. Code and data are available at https://github.com/Eleanor-H/CLOMO. |
| DOI: | 10.48550/arxiv.2311.17438 |
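The abstract describes SES as a decomposed self-evaluation of the model's free-form output rather than a multiple-choice score, but does not spell out how the score is computed. The sketch below is a minimal illustrative guess at what such a decomposed metric could look like; the sub-questions, the `ask_llm` query function, and the averaging aggregation are all assumptions for illustration, not the paper's actual implementation (see the linked repository for that).

```python
# Minimal sketch of a decomposed self-evaluation score in the spirit of SES.
# NOTE: illustrative assumption only, not the CLOMO paper's implementation;
# the sub-questions, prompt wording, and aggregation are invented here.
from typing import Callable, List


def decomposed_self_eval(
    ask_llm: Callable[[str], str],  # hypothetical LLM query fn returning "yes"/"no"
    premise: str,
    modified_argument: str,
    relation: str,  # the logical relation the modification must uphold
) -> float:
    """Score a generated modification by averaging binary sub-judgments."""
    sub_questions: List[str] = [
        f"Is the following text fluent and well-formed? Text: {modified_argument}",
        f"Does the text still address the same topic as this premise? Premise: {premise}",
        f"Does the text uphold the relation '{relation}' with respect to: {premise}?",
    ]
    votes = []
    for question in sub_questions:
        answer = ask_llm(question).strip().lower()
        # Each criterion is judged separately as a binary decision...
        votes.append(1.0 if answer.startswith("yes") else 0.0)
    # ...then aggregated, instead of asking for one holistic rating or
    # reducing the task to a multiple-choice problem.
    return sum(votes) / len(votes)
```

The design intuition, consistent with the abstract's framing, is that several narrow yes/no judgments on the generated text tend to be easier for an evaluator model to answer reliably than a single holistic rating, and they directly score the generation itself rather than a multiple-choice proxy.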