When Not to Answer: Evaluating Prompts on GPT Models for Effective Abstention in Unanswerable Math Word Problems
Main authors: |  |
---|---|
Format: | Article |
Language: | eng |
Subjects: |  |
Online access: | Order full text |
Abstract: | Large language models (LLMs) are increasingly relied upon to solve complex
mathematical word problems. However, because they are susceptible to hallucination,
they may generate inaccurate results when presented with unanswerable questions,
raising concerns about potential harm. While GPT models are now widely used and
trusted, how effectively they can abstain from answering unanswerable math problems,
and how their abstention capabilities can be enhanced, has not been rigorously
investigated. In this paper, we investigate whether GPTs can respond appropriately
to unanswerable math word problems when given prompts typically used in solvable
mathematical scenarios. Our experiments use the Unanswerable Word Math Problem (UWMP)
dataset and query GPT model APIs directly. We introduce evaluation metrics that
integrate three key factors: abstention, correctness, and confidence. Our findings
reveal critical gaps in GPT models and the hallucination they suffer from on
unsolvable problems, highlighting the need for improved models capable of better
managing uncertainty and complex reasoning in math word problem-solving contexts. |
---|---|
DOI: | 10.48550/arxiv.2410.13029 |
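
The abstract above describes an evaluation that combines abstention, correctness, and confidence into a single metric, but the record does not specify how. The sketch below is only an illustration of how such a combined score could be computed for one model response; the `Example` fields, the abstention phrase list, the numeric parser, and the scoring rule are all assumptions made for this sketch, not the metric proposed in the paper.

```python
# Illustrative sketch (not the paper's implementation): score one model response
# to a math word problem along the three factors named in the abstract:
# abstention, correctness, and confidence. All names and rules here are assumed.
import re
from dataclasses import dataclass
from typing import Optional

# Assumed phrases that signal the model declined to give a numeric answer.
ABSTAIN_MARKERS = (
    "cannot be determined",
    "not enough information",
    "unanswerable",
    "i cannot answer",
)

@dataclass
class Example:
    question: str
    answer: Optional[float]  # gold answer; None for unanswerable problems
    answerable: bool

def detect_abstention(response: str) -> bool:
    """Heuristic: the model abstained if its reply contains an abstention phrase."""
    text = response.lower()
    return any(marker in text for marker in ABSTAIN_MARKERS)

def extract_number(response: str) -> Optional[float]:
    """Pull the last numeric token out of the reply, if any (crude parser)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response)
    return float(numbers[-1]) if numbers else None

def score(example: Example, response: str, confidence: float) -> float:
    """Assumed rule: unanswerable items are credited for abstaining, answerable
    items for a correct numeric answer, with the credit weighted by the model's
    self-reported confidence; everything else scores zero."""
    abstained = detect_abstention(response)
    if not example.answerable:
        return confidence if abstained else 0.0
    if abstained:
        return 0.0
    predicted = extract_number(response)
    correct = predicted is not None and abs(predicted - example.answer) < 1e-6
    return confidence if correct else 0.0

if __name__ == "__main__":
    ex = Example(
        question="Tom bought some apples. How many does he have now?",
        answer=None,
        answerable=False,
    )
    # An appropriate abstention on an unanswerable item earns the stated confidence.
    print(score(ex, "There is not enough information to answer.", confidence=0.9))
```

In practice the response and confidence would come from a GPT model API call over the UWMP dataset, as the abstract indicates; the heuristic abstention detector and the confidence weighting above are stand-ins for whatever procedure the paper actually uses.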