Benchmarking Language Model Creativity: A Case Study on Code Generation
Saved in:
Main authors: | , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | As LLMs become increasingly prevalent, it is interesting to consider
how ``creative'' these models can be. From cognitive science, creativity
consists of at least two key characteristics: \emph{convergent} thinking
(purposefulness to achieve a given goal) and \emph{divergent} thinking
(adaptability to new environments or constraints) \citep{runco2003critical}.
In this work, we introduce a framework for quantifying LLM creativity that
incorporates both characteristics. This is achieved by (1) Denial Prompting,
which pushes LLMs to produce more creative solutions to a given problem by
incrementally imposing new constraints on the previous solution, compelling
LLMs to adopt new strategies, and (2) defining and computing the NeoGauge
metric, which examines both convergent and divergent thinking in the creative
responses generated by LLMs. We apply the proposed framework to Codeforces
problems, a natural data source for collecting human coding solutions. We
quantify NeoGauge for various proprietary and open-source models and find that
even the most creative model, GPT-4, still falls short of demonstrating
human-like creativity. We also experiment with advanced reasoning strategies
(MCTS, self-correction, etc.) and observe no significant improvement in
creativity. As a by-product of our analysis, we release the NeoCoder dataset
for reproducing our results on future models. |
DOI: | 10.48550/arxiv.2407.09007 |
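The Denial Prompting procedure summarized above can be sketched as a simple loop: after each solution, the techniques it used are banned in the next round's prompt. This is a minimal illustration, not the paper's implementation; `query_llm` and `detect_techniques` are hypothetical stand-ins for a real model call and for the paper's technique-detection step.

```python
def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; returns a canned string here.
    return f"solution for: {prompt!r}"


def detect_techniques(solution: str) -> set[str]:
    # Hypothetical stand-in for detecting which programming techniques
    # (loops, recursion, etc.) a solution used; trivial stub here.
    return {"loop"} if "loop" in solution else {"recursion"}


def denial_prompting(problem: str, rounds: int = 3) -> list[str]:
    """Iteratively ban techniques seen in the previous solution,
    compelling the model to adopt new strategies each round."""
    banned: set[str] = set()
    solutions: list[str] = []
    for _ in range(rounds):
        constraint = (
            "Do not use: " + ", ".join(sorted(banned)) if banned else ""
        )
        solution = query_llm(f"{problem}\n{constraint}")
        solutions.append(solution)
        banned |= detect_techniques(solution)  # grow the constraint set
    return solutions
```

Each round's prompt carries the accumulated ban list, so later solutions are generated under strictly more constraints than earlier ones.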