OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large language models (LLMs) have exhibited their problem-solving abilities in mathematical reasoning. Solving realistic optimization (OPT) problems in application scenarios requires advanced applied-mathematics ability. However, current OPT benchmarks, which cover only linear programming, are far from complex realistic situations. In this work, we propose OptiBench, a benchmark for end-to-end optimization problem solving with human-readable inputs and outputs. OptiBench contains rich optimization problems, including linear and nonlinear programming with or without tabular data, which can comprehensively evaluate LLMs' solving ability. In our benchmark, LLMs are required to call a code solver to produce precise numerical answers. Furthermore, to alleviate the data scarcity for optimization problems, and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method named ReSocratic. Unlike general data synthesis methods that proceed from questions to answers, ReSocratic first incrementally synthesizes formatted optimization demonstrations with mathematical formulations, step by step, and then back-translates the generated demonstrations into questions. Based on this, we synthesize the ReSocratic-29k dataset. We further conduct supervised fine-tuning with ReSocratic-29k on multiple open-source models. Experimental results show that ReSocratic-29k significantly improves the performance of open-source models.
DOI: 10.48550/arxiv.2407.09887
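The abstract states that in OptiBench, LLMs must call a code solver to produce precise numerical answers. A minimal sketch of what such a solver call looks like, using SciPy's `linprog` on an invented toy linear program (the problem data below is illustrative only, not taken from the benchmark):

```python
# Illustrative solver call of the kind an LLM might emit: maximize 3x + 2y
# subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# Problem data is invented for illustration, not from OptiBench.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize.
res = linprog(
    c=[-3, -2],                    # minimize -3x - 2y  (i.e., maximize 3x + 2y)
    A_ub=[[1, 1], [1, 3]],         # x + y <= 4;  x + 3y <= 6
    b_ub=[4, 6],
    bounds=[(0, None), (0, None)], # x, y >= 0
)
print(round(-res.fun, 4))  # → 12.0 (optimum at x=4, y=0)
```

Nonlinear problems in the benchmark would require a different backend (e.g., `scipy.optimize.minimize`), but the pattern is the same: the model emits code, and the solver supplies the precise numerical answer.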