PatentEval: Understanding Errors in Patent Generation
NAACL2024 - 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Jun 2024, Mexico City, Mexico
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | NAACL2024 - 2024 Annual Conference of the North American Chapter
of the Association for Computational Linguistics, Jun 2024, Mexico City,
Mexico. In this work, we introduce a comprehensive error typology specifically
designed for evaluating two distinct tasks in machine-generated patent texts:
claims-to-abstract generation, and the generation of the next claim given
previous ones. We have also developed a benchmark, PatentEval, for
systematically assessing language models in this context. Our study includes a
human-annotated comparative analysis of various models, ranging from those
specifically adapted during training for tasks in the patent domain to the
latest general-purpose large language models (LLMs). Furthermore, we explored
and evaluated several metrics that approximate human judgments in patent text
evaluation, analyzing the extent to which they align with expert assessments.
These approaches provide valuable insights into the capabilities and
limitations of current language models in the specialized field of patent
text generation. |
DOI: | 10.48550/arxiv.2406.06589 |