Large Language Models as Sous Chefs: Revising Recipes with GPT-3
With their remarkably improved text generation and prompting capabilities, large language models can adapt existing written information into forms that are easier to use and understand. In our work, we focus on recipes as an example of complex, diverse, and widely used instructions. We develop a prompt grounded in the original recipe and ingredients list that breaks recipes down into simpler steps. We apply this prompt to recipes from various world cuisines, and experiment with several large language models (LLMs), finding best results with GPT-3.5. We also contribute an Amazon Mechanical Turk task that is carefully designed to reduce fatigue while collecting human judgment of the quality of recipe revisions. We find that annotators usually prefer the revision over the original, demonstrating a promising application of LLMs in serving as digital sous chefs for recipes and beyond. We release our prompt, code, and MTurk template for public use.
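The abstract describes prompting an LLM, with best results from GPT-3.5, on the original recipe and its ingredient list so that the model rewrites the recipe as simpler steps. The authors release their actual prompt and code separately; the snippet below is only a minimal illustrative sketch of that kind of call with the OpenAI Python client, using a hypothetical prompt wording, a made-up example recipe, and the assumption that gpt-3.5-turbo is an acceptable stand-in for the model used in the paper.

```python
# Minimal sketch (not the authors' released prompt): ask GPT-3.5 to revise a
# recipe into simpler steps, grounded in the original steps and ingredients.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Made-up example recipe and ingredient list, for illustration only.
ingredients = ["2 eggs", "1 cup flour", "1 cup milk", "pinch of salt"]
original_recipe = (
    "Whisk the eggs with the milk, fold in the flour and salt until smooth, "
    "then ladle into a hot greased pan and cook until golden on both sides."
)

# Hypothetical prompt wording; the paper's own prompt is published with its code.
prompt = (
    "You are a helpful sous chef. Using only the ingredients listed, rewrite "
    "the recipe below as short, numbered steps that are easy to follow.\n\n"
    "Ingredients:\n- " + "\n- ".join(ingredients) + "\n\n"
    "Recipe:\n" + original_recipe
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,  # keep revisions stable across runs
)

print(response.choices[0].message.content)
```

In the paper, revisions produced this way are then compared against the original recipes by Mechanical Turk annotators, who usually prefer the revised version.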
Saved in:
Published in: | arXiv.org 2023-06 |
---|---|
Main authors: | Hwang, Alyssa; Li, Bryan; Hou, Zhaoyi; Roth, Dan |
Format: | Article |
Language: | eng |
Subjects: | Fatigue; Large language models; Revisions |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Hwang, Alyssa; Li, Bryan; Hou, Zhaoyi; Roth, Dan |
description | With their remarkably improved text generation and prompting capabilities, large language models can adapt existing written information into forms that are easier to use and understand. In our work, we focus on recipes as an example of complex, diverse, and widely used instructions. We develop a prompt grounded in the original recipe and ingredients list that breaks recipes down into simpler steps. We apply this prompt to recipes from various world cuisines, and experiment with several large language models (LLMs), finding best results with GPT-3.5. We also contribute an Amazon Mechanical Turk task that is carefully designed to reduce fatigue while collecting human judgment of the quality of recipe revisions. We find that annotators usually prefer the revision over the original, demonstrating a promising application of LLMs in serving as digital sous chefs for recipes and beyond. We release our prompt, code, and MTurk template for public use. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2829985231 |
source | Free E-Journals |
subjects | Fatigue; Large language models; Revisions |
title | Large Language Models as Sous Chefs: Revising Recipes with GPT-3 |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T02%3A47%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Large%20Language%20Models%20as%20Sous%20Chefs:%20Revising%20Recipes%20with%20GPT-3&rft.jtitle=arXiv.org&rft.au=Hwang,%20Alyssa&rft.date=2023-06-24&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2829985231%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2829985231&rft_id=info:pmid/&rfr_iscdi=true |