Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance

While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether or not analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training, models approach human performance.
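As background to the abstract's first sentence (this is the conventional embedding-evaluation setup, not the training method proposed in the paper): word-embedding analogy benchmarks are typically scored with the vector-offset ("3CosAdd") rule, which answers "a is to b as c is to ?" by finding the word closest to b - a + c. A minimal sketch with made-up 3-dimensional vectors:

```python
import numpy as np

# Toy embeddings, invented for illustration only -- real evaluations
# use trained embeddings such as word2vec or GloVe vectors.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def solve_analogy(a, b, c, emb):
    """Answer 'a : b :: c : ?' by maximizing cos(v, b - a + c) (3CosAdd)."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):  # query words are excluded, as is standard
            continue
        sim = float(vec @ target / np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(solve_analogy("man", "king", "woman", emb))  # prints: queen
```

The paper's point of departure is that such benchmark analogies (largely lexical or morphological relations) differ from the relational analogies used to test humans, which is why the authors treat analogical reasoning as a task to be learned rather than only a probe of embeddings.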

Bibliographic details
Main authors: Petersen, Molly R; van der Plas, Lonneke
Format: Article
Language: English
Date: 2023-10-09
DOI: 10.48550/arxiv.2310.05597
Source: arXiv.org
Subjects: Computer Science - Computation and Language