Evaluating Linguistic Capabilities of Multimodal LLMs in the Lens of Few-Shot Learning

The linguistic capabilities of Multimodal Large Language Models (MLLMs) are critical for their effective application across diverse tasks. This study aims to evaluate the performance of MLLMs on the VALSE benchmark, focusing on the efficacy of few-shot In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting. We conducted a comprehensive assessment of state-of-the-art MLLMs, varying in model size and pretraining datasets. The experimental results reveal that ICL and CoT prompting significantly boost model performance, particularly in tasks requiring complex reasoning and contextual understanding. Models pretrained on captioning datasets show superior zero-shot performance, while those trained on interleaved image-text data benefit from few-shot learning. Our findings provide valuable insights into optimizing MLLMs for better grounding of language in visual contexts, highlighting the importance of the composition of pretraining data and the potential of few-shot learning strategies to improve the reasoning abilities of MLLMs.

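The abstract names few-shot In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting as the evaluation strategies under study. Purely as an illustration (not code from the paper), the minimal Python sketch below shows how a few-shot, optionally CoT-style prompt could be assembled for a VALSE-like caption/foil judgement task; the build_prompt helper, dictionary fields, and question wording are all hypothetical.

```python
# Hypothetical sketch of few-shot ICL prompt construction for an MLLM.
# The structure (interleaved image/text parts) and all field names are
# illustrative assumptions, not taken from the paper.

def build_prompt(support_examples, query, use_cot=False):
    """Assemble an interleaved image-text prompt from k support examples."""
    parts = []
    for ex in support_examples:  # each ex: {"image", "caption", "label", optional "rationale"}
        parts.append({"image": ex["image"]})
        answer = ex["label"]
        if use_cot and "rationale" in ex:
            # Demonstrate reasoning before the final answer (CoT-style exemplar).
            answer = f'{ex["rationale"]} Therefore, the answer is {ex["label"]}.'
        parts.append({
            "text": f'Does the caption match the image? Caption: "{ex["caption"]}" Answer: {answer}'
        })
    # Append the unanswered query; the model is expected to continue after "Answer:".
    parts.append({"image": query["image"]})
    suffix = " Let's think step by step." if use_cot else ""
    parts.append({
        "text": f'Does the caption match the image? Caption: "{query["caption"]}" Answer:{suffix}'
    })
    return parts
```

In this sketch, zero-shot evaluation corresponds to calling build_prompt with an empty support set, while the few-shot and CoT conditions add exemplars and rationales respectively.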
Bibliographic Details
Published in: arXiv.org, 2024-07
Main authors: Dogan, Mustafa; Kesen, Ilker; Iacer Calixto; Erdem, Aykut; Erdem, Erkut
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Subjects: Datasets; Effectiveness; Large language models; Linguistics; Performance evaluation; Prompt engineering; Reasoning; Task complexity
Online access: Full text