TILFA: A Unified Framework for Text, Image, and Layout Fusion in Argument Mining

A main goal of Argument Mining (AM) is to analyze an author's stance. Unlike previous AM datasets that focus only on text, the shared task at the 10th Workshop on Argument Mining introduces a dataset that includes both text and images. Importantly, these images contain both visual elements and optical characters. Our new framework, TILFA (A Unified Framework for Text, Image, and Layout Fusion in Argument Mining), is designed to handle this mixed data. It excels not only at understanding text but also at detecting optical characters and recognizing layout details in images. Our model significantly outperforms existing baselines, earning our team, KnowComp, first place on the leaderboard of the shared task's Argumentative Stance Classification subtask.
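
The record gives no implementation details beyond this abstract, but the idea it describes (fusing a text encoder with an image branch that also reads optical characters and layout) can be sketched in a few lines. Everything below is a hedged illustration, not the authors' TILFA architecture: the checkpoints ("roberta-base", "microsoft/layoutlmv3-base"), the late-fusion strategy, and the binary label count are all assumptions.

```python
# Illustrative late-fusion sketch for argumentative stance classification
# over (text, image) pairs. NOT the TILFA architecture from the paper;
# checkpoints, fusion strategy, and label count are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel


class LateFusionStanceClassifier(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        # Text branch: encodes the argumentative text itself.
        self.text_encoder = AutoModel.from_pretrained("roberta-base")
        # Image branch: a LayoutLMv3-style encoder that jointly embeds image
        # patches, OCR'd words, and their bounding boxes, covering the
        # "optical characters" and "layout details" the abstract mentions.
        self.layout_encoder = AutoModel.from_pretrained("microsoft/layoutlmv3-base")
        fused_dim = (self.text_encoder.config.hidden_size
                     + self.layout_encoder.config.hidden_size)
        self.classifier = nn.Linear(fused_dim, num_labels)

    def forward(self, text_inputs: dict, layout_inputs: dict) -> torch.Tensor:
        # Pool each branch with its first ([CLS]-style) token embedding.
        text_vec = self.text_encoder(**text_inputs).last_hidden_state[:, 0]
        layout_vec = self.layout_encoder(**layout_inputs).last_hidden_state[:, 0]
        # Late fusion: concatenate the pooled vectors, then classify stance.
        return self.classifier(torch.cat([text_vec, layout_vec], dim=-1))
```

In practice, text_inputs would come from the matching tokenizer and layout_inputs from LayoutLMv3Processor, which runs OCR (Tesseract) over the image and supplies the input_ids, bbox, and pixel_values the layout encoder expects.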

Bibliographic details

Main authors: Zong, Qing; Wang, Zhaowei; Xu, Baixuan; Zheng, Tianshi; Shi, Haochen; Wang, Weiqi; Song, Yangqiu; Wong, Ginny Y.; See, Simon
Format: Article
Language: English
Published: 2023-10-08 (arXiv)
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
DOI: 10.48550/arxiv.2310.05210
Source: arXiv.org
Online access: https://arxiv.org/abs/2310.05210