Zero-Shot Character Identification and Speaker Prediction in Comics via Iterative Multimodal Fusion

Bibliographic Details
Main Authors: Li, Yingxuan; Hinami, Ryota; Aizawa, Kiyoharu; Matsui, Yusuke
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia
Online access: https://arxiv.org/abs/2404.13993

Description: Recognizing characters and predicting the speakers of dialogue are critical for comic processing tasks such as voice generation and translation. However, because characters vary from one comic title to another, supervised learning approaches such as training character classifiers are infeasible: they require annotations specific to each title. This motivates us to propose a novel zero-shot approach that allows machines to identify characters and predict speaker names based solely on unannotated comic images. Despite their importance in real-world applications, these tasks have largely remained unexplored due to the challenges of story comprehension and multimodal integration. Recent large language models (LLMs) have shown great capability for text understanding and reasoning, but their application to multimodal content analysis remains an open problem. To address this, we propose an iterative multimodal framework, the first to employ multimodal information for both the character identification and speaker prediction tasks. Our experiments demonstrate the effectiveness of the proposed framework, establishing a robust baseline for these tasks. Furthermore, since our method requires no training data or annotations, it can be used as-is on any comic series.
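
The abstract describes the framework only at a high level, so the following is a minimal sketch of one plausible reading of the iterative fusion loop: a text-side module and an image-side module alternately relabel the story until the labels stabilize. All names here (Page, predict_speakers, identify_characters) and the stopping rule are hypothetical illustrations, not the authors' implementation.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Page:
    dialogues: List[str]   # dialogue texts in reading order (e.g., from OCR)
    regions: List[int]     # ids of character regions found by a detector

def iterative_fusion(
    pages: List[Page],
    predict_speakers: Callable[[List[Page], Dict[int, str]], List[str]],
    identify_characters: Callable[[List[Page], List[str]], Dict[int, str]],
    max_iters: int = 5,
) -> Tuple[List[str], Dict[int, str]]:
    """Alternate the two tasks until the speaker labels stop changing.

    predict_speakers: text-side module (e.g., an LLM reading the dialogue)
        that names a speaker for every dialogue line, given the current
        region -> character-name map as visual evidence.
    identify_characters: image-side module that assigns a character name
        to every detected region, given the current speaker labels (e.g.,
        a region next to a speech balloon inherits that balloon's speaker).
    """
    region_names: Dict[int, str] = {}  # no visual identities at the start
    speakers: List[str] = []
    for _ in range(max_iters):
        new_speakers = predict_speakers(pages, region_names)
        region_names = identify_characters(pages, new_speakers)
        if new_speakers == speakers:   # fixed point: labels converged
            break
        speakers = new_speakers
    return speakers, region_names

The appeal of such an alternation is that each modality supplies the evidence the other lacks: the text side can infer names from the story, while the image side can keep an identity consistent across panels, so each pass can correct errors made by the previous one and no per-title annotations are ever needed.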

Date: 2024-04-22
Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0)
DOI: 10.48550/arxiv.2404.13993
Source: arXiv.org