ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images
Saved in:
Main authors: | Van Nguyen, Quan; Tran, Dan Quang; Pham, Huy Quang; Nguyen, Thang Kien-Bao; Nguyen, Nghia Hieu; Van Nguyen, Kiet; Nguyen, Ngan Luu-Thuy |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Van Nguyen, Quan; Tran, Dan Quang; Pham, Huy Quang; Nguyen, Thang Kien-Bao; Nguyen, Nghia Hieu; Van Nguyen, Kiet; Nguyen, Ngan Luu-Thuy |
description | Visual Question Answering (VQA) is a complex task that requires the
capability to process natural language and images simultaneously. Early
research on this task focused on methods that help machines understand
objects and scene contexts in images. However, text appearing in an image,
which carries explicit information about the full content of the image, was
not addressed. Alongside the continuous development of AI, many studies
worldwide have examined the reading comprehension ability of VQA models.
In Vietnam, a developing country where resources are still limited, this
task remains open. Therefore, we introduce the first large-scale Vietnamese
dataset specializing in the ability to understand text appearing in images,
which we call ViTextVQA (\textbf{Vi}etnamese \textbf{Text}-based \textbf{V}isual
\textbf{Q}uestion \textbf{A}nswering dataset) and which contains \textbf{over
16,000} images and \textbf{over 50,000} questions with answers. Through
meticulous experiments with various state-of-the-art models, we uncover the
significance of the order in which tokens in OCR text are processed and
selected to formulate answers. This finding helped us significantly improve the
performance of the baseline models on the ViTextVQA dataset. Our dataset is
available at this
\href{https://github.com/minhquan6203/ViTextVQA-Dataset}{link} for research
purposes. |
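The abstract's key finding concerns the order in which OCR tokens are processed when forming answers. A minimal sketch of one common reading-order heuristic (bucket tokens into rows top-to-bottom, then sort each row left-to-right) is below; the function name, the `(text, x, y)` token format, and the `row_tol` threshold are illustrative assumptions, not the paper's published procedure.

```python
def reading_order(tokens, row_tol=10):
    """Arrange OCR tokens in natural reading order.

    Each token is (text, x, y), where (x, y) is the top-left corner of
    its bounding box. Tokens whose y-coordinates lie within `row_tol`
    pixels of the previous token's row are treated as one text line.
    """
    # Pre-sort by vertical position, then horizontal, so row bucketing
    # scans the image top-to-bottom.
    ordered = sorted(tokens, key=lambda t: (t[2], t[1]))
    rows = []
    for tok in ordered:
        if rows and abs(tok[2] - rows[-1][-1][2]) <= row_tol:
            rows[-1].append(tok)  # same text line as the previous token
        else:
            rows.append([tok])    # start a new text line
    # Within each line, read left-to-right.
    return [t[0] for row in rows for t in sorted(row, key=lambda t: t[1])]


# Hypothetical tokens from a shop sign, in arbitrary OCR output order:
tokens = [("PHO", 120, 8), ("QUAN", 10, 5), ("24", 15, 40)]
print(reading_order(tokens))  # → ['QUAN', 'PHO', '24']
```

The same tokens fed in raw OCR order would read "PHO QUAN 24"; re-ordering them before answer selection is the kind of preprocessing whose impact the paper's experiments measure.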
doi_str_mv | 10.48550/arxiv.2404.10652 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2404.10652 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2404_10652 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T23%3A41%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=ViTextVQA:%20A%20Large-Scale%20Visual%20Question%20Answering%20Dataset%20for%20Evaluating%20Vietnamese%20Text%20Comprehension%20in%20Images&rft.au=Van%20Nguyen,%20Quan&rft.date=2024-04-16&rft_id=info:doi/10.48550/arxiv.2404.10652&rft_dat=%3Carxiv_GOX%3E2404_10652%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |