LEAF-QA: Locate, Encode & Attend for Figure Question Answering
We introduce LEAF-QA, a comprehensive dataset of 250,000 densely annotated figures/charts, constructed from real-world open data sources, along with roughly 2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA) and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. Because LEAF-QA is constructed from real-world sources, it requires a novel architecture to enable question answering. To this end, we propose LEAF-Net, a deep architecture involving chart element localization, question and answer encoding in terms of chart elements, and an attention network. Different experiments are conducted to demonstrate the challenges of QA on LEAF-QA. The proposed architecture, LEAF-Net, also considerably advances the current state of the art on FigureQA and DVQA.
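The abstract sketches LEAF-Net as a three-stage pipeline: locate chart elements, encode the question and answers in terms of those elements, and attend over them to predict an answer. The PyTorch sketch below is one plausible reading of that pipeline, not the authors' implementation; every name, dimension, and the `<elem_i>` placeholder scheme (`encode_in_chart_terms`, `LeafNetSketch`, `elem_feat_dim`) is a hypothetical illustration, and element localization is assumed to be handled by an upstream detector that supplies per-element features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def encode_in_chart_terms(tokens, element_texts):
    """Rewrite question tokens that match detected chart-element strings as
    positional placeholders (e.g. 'Brazil' -> '<elem_2>'), so the model sees
    chart text by position rather than by vocabulary. Hypothetical scheme."""
    lookup = {text.lower(): f"<elem_{i}>" for i, text in enumerate(element_texts)}
    return [lookup.get(tok.lower(), tok) for tok in tokens]

class LeafNetSketch(nn.Module):
    """Locate-encode-attend sketch: localization is assumed done upstream,
    so the model receives one feature vector per detected chart element."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 elem_feat_dim=512, num_answers=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.question_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.elem_proj = nn.Linear(elem_feat_dim, hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_answers)

    def forward(self, question_ids, element_feats):
        # question_ids: (B, T) token ids after placeholder substitution
        # element_feats: (B, N, elem_feat_dim) features of localized elements
        emb = self.embed(question_ids)
        _, (h_n, _) = self.question_rnn(emb)
        q = h_n.squeeze(0)                                    # (B, H)
        elems = self.elem_proj(element_feats)                 # (B, N, H)
        # Attend: question-conditioned attention over chart elements.
        scores = torch.bmm(elems, q.unsqueeze(2)).squeeze(2)  # (B, N)
        weights = F.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), elems).squeeze(1)  # (B, H)
        return self.classifier(torch.cat([q, context], dim=1))
```

Substituting detected element strings for raw tokens is what would let such a model handle chart labels never seen in training, which is presumably why the abstract stresses encoding "in terms of chart elements"; answers over placeholder positions generalize where a fixed answer vocabulary cannot.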
creator | Chaudhry, Ritwick; Shekhar, Sumit; Gupta, Utkarsh; Maneriker, Pranav; Bansal, Prann; Joshi, Ajay |
doi_str_mv | 10.48550/arxiv.1907.12861 |
format | Article |
identifier | DOI: 10.48550/arxiv.1907.12861 |
language | eng |
recordid | cdi_arxiv_primary_1907_12861 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | LEAF-QA: Locate, Encode & Attend for Figure Question Answering |
url | https://arxiv.org/abs/1907.12861 |