Bench-Marking And Improving Arabic Automatic Image Captioning Through The Use Of Multi-Task Learning Paradigm

The continuous increase in the use of social media and visual content on the internet has accelerated research in the computer vision field in general and the image captioning task in particular. Generating a caption that best describes an image is useful for various applications, such as image indexing and serving as an auditory aid for the visually impaired. In recent years, the image captioning task has witnessed remarkable advances in both datasets and architectures, and as a result, captioning quality has reached impressive levels. However, the majority of these advances, especially in datasets, target English, leaving other languages such as Arabic lagging behind. The Arabic language, despite being spoken by more than 450 million people and being the fastest-growing language on the internet, lacks the fundamental pillars needed to advance its image captioning research, such as benchmarks and unified datasets. This work is an attempt to expedite progress on this task by providing unified datasets and benchmarks, while also exploring methods and techniques that could enhance the performance of Arabic image captioning. The use of multi-task learning is explored, alongside various word representations and different features. The results show that multi-task learning and pre-trained word embeddings noticeably enhance the quality of image captioning; however, Arabic captioning still lags behind English. The dataset and code used are available at this link.
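The abstract does not detail the model, but the multi-task idea it describes (a shared image representation trained jointly for captioning and an auxiliary objective, with word embeddings that can be initialised from pre-trained vectors) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the module names, dimensions, the tag-classification auxiliary task, and the 0.3 task weight are hypothetical and not taken from the paper.

    # Illustrative sketch only: the paper's exact architecture and auxiliary
    # task are not specified in this record. This assumes a common
    # hard-parameter-sharing multi-task setup with a hypothetical
    # tag-classification head.
    import torch
    import torch.nn as nn

    class MultiTaskCaptioner(nn.Module):
        def __init__(self, vocab_size, num_tags,
                     embed_dim=256, hidden_dim=512, feat_dim=2048):
            super().__init__()
            # Shared projection of pre-extracted image features (e.g. CNN output).
            self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
            # Word embeddings; self.embed.weight could be initialised from
            # pre-trained Arabic word vectors, as the abstract suggests.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Captioning head: an LSTM decoder conditioned on the shared features.
            self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.vocab_out = nn.Linear(hidden_dim, vocab_size)
            # Auxiliary head: multi-label tag prediction sharing the same encoder.
            self.tag_out = nn.Linear(hidden_dim, num_tags)

        def forward(self, feats, captions):
            shared = self.encoder(feats)          # (B, hidden_dim)
            h0 = shared.unsqueeze(0)              # initial hidden state for the LSTM
            c0 = torch.zeros_like(h0)
            out, _ = self.decoder(self.embed(captions), (h0, c0))
            return self.vocab_out(out), self.tag_out(shared)

    # Joint objective: captioning cross-entropy plus a weighted auxiliary term.
    model = MultiTaskCaptioner(vocab_size=10000, num_tags=80)
    feats = torch.randn(4, 2048)                  # dummy image features
    captions = torch.randint(0, 10000, (4, 12))   # dummy input token ids
    targets = torch.randint(0, 10000, (4, 12))    # dummy target token ids
    tags = torch.randint(0, 2, (4, 80)).float()   # dummy multi-label tags

    logits, tag_logits = model(feats, captions)
    cap_loss = nn.functional.cross_entropy(logits.reshape(-1, 10000),
                                           targets.reshape(-1))
    aux_loss = nn.functional.binary_cross_entropy_with_logits(tag_logits, tags)
    loss = cap_loss + 0.3 * aux_loss              # 0.3 is an arbitrary task weight
    loss.backward()

Hard parameter sharing of the encoder is the simplest common multi-task variant; the auxiliary loss regularises the shared representation, which is one plausible reading of why multi-task learning improved captioning quality in this work.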


Bibliographic Details
Main Authors: Za'ter, Muhy Eddin; Talafha, Bashar
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Za'ter, Muhy Eddin ; Talafha, Bashar
description The continuous increase in the use of social media and visual content on the internet has accelerated research in the computer vision field in general and the image captioning task in particular. Generating a caption that best describes an image is useful for various applications, such as image indexing and serving as an auditory aid for the visually impaired. In recent years, the image captioning task has witnessed remarkable advances in both datasets and architectures, and as a result, captioning quality has reached impressive levels. However, the majority of these advances, especially in datasets, target English, leaving other languages such as Arabic lagging behind. The Arabic language, despite being spoken by more than 450 million people and being the fastest-growing language on the internet, lacks the fundamental pillars needed to advance its image captioning research, such as benchmarks and unified datasets. This work is an attempt to expedite progress on this task by providing unified datasets and benchmarks, while also exploring methods and techniques that could enhance the performance of Arabic image captioning. The use of multi-task learning is explored, alongside various word representations and different features. The results show that multi-task learning and pre-trained word embeddings noticeably enhance the quality of image captioning; however, Arabic captioning still lags behind English. The dataset and code used are available at this link.
doi_str_mv 10.48550/arxiv.2202.05474
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2202.05474
language eng
recordid cdi_arxiv_primary_2202_05474
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Bench-Marking And Improving Arabic Automatic Image Captioning Through The Use Of Multi-Task Learning Paradigm
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T07%3A01%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Bench-Marking%20And%20Improving%20Arabic%20Automatic%20Image%20Captioning%20Through%20The%20Use%20Of%20Multi-Task%20Learning%20Paradigm&rft.au=Za'ter,%20Muhy%20Eddin&rft.date=2022-02-11&rft_id=info:doi/10.48550/arxiv.2202.05474&rft_dat=%3Carxiv_GOX%3E2202_05474%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true