Towards bridging the semantic gap between image and text: An empirical approach

Vlogs, recordings, news, and sports coverage are huge sources of multimodal information that is not limited to text but extends to audio, images, and videos. Applications such as summary generation, image/video captioning, multimodal sentiment analysis, and cross-modal retrieval require Computer Vision along with Natural Language Processing techniques to extract relevant information. Information from different modalities must be leveraged in order to extract quality content; hence, reducing the gap between different modalities is of utmost importance. Image-to-text conversion is an emerging field that employs the encoder-decoder architecture: deep CNNs extract image features, and sequence-to-sequence models generate the text description. This paper contributes to the growing body of research in multimodal information retrieval. To generate textual descriptions of images, we performed five experiments on the benchmark Flickr8k dataset, utilizing different architectures: a simple sequence-to-sequence model, an attention mechanism, and a transformer-based architecture, among others. The results were evaluated using the BLEU score and show that the best descriptions are attained by using the transformer architecture. We also compared our results with the pretrained visual model vit-gpt2, which incorporates a vision transformer.
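
The evaluation pipeline the abstract describes (generate a caption with a ViT encoder and GPT-2 decoder, then score it against reference captions with BLEU) can be sketched briefly. The sketch below is illustrative only, not the authors' code: it uses the publicly available Hugging Face checkpoint nlpconnect/vit-gpt2-image-captioning as an assumed stand-in for the vit-gpt2 model the paper compares against, with a hypothetical image path and dummy reference captions.

from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from PIL import Image

# Assumed checkpoint; the paper does not name the exact vit-gpt2 weights used.
checkpoint = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)
processor = ViTImageProcessor.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Encode the image with the ViT encoder, then decode a caption with GPT-2.
image = Image.open("example.jpg").convert("RGB")  # hypothetical image path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Flickr8k ships five reference captions per image; two dummy references
# stand in here. Smoothing avoids zero BLEU for short captions.
references = [
    "a dog runs across the grassy field".split(),
    "a brown dog is running on the grass".split(),
]
score = sentence_bleu(references, caption.split(),
                      smoothing_function=SmoothingFunction().method1)
print(f"caption: {caption}\nBLEU-4: {score:.3f}")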

Bibliographic Details
Published in: Journal of Intelligent & Fuzzy Systems, 2024-04, p. 1-13
Main authors: Javed, Hira; Sufyan Beg, M.M.; Akhtar, Nadeem; Alroobaea, Roobaea
Format: Article
Language: English
Online access: Full text
DOI: 10.3233/JIFS-219394
ISSN: 1064-1246
EISSN: 1875-8967
Source: EBSCOhost Business Source Complete