Visuals to Text: A Comprehensive Review on Automatic Image Captioning
Image captioning refers to the automatic generation of descriptive text according to the visual content of images. It is a technique integrating multiple disciplines, including computer vision (CV), natural language processing (NLP), and artificial intelligence. In recent years, substantial research...
Saved in:
Published in: | IEEE/CAA journal of automatica sinica 2022-08, Vol.9 (8), p.1339-1365 |
---|---|
Main authors: | Ming, Yue; Hu, Nannan; Fan, Chunxiao; Feng, Fan; Zhou, Jiangwan; Yu, Hui |
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Order full text |
container_end_page | 1365 |
---|---|
container_issue | 8 |
container_start_page | 1339 |
container_title | IEEE/CAA journal of automatica sinica |
container_volume | 9 |
creator | Ming, Yue; Hu, Nannan; Fan, Chunxiao; Feng, Fan; Zhou, Jiangwan; Yu, Hui |
description | Image captioning refers to the automatic generation of descriptive text according to the visual content of images. It is a technique integrating multiple disciplines, including computer vision (CV), natural language processing (NLP), and artificial intelligence. In recent years, substantial research effort has been devoted to generating image captions, with impressive progress. To summarize these advances, we present a comprehensive review of image captioning, covering both traditional methods and recent deep learning-based techniques. Specifically, we first briefly review early traditional works based on retrieval and templates. We then focus on deep learning-based image captioning research, which we categorize into the encoder-decoder framework, attention mechanisms, and training strategies on the basis of model structure and training manner. After that, we summarize the publicly available datasets, the evaluation metrics, and metrics proposed for specific requirements, and compare state-of-the-art methods on the MS COCO dataset. Finally, we discuss open challenges and future research directions. |
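The attention mechanism that the surveyed encoder-decoder captioning models apply over image features can be sketched minimally. The following illustrative NumPy snippet (all names, dimensions, and weight shapes are hypothetical, not from the paper) shows one decoding step of additive soft attention: the decoder state scores each encoded image region, and a softmax-weighted sum of region features forms the context vector that conditions the next word.

```python
import numpy as np

def soft_attention(regions, hidden, W_r, W_h, v):
    """One step of additive soft attention over image regions.

    regions: (k, d) image region features from the CNN encoder
    hidden:  (h,)   current decoder (RNN) hidden state
    Returns the context vector (d,) and attention weights (k,).
    """
    # Score each region against the decoder state (additive form).
    scores = np.tanh(regions @ W_r + hidden @ W_h) @ v   # (k,)
    # Numerically stable softmax over the k regions.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # (k,), sums to 1
    # Context vector: attention-weighted sum of region features.
    context = weights @ regions                           # (d,)
    return context, weights

# Toy dimensions: 4 regions, 8-dim features, 6-dim decoder state, 5-dim attention space.
rng = np.random.default_rng(0)
k, d, h, a = 4, 8, 6, 5
regions = rng.standard_normal((k, d))
hidden = rng.standard_normal(h)
W_r = rng.standard_normal((d, a))
W_h = rng.standard_normal((h, a))
v = rng.standard_normal(a)

context, weights = soft_attention(regions, hidden, W_r, W_h, v)
```

At each time step the decoder would concatenate `context` with its input embedding before predicting the next caption word; the weights themselves indicate which image regions the model "looks at" for that word.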
doi_str_mv | 10.1109/JAS.2022.105734 |
format | Article |
publisher | Chinese Association of Automation (CAA), Piscataway |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022; Copyright © Wanfang Data Co. Ltd. All Rights Reserved. |
coden | IJASJC |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2329-9266 |
ispartof | IEEE/CAA journal of automatica sinica, 2022-08, Vol.9 (8), p.1339-1365 |
issn | 2329-9266; 2329-9274 |
language | eng |
recordid | cdi_crossref_primary_10_1109_JAS_2022_105734 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial intelligence; attention mechanism; Coders; Computer vision; Datasets; Deep learning; encoder-decoder framework; Encoders-Decoders; image captioning; Information processing; Measurement; multi-modal understanding; Natural language processing; Training; training strategies; Visualization |
title | Visuals to Text: A Comprehensive Review on Automatic Image Captioning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T21%3A53%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-wanfang_jour_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Visuals%20to%20Text:%20A%20Comprehensive%20Review%20on%20Automatic%20Image%20Captioning&rft.jtitle=IEEE/CAA%20journal%20of%20automatica%20sinica&rft.au=Ming,%20Yue&rft.date=2022-08-01&rft.volume=9&rft.issue=8&rft.spage=1339&rft.epage=1365&rft.pages=1339-1365&rft.issn=2329-9266&rft.eissn=2329-9274&rft.coden=IJASJC&rft_id=info:doi/10.1109/JAS.2022.105734&rft_dat=%3Cwanfang_jour_RIE%3Ezdhxb_ywb202208001%3C/wanfang_jour_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2697562774&rft_id=info:pmid/&rft_ieee_id=9849164&rft_wanfj_id=zdhxb_ywb202208001&rfr_iscdi=true |