Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models
The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals more complex state-of-the-art models in accuracy, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.
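The abstract describes region-to-phrase annotations: each mention in a caption belongs to a coreference chain, and each chain is grounded by manually drawn bounding boxes. Below is a minimal sketch of one way such annotations could be held in memory; the class and field names are hypothetical illustrations, not the dataset's official file format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# (xmin, ymin, xmax, ymax) in pixel coordinates.
Box = Tuple[int, int, int, int]

@dataclass
class EntityMention:
    phrase: str    # e.g. "a man in a red shirt"
    chain_id: int  # coreference chain shared by all mentions of the same entity

@dataclass
class AnnotatedImage:
    image_id: str
    # One list of mentions per caption (Flickr30k has five captions per image).
    captions: List[List[EntityMention]] = field(default_factory=list)
    # Each coreference chain is grounded by one or more manually drawn boxes.
    chain_boxes: Dict[int, List[Box]] = field(default_factory=dict)

    def boxes_for_chain(self, chain_id: int) -> List[Box]:
        """Return the boxes grounding every mention in the given chain."""
        return self.chain_boxes.get(chain_id, [])

# Hypothetical usage with made-up values:
img = AnnotatedImage(
    image_id="example_image",
    captions=[[EntityMention("a man in a red shirt", chain_id=1)]],
    chain_boxes={1: [(34, 20, 210, 310)]},
)
```

Grouping boxes by chain id mirrors the setup in the abstract, where the same entity can be mentioned in several of an image's captions and still maps to the same set of regions.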
Saved in:
Published in: | International journal of computer vision 2017-05, Vol.123 (1), p.74-93 |
---|---|
Main Authors: | Plummer, Bryan A.; Wang, Liwei; Cervantes, Chris M.; Caicedo, Juan C.; Hockenmaier, Julia; Lazebnik, Svetlana |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Full text |
container_end_page | 93 |
---|---|
container_issue | 1 |
container_start_page | 74 |
container_title | International journal of computer vision |
container_volume | 123 |
creator | Plummer, Bryan A.; Wang, Liwei; Cervantes, Chris M.; Caicedo, Juan C.; Hockenmaier, Julia; Lazebnik, Svetlana |
description | The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals more complex state-of-the-art models in accuracy, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research. (A hedged sketch of this cue combination appears after the record fields below.) |
doi_str_mv | 10.1007/s11263-016-0965-7 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0920-5691 |
ispartof | International journal of computer vision, 2017-05, Vol.123 (1), p.74-93 |
issn | 0920-5691 (print); 1573-1405 (electronic) |
language | eng |
recordid | cdi_proquest_miscellaneous_1904245719 |
source | Springer Nature - Complete Springer Journals |
subjects | Annotations; Artificial Intelligence; Automation; Benchmarking; Benchmarks; Boxes; Color; Computer Imaging; Computer Science; Crowdsourcing; Datasets; Detectors; Image Processing and Computer Vision; Image processing systems; Information management; Language; Localization; Object recognition; Pattern Recognition; Pattern Recognition and Graphics; Position (location); Retrieval; Studies; Tasks; Vision; Vision systems; Weddings |
title | Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T17%3A14%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Flickr30k%20Entities:%20Collecting%20Region-to-Phrase%20Correspondences%20for%20Richer%20Image-to-Sentence%20Models&rft.jtitle=International%20journal%20of%20computer%20vision&rft.au=Plummer,%20Bryan%20A.&rft.date=2017-05-01&rft.volume=123&rft.issue=1&rft.spage=74&rft.epage=93&rft.pages=74-93&rft.issn=0920-5691&rft.eissn=1573-1405&rft_id=info:doi/10.1007/s11263-016-0965-7&rft_dat=%3Cgale_proqu%3EA550951131%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1892132958&rft_id=info:pmid/&rft_galeid=A550951131&rfr_iscdi=true |
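As referenced in the description field above, the paper's baseline combines an image-text embedding, detectors for common objects, a color classifier, and a preference for larger boxes to localize each phrase. The sketch below shows one plausible way to fold such cues into a single score for ranking candidate regions; the weights and component scores are illustrative placeholders, not the paper's learned model.

```python
import numpy as np

def score_region(phrase_vec: np.ndarray,
                 region_vec: np.ndarray,
                 detector_score: float,
                 color_score: float,
                 area_fraction: float,
                 weights=(1.0, 0.5, 0.5, 0.3)) -> float:
    """Combine cues into a single localization score for one candidate box."""
    w_embed, w_det, w_color, w_size = weights
    # Cosine similarity between the phrase and the region in a joint embedding space.
    denom = np.linalg.norm(phrase_vec) * np.linalg.norm(region_vec) + 1e-8
    embed_sim = float(phrase_vec @ region_vec) / denom
    # Mild boost for larger regions (the "bias towards selecting larger objects").
    size_prior = float(np.log1p(area_fraction))
    return (w_embed * embed_sim + w_det * detector_score
            + w_color * color_score + w_size * size_prior)

def localize_phrase(phrase_vec: np.ndarray, candidates) -> int:
    """Return the index of the best-scoring candidate region for a phrase.

    `candidates` is a list of tuples:
    (region_vec, detector_score, color_score, area_fraction).
    """
    scores = [score_region(phrase_vec, *c) for c in candidates]
    return int(np.argmax(scores))
```

The additive combination with fixed weights is only a stand-in for whatever fusion the authors actually use; the point is that each cue contributes a scalar and the highest-scoring box is selected per phrase.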