Large-scale image annotation with image–text hybrid learning models
Published in: | Soft computing (Berlin, Germany), 2017-06, Vol.21 (11), p.2857-2869 |
---|---|
Main authors: | Chien, Been-Chian; Ku, Chia-Wei |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 2869 |
---|---|
container_issue | 11 |
container_start_page | 2857 |
container_title | Soft computing (Berlin, Germany) |
container_volume | 21 |
creator | Chien, Been-Chian; Ku, Chia-Wei |
description | Managing large-scale image data has become an important research issue due to the rapid growth of digital images in recent years. To retrieve images effectively by semantic keywords, appropriate concept labels must be annotated to the corresponding images in advance. Many image annotation approaches and models have been proposed in recent years, but most of them focus on analyzing only one of the relationships between image visual features and concept texts. In this paper, all possible cross image–text relationships, including image-to-text, text-to-text, and image-to-image, are considered and discussed. A set of hybrid learning models based on the proposed cross image–text annotation framework is developed and implemented by means of image classifiers, similarity image matching, and association mining of image labels. The goal of the experiments is to investigate the performance of the cross image–text framework by evaluating the effectiveness of different annotation models, including individual models, bi-hybrid models, and the all-hybrid model. The results show that not all hybrid models improve the accuracy of image annotation; in general, however, the hybrid models combining the relationships of both images and text boost the effectiveness of annotation. |
doi_str_mv | 10.1007/s00500-016-2221-z |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1432-7643 |
ispartof | Soft computing (Berlin, Germany), 2017-06, Vol.21 (11), p.2857-2869 |
issn | 1432-7643; 1433-7479 |
language | eng |
recordid | cdi_proquest_journals_2917904843 |
source | SpringerLink Journals - AutoHoldings; ProQuest Central |
subjects | Accuracy; Annotations; Artificial Intelligence; Blended learning; Classification; Computational Intelligence; Control; Digital imaging; Effectiveness; Engineering; Focus; Image annotation; Image databases; Image retrieval; Keywords; Labels; Learning; Machine learning; Mathematical Logic and Foundations; Mechatronics; Methods; Probability; Robotics; Semantics |
title | Large-scale image annotation with image–text hybrid learning models |
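The description field above outlines the three relationship types that the paper's hybrid models combine: image-to-text (per-label image classifiers), image-to-image (similarity matching against labeled training images), and text-to-text (association mining over image labels). As a rough illustration only, here is a minimal Python sketch of how scores from three such sources could be weighted and merged into an all-hybrid prediction; this is not the authors' implementation, and every function, parameter, and data-structure name below is hypothetical.

```python
# Hypothetical sketch of combining three annotation score sources;
# not the implementation from Chien & Ku (2017).
from collections import defaultdict

def classifier_scores(image_features, classifiers):
    """Image-to-text: each per-label classifier scores the image directly."""
    return {label: clf(image_features) for label, clf in classifiers.items()}

def knn_image_scores(image_features, training_set, similarity, k=5):
    """Image-to-image: propagate labels from the k most similar training images."""
    neighbors = sorted(
        training_set,
        key=lambda ex: -similarity(image_features, ex["features"]),
    )[:k]
    scores = defaultdict(float)
    for ex in neighbors:
        for label in ex["labels"]:
            scores[label] += similarity(image_features, ex["features"])
    return scores

def association_scores(seed_labels, rules):
    """Text-to-text: expand candidate labels using mined association rules,
    given as (antecedent_label, consequent_label, confidence) triples."""
    scores = defaultdict(float)
    for antecedent, consequent, confidence in rules:
        if antecedent in seed_labels:
            scores[consequent] += seed_labels[antecedent] * confidence
    return scores

def hybrid_annotate(image_features, classifiers, training_set, similarity,
                    rules, weights=(1.0, 1.0, 1.0), top_n=5):
    """All-hybrid model: weighted sum of the three relationship scores,
    returning the top_n candidate labels."""
    s1 = classifier_scores(image_features, classifiers)
    s2 = knn_image_scores(image_features, training_set, similarity)
    s3 = association_scores(s1, rules)  # expand from classifier output
    combined = defaultdict(float)
    for weight, scores in zip(weights, (s1, s2, s3)):
        for label, value in scores.items():
            combined[label] += weight * value
    return sorted(combined, key=combined.get, reverse=True)[:top_n]
```

In this reading, an "individual model" uses only one score source, a "bi-hybrid model" sets one weight to zero, and the "all-hybrid model" keeps all three, which matches the abstract's finding that the gain depends on which relationships are combined.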