Fluid Annotation: A Human-Machine Collaboration Interface for Full Image Annotation
We introduce Fluid Annotation, an intuitive human-machine collaboration interface for annotating the class label and outline of every object and background region in an image. Fluid Annotation is based on three principles: (I) Strong machine-learning aid. We start from the output of a strong neural network model, which the annotator can edit by correcting the labels of existing regions, adding new regions to cover missing objects, and removing incorrect regions. The edit operations are also assisted by the model. (II) Full image annotation in a single pass. As opposed to performing a series of small annotation tasks in isolation, we propose a unified interface for full image annotation in a single pass. (III) Empower the annotator. We empower the annotator to choose what to annotate and in which order. This enables concentrating on what the machine does not already know, i.e. putting human effort only on the errors it made, which helps use the annotation budget effectively. Through extensive experiments on the COCO+Stuff dataset, we demonstrate that Fluid Annotation leads to accurate annotations very efficiently, taking three times less annotation time than the popular LabelMe interface.
Saved in:
Published in: | arXiv.org 2018-12 |
---|---|
Main authors: | Andriluka, Mykhaylo; Uijlings, Jasper R R; Ferrari, Vittorio |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_title | arXiv.org |
creator | Andriluka, Mykhaylo; Uijlings, Jasper R R; Ferrari, Vittorio |
description | We introduce Fluid Annotation, an intuitive human-machine collaboration interface for annotating the class label and outline of every object and background region in an image. Fluid Annotation is based on three principles: (I) Strong machine-learning aid. We start from the output of a strong neural network model, which the annotator can edit by correcting the labels of existing regions, adding new regions to cover missing objects, and removing incorrect regions. The edit operations are also assisted by the model. (II) Full image annotation in a single pass. As opposed to performing a series of small annotation tasks in isolation, we propose a unified interface for full image annotation in a single pass. (III) Empower the annotator. We empower the annotator to choose what to annotate and in which order. This enables concentrating on what the machine does not already know, i.e. putting human effort only on the errors it made, which helps use the annotation budget effectively. Through extensive experiments on the COCO+Stuff dataset, we demonstrate that Fluid Annotation leads to accurate annotations very efficiently, taking three times less annotation time than the popular LabelMe interface. |
doi_str_mv | 10.48550/arxiv.1806.07527 |
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
publication date | 2018-12-20 |
published version | https://doi.org/10.1145/3240508.3241916 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2018-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_1806_07527 |
source | arXiv.org; Free E-Journals |
subjects | Annotations; Collaboration; Computer Science - Computer Vision and Pattern Recognition; Image annotation; Instructional aids; Machine learning; Man-machine interfaces; Neural networks |
title | Fluid Annotation: A Human-Machine Collaboration Interface for Full Image Annotation |