Self-supervised driven consistency training for annotation efficient histopathology image analysis


Bibliographic details
Published in: arXiv.org 2021-10
Main authors: Srinidhi, Chetan L; Kim, Seung Wook; Fu-Der, Chen; Martel, Anne L
Format: Article
Language: English
Subjects:
Online access: Full text
container_title arXiv.org
creator Srinidhi, Chetan L
Kim, Seung Wook
Fu-Der, Chen
Martel, Anne L
description Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology. However, obtaining such exhaustive manual annotations is often expensive, laborious, and prone to inter- and intra-observer variability. While recent self-supervised and semi-supervised methods can alleviate this need by learning unsupervised feature representations, they still struggle to generalize well to downstream tasks when the number of labeled instances is small. In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data. We carry out extensive validation experiments on three histopathology benchmark datasets across two classification tasks and one regression task, i.e., tumor metastasis detection, tissue type classification, and tumor cellularity quantification. Under limited-label data, the proposed method yields tangible improvements that are close to, or even outperform, other state-of-the-art self-supervised and supervised baselines. Furthermore, we empirically show that bootstrapping the self-supervised pretrained features is an effective way to improve task-specific semi-supervised learning on standard benchmarks. Code and pretrained models will be made available at: https://github.com/srinidhiPY/SSL_CR_Histo
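The teacher-student consistency paradigm described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their code is at the GitHub link above); the toy linear classifier, the noise-based "augmentation", and the EMA decay value are all illustrative assumptions standing in for a pretrained CNN, histology image augmentations, and tuned hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def predict(weights, x):
    # Toy linear classifier standing in for a pretrained network.
    return softmax(x @ weights)

def augment(x, rng):
    # Stand-in for image augmentation (crops, color jitter, etc.).
    return x + 0.05 * rng.standard_normal(x.shape)

# Student starts from (self-supervised) pretrained weights; the teacher
# is an exponential moving average (EMA) of the student.
student = rng.standard_normal((8, 3))
teacher = student.copy()
ema_decay = 0.99  # assumed value

x_unlabeled = rng.standard_normal((32, 8))  # task-specific unlabeled batch

# Consistency loss: the student's prediction on an augmented view should
# match the teacher's prediction on the clean view of the same inputs.
p_teacher = predict(teacher, x_unlabeled)
p_student = predict(student, augment(x_unlabeled, rng))
consistency_loss = np.mean((p_student - p_teacher) ** 2)

# After each student gradient step (omitted here), the teacher tracks
# the student via EMA, in the style of Mean Teacher training.
teacher = ema_decay * teacher + (1 - ema_decay) * student
```

In the paper's setup this consistency term is computed on the task-specific unlabeled data and combined with a supervised loss on the small labeled set; the sketch above shows only the unsupervised consistency component.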
doi_str_mv 10.48550/arxiv.2102.03897
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-10
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2102_03897
source arXiv.org; Free E-Journals
subjects Annotations
Benchmarks
Classification
Computer Science - Computer Vision and Pattern Recognition
Consistency
Datasets
Harnesses
Histology
Histopathology
Image analysis
Neural networks
Regression analysis
Representations
Semi-supervised learning
Training
Tumors
title Self-supervised driven consistency training for annotation efficient histopathology image analysis