Robust Method for Semantic Segmentation of Whole-Slide Blood Cell Microscopic Image
Previous works on the segmentation of SEM (scanning electron microscope) blood cell images ignore the semantic segmentation approach to whole-slide blood cell segmentation. In the proposed work, we address the problem of whole-slide blood cell segmentation using the semantic segmentation approach. We des...
Published in: | arXiv.org 2020-01 |
---|---|
Main authors: | Shahzad, Muhammad; Arif Iqbal Umar; Khan, Muazzam A; Syed Hamad Shirazi; Khan, Zakir; Yousaf, Waqas |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Shahzad, Muhammad; Arif Iqbal Umar; Khan, Muazzam A; Syed Hamad Shirazi; Khan, Zakir; Yousaf, Waqas |
description | Previous works on the segmentation of SEM (scanning electron microscope) blood cell images ignore the semantic segmentation approach to whole-slide blood cell segmentation. In the proposed work, we address the problem of whole-slide blood cell segmentation using the semantic segmentation approach. We design a novel convolutional encoder-decoder framework with VGG-16 as the pixel-level feature extraction model. The proposed framework comprises three main steps. First, all the original images, along with manually generated ground-truth masks for each blood cell type, are passed through the preprocessing stage, where pixel-level labeling, RGB-to-grayscale conversion of the masked images, pixel fusing, and unity mask generation are performed. Second, VGG-16 is loaded into the system and acts as a pretrained pixel-level feature extraction model. Third, the training process is initiated on the proposed model. We evaluated the network on three evaluation metrics and obtained strong classwise, global, and mean accuracies: the system achieved classwise accuracies of 97.45%, 93.34%, and 85.11% for RBCs, WBCs, and platelets, respectively, while the global and mean accuracies were 97.18% and 91.96%, respectively. (An illustrative encoder-decoder sketch follows the record fields below.) |
doi_str_mv | 10.48550/arxiv.2001.10188 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2001_10188 |
source | arXiv.org; Free E-Journals |
subjects | Blood; Blood cells; Coders; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Encoders-Decoders; Erythrocytes; Feature extraction; Ground truth; Image segmentation; Masks; Performance evaluation; Pixels; Platelets; Preprocessing; Semantic segmentation; Semantics; Statistics - Machine Learning |
title | Robust Method for Semantic Segmentation of Whole-Slide Blood Cell Microscopic Image |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T04%3A31%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robust%20Method%20for%20Semantic%20Segmentation%20of%20Whole-Slide%20Blood%20Cell%20Microscopic%20Image&rft.jtitle=arXiv.org&rft.au=Shahzad,%20Muhammad&rft.date=2020-01-28&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2001.10188&rft_dat=%3Cproquest_arxiv%3E2348153291%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2348153291&rft_id=info:pmid/&rfr_iscdi=true |
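The record's abstract outlines a VGG-16-based convolutional encoder-decoder trained on pixel-level labels. Below is a minimal sketch of that kind of architecture in PyTorch; it is not the authors' implementation, and the decoder layout, the four-class label scheme (background, RBC, WBC, platelet), and every hyperparameter shown are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code): a VGG-16 encoder with a simple
# symmetric decoder for per-pixel blood cell classification. The class
# layout (0 = background, 1 = RBC, 2 = WBC, 3 = platelet), decoder shape,
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class BloodCellSegNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Pretrained VGG-16 convolutional layers serve as the pixel-level
        # feature extractor (encoder); they downsample the input by 32x.
        self.encoder = vgg16(weights="IMAGENET1K_V1").features

        def up(cin, cout):
            # One decoder stage: 2x upsampling followed by refinement.
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2),
                nn.Conv2d(cout, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        # Five upsampling stages restore full resolution, then a 1x1
        # convolution produces per-pixel class logits.
        self.decoder = nn.Sequential(
            up(512, 512), up(512, 256), up(256, 128), up(128, 64), up(64, 32),
            nn.Conv2d(32, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Training-step sketch: the ground-truth masks produced by the labeling /
# unity-mask preprocessing are assumed to be integer label maps (H x W).
model = BloodCellSegNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(2, 3, 224, 224)        # dummy RGB patches
masks = torch.randint(0, 4, (2, 224, 224))  # dummy pixel labels

logits = model(images)                      # shape: (2, 4, 224, 224)
loss = criterion(logits, masks)
loss.backward()
optimizer.step()
```

On the evaluation side, global accuracy is usually defined as the fraction of all pixels labeled correctly, while mean accuracy averages the per-class accuracies; the average of the three reported classwise figures, (97.45 + 93.34 + 85.11) / 3 ≈ 91.97%, is close to the reported mean accuracy of 91.96%, which is consistent with that reading.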