Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction

We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-Net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of about 2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
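The abstract describes a pipeline of voxel affinity prediction followed by learning-free, percentile-based agglomeration of an initial oversegmentation. The Python sketch below illustrates only that agglomeration step under simplifying assumptions; it is not the authors' implementation, and the function name, the affinity layout, and the one-shot scoring (rather than iterative rescoring after each merge) are all assumptions made for brevity.

```python
# Minimal sketch of percentile-based fragment agglomeration (illustrative only).
# Assumes: fragments is a (D, H, W) int array of initial labels (0 = background);
# affinities is a (3, D, H, W) float array, where affinities[d] holds each voxel's
# affinity to its predecessor along spatial axis d.
from collections import defaultdict
import numpy as np


def _find(parent, x):
    # Union-find root lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x


def percentile_agglomeration(fragments, affinities, q=75, threshold=0.5):
    """Merge touching fragments whose q-th percentile boundary affinity exceeds threshold."""
    boundary = defaultdict(list)  # (label_a, label_b) -> affinities on shared boundary
    for d in range(3):
        lab = np.moveaxis(fragments, d, 0)
        aff = np.moveaxis(affinities[d], d, 0)
        u, v, w = lab[1:], lab[:-1], aff[1:]          # each voxel vs. its predecessor
        mask = (u != v) & (u > 0) & (v > 0)           # faces between two distinct fragments
        for x, y, val in zip(u[mask], v[mask], w[mask]):
            boundary[(min(x, y), max(x, y))].append(val)

    # Score each touching pair once, then merge greedily in decreasing score order.
    # (The paper's agglomeration is iterative; the one-shot scoring here is a simplification.)
    labels = np.unique(fragments)
    parent = {int(l): int(l) for l in labels}
    scored = sorted(((np.percentile(vals, q), pair) for pair, vals in boundary.items()),
                    reverse=True)
    for score, (x, y) in scored:
        if score < threshold:
            break
        rx, ry = _find(parent, int(x)), _find(parent, int(y))
        if rx != ry:
            parent[ry] = rx

    lookup = {l: _find(parent, int(l)) for l in map(int, labels)}
    return np.vectorize(lookup.get)(fragments)
```

A faithful reproduction would recompute the percentile score of a merged region's remaining boundaries after every merge; the one-pass variant above is only meant to show how boundary affinities and a percentile statistic drive the merge decisions.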

Bibliographic Details
Published in: arXiv.org 2020-07
Main Authors: Funke, Jan; Tschopp, Fabian David; Grisaitis, William; Sheridan, Arlo; Singh, Chandan; Saalfeld, Stephan; Turaga, Srinivas C
Format: Article
Language: English
Subjects:
Online Access: Full text
container_title arXiv.org
creator Funke, Jan
Tschopp, Fabian David
Grisaitis, William
Sheridan, Arlo
Singh, Chandan
Saalfeld, Stephan
Turaga, Srinivas C
description We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-NET, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of about 2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
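The quasi-linear loss-gradient computation mentioned in the description can be illustrated with a Kruskal-style union-find pass: visiting edges in order of decreasing predicted affinity, the edge that first joins two components is the maximin edge for every voxel pair spanning them, so positive/negative pair counts follow from per-component ground-truth label histograms. The sketch below is not the authors' code; the two-pass constrained scheme is omitted, and all names and the edge/label representations are assumptions.

```python
# Illustrative sketch of quasi-linear MALIS-style pair counting (not the authors' code).
# edges: iterable of (affinity, u, v) tuples; gt_labels: dict node -> ground-truth id (0 = unlabeled).
# Returns {(u, v): (n_positive_pairs, n_negative_pairs)} for edges that merge two components.
from collections import Counter


def malis_edge_weights(edges, gt_labels):
    parent = {u: u for u in gt_labels}
    hist = {u: Counter({gt_labels[u]: 1} if gt_labels[u] != 0 else {}) for u in gt_labels}
    size = {u: 1 for u in gt_labels}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    weights = {}
    for aff, u, v in sorted(edges, reverse=True):    # decreasing predicted affinity
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                                 # not the maximin edge for any new pair
        # Positive pairs: voxels with the same ground-truth id on opposite sides of this merge.
        pos = sum(hist[ru][l] * hist[rv][l] for l in hist[ru])
        neg = sum(hist[ru].values()) * sum(hist[rv].values()) - pos
        weights[(u, v)] = (pos, neg)
        # Union by size, merging the label histograms.
        if size[ru] < size[rv]:
            ru, rv = rv, ru
        parent[rv] = ru
        hist[ru].update(hist[rv])
        size[ru] += size[rv]
    return weights
```

Given these counts, the gradient of a squared per-pair loss on the maximin affinity a of an edge is proportional to pos·(a − 1) + neg·a, so each edge is processed once and the overall cost is dominated by the edge sort, i.e. quasi-linear in the number of edges rather than quadratic in the number of voxel pairs.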
doi_str_mv 10.48550/arxiv.1709.02974
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2020-07
issn 2331-8422
language eng
recordid cdi_arxiv_primary_1709_02974
source Freely Accessible Journals; arXiv.org
subjects Affinity
Agglomeration
Algorithms
Computer Science - Computer Vision and Pattern Recognition
Datasets
Deep learning
Electron micrographs
Image reconstruction
Image segmentation
Imaging techniques
Iterative methods
Predictions
title Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-19T04%3A06%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Large%20Scale%20Image%20Segmentation%20with%20Structured%20Loss%20based%20Deep%20Learning%20for%20Connectome%20Reconstruction&rft.jtitle=arXiv.org&rft.au=Funke,%20Jan&rft.date=2020-07-28&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.1709.02974&rft_dat=%3Cproquest_arxiv%3E2076742784%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2076742784&rft_id=info:pmid/&rfr_iscdi=true