A generalized graph reduction framework for interactive segmentation of large images


Bibliographic Details
Published in: Computer vision and image understanding, 2016-09, Vol. 150, p. 44-57
Main authors: Gueziri, Houssem-Eddine; McGuffin, Michael J.; Laporte, Catherine
Format: Article
Language: English
Online access: Full text
Description:
• We introduce a user-guided graph reduction approach to speed up interactive segmentation for large images.
• We demonstrate the generalizability of our approach to graph-based segmentation methods, e.g., random walker and graph cuts.
• Through a user study, we highlight the preservation of resolution and segmentation quality using our approach.
• We describe how our approach can be combined with super-pixels to benefit from further reductions in computation time.

The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into “layers” (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground/background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.
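The description above outlines the core mechanism: bin pixels into distance-based layers around the sketched contour, with layer thicknesses growing as a Fibonacci sequence, then seed and prune the graph from a selected layer. The following is a minimal sketch of that idea, not the authors' code; the seeding and elimination rule, the `contour_mask`/`inside_mask` inputs, and the `seed_layer` parameter are illustrative assumptions.

```python
# Sketch of distance-based "layer" graph reduction around a rough user contour.
# Assumptions (not from the paper): boolean contour_mask marks the drawn contour,
# inside_mask marks its filled interior, and seeds/pruning both use one layer index.
import numpy as np
from scipy import ndimage

def fibonacci_thresholds(n_layers):
    """Cumulative layer boundaries with Fibonacci thicknesses 1, 1, 2, 3, 5, ..."""
    a, b, total, edges = 1, 1, 0, []
    for _ in range(n_layers):
        total += a
        edges.append(total)
        a, b = b, a + b
    return np.array(edges)

def reduce_graph_labels(contour_mask, inside_mask, seed_layer=3, n_layers=8):
    # Distance (in pixels) from every pixel to the sketched contour.
    dist = ndimage.distance_transform_edt(~contour_mask)
    # Layer index per pixel (0 = touching the contour), thicker layers farther out.
    layer = np.searchsorted(fibonacci_thresholds(n_layers), dist)
    # Hypothetical seeding rule: pixels at the selected layer become seeds,
    # foreground if inside the rough contour, background otherwise.
    fg_seeds = (layer == seed_layer) & inside_mask
    bg_seeds = (layer == seed_layer) & ~inside_mask
    # Vertices beyond the selected layer are dropped, so RW/GC only operates
    # on the reduced graph near the drawn contour (full resolution preserved there).
    keep = layer <= seed_layer
    return fg_seeds, bg_seeds, keep
```

The returned masks would then be passed, together with the pruned pixel set, to any graph-based segmenter such as random walker or graph cut.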
DOI: 10.1016/j.cviu.2016.05.009
Publisher: Elsevier Inc
ISSN: 1077-3142
EISSN: 1090-235X
Subjects:
Clustering
Graph cuts
Graph reduction
Graph-based segmentation
Graphs
Interactive
Interactive segmentation
Labels
Random walker
Reduction
Segmentation
Shape
User study