SuPerPM: A Large Deformation-Robust Surgical Perception Framework Based on Deep Point Matching Learned from Physical Constrained Simulation Data
Manipulation of tissue with surgical tools often results in large deformations that current tracking and reconstruction algorithms have not effectively addressed. A major source of tracking errors during large deformations stems from incorrect data association between observed sensor measurements and the previously tracked scene. To mitigate this issue, we present a surgical perception framework, SuPerPM, that leverages learning-based non-rigid point cloud matching for data association, thus accommodating larger deformations. Such learning models typically require training data with ground truth point cloud correspondences, which is challenging or even impractical to collect in surgical environments. Thus, for tuning the learning model, we gather endoscopic data of soft tissue being manipulated by a surgical robot and then establish correspondences between point clouds at different time points to serve as ground truth. This was achieved by employing a position-based dynamics (PBD) simulation to ensure that the correspondences adhered to physical constraints. The proposed framework is demonstrated on several challenging surgical datasets that are characterized by large deformations, achieving superior performance over state-of-the-art surgical scene tracking algorithms.
Saved in:
Main authors: | Lin, Shan; Miao, Albert J; Alabiad, Ali; Liu, Fei; Wang, Kaiyuan; Lu, Jingpei; Richter, Florian; Yip, Michael C |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Lin, Shan ; Miao, Albert J ; Alabiad, Ali ; Liu, Fei ; Wang, Kaiyuan ; Lu, Jingpei ; Richter, Florian ; Yip, Michael C |
description | Manipulation of tissue with surgical tools often results in large deformations that current tracking and reconstruction algorithms have not effectively addressed. A major source of tracking errors during large deformations stems from incorrect data association between observed sensor measurements and the previously tracked scene. To mitigate this issue, we present a surgical perception framework, SuPerPM, that leverages learning-based non-rigid point cloud matching for data association, thus accommodating larger deformations. Such learning models typically require training data with ground truth point cloud correspondences, which is challenging or even impractical to collect in surgical environments. Thus, for tuning the learning model, we gather endoscopic data of soft tissue being manipulated by a surgical robot and then establish correspondences between point clouds at different time points to serve as ground truth. This was achieved by employing a position-based dynamics (PBD) simulation to ensure that the correspondences adhered to physical constraints. The proposed framework is demonstrated on several challenging surgical datasets that are characterized by large deformations, achieving superior performance over state-of-the-art surgical scene tracking algorithms. |
doi_str_mv | 10.48550/arxiv.2309.13863 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2309.13863 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2309_13863 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | SuPerPM: A Large Deformation-Robust Surgical Perception Framework Based on Deep Point Matching Learned from Physical Constrained Simulation Data |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T18%3A25%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SuPerPM:%20A%20Large%20Deformation-Robust%20Surgical%20Perception%20Framework%20Based%20on%20Deep%20Point%20Matching%20Learned%20from%20Physical%20Constrained%20Simulation%20Data&rft.au=Lin,%20Shan&rft.date=2023-09-25&rft_id=info:doi/10.48550/arxiv.2309.13863&rft_dat=%3Carxiv_GOX%3E2309_13863%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |