Colonoscopy Landmark Detection using Vision Transformers
Saved in:
Main authors: | Tamhane, Aniruddha ; Mida, Tse'ela ; Posner, Erez ; Bouhnik, Moshe |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Learning |
Online Access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Tamhane, Aniruddha ; Mida, Tse'ela ; Posner, Erez ; Bouhnik, Moshe |
description | Colonoscopy is a routine outpatient procedure used to examine the colon and rectum for abnormalities, including polyps, diverticula, and narrowing of colon structures. A significant amount of the clinician's time is spent post-processing snapshots taken during the colonoscopy procedure for maintaining medical records or further investigation. Automating this step can save time and improve the efficiency of the process. In our work, we have collected a dataset of 120 colonoscopy videos and 2416 snapshots taken during the procedure, which have been annotated by experts. Further, we have developed a novel, vision-transformer-based landmark detection algorithm that identifies key anatomical landmarks (the appendiceal orifice, ileocecal valve/cecum landmark, and rectum retroflexion) from snapshots taken during colonoscopy. Our algorithm uses an adaptive gamma correction during preprocessing to maintain a consistent brightness across all images. We then use a vision transformer as the feature extraction backbone and a fully connected network-based classifier head to categorize a given frame into four classes: the three landmarks or a non-landmark frame. We compare the vision transformer (ViT-B/16) backbone with ResNet-101 and ConvNext-B backbones that have been trained similarly. We report an accuracy of 82% with the vision transformer backbone on a test dataset of snapshots. |
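The abstract describes a two-stage pipeline: adaptive gamma correction to normalize frame brightness, then a ViT-B/16 feature extractor with a fully connected classifier head that assigns each frame to one of four classes. The sketch below is a hypothetical Python illustration of such a pipeline, not the authors' code: the log-ratio gamma formula, the use of torchvision's `vit_b_16` with ImageNet weights, and the 256-unit hidden layer in the head are all assumptions, since the record does not specify these details.

```python
# Hypothetical sketch of the pipeline stages described in the abstract.
import numpy as np
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Four target classes: the three anatomical landmarks plus a non-landmark class.
CLASSES = [
    "appendiceal_orifice",
    "ileocecal_valve_cecum",
    "rectum_retroflexion",
    "non_landmark",
]


def adaptive_gamma_correction(frame: np.ndarray, target_mean: float = 0.5) -> np.ndarray:
    """Raise the frame to a per-image gamma so its mean intensity lands near
    target_mean. The exact formula is not given in the record; this uses one
    common choice, gamma = log(target_mean) / log(mean_intensity)."""
    img = frame.astype(np.float32) / 255.0
    mean = float(np.clip(img.mean(), 1e-3, 1.0 - 1e-3))
    gamma = np.log(target_mean) / np.log(mean)
    return (np.clip(img ** gamma, 0.0, 1.0) * 255.0).astype(np.uint8)


def build_landmark_classifier(num_classes: int = len(CLASSES)) -> nn.Module:
    """ViT-B/16 backbone with its classification head replaced by a small
    fully connected network (hidden size 256 is an assumption)."""
    backbone = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
    in_features = backbone.heads.head.in_features  # 768 for ViT-B/16
    backbone.heads = nn.Sequential(
        nn.Linear(in_features, 256),
        nn.ReLU(),
        nn.Linear(256, num_classes),
    )
    return backbone
```

In such a setup, a snapshot would first pass through adaptive_gamma_correction, then the usual ViT resize-and-normalize transforms, before the four-way head scores it; swapping the backbone for ResNet-101 or ConvNext-B would only change the feature extractor, which matches the comparison reported in the abstract.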
doi_str_mv | 10.48550/arxiv.2209.11304 |
format | Article |
creationdate | 2022-09-22 |
rights | http://creativecommons.org/licenses/by-nc-sa/4.0 |
links | https://arxiv.org/abs/2209.11304 ; https://doi.org/10.48550/arXiv.2209.11304 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2209.11304 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2209_11304 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Learning |
title | Colonoscopy Landmark Detection using Vision Transformers |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T12%3A12%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Colonoscopy%20Landmark%20Detection%20using%20Vision%20Transformers&rft.au=Tamhane,%20Aniruddha&rft.date=2022-09-22&rft_id=info:doi/10.48550/arxiv.2209.11304&rft_dat=%3Carxiv_GOX%3E2209_11304%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |