A deep learning framework for quality assessment and restoration in video endoscopy


Detailed Description

Bibliographic Details
Published in: Medical image analysis 2021-02, Vol. 68, p. 101900, Article 101900
Main authors: Ali, Sharib; Zhou, Felix; Bailey, Adam; Braden, Barbara; East, James E.; Lu, Xin; Rittscher, Jens
Format: Article
Language: English
Online access: Full text
Description
Abstract:
• Accurate real-time detection and localisation of artifacts in endoscopy videos.
• A novel video quality assessment metric based on type, location and area of artifact.
• Context-based artefact-specific restorations utilizing novel adaptive regularization.
• Novel edge-aware blind deblurring and recolor function for saturation correction.
• Fully deep learning based end-to-end models and extensive validation.
Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects and pixel saturation impede the visual interpretation and the automated analysis of endoscopy videos. Given the widespread use of endoscopy in different clinical applications, robust and reliable identification of such artifacts and the automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods deal only with the detection and restoration of selected artifacts. However, endoscopy videos typically contain numerous artifacts, which motivates a comprehensive solution. In this paper, a fully automatic framework is proposed that can: 1) detect and classify six different artifacts, 2) segment artifact instances that have indefinable shapes, 3) provide a quality score for each frame, and 4) restore partially corrupted frames. To detect and classify the different artifacts, the proposed framework exploits a fast, multi-scale, single-stage convolutional neural network detector. In addition, we use an encoder-decoder model for pixel-wise segmentation of irregularly shaped artifacts. A quality score is introduced to assess video frame quality and to predict image restoration success. Generative adversarial networks with carefully chosen regularization and training strategies for the discriminator and generator networks are finally used to restore corrupted frames.
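The abstract describes the quality score as a function of artifact type, location and area. A minimal sketch of how such a score could be computed per frame, with hypothetical class severity weights and a simple area-weighted penalty (the paper's exact formulation may differ):

```python
# Hypothetical sketch of a per-frame quality score driven by detected
# artifacts. The class weights and the penalty form are illustrative
# assumptions, not the paper's actual formulation.

# Severity weight per artifact class (hypothetical values).
CLASS_WEIGHTS = {
    "blur": 1.0,
    "bubbles": 0.5,
    "specularity": 0.7,
    "saturation": 0.9,
    "contrast": 0.6,
    "misc": 0.8,   # e.g. floating objects / debris
}

def frame_quality_score(detections, frame_area):
    """Return a score in [0, 1]; 1.0 means no detected artifacts.

    detections: list of (class_name, box_area_in_pixels) pairs.
    frame_area: total pixel area of the frame.
    """
    penalty = 0.0
    for cls, area in detections:
        # Penalise each artifact by its severity weight times the
        # fraction of the frame it covers.
        penalty += CLASS_WEIGHTS.get(cls, 1.0) * (area / frame_area)
    return max(0.0, 1.0 - penalty)

# Example: one large blur region plus a small specular highlight on a
# 1000x1000 frame gives a score of about 0.9465, i.e. below the paper's
# 0.9 pass threshold only if the artifacts covered more of the frame.
score = frame_quality_score(
    [("blur", 50_000), ("specularity", 5_000)], 1_000_000
)
print(score >= 0.9)  # True
```

Under this sketch, a frame "passes" when its score meets the ≥ 0.9 threshold quoted in the results; restoration would aim to push corrupted frames back above that cut-off.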
The detector yields the highest mean average precision (mAP) of 45.7 and 34.7 at IoU thresholds of 25% and 50%, respectively, and the lowest computational time of 88 ms, allowing near real-time processing. The restoration models for blind deblurring, saturation correction and inpainting demonstrate significant improvements over previous methods. On a set of 10 test videos, an average of 68.7% of video frames successfully passed the quality score (≥ 0.9) after applying the proposed restoration framework, thereby retaining 25% more frames compared to
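The mAP figures above are reported at IoU thresholds of 25% and 50%. As a reminder of what the threshold means, here is the standard box intersection-over-union computation (a generic definition, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero if disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted by half a box width has IoU = 1/3, so it counts as
# a true positive at the 25% threshold but not at the 50% threshold.
val = iou((0, 0, 10, 10), (5, 0, 15, 10))
print(val >= 0.25, val >= 0.50)  # True False
```

This is why mAP at the looser 25% threshold (45.7) is higher than at 50% (34.7): more detections qualify as true positives.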
ISSN:1361-8415
1361-8423
DOI:10.1016/j.media.2020.101900