covEcho Resource constrained lung ultrasound image analysis tool for faster triaging and active learning
Format: Article
Language: English
Online access: Order full text
Abstract: Lung ultrasound (LUS) is possibly the only medical imaging modality that can be used for continuous and periodic monitoring of the lung. This is extremely useful for tracking lung manifestations either during the onset of lung infection or for tracking the effect of vaccination on the lung, as in pandemics such as COVID-19. There have been many attempts at automating the classification of lung severity into various classes or the automatic segmentation of various LUS landmarks and manifestations. However, all these approaches are based on training static machine learning models, which require a large, clinically annotated dataset, are computationally heavy and, most of the time, not real-time. In this work, a real-time, lightweight, active learning-based approach is presented for faster triaging of COVID-19 subjects in resource-constrained settings. The tool, based on the you only look once (YOLO) network, is capable of assessing image quality based on the identification of various LUS landmarks, artefacts and manifestations, predicting the severity of lung infection, enabling active learning based on feedback from clinicians or on image quality, and summarizing the significant frames that have high severity of infection and high image quality for further analysis. The results show that the proposed tool has a mean average precision (mAP) of 66% at an Intersection over Union (IoU) threshold of 0.5 for the prediction of LUS landmarks. The 14 MB lightweight YOLOv5s network achieves 123 FPS when running on a Quadro P4000 GPU. The tool is available for use and analysis upon request from the authors.
DOI: 10.48550/arxiv.2206.10183
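
The reported mAP of 66% is evaluated at an IoU threshold of 0.5, i.e. a predicted LUS landmark box counts as a true positive only if it overlaps the ground-truth box by at least 50% Intersection over Union. The following is a minimal sketch of that matching criterion in plain Python; the box coordinates are hypothetical and this is not code from the covEcho tool itself.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical ground-truth and predicted boxes for one LUS landmark
gt_box   = (40, 60, 120, 200)
pred_box = (50, 70, 130, 210)

score = iou(gt_box, pred_box)
# At the mAP@0.5 operating point, a detection is a true positive only if IoU >= 0.5
print(f"IoU = {score:.2f} -> {'true positive' if score >= 0.5 else 'false positive'}")
```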