Patch-driven Tongue Image Segmentation Using Sparse Representation

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Access, 2020-01, Vol. 8, p. 1-1
Main authors: Liu, Weixia; Zhou, Changen; Li, Zuoyong; Hu, Zhongyi
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Tongue diagnosis plays a key role in TCM (Traditional Chinese Medicine) diagnosis, and tongue image segmentation lays a solid foundation for quantitative tongue analysis and diagnosis. However, segmentation of the tongue body is challenging due to factors such as large inter-subject variation of the tongue body in color, texture, and shape, as well as weak edges caused by the similar color of the tongue body and neighboring tissues, especially the lips. Existing segmentation methods usually use only a single color component and simple prior knowledge, leading to inaccuracy and instability. To alleviate these issues, a patch-driven segmentation method with sparse representation is proposed in this paper. Specifically, each patch in the testing image is sparsely represented by patches in a spatially varying dictionary, which is constructed from the local patches of training images. The derived sparse coefficients are then employed to estimate the tongue probability. Finally, the hard segmentation is obtained by applying the maximum a posteriori (MAP) rule to the tongue probability map and further polished with morphological operations. The proposed method has been extensively evaluated on a tongue image dataset of 290 subjects using 10-fold cross-validation, as well as on an additional 10 unseen test subjects, and achieves more accurate segmentation results than state-of-the-art methods.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2976826