Maxillofacial bone movements-aware dual graph convolution approach for postoperative facial appearance prediction


Bibliographic Details
Published in: Medical image analysis, 2025-01, Vol. 99, p. 103350, Article 103350
Authors: Huang, Xinrui; He, Dongming; Li, Zhenming; Zhang, Xiaofan; Wang, Xudong
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Postoperative facial appearance prediction is vital for surgeons to make orthognathic surgical plans and communicate with patients. Conventional biomechanical prediction methods require heavy computations and time-consuming manual operations, which hamper their clinical use. Deep learning based methods have shown the potential to improve computational efficiency and achieve comparable accuracy. However, existing deep learning based methods only learn facial features from facial point clouds and process regional points independently, which constrains their ability to perceive facial surface details and topology. In addition, they predict postoperative displacements for all facial points in one step, which is vulnerable under weakly supervised training and prone to producing distorted predictions. To alleviate these limitations, we propose a novel dual graph convolution based postoperative facial appearance prediction model, which accounts for the surface geometry by learning on two graphs constructed from the facial mesh in the Euclidean and geodesic spaces, and transfers the bone movements to facial movements in the dual spaces. We further adopt a coarse-to-fine strategy that first makes coarse predictions on facial meshes with fewer vertices and then adds more vertices to obtain more robust fine predictions. Experiments on real clinical data demonstrate that our method outperforms state-of-the-art deep learning based methods qualitatively and quantitatively.
• A new bone movements transfer module is designed to transfer bone movements to relevant regions of the facial mesh by establishing dual space attentive correspondences between the facial soft tissue and bone.
• We adopt a coarse-to-fine strategy to predict changes of the facial mesh at different resolutions, which results in more accurate, robust and finer predictions.
• Comprehensive experiments show that our model outperforms existing deep learning based postoperative facial appearance prediction methods qualitatively and quantitatively.
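The dual graph construction mentioned in the abstract can be illustrated with a minimal sketch: one graph links mesh vertices by Euclidean proximity in 3D space, while the other follows mesh edge connectivity, which approximates geodesic neighborhoods along the facial surface. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the function names and the toy mesh are hypothetical.

```python
import numpy as np

def euclidean_knn_graph(vertices, k=2):
    """Connect each vertex to its k nearest neighbors in 3D Euclidean space."""
    # pairwise Euclidean distance matrix, self-distance masked out
    d = np.linalg.norm(vertices[:, None, :] - vertices[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return {i: {int(j) for j in nbrs[i]} for i in range(len(vertices))}

def geodesic_graph(faces):
    """Connect vertices sharing a mesh triangle; edge hops approximate
    geodesic (on-surface) neighborhoods rather than straight-line proximity."""
    adj = {}
    for tri in faces:
        for a in tri:
            adj.setdefault(a, set()).update(v for v in tri if v != a)
    return adj

# hypothetical toy mesh: a unit square split into two triangles
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
faces = [(0, 1, 2), (1, 3, 2)]

g_euc = euclidean_knn_graph(vertices, k=2)   # {0: {1, 2}, ...}
g_geo = geodesic_graph(faces)                # {1: {0, 2, 3}, ...}
```

On a curved facial mesh the two graphs differ: vertices on opposite sides of the lips can be close in Euclidean space yet far apart geodesically, which is why learning on both graphs can capture complementary structure.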
ISSN:1361-8415
1361-8423
DOI:10.1016/j.media.2024.103350