Correspondence attention for facial appearance simulation

Bibliographic Details
Published in: Medical Image Analysis, 2024-04, Vol. 93, p. 103094, Article 103094
Authors: Fang, Xi, Kim, Daeseung, Xu, Xuanang, Kuang, Tianshu, Lampen, Nathan, Lee, Jungwook, Deng, Hannah H., Liebschner, Michael A.K., Xia, James J., Gateno, Jaime, Yan, Pingkun
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: In orthognathic surgical planning for patients with jaw deformities, it is crucial to accurately simulate the changes in facial appearance that follow the bony movement. Compared with traditional biomechanics-based methods such as the finite-element method (FEM), which are both labor-intensive and computationally inefficient, deep learning-based methods offer an efficient and robust modeling alternative. However, current methods do not account for the physical relationship between the facial soft tissue and the bony structure, causing them to fall short of FEM in accuracy. In this work, we propose an Attentive Correspondence assisted Movement Transformation network (ACMT-Net) to predict facial changes by correlating facial soft-tissue changes with bony movement through a point-to-point attentive correspondence matrix. To ensure efficient training, we also introduce a contrastive loss for self-supervised pre-training of the ACMT-Net with k-Nearest Neighbors (k-NN) based clustering. Experimental results on patients with jaw deformities show that our proposed solution achieves significantly improved computational efficiency over the state-of-the-art FEM-based method with comparable facial change prediction accuracy.
• Attentive correspondence to facilitate movement transformation between point sets.
• A novel contrastive loss to incorporate spatial proximity into feature correlation.
• Visualization of the impact of attentive correspondence on facial simulation.
• Same level of accuracy as FEM with considerably reduced computational time.
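The abstract describes the core mechanism only at a high level: a point-to-point attentive correspondence matrix links facial soft-tissue points to bony points, so that planned bony movement can be transferred to the face. The following is a minimal, illustrative PyTorch sketch of that idea and is not the authors' implementation; the class name, the simple MLP point encoders, and the scaled dot-product attention form are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class CorrespondenceMovementTransfer(nn.Module):
    """Illustrative sketch of attention-based movement transfer between two
    point sets (facial soft tissue and bone). Not the published ACMT-Net."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Hypothetical per-point encoders; the paper's feature extractor is
        # not specified in the abstract.
        self.face_encoder = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.bone_encoder = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))

    def forward(self, face_pts, bone_pts, bone_movement):
        # face_pts: (B, Nf, 3) pre-operative facial points
        # bone_pts: (B, Nb, 3) pre-operative bony points
        # bone_movement: (B, Nb, 3) planned per-point bony displacement
        f_feat = self.face_encoder(face_pts)                    # (B, Nf, D)
        b_feat = self.bone_encoder(bone_pts)                    # (B, Nb, D)
        # Point-to-point attentive correspondence matrix: how strongly each
        # facial point attends to each bony point.
        scores = f_feat @ b_feat.transpose(1, 2)                # (B, Nf, Nb)
        attn = torch.softmax(scores / f_feat.shape[-1] ** 0.5, dim=-1)
        # Facial displacement as an attention-weighted blend of bony movement.
        face_movement = attn @ bone_movement                    # (B, Nf, 3)
        return face_pts + face_movement, attn


if __name__ == "__main__":
    # Toy usage with random point clouds.
    model = CorrespondenceMovementTransfer()
    face = torch.rand(1, 4096, 3)
    bone = torch.rand(1, 2048, 3)
    move = torch.randn(1, 2048, 3) * 0.01
    predicted_face, correspondence = model(face, bone, move)
    print(predicted_face.shape, correspondence.shape)  # (1, 4096, 3) (1, 4096, 2048)
```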
ISSN: 1361-8415
1361-8423
DOI: 10.1016/j.media.2024.103094