Turn Your Vision into Reality: AI-Powered Pre-operative Outcome Simulation in Rhinoplasty Surgery
Published in: Aesthetic Plastic Surgery, 2024-05
Main authors: , , , , , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: The increasing demand and changing trends in rhinoplasty surgery emphasize the need for effective doctor-patient communication; Artificial Intelligence (AI) could be a valuable tool for managing patient expectations during pre-operative consultations.
To develop an AI-based model to simulate realistic postoperative rhinoplasty outcomes.
We trained a Generative Adversarial Network (GAN) on pre- and postoperative images of 3,030 rhinoplasty patients. One hundred one study participants were shown 30 pre-rhinoplasty patient photographs, each followed by an image pair consisting of the real postoperative image and the GAN-generated image, and were asked to identify the GAN-generated image.
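The adversarial setup described above can be illustrated with a deliberately minimal sketch. This is not the paper's model (the paper trains an image-to-image GAN on 3,030 paired photographs); it only demonstrates the GAN objective on 1-D toy data: a generator learns to match the "real" distribution while a discriminator learns to tell real from fake.

```python
# Minimal 1-D GAN sketch in NumPy, illustrating adversarial training only.
# All parameters and distributions here are toy assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data: a 1-D Gaussian standing in for real postoperative images.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b and discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    # --- discriminator update: push d(real) -> 1, d(fake) -> 0 ---
    xr = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr = sigmoid(w * xr + c)                 # discriminator on real samples
    df = sigmoid(w * xf + c)                 # discriminator on fake samples
    gw = np.mean(-(1 - dr) * xr + df * xf)   # grad of D loss w.r.t. w
    gc = np.mean(-(1 - dr) + df)             # grad of D loss w.r.t. c
    w -= lr * gw
    c -= lr * gc

    # --- generator update: push d(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    ga = np.mean(-(1 - df) * w * z)          # grad of G loss w.r.t. a
    gb = np.mean(-(1 - df) * w)              # grad of G loss w.r.t. b
    a -= lr * ga
    b -= lr * gb

fake = a * rng.normal(0.0, 1.0, 5000) + b
print(round(float(np.mean(fake)), 2))  # generator mean typically drifts toward 4
```

In the paper's setting, the generator and discriminator are deep convolutional networks operating on image pairs rather than the linear 1-D functions used in this sketch, but the alternating two-player objective is the same.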
The study sample (48 males, 53 females, mean age 31.6 ± 9.0 years) correctly identified the GAN-generated images with an accuracy of 52.5 ± 14.3%. Male participants identified the AI-generated images more often than female participants (55.4% versus 49.6%; p = 0.042).
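As a rough plausibility check (not the authors' analysis), the reported per-participant accuracy can be compared against the 50% chance level using only the summary statistics in the abstract. The sketch below uses a one-sample test statistic with a normal approximation for the p-value; the paper's exact statistical procedure may differ.

```python
# Rough check: does the reported mean detection accuracy (52.5 +/- 14.3%,
# n = 101 participants) differ from the 50% chance level? Recomputed from the
# abstract's summary statistics; p-value uses a normal approximation rather
# than the t distribution, so it is only approximate.
from math import sqrt
from statistics import NormalDist

mean_acc, sd_acc, n = 52.5, 14.3, 101
chance = 50.0

se = sd_acc / sqrt(n)                 # standard error of the mean
t = (mean_acc - chance) / se          # one-sample test statistic
p = 2 * (1 - NormalDist().cdf(t))     # two-sided p, normal approximation

print(f"t = {t:.2f}, p = {p:.3f}")
```

Under this approximation the two-sided p-value exceeds 0.05, consistent with the abstract's conclusion that participants could not reliably tell the GAN-generated images from real postoperative photographs.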
We presented a GAN-based simulator for rhinoplasty outcomes that uses pre-operative patient images to generate realistic predictions that participants could not reliably distinguish from real postoperative outcomes.
This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
ISSN: 0364-216X; 1432-5241
DOI: 10.1007/s00266-024-04043-9