Can a machine learning model accurately predict patient resource utilization following lumbar spinal fusion?
Published in: The Spine Journal, 2020-03, Vol. 20 (3), p. 329-336
Format: Article
Language: English
Online access: Full text
Abstract: With the increasing emphasis on value-based healthcare in Centers for Medicare and Medicaid Services reimbursement structures, bundled payment models have been adopted for many orthopedic procedures. Immense variability of patients across hospitals and providers makes these models potentially less viable in spine surgery. Machine-learning models have been shown to reliably predict patient-specific outcomes following lumbar spine surgery and could, therefore, be applied to developing stratified bundled payment schemes.
(1) Can a Naïve Bayes machine-learning model accurately predict inpatient payments, length of stay (LOS), and discharge disposition following dorsal and lumbar fusion? (2) Can such a model then be used to develop a risk-stratified payment scheme?
A Naïve Bayes machine-learning model was constructed using an administrative database.
Patients undergoing dorsal and lumbar fusion for nondeformity indications from 2009 through 2016 were included. Preoperative inputs included age group, gender, ethnicity, race, type of admission, All Patients Refined (APR) risk of mortality, APR severity of illness, and Clinical Classifications Software diagnosis code.
Predicted resource utilization outcomes included LOS, discharge disposition, and total inpatient payments. Model validation was assessed via reliability, model output quality, and decision speed, using separate training and validation sets. Risk-stratified payment models were developed according to APR risk of mortality and severity of illness.
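To make the modeling setup concrete, here is a minimal sketch in Python with scikit-learn, which documents the standard pattern of feeding ordinally encoded categorical inputs to a categorical Naïve Bayes classifier. The feature names, the synthetic data, and the binned-LOS target are hypothetical stand-ins for the paper's administrative-database inputs, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# Hypothetical column names standing in for the paper's preoperative inputs.
FEATURES = ["age_group", "gender", "ethnicity", "race", "admission_type",
            "apr_mortality_risk", "apr_severity", "ccs_diagnosis"]

# Synthetic records purely for illustration; real inputs come from an
# administrative database.
rng = np.random.default_rng(0)
df = pd.DataFrame({f: rng.integers(0, 4, 500).astype(str) for f in FEATURES})
df["los_bin"] = rng.integers(0, 3, 500)  # synthetic stand-in for binned LOS

# CategoricalNB expects ordinal integer codes, so encode the raw categories.
enc = OrdinalEncoder()
X = enc.fit_transform(df[FEATURES])
y = df["los_bin"].to_numpy()
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)

# min_categories guards against category codes absent from the training split.
n_cats = [len(c) for c in enc.categories_]
model = CategoricalNB(min_categories=n_cats).fit(X_train, y_train)
print(model.predict(X_val[:5]))
```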
A Naïve Bayes machine-learning algorithm with adaptive boosting demonstrated high reliability, with areas under the receiver-operating characteristic curve of 0.880, 0.941, and 0.906 for cost, LOS, and discharge disposition, respectively. In a patient-specific tiered bundled payment scheme, patients with increased risk of mortality or severity of illness incurred costs resulting in greater inpatient payments, reflecting the increased risk borne by institutions caring for these patients. We found a large range in expected payments attributable to individuals' preoperative comorbidities, indicating that an individualized risk-based model is warranted.
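The boosted variant reported above can be sketched by wrapping a Naïve Bayes base learner in adaptive boosting and scoring a held-out set with AUROC. This assumes scikit-learn >= 1.2 (for the `estimator` keyword) and uses synthetic binary labels as a discharge-disposition stand-in; the reported AUCs of 0.880 to 0.941 come from the paper's real data, not this toy.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB

# Synthetic, already-encoded preoperative inputs and a binary outcome
# (e.g., discharged home vs. not) purely for illustration.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(500, 8))
y = rng.integers(0, 2, size=500)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Adaptive boosting over a categorical Naive Bayes base learner; the base
# learner must support sample weights, which CategoricalNB does.
boosted = AdaBoostClassifier(
    estimator=CategoricalNB(min_categories=4),
    n_estimators=50,
    random_state=0,
).fit(X_train, y_train)

# AUROC on the held-out set, the metric reported for cost, LOS, and
# discharge disposition.
auc = roc_auc_score(y_val, boosted.predict_proba(X_val)[:, 1])
print(f"validation AUROC: {auc:.3f}")
```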
A Naïve Bayes machine-learning model was shown to have good-to-excellent reliability and responsiveness for cost, LOS, and discharge disposition. Based on APR risk of mortality and APR severity of illness, there was a significant difference in episode costs from the lowest to the highest risk strata. After using normalized model error to …
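One simple reading of the risk-stratified scheme described in the conclusion is to map each (APR risk of mortality, APR severity of illness) stratum to the mean predicted episode payment within it. The sketch below uses synthetic payments and hypothetical column names; it is one plausible interpretation of "tiered," not the paper's exact method.

```python
import numpy as np
import pandas as pd

# Synthetic episodes: APR strata on a 1 (minor) to 4 (extreme) scale and
# model-predicted episode payments in dollars, for illustration only.
rng = np.random.default_rng(0)
episodes = pd.DataFrame({
    "apr_mortality_risk": rng.integers(1, 5, 500),
    "apr_severity": rng.integers(1, 5, 500),
    "predicted_payment": rng.gamma(2.0, 15_000, 500),
})

# One bundled-payment tier per (mortality risk, severity) stratum: the
# stratum's mean predicted episode payment.
tiers = (episodes
         .groupby(["apr_mortality_risk", "apr_severity"])["predicted_payment"]
         .mean()
         .rename("tier_payment")
         .reset_index())
print(tiers.head())
```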
ISSN: 1529-9430, 1878-1632
DOI: 10.1016/j.spinee.2019.10.007