A personalized blood glucose level prediction model with a fine-tuning strategy: A proof-of-concept study
Published in: Computer methods and programs in biomedicine, 2021-11, Vol. 211, p. 106424, Article 106424
Format: Article
Language: English
Online access: Full text
Abstract highlights:
•We proposed a personalized blood glucose (BG) level prediction model with a fine-tuning strategy and demonstrated its efficacy on large datasets covering three types of diabetes (type 1 diabetes, type 2 diabetes, and gestational diabetes).
•The fine-tuned convolutional neural network (CNN) improved the performance of the general CNN in most cases and outperformed the scratch CNN.
•We analyzed all cases of four predictive patterns and found that the input BG level trend and the BG level at the time of prediction were important in determining the future BG level trend.
•We believe that our method and results will be useful for building personalized models and analyzing their predictions.
The accurate prediction of blood glucose (BG) level remains a challenge for diabetes management, because various factors such as diet, personal physiological characteristics, stress, and activities influence changes in BG level. To develop an accurate BG level predictive model, we propose a personalized model based on a convolutional neural network (CNN) with a fine-tuning strategy.
We utilized continuous glucose monitoring (CGM) datasets from 1052 professional CGM sessions and split them into three groups according to type 1, type 2, and gestational diabetes mellitus (T1DM, T2DM, and GDM, respectively). During preprocessing, only CGM data points were used as input, and future BG levels at four prediction horizons (PHs: 15, 30, 45, and 60 min) were used as output. For each group, we trained a general CNN and a multi-output random forest regressor using a hold-out method. Next, we developed two personalized models: (1) by fine-tuning the general CNN on partial sample points of each CGM dataset, and (2) by training a CNN from scratch on the same points.
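The article does not include code; the following is a minimal sketch of the workflow described above: a general 1-D CNN is trained on a group's pooled CGM windows, then either fine-tuned on the first part of an individual session (personalized model) or re-trained from scratch on those points (baseline). The window length, architecture, and training settings here are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code) of the general / fine-tuned / scratch CNNs.
import tensorflow as tf

WINDOW = 12   # assumed input length: 12 past CGM samples (e.g. 60 min at 5-min sampling)
N_PH = 4      # outputs: BG level at PH = 15, 30, 45, and 60 min

def build_cnn():
    """Small 1-D CNN mapping a CGM window to the four future BG levels."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, 1)),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_PH),
    ])

# 1) General model: trained on the pooled CGM sessions of one diabetes group.
general_cnn = build_cnn()
general_cnn.compile(optimizer="adam", loss="mse")
# general_cnn.fit(X_group, y_group, epochs=50, validation_split=0.1)

# 2) Personalized model: copy the general weights, then continue training
#    on the partial sample points of an individual's CGM dataset.
personal_cnn = tf.keras.models.clone_model(general_cnn)
personal_cnn.set_weights(general_cnn.get_weights())
personal_cnn.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# personal_cnn.fit(X_person_head, y_person_head, epochs=20)

# 3) Scratch model: same architecture, trained only on the individual's points,
#    serving as the baseline against the fine-tuned model.
scratch_cnn = build_cnn()
scratch_cnn.compile(optimizer="adam", loss="mse")
# scratch_cnn.fit(X_person_head, y_person_head, epochs=20)
```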
For all groups, the fine-tuned CNN showed the lowest average root mean squared error, the lowest average mean absolute percentage error, the highest average time gain (at PH = 15 and 60 min in T1DM), and the highest percentage in region A of the Clarke error grid analysis at all PHs. In the performance comparison between the fine-tuned CNN and the other models, we found that the fine-tuned CNN improved the performance of the general CNN in most cases and outperformed the scratch CNN at all PHs in all groups, indicating that the fine-tuning strategy is useful for accurate BG level prediction. We analyzed all cases of four predictive patterns in each group and found that the input BG level trend and the BG level at the time of prediction were important in determining the future BG level trend.
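As a reference for the reported accuracy metrics, the sketch below computes RMSE and MAPE for one prediction horizon; the sample values are illustrative only, and time gain and Clarke error grid analysis are not reproduced here.

```python
# Standard RMSE and MAPE, as used to compare the models at a single PH.
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error (same unit as the BG readings, e.g. mg/dL)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error in %."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Example: reference CGM readings vs. model predictions at one PH (made-up numbers).
ref = [110, 150, 180, 95]
pred = [118, 142, 171, 101]
print(rmse(ref, pred), mape(ref, pred))
```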
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2021.106424