The implementation of inductive graph neural networks with L1 loss for spatiotemporal kriging

Bibliographic details
Main authors: Nissa, N. K., Pusparini, R. T., Setiyoko, A., Arymurthy, A. M.
Format: Conference proceedings
Language: English
Online access: Full text
Description
Summary: An important application in spatiotemporal data analysis is spatiotemporal kriging, which seeks to recover signals for unobserved locations based on observed signals. The main challenge of spatiotemporal kriging is how to effectively model and make use of the spatiotemporal dependencies within the data. In order to recover data for unsampled sensors on a network or graph structure, we implement an Inductive Graph Neural Network Kriging (IGNNK) model with L1 loss. We conducted experiments on four real-world spatiotemporal datasets to demonstrate the effectiveness of our model. To evaluate the IGNNK algorithm and reduce the error value, we add an L1 loss function to the model and compare it with L2 loss, with the aim of minimizing the errors in the model. L1 loss shrinks the coefficients of less important features to zero, removing some features altogether. Our results show that, in general, IGNNK achieves good performance on two spatial datasets: NREL and USHCN. We also found that increasing the number of training iterations of the IGNNK model with L1 loss has a positive effect on model performance. Specifically, the results show an improvement in the average MAE on the METR-LA (6.816), NREL (3.960), and USHCN (2.427) datasets, as well as in the RMSE on the NREL (5.657) dataset. Meanwhile, the average MAE value on the SeData dataset is 4.531, which is slightly lower than when using L2 loss.
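
A minimal sketch of the loss swap described in the abstract is given below. It assumes a PyTorch setting; the IGNNK reconstruction network itself is replaced by a placeholder linear layer so the snippet stays self-contained, and the variable names (model, x_masked, x_true) are illustrative rather than taken from the paper. The only point shown is that the training criterion is switched from L2 loss (mean squared error) to L1 loss (mean absolute error), which directly optimizes the MAE metric reported above.

    import torch
    import torch.nn as nn

    # Placeholder for the IGNNK reconstruction network (assumption: the real model
    # is a graph neural network; a linear layer stands in here for brevity).
    model = nn.Linear(24, 24)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # The change studied in the paper: L1 loss instead of the usual L2 loss.
    criterion_l1 = nn.L1Loss()   # mean absolute error, matches the reported MAE metric
    criterion_l2 = nn.MSELoss()  # mean squared error, the L2-loss baseline

    # Illustrative data: signals with some sensors masked out, and the full ground truth.
    x_masked = torch.randn(32, 24)
    x_true = torch.randn(32, 24)

    for step in range(100):
        optimizer.zero_grad()
        x_hat = model(x_masked)                # reconstruct signals at all sensors
        loss = criterion_l1(x_hat, x_true)     # swap in criterion_l2 for the L2 baseline
        loss.backward()
        optimizer.step()

Swapping criterion_l1 for criterion_l2 in the training loop is the comparison the abstract describes; everything else about the training procedure is assumed unchanged.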
ISSN:0094-243X
1551-7616
DOI:10.1063/5.0184738