Behavior of LSTM and Transformer deep learning models in flood simulation considering South Asian tropical climate

Bibliographic Details
Published in: Journal of Hydroinformatics, 2024-09, Vol. 26 (9), p. 2216–2234
Authors: Madhushanka, G.W.T.I., Jayasinghe, M. T. R., Rajapakse, R. A.
Format: Article
Language: English
Online access: Full text
Description
Abstract: The imperative for a reliable and accurate flood forecasting procedure stems from the hazardous nature of the disaster. In response, researchers are increasingly turning to innovative approaches, particularly machine learning models, which offer enhanced accuracy compared to traditional methods. However, a notable gap exists in the literature concerning studies focused on the South Asian tropical region, which has distinct climate characteristics. This study investigates the applicability and behavior of long short-term memory (LSTM) and transformer models in flood simulation for the Mahaweli catchment in Sri Lanka, which is affected mostly by the Northeast Monsoon. The importance of different input variables to the prediction was also a key focus of this study. Input features for the models included observed rainfall data from three nearby rain gauges, as well as historical discharge data from the target river gauge. Results showed that, for both architectures, past water level data had a greater impact on the output than the other input features, such as rainfall. All models performed satisfactorily in simulating daily water levels, with Nash–Sutcliffe Efficiency (NSE) values greater than 0.77, and the transformer encoder model outperformed the encoder–decoder models.
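
For reference, the Nash–Sutcliffe Efficiency cited in the abstract compares simulated values against observations over the evaluation period; the following is the standard definition, with symbols chosen here for illustration (the paper's own notation is not shown in this record):

```latex
% Nash–Sutcliffe Efficiency (NSE):
% Q_obs^t = observed water level on day t, Q_sim^t = simulated water level,
% \overline{Q}_obs = mean of the observations over the evaluation period.
\mathrm{NSE} = 1 - \frac{\sum_{t=1}^{T} \left( Q_{\mathrm{obs}}^{\,t} - Q_{\mathrm{sim}}^{\,t} \right)^{2}}
                        {\sum_{t=1}^{T} \left( Q_{\mathrm{obs}}^{\,t} - \overline{Q}_{\mathrm{obs}} \right)^{2}}
```

NSE = 1 corresponds to a perfect simulation and NSE = 0 to a model no better than predicting the observed mean, so the reported values above 0.77 indicate substantial skill.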
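The abstract describes the model inputs (rainfall from three rain gauges plus the target gauge's own history) but this record gives no implementation details. The following is a minimal PyTorch sketch of how such an LSTM could be wired, not the authors' code: the class name FloodLSTM, the 64-unit hidden size, and the 30-day lookback window are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FloodLSTM(nn.Module):
    """Maps a window of daily inputs (3 rain gauges + past water level)
    to the next day's water level. Sizes are illustrative assumptions."""

    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-day water level

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, n_features); summarize the window
        # with the final hidden state of the LSTM.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

# Example: batch of 8 sequences, 30-day lookback,
# 4 features = 3 rain gauges + 1 past water level.
model = FloodLSTM()
y_hat = model(torch.randn(8, 30, 4))  # -> shape (8, 1)
```

The transformer variants compared in the paper (encoder-only vs. encoder–decoder) would replace the recurrent block with attention layers over the same input window; their hyperparameters are likewise not given in this record.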
ISSN: 1464-7141, 1465-1734
DOI: 10.2166/hydro.2024.083