Derivative-Based Learning of Interval Type-2 Intuitionistic Fuzzy Logic Systems for Noisy Regression Problems

Bibliographic Details
Published in: International Journal of Fuzzy Systems, 2020-04, Vol. 22 (3), p. 1007-1019
Authors: Eyoh, Imo Jeremiah; Umoh, Uduak Augustine; Inyang, Udoinyang Godwin; Eyoh, Jeremiah Effiong
Format: Article
Language: English
Abstract: This study presents a comparative evaluation of an interval type-2 intuitionistic fuzzy logic system using three derivative-based learning algorithms on noisy regression problems. The motivation for this study is to manage uncertainty in noisy regression problems for the first time using both membership and non-membership functions that are fuzzy. The proposed models are able to handle the ‘neither this nor that’ state in the noisy regression data, with the aim of enabling hesitation and handling more uncertainty in the data. The gradient descent-backpropagation (first-order derivative), the decoupled extended Kalman filter (second-order derivative) and a hybrid approach (where the decoupled extended Kalman filter is used to learn the consequent parameters and gradient descent is used to optimise the antecedent parameters) are applied for the adaptation of the model parameters. The experiments are conducted on two artificially generated datasets and one real-world dataset, namely the Mackey–Glass time series, the Lorenz time series and a US stock dataset. Experimental analyses show that the extended Kalman filter-based learning approaches for the interval type-2 intuitionistic fuzzy logic system exhibit superior prediction accuracy to the gradient descent approach, especially at high noise levels. The decoupled extended Kalman filter model, however, converges faster but incurs more computational overhead in terms of running time.
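The abstract only names the three learning schemes; the full update equations for the interval type-2 intuitionistic system are given in the paper itself. As a rough illustration of the hybrid idea only, the Python sketch below applies a decoupled-EKF update to the consequent parameters and a gradient-descent update to the antecedent parameters of a toy type-1 TSK-style model with Gaussian memberships. Everything in it (the toy model, the step size eta, the noise variance R, the use of a single covariance block) is an assumption made for illustration, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first-order TSK-style fuzzy model (a stand-in for the paper's IT2 intuitionistic FLS).
n_rules, n_in = 4, 1
centers = rng.uniform(-1.0, 1.0, (n_rules, n_in))   # antecedent centres (hypothetical)
sigmas = np.full((n_rules, n_in), 0.5)               # antecedent widths (hypothetical)
theta = np.zeros(n_rules * (n_in + 1))               # consequent parameters, rule-wise [w, b]

def firing(x):
    """Normalised Gaussian firing strengths (membership side only, for illustration)."""
    f = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    return f / (f.sum() + 1e-12)

def regressor(x):
    """The output is linear in the consequents: y = H(x) @ theta."""
    return np.outer(firing(x), np.append(x, 1.0)).ravel()

def predict(x):
    return regressor(x) @ theta

# Decoupled EKF state for the consequent block (a single block here for brevity).
P = np.eye(theta.size) * 100.0   # error covariance
R = 1.0                          # assumed measurement-noise variance
eta = 0.05                       # assumed gradient-descent step size for antecedents

def hybrid_step(x, d):
    """One hybrid update: EKF on the consequents, gradient descent on the antecedents."""
    global theta, P
    # EKF (consequent) update; the output is linear in theta, so the Jacobian is the regressor.
    H = regressor(x)
    K = P @ H / (H @ P @ H + R)          # Kalman gain
    theta = theta + K * (d - H @ theta)  # parameter update
    P = P - np.outer(K, H @ P)           # covariance update
    # Gradient-descent (antecedent) update via a simple numerical gradient of 0.5*e^2.
    eps = 1e-5
    base = 0.5 * (d - predict(x)) ** 2
    for params in (centers, sigmas):
        grad = np.zeros_like(params)
        for idx in np.ndindex(params.shape):
            old = params[idx]
            params[idx] = old + eps
            grad[idx] = (0.5 * (d - predict(x)) ** 2 - base) / eps
            params[idx] = old
        params -= eta * grad
    np.clip(sigmas, 0.05, None, out=sigmas)  # keep widths positive

# Usage: learn y = sin(3x) from noisy samples, as a toy noisy-regression problem.
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, n_in)
    d = np.sin(3.0 * x[0]) + 0.05 * rng.standard_normal()
    hybrid_step(x, d)

print("prediction at x=0.3:", predict(np.array([0.3])), " target:", np.sin(0.9))
```

In a full decoupled EKF, the parameters would be partitioned into per-rule blocks, each with its own covariance matrix, which is what keeps the filter's computational cost manageable compared with a single global extended Kalman filter.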
ISSN: 1562-2479; 2199-3211
DOI: 10.1007/s40815-020-00806-z