Incremental Fuzzy Clustering-Based Neural Networks Driven With the Aid of Dynamic Input Space Partition and Quasi-Fuzzy Local Models

Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2024-05, Vol. 54 (5), pp. 2978-2991
Authors: Zhang, Congcong; Oh, Sung-Kwun; Fu, Zunwei; Pedrycz, Witold
Format: Article
Language: English
Description
Abstract: Fuzzy clustering-based neural networks (FCNNs) built on information granulation techniques have been shown to be effective Takagi-Sugeno (TS)-type fuzzy models. However, existing FCNNs cannot cope well with sequential learning tasks. In this study, we introduce incremental FCNNs (IFCNNs), which dynamically update themselves whenever new learning data (e.g., a single datum or a block of data) are incorporated into the dataset. Specifically, we employ a dynamic (incremental) fuzzy C-means (FCM) clustering algorithm to reveal the structure in the data and divide the entire input space into several subregions. In this partition, the dynamic FCM adaptively adjusts the positions of its prototypes using the sequential data. Because training data arrive over time, incremental learning methods may lose classification (prediction) accuracy compared with batch learning models. To tackle this challenge, we utilize quasi-fuzzy local models (QFLMs) based on modified Schmidt neural networks in place of the linear functions commonly used in TS-type fuzzy models, refining and enhancing the ability to represent the behavior of the fuzzy subspaces. Meanwhile, recursive least square error (LSE) estimation is utilized to update the weights of the QFLMs from one-by-one or block-by-block (fixed or varying block size) learning data. In addition, L_2 regularization is considered to ameliorate the deterioration of generalization ability caused by potential overfitting during weight estimation. The proposed method leads to the construction of FCNNs in a new way that can effectively deal with incremental data while delivering sound generalization capability. A broad collection of machine-learning datasets and a real-world application are employed to show the validity and performance of the presented methods. The experimental results show that the proposed approach maintains sound classification accuracy while effectively processing sequential data.
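
To make the dynamic clustering step concrete, below is a minimal Python sketch of a single-pass fuzzy C-means prototype update in the spirit of the dynamic FCM described above: each arriving sample receives a standard FCM membership against the current prototypes, and every prototype is nudged toward the sample in proportion to its fuzzified membership. This follows a common online FCM scheme rather than the paper's exact algorithm; the class name IncrementalFCM, the unit prior mass in S, and the fuzzifier default m=2 are illustrative assumptions.

```python
import numpy as np

class IncrementalFCM:
    """Single-pass fuzzy C-means sketch (illustrative, not the paper's algorithm)."""

    def __init__(self, prototypes, m=2.0):
        self.V = np.asarray(prototypes, dtype=float)  # (c, d) cluster prototypes
        self.m = m                                    # fuzzifier, m > 1 (assumed m = 2)
        self.S = np.ones(len(self.V))                 # accumulated membership mass;
                                                      # unit prior keeps early steps bounded

    def memberships(self, x):
        # Standard FCM membership: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
        d = np.linalg.norm(self.V - x, axis=1) + 1e-12
        ratio = (d[:, None] / d[None, :]) ** (2.0 / (self.m - 1.0))
        return 1.0 / ratio.sum(axis=1)

    def partial_fit(self, x):
        # Nudge each prototype toward x with step size u^m / S, so clusters
        # that have already absorbed more mass move more slowly.
        x = np.asarray(x, dtype=float)
        u = self.memberships(x) ** self.m
        self.S += u
        self.V += (u / self.S)[:, None] * (x - self.V)
        return u
```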
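
The recursive LSE step with L_2 regularization can likewise be sketched as recursive ridge regression: initializing the inverse-correlation matrix P as (1/lam)*I corresponds to an L_2 penalty of lam, and the matrix-inversion lemma lets the same recursion absorb a single sample or a block of any size. This is a generic sketch under those assumptions, not the authors' implementation; RecursiveRidgeLSE and its parameter names are hypothetical.

```python
import numpy as np

class RecursiveRidgeLSE:
    """Recursive least squares with an L2 prior (generic sketch)."""

    def __init__(self, dim, lam=1e-2):
        self.w = np.zeros(dim)          # current weight estimate
        self.P = np.eye(dim) / lam      # P0 = (1/lam) I  <=>  ridge penalty lam

    def update_block(self, X, y):
        """Absorb a block X of shape (b, d) and targets y of shape (b,)."""
        X = np.atleast_2d(np.asarray(X, dtype=float))
        y = np.atleast_1d(np.asarray(y, dtype=float))
        PXt = self.P @ X.T
        # Gain via the matrix-inversion lemma; only a b-by-b inverse is needed.
        G = PXt @ np.linalg.inv(np.eye(len(y)) + X @ PXt)
        self.w += G @ (y - X @ self.w)   # correct weights by the block innovation
        self.P -= G @ X @ self.P         # downdate the inverse-correlation matrix
        return self.w
```

Handling fixed or varying block sizes amounts to calling update_block with differently shaped X at each step; one-by-one learning is simply the b = 1 case.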
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2022.3228303