ST-NILM: A Wavelet Scattering-Based Architecture for Feature Extraction and Multilabel Classification in NILM Signals

Bibliographic Details
Published in: IEEE Sensors Journal, 2024-04, Vol. 24 (7), pp. 10540-10550
Main Authors: de Aguiar, Everton Luiz; da Silva Nolasco, Lucas; Lazzaretti, André Eugenio; Pipa, Daniel Rodrigues; Lopes, Heitor Silvério
Format: Article
Language: English
Online Access: Full Text
Abstract: Nonintrusive load monitoring (NILM) is a relevant tool for improving energy consumption habits, contributing to energy conservation and distribution system planning. In recent years, high-frequency strategies using deep learning have been presented in the literature, achieving state-of-the-art results for detection, feature extraction, and classification of aggregated electrical loads, particularly with the deep neural network model for detection, feature extraction, and multilabel classification (DeepDFML). DeepDFML uses a deep convolutional network (DCN) whose trained weights are shared across different fully connected output networks, and its performance depends on data availability and data augmentation (DA) strategies. Given this scenario, we propose ST-NILM, a new integrated architecture based on the scattering transform (ST). ST-NILM has a DCN with analytical, wavelet-based, non-trained weights, shared with fully connected output networks that perform event detection and multilabel classification of aggregated loads. We compared ST-NILM and DeepDFML on the LIT-SYN dataset: ST-NILM achieved detection results equivalent to those of DeepDFML for two and three aggregated loads and performed better for single loads. The hardware implementation shows that ST-NILM requires less memory, lower GPU load, and substantially less computational effort than DeepDFML. Overall, ST-NILM presents results comparable or superior to other state-of-the-art deep-learning-based methods.
ISSN: 1530-437X; 1558-1748
DOI: 10.1109/JSEN.2024.3360188
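
The abstract describes a fixed, non-trained scattering-transform front-end whose shared output feeds separate fully connected heads for event detection and multilabel load classification. The following is a minimal, illustrative sketch of that general idea, not the authors' implementation: it assumes the third-party kymatio library for the 1-D scattering transform, and the signal length, J/Q settings, hidden-layer sizes, number of appliance classes, and the simplified binary detection head are arbitrary choices for illustration rather than values from the paper.

# Sketch: non-trained scattering front-end shared by two fully connected heads.
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D

SIGNAL_LENGTH = 2 ** 14   # samples per high-frequency window (assumed)
NUM_APPLIANCES = 8        # number of load classes (assumed)

class ScatteringNILM(nn.Module):
    def __init__(self, J=6, Q=8):
        super().__init__()
        # Analytical wavelet filters: this front-end has no trainable weights.
        self.scattering = Scattering1D(J=J, shape=SIGNAL_LENGTH, Q=Q)
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            feat = self.scattering(torch.zeros(1, SIGNAL_LENGTH))
        feat_dim = feat.reshape(1, -1).shape[1]
        # The shared representation branches into two fully connected output networks.
        self.detection_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),                   # simplified: event present / absent
        )
        self.classification_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, NUM_APPLIANCES),      # one logit per appliance (multilabel)
        )

    def forward(self, x):
        # x: (batch, SIGNAL_LENGTH) aggregate current/voltage waveform.
        s = self.scattering(x)                   # (batch, n_coefficients, time)
        s = s.reshape(s.shape[0], -1)            # flatten the shared representation
        return self.detection_head(s), self.classification_head(s)

if __name__ == "__main__":
    model = ScatteringNILM()
    batch = torch.randn(4, SIGNAL_LENGTH)
    det_logits, cls_logits = model(batch)
    # Multilabel output: apply sigmoid independently per class.
    print(det_logits.shape, torch.sigmoid(cls_logits).shape)

Because the scattering filters are fixed, only the two small fully connected heads are trained, which is consistent with the abstract's point about reduced dependence on data volume and lower computational effort compared with a fully trained convolutional front-end.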