Research on non-intrusive load identification method based on multi-feature fusion with improved shufflenetv2
Published in: Measurement Science & Technology, 2024-07, Vol. 35 (7), p. 76104
Main Authors: , ,
Format: Article
Language: English
Online Access: Full text
Abstract: Non-intrusive load monitoring is a significant advancement in energy conservation and smart electricity usage, as it enables the identification of load status and equipment type without the need for extensive sensing instruments. Non-intrusive load identification, one of its key steps, plays a crucial role in correctly identifying electrical appliances. In this paper, a novel approach is presented that fuses multiple appliance features into a new identification feature, which is then used for appliance recognition by an improved lightweight deep learning model. This addresses the low accuracy obtained when a single feature is used for appliance identification. The core idea is to first capture the current data of an appliance during steady-state operation and fuse the image features encoded by the Gramian angular summation field, the Gramian angular difference field, and the Markov transition field into a new recognition feature for each appliance using average weighting. Subsequently, a ShuffleNetV2 lightweight deep learning model based on the squeeze-and-excitation module is used to mine the constructed load feature information for the load classification task. The method is experimentally validated on a self-test dataset, the PLAID dataset, and the WHITED dataset, achieving recognition accuracies of 100%, 98.214%, and 99.745%, respectively. These results demonstrate that the proposed method significantly improves recognition performance compared with the original approach.
ISSN: 0957-0233, 1361-6501
DOI: 10.1088/1361-6501/ad3978
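
The abstract describes two technical steps: (i) encoding a steady-state current cycle as Gramian angular summation field (GASF), Gramian angular difference field (GADF), and Markov transition field (MTF) images and fusing them by average weighting, and (ii) classifying the fused feature with a squeeze-and-excitation (SE) enhanced ShuffleNetV2. The sketches below only illustrate these ideas under stated assumptions: they use the pyts and torchvision libraries, and all parameter values (image size, number of bins, SE reduction ratio, and where the SE block is inserted) are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of the feature-fusion step, assuming the pyts library for the
# image encodings. Function name and parameters are illustrative.
import numpy as np
from pyts.image import GramianAngularField, MarkovTransitionField

def fuse_current_cycle(current_cycle, image_size=64):
    """Encode one steady-state current cycle as GASF, GADF and MTF images
    and fuse them into a single feature image by equal (average) weighting."""
    x = np.asarray(current_cycle).reshape(1, -1)          # (1, n_timestamps)

    gasf = GramianAngularField(image_size=image_size, method='summation')
    gadf = GramianAngularField(image_size=image_size, method='difference')
    mtf = MarkovTransitionField(image_size=image_size, n_bins=8)

    imgs = np.stack([gasf.fit_transform(x)[0],
                     gadf.fit_transform(x)[0],
                     mtf.fit_transform(x)[0]])            # (3, H, W)

    # Note: GASF/GADF lie in [-1, 1] while MTF lies in [0, 1]; a rescaling
    # step could be added before averaging, but is omitted in this sketch.
    return imgs.mean(axis=0)                              # fused (H, W) image
```

The second sketch shows the standard SE building block applied on top of a stock torchvision ShuffleNetV2 backbone. The paper's exact insertion points for the SE modules are not given in the abstract, so placing a single SE block after the last convolution stage is only one plausible arrangement.

```python
# Rough sketch of an SE-augmented ShuffleNetV2 classifier (PyTorch/torchvision).
import torch
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)        # excite: channel-wise rescaling

class SEShuffleNetV2(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        backbone = shufflenet_v2_x1_0(weights=None)
        self.features = nn.Sequential(backbone.conv1, backbone.maxpool,
                                      backbone.stage2, backbone.stage3,
                                      backbone.stage4, backbone.conv5)
        self.se = SEBlock(1024)              # conv5 of the x1.0 model outputs 1024 channels
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.se(self.features(x))
        return self.classifier(x.mean(dim=(2, 3)))

# Example usage with a hypothetical single-cycle current waveform:
#   img = fuse_current_cycle(np.sin(np.linspace(0, 2 * np.pi, 500)))
#   x = torch.tensor(img, dtype=torch.float32).expand(1, 3, -1, -1)  # repeat to 3 channels
#   logits = SEShuffleNetV2(num_classes=11)(x)
```

Since the fused feature is a single-channel image while the stock backbone expects three input channels, the usage example simply repeats the channel; adapting the first convolution to one input channel would be an equally valid design choice, and the abstract does not specify which the authors use.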