Dual-feature and multi-scale fusion using U2-net deep learning model for ECG biometric recognition

Bibliographic details
Published in: Journal of Intelligent & Fuzzy Systems, 2023-11, Vol. 45 (5), p. 7445
Main authors: Hu, Zunmei; Huang, Yuwen; Yang, Yuzhen
Format: Article
Language: English
Online access: Full text
Description
Abstract: To address the limited robustness and recognition precision of traditional electrocardiogram (ECG) biometrics, this paper proposes a dual-feature and multi-scale fusion using U2-net deep learning model (DMFUDM). First, to obtain complementary information from different features, we extract local and global features with one-dimensional multi-resolution local binary patterns (1DMRLBP) and multi-scale differential features (MSDF). Then, to extract robust discriminant information from the 1DMRLBP and MSDF features, a novel two-branch U2-net framework is constructed. In addition, a multi-scale extraction module, consisting of multiple convolution layers with different receptive fields, is designed to capture multi-scale transition information. Finally, a two-level attention module adaptively captures the information most valuable for ECG biometrics. DMFUDM achieves average subject recognition rates of 99.76%, 98.31%, 98.97%, and 98.87% on four databases, respectively, and the experimental results show that it performs competitively with state-of-the-art methods on all four databases.
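
As a rough illustration of the multi-scale extraction module described in the abstract, the sketch below builds a block of parallel 1-D convolution layers with different receptive fields and fuses their responses. It is a minimal PyTorch sketch, not the authors' implementation: the kernel sizes (3, 5, 7, 9), the channel widths, the concatenation-based fusion, and the MultiScaleExtraction name are all assumptions made for the example.

# Illustrative sketch (not the authors' code): a multi-scale extraction block
# built from parallel 1-D convolutions with different receptive fields.
import torch
import torch.nn as nn

class MultiScaleExtraction(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Parallel branches with increasing receptive fields (kernel sizes assumed).
        self.branches = nn.ModuleList([
            nn.Conv1d(in_channels, out_channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7, 9)
        ])
        # Fuse the concatenated multi-scale responses back to out_channels.
        self.fuse = nn.Conv1d(4 * out_channels, out_channels, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, signal_length), e.g. a heartbeat segment.
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(multi_scale))

# Example: a batch of 8 single-lead heartbeat segments of 300 samples each.
if __name__ == "__main__":
    block = MultiScaleExtraction(in_channels=1, out_channels=32)
    out = block(torch.randn(8, 1, 300))
    print(out.shape)  # torch.Size([8, 32, 300])

In the paper's design, a block like this would presumably sit inside each branch of the two-branch U2-net, but the exact placement, kernel sizes, and hyperparameters are specified in the full article rather than the abstract.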
ISSN: 1064-1246, 1875-8967
DOI: 10.3233/JIFS-230721