TEVA: Training-Efficient and Verifiable Aggregation for Federated Learning for Consumer Electronics in Industry 5.0
Published in: | IEEE transactions on consumer electronics 2025, p.1-1 |
---|---|
Main authors: | , , , , , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | Federated learning (FL) has been widely used for privacy-preserving model updates in Industry 5.0, facilitated by 6G networks. Despite FL's privacy-preserving advantages, it remains vulnerable to attacks where adversaries can infer private data from local models or manipulate the central server (CS) to deliver falsified global models. Current privacy-preserving approaches, primarily based on the FedAvg algorithm, fail to optimize training efficiency for non-independent and identically distributed (non-IID) data. This article proposes training-efficient and verifiable aggregation (TEVA) for FL to resolve these issues. This scheme combines threshold Paillier homomorphic encryption (TPHE), verifiable aggregation, and an optimized double momentum update mechanism (OdMum). TEVA not only leverages TPHE to protect the privacy of local models but also ensures the integrity of the global model through a verifiable aggregation mechanism. Additionally, TEVA integrates the OdMum algorithm to effectively address the challenges posed by non-IID data, promoting rapid model convergence and significantly enhancing overall training efficiency. Security analysis indicates that TEVA meets the requirements for privacy protection. Extensive experimental results demonstrate that TEVA can accelerate model convergence while incurring lower computational and communication overheads. |
---|---|
ISSN: | 0098-3063 1558-4127 |
DOI: | 10.1109/TCE.2024.3517739 |
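The abstract's core primitive, aggregating encrypted local models so the central server never sees plaintext updates, rests on the additive homomorphism of the Paillier cryptosystem: multiplying ciphertexts adds their plaintexts. The following toy sketch illustrates that property with a single-key Paillier instance; the paper's TPHE variant splits the decryption key across parties, and the tiny primes, the quantized integer "updates", and the single decryption key here are illustrative assumptions only, not the TEVA construction.

```python
import math
import random

# Toy Paillier cryptosystem. Parameters are illustrative only;
# real deployments use >= 2048-bit moduli and (in TPHE) shared keys.
p, q = 17, 19
n = p * q              # public modulus
n2 = n * n
g = n + 1              # standard generator choice for Paillier
lam = math.lcm(p - 1, q - 1)   # private key

def L(x):
    """The Paillier L-function: L(x) = (x - 1) / n."""
    return (x - 1) // n

# Precomputed decryption factor mu = L(g^lam mod n^2)^{-1} mod n.
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    """Encrypt m with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each client encrypts a quantized model update; the server
# multiplies ciphertexts, which sums the plaintexts under encryption.
updates = [5, 7, 11]
agg_cipher = 1
for u in updates:
    agg_cipher = (agg_cipher * encrypt(u)) % n2

assert decrypt(agg_cipher) == sum(updates) % n  # server learns only the sum
```

Because the server only ever handles ciphertexts and the decrypted result is the sum of all updates, no individual client's model is exposed; TEVA additionally attaches a verification mechanism so clients can detect a falsified aggregate.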