Fast-Convergent and Communication-Alleviated Heterogeneous Hierarchical Federated Learning in Autonomous Driving
Format: Article
Language: English
Abstract: Street Scene Semantic Understanding (denoted as TriSU) is a complex task for autonomous driving (AD). However, an inference model trained on data from one geographical region generalizes poorly when applied in other regions due to inter-city data domain shift. Hierarchical Federated Learning (HFL) offers a potential solution for improving TriSU model generalization through collaborative, privacy-preserving training over distributed datasets from different cities. Unfortunately, it suffers from slow convergence because data from different cities have disparate statistical properties. Going beyond existing HFL methods, we propose a Gaussian heterogeneous HFL algorithm (FedGau) that addresses inter-city data heterogeneity so that convergence is accelerated. In the proposed FedGau algorithm, both individual RGB images and whole RGB datasets are modelled as Gaussian distributions for aggregation weight design. This approach not only differentiates each RGB image by its own statistical distribution, but also exploits the statistics of each city's dataset in addition to the conventionally considered data volume. With the proposed approach, convergence is accelerated by 35.5%-40.6% compared to existing state-of-the-art (SOTA) HFL methods. On the other hand, to reduce the communication resources involved, we further introduce a novel performance-aware adaptive resource scheduling (AdapRS) policy. Unlike the traditional static resource scheduling policy that exchanges a fixed number of models between two adjacent aggregations, AdapRS adjusts the number of model aggregations at different levels of HFL so that unnecessary communication is minimized. Extensive experiments demonstrate that AdapRS saves 29.65% communication overhead compared to the conventional static resource scheduling policy while maintaining almost the same performance.
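
The abstract describes FedGau only at a high level: each RGB image and each city's dataset are summarized as Gaussian distributions, and these statistics (together with data volume) drive the aggregation weights. The sketch below illustrates that general idea under stated assumptions; the per-channel Gaussian summary, the 2-Wasserstein distance between Gaussians, the exponential similarity score, and the `alpha` parameter are illustrative choices, not the weighting actually specified by the paper.

```python
import numpy as np
from scipy.linalg import sqrtm


def dataset_gaussian(images):
    """Summarize an RGB dataset by a per-channel Gaussian (mean, covariance).

    `images` has shape (N, H, W, 3) with values in [0, 1].
    """
    pixels = images.reshape(-1, 3)            # flatten all pixels of all images
    mu = pixels.mean(axis=0)                  # per-channel mean (3,)
    sigma = np.cov(pixels, rowvar=False)      # 3x3 channel covariance
    return mu, sigma


def wasserstein2_gaussian(mu1, s1, mu2, s2):
    """Squared 2-Wasserstein distance between two Gaussians (closed form)."""
    s2_half = sqrtm(s2)
    cross = sqrtm(s2_half @ s1 @ s2_half).real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * cross))


def aggregation_weights(client_stats, client_sizes, global_stats, alpha=1.0):
    """Hypothetical weight design: data volume scaled by statistical similarity.

    `client_stats` is a list of (mu, sigma) per city, `client_sizes` the data
    volumes, and `global_stats` a reference Gaussian (e.g. pooled over all
    cities -- an assumption). Cities whose statistics lie closer to the
    reference get relatively larger weights; weights are normalized to sum to 1.
    """
    mu_g, sig_g = global_stats
    dists = np.array([wasserstein2_gaussian(mu, sig, mu_g, sig_g)
                      for mu, sig in client_stats])
    sims = np.exp(-alpha * dists)             # similarity score in (0, 1]
    raw = np.asarray(client_sizes, dtype=float) * sims
    return raw / raw.sum()
```

In this reading, the resulting weights would replace the pure data-volume weights of plain FedAvg-style aggregation at each HFL level; how FedGau actually combines volume and statistics is only determined by the full paper.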
DOI: 10.48550/arxiv.2409.19560
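
AdapRS is likewise characterized only as "performance-aware." One plausible reading, sketched below purely as an assumption, is a controller that lengthens the interval between aggregations at a given HFL level once the monitored validation metric (e.g. mIoU) plateaus, and shortens it while the metric is still improving. The metric, thresholds, and interval bounds are illustrative, not taken from the paper.

```python
def next_aggregation_interval(history, interval, min_interval=1,
                              max_interval=8, tol=1e-3):
    """Hypothetical performance-aware schedule for one HFL level.

    `history` holds the validation scores observed after each aggregation
    round at this level; `interval` is the current number of lower-level
    rounds performed between two aggregations at this level.
    """
    if len(history) < 2:
        return interval                       # not enough evidence to adapt yet
    gain = history[-1] - history[-2]          # benefit of the last aggregation
    if gain < tol:
        # little benefit from frequent aggregation: communicate less often
        return min(interval * 2, max_interval)
    # still improving: aggregate (and hence communicate) more often
    return max(interval // 2, min_interval)
```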