A fault‐tolerant and scalable boosting method over vertically partitioned data


Full description

Bibliographic Details
Published in: CAAI Transactions on Intelligence Technology 2024-10, Vol. 9 (5), p. 1092-1100
Authors: Jiang, Hai; Shang, Songtao; Liu, Peng; Yi, Tong
Format: Article
Language: English
Online access: Full text
Description
Abstract: Vertical federated learning (VFL) can learn a common machine learning model over vertically partitioned datasets. However, VFL faces two thorny problems: (1) both training and prediction are very vulnerable to stragglers; (2) most VFL methods support only a specific machine learning model. If VFL incorporates features of centralised learning, the above issues can be alleviated. With that in mind, this paper proposes a new VFL scheme, called FedBoost, which has private parties upload compressed partial order relations to the honest-but-curious server before training and prediction. The server can then build a machine learning model and predict samples on the union of the coded data. The theoretical analysis indicates that the absence of any private party will not affect training or prediction as long as one round of communication completes. Our scheme supports canonical tree-based models such as Tree Boosting methods and Random Forests. The experimental results also demonstrate the effectiveness of our scheme.
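The abstract's central idea, that tree-based models can be trained on coded data because threshold splits depend only on the ordering of feature values, can be illustrated with a minimal sketch. The rank encoding below is a hypothetical stand-in for the paper's compressed partial-order format, which the abstract does not specify:

```python
import numpy as np

def encode_partial_order(column):
    """Replace raw feature values with their ranks (hypothetical encoding;
    the paper's actual compressed partial-order format is not given here).
    Tree-based models split on value thresholds, so only the ordering of
    values matters: ranks preserve every possible split decision."""
    order = np.argsort(column, kind="stable")
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(column))
    return ranks

# A party's private feature column and the coded version it would upload.
raw = np.array([3.7, 1.2, 9.9, 5.0])
coded = encode_partial_order(raw)  # ranks: [1, 0, 3, 2]

# Any threshold split on the raw data corresponds to a rank split:
# {x <= 3.7} is the same sample set as {rank <= rank(3.7)},
# so a server holding only the ranks can evaluate the same tree splits.
left_raw = raw <= 3.7
left_coded = coded <= coded[0]
assert (left_raw == left_coded).all()
```

This ordering-preservation property is what lets the server grow trees on the union of coded columns without ever seeing raw feature values; how the encoding is compressed and protected against the curious server is the subject of the paper itself.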
ISSN: 2468-2322
DOI: 10.1049/cit2.12339