PNAS: A privacy preserving framework for neural architecture search services


Bibliographic Details
Published in: Information sciences 2021-09, Vol. 573, p. 370-381
Authors: Pan, Zijie; Zeng, Jiajin; Cheng, Riqiang; Yan, Hongyang; Li, Jin
Format: Article
Language: English
Online access: Full text
Description

Abstract:
• This paper proposes PNAS, a novel privacy-preserving training framework for MLaaS scenarios that supports the optimization of both network parameters and model architecture.
• We design a double-encryption scheme for PNAS. Our scheme eliminates privacy leakage during remote training: samples and their corresponding labels, as well as intermediate feature maps, are protected from an untrusted server.
• We implement a PNAS prototype and evaluate its performance. Experimental results show that PNAS is able to deliver models with higher accuracy.

The success of deep neural networks has contributed to many fields, such as finance, medicine, and speech recognition. Machine learning models adopted in these fields are typically trained on massive amounts of distributed, highly personalized data harvested directly from users. Concerns over data privacy and the demand for better data exploitation have prompted the design of several secure schemes that allow an untrusted server to train ML models for one or multiple parties. However, these existing schemes focus only on network parameters and hardly extend their optimization range to the model architecture. Since the performance of a neural network is closely related to both its parameters and its architecture, it is difficult for service providers to deliver customized, flexible neural networks to each client. To this end, in this paper we propose PNAS, a novel MLaaS framework that enables a server to jointly optimize network parameters and architecture while ensuring the privacy of training sets. A double-encryption scheme is derived to prevent privacy leakage from the samples themselves, as well as from intermediate feature maps during training. Specifically, we adopt functional encryption and feature transformation to secure forward and back propagation. Extensive experiments demonstrate the superiority of our proposal.
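The abstract does not specify the feature transformation used in PNAS, but the general idea behind such transformations can be illustrated with a minimal sketch: an orthogonal change of basis preserves inner products, so a linear layer evaluated on transformed features with correspondingly transformed weights produces the same pre-activations as the plaintext computation. The matrix `R` and all shapes below are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

# Hedged sketch (NOT the paper's exact scheme): an orthogonal transformation
# R hides the raw feature vector while keeping linear-layer outputs intact,
# because W @ R.T @ (R @ x) == W @ x when R.T @ R == I.

rng = np.random.default_rng(0)
d = 8

# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
R, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = rng.standard_normal(d)        # client's private feature vector
W = rng.standard_normal((4, d))   # server-side layer weights

x_enc = R @ x                     # client ships only the transformed features
W_enc = W @ R.T                   # weights expressed in the transformed basis

plain = W @ x                     # what the plaintext forward pass would give
transformed = W_enc @ x_enc       # what the server computes on hidden inputs

assert np.allclose(plain, transformed)
```

This only protects the inputs to linear operations; a full scheme like the one the paper describes must also handle nonlinearities and back propagation, which the abstract says PNAS addresses with functional encryption.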
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2021.05.073