Efficient Secure Neural Network Prediction Protocol Reducing Accuracy Degradation

Published in: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2020/12/01, Vol. E103.A(12), pp. 1367-1380
Authors: NISHIDA, Naohisa, OBA, Tatsumi, UNAGAMI, Yuji, CRUZ, Jason PAUL, YANAI, Naoto, TERUYA, Tadanori, ATTRAPADUNG, Nuttapong, MATSUDA, Takahiro, HANAOKA, Goichiro
Format: Article
Language: English
Online access: Full text
Abstract: Machine learning models inherently memorize significant amounts of information, and thus hiding not only prediction processes but also trained models, i.e., model obliviousness, is desirable in the cloud setting. Several works achieved model obliviousness on the MNIST dataset, but datasets containing more complicated samples, e.g., CIFAR-10 and CIFAR-100, are also used in real applications such as face recognition. Secret-sharing-based secure prediction on CIFAR-10 is difficult to achieve: with a deep architecture such as a CNN, the errors introduced by secret computation grow large and accuracy deteriorates, while performing more precise computation to preserve accuracy requires a large amount of computation. Consequently, applying conventional methods to CNNs as-is does not reproduce the results reported in prior work. In this paper, we propose two approaches to solve this problem. First, we propose a new protocol named Batch-normalized Activation that combines BatchNormalization and Activation. Because BatchNormalization involves real-number operations, its parameters must be converted into integers for secret computation, which introduces calculation errors and decreases accuracy. Our protocol eliminates these calculation errors and the resulting accuracy degradation; it also simplifies the processing and reduces the amount of computation. Second, we explore an architecture that is both friendly to secret computation and highly accurate. Related works use low-accuracy, simple architectures, but in practice a high-accuracy architecture should be used, so we also explore a high-accuracy architecture for the CIFAR-10 dataset. Our proposed protocol can compute a prediction on CIFAR-10 within 15.05 seconds with 87.36% accuracy while providing model obliviousness.
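The intuition behind combining BatchNormalization with the activation can be illustrated in the clear. Below is a minimal NumPy sketch, not the authors' actual protocol: it assumes a sign/step-style activation (as in binarized networks) and hypothetical function names, and shows how the real-valued BatchNormalization parameters can be folded into a single integer threshold so that the step after folding needs only an integer comparison, with no real-number arithmetic and no per-inference rounding error.

    import numpy as np

    # BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta. If the
    # activation depends only on the sign of y, then for gamma > 0
    # step(y) = step(x - tau) with tau = mean - beta * sqrt(var + eps) / gamma,
    # and tau can be rounded to an integer once, in the clear.

    def fold_bn_threshold(gamma, beta, mean, var, eps=1e-5):
        """Precompute integer thresholds (done in the clear, before sharing)."""
        tau = mean - beta * np.sqrt(var + eps) / gamma
        # For integer inputs x: x - tau >= 0  <=>  x >= ceil(tau)  (gamma > 0).
        return np.ceil(tau).astype(np.int64), gamma < 0  # flip where gamma < 0

    def bn_step_plain(x, gamma, beta, mean, var, eps=1e-5):
        """Reference: real-valued BatchNormalization followed by a step activation."""
        y = gamma * (x - mean) / np.sqrt(var + eps) + beta
        return np.where(y >= 0, 1, -1)

    def bn_step_folded(x_int, tau_int, flip):
        """Integer-only equivalent: one comparison per element."""
        s = np.where(x_int >= tau_int, 1, -1)
        return np.where(flip, -s, s)

    rng = np.random.default_rng(0)
    x = rng.integers(-100, 100, size=1000)      # stand-in fixed-point inputs
    gamma = rng.normal(1.0, 0.5, 1000)
    beta, mean = rng.normal(0, 0.5, 1000), rng.normal(0, 10, 1000)
    var = rng.uniform(0.5, 4.0, 1000)
    tau, flip = fold_bn_threshold(gamma, beta, mean, var)
    assert (bn_step_plain(x, gamma, beta, mean, var)
            == bn_step_folded(x, tau, flip)).all()

Because the thresholds are computed and rounded once in the clear by the model owner, the secure computation itself sees only integer comparisons, which is the sense in which folding eliminates the integer-conversion error described in the abstract.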
ISSN: 0916-8508, 1745-1337
DOI: 10.1587/transfun.2020TAP0011