Privacy-Preserving Encrypted Low-Dose CT Denoising
Format: Article
Language: English
Abstract: Deep learning (DL) has made significant advances in tomographic
imaging, particularly in low-dose computed tomography (LDCT) denoising. A
recent trend is for servers to train powerful models on large amounts of
self-collected private data and to provide application programming interfaces
(APIs) to users, as with ChatGPT. To avoid model leakage, users are required to
upload their data to the server model, but this raises public concerns about
the potential risk of privacy disclosure, especially for medical data. To
alleviate these concerns, in this paper we propose to denoise LDCT directly in
the encrypted domain, achieving privacy-preserving cloud services without
exposing private data to the server. To this end, we employ homomorphic
encryption to encrypt private LDCT data, which are then transferred to a server
model trained on plaintext LDCT for denoising. However, since traditional DL
operations, such as convolution and linear transformation, cannot be applied
directly in the encrypted domain, we transform the fundamental mathematical
operations of the plaintext domain into their counterparts in the encrypted
domain. In addition, we present two interactive frameworks, for linear and
nonlinear models respectively, both of which achieve lossless operation. In
this way, the proposed methods offer two merits: data privacy is well
protected, and the server model is free from the risk of model leakage.
Moreover, we provide a theoretical proof of the lossless property of our
framework. Finally, experiments demonstrate that the transferred content is
well protected and cannot be reconstructed. The code will be released once the
paper is accepted.
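The abstract does not specify which homomorphic scheme is used, but the key idea, that linear operations such as a convolution's weighted sums can be evaluated directly on ciphertexts, can be illustrated with a toy additively homomorphic Paillier cryptosystem. The sketch below is an assumption for illustration only (tiny primes, no security), not the paper's actual protocol:

```python
import math
import secrets

# Toy Paillier cryptosystem -- illustration only, NOT secure (tiny primes).
p, q = 293, 433                # small demonstration primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda for n = p*q
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Enc(m) = (1+n)^m * r^n mod n^2, with random r coprime to n."""
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
a, b = 17, 25
assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b

# Scalar homomorphism: exponentiating a ciphertext scales the plaintext.
# Together these suffice to evaluate any fixed linear layer (e.g. a
# convolution's weighted sum) on encrypted pixel values server-side.
w = 3
assert decrypt(pow(encrypt(a), w, n2)) == w * a
```

Nonlinear layers (activations) cannot be evaluated this way, which is presumably why the paper introduces interactive frameworks for the nonlinear case.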
DOI: 10.48550/arxiv.2310.09101