Reducing Overlearning through Disentangled Representations by Suppressing Unknown Tasks
Main authors: , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Summary: Existing deep learning approaches for learning visual features tend to overlearn and extract more information than is required for the task at hand. From a privacy preservation perspective, the input visual information is not protected from the model, enabling the model to become more intelligent than it is trained to be. Current approaches for suppressing additional task learning assume the presence of ground truth labels for the tasks to be suppressed during training time. In this research, we propose a three-fold novel contribution: (i) a model-agnostic solution for reducing model overlearning by suppressing all the unknown tasks, (ii) a novel metric to measure the trust score of a trained deep learning model, and (iii) a simulated benchmark dataset, PreserveTask, having five different fundamental image classification tasks to study the generalization nature of models. In the first set of experiments, we learn disentangled representations and suppress overlearning of five popular deep learning models: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet on the PreserveTask dataset. Additionally, we show results of our framework on the color-MNIST dataset and practical applications of face attribute preservation on the Diversity in Faces (DiF) and IMDB-Wiki datasets.
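The abstract does not spell out the suppression objective, but one common way to suppress an unknown or unwanted task without its ground-truth labels is to push an auxiliary classifier's predictions toward the uniform distribution, i.e. maximize its output entropy while minimizing the primary task's cross-entropy. The sketch below illustrates that idea only; the loss shape, the `lam` trade-off weight, and the function names are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy of each row of a probability matrix
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def suppression_loss(primary_logits, primary_labels, aux_logits, lam=1.0):
    # primary (known) task: standard cross-entropy against true labels
    p = softmax(primary_logits)
    ce = -np.log(p[np.arange(len(primary_labels)), primary_labels] + 1e-12).mean()
    # auxiliary (unknown) task: reward high-entropy, i.e. uninformative,
    # predictions so the shared features carry little about that task
    h = entropy(softmax(aux_logits)).mean()
    return ce - lam * h
```

Minimizing this loss keeps the primary task accurate while driving the auxiliary head toward chance-level predictions: confident (low-entropy) auxiliary logits incur a higher loss than uniform ones.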
DOI: 10.48550/arxiv.2005.10220