On Deep Representation Learning from Noisy Web Images
Format: Article
Language: English
Abstract: The ever-growing content of Web images may be the next important data source for scaling up deep neural networks, which have recently achieved great success in the ImageNet classification challenge and related tasks. This prospect, however, has not been validated for convolutional networks (convnets) -- one of the best-performing deep models -- because of their supervised training regime. Since unsupervised alternatives do not generalize learned models to new domains as well as convnets do, we use convnets for semi-supervised representation learning. Our approach is to train convnets as general feature detectors on massive amounts of unlabeled and noisy Web images, despite challenges in the data such as a high level of mislabeled examples, outliers, and data biases. Extensive experiments are conducted at several data scales, with different network architectures, and with data reranking techniques. The learned representations are evaluated on nine public datasets covering various topics. The best results obtained by our convnets, trained on 3.14 million Web images, outperform AlexNet trained on the 1.2 million clean images of ILSVRC 2012 and close the gap with VGG-16. These prominent results suggest a budget solution for using deep learning in practice and motivate more research in semi-supervised representation learning.
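The abstract describes evaluating a convnet trained on noisy Web images by reusing its learned representation on other datasets. The sketch below, which is not the authors' code, illustrates one common form of that protocol (a fixed feature extractor plus a linear probe) using an assumed PyTorch/torchvision setup; the AlexNet-style backbone, the 9216-dimensional feature size, and `num_classes` are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch/torchvision API): treat a convnet as a fixed
# feature extractor and fit a linear classifier on a small labeled target set,
# a standard way to evaluate learned representations as described in the abstract.
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone; in the paper's setting its weights would come from training
# on millions of noisy Web images rather than on clean ILSVRC 2012 labels.
backbone = models.alexnet(weights=None)
backbone.classifier = nn.Identity()   # keep the flattened convolutional features
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Return fixed feature vectors for a batch of images shaped (N, 3, 224, 224)."""
    return backbone(images)

# Linear probe on a hypothetical downstream dataset with `num_classes` classes.
num_classes = 10
probe = nn.Linear(9216, num_classes)  # 9216 = flattened AlexNet conv features
optimizer = torch.optim.SGD(probe.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step of the linear probe on frozen features."""
    feats = extract_features(images)
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the linear probe is trained here; how well it performs on held-out data is then taken as a measure of how general the frozen representation is.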
DOI: 10.48550/arxiv.1512.04785