Exploiting Convolutional Representations for Multiscale Human Settlement Detection
Format: Article
Language: English
Online access: Order full text
Abstract: We test this premise by exploring the representation spaces of a single deep convolutional network, together with their visualization, to argue for a novel unified feature extraction framework. The objective is to utilize and re-purpose trained feature extractors, without retraining the network, on three remote sensing tasks: superpixel mapping, pixel-level segmentation, and semantics-based image visualization. By leveraging the same convolutional feature extractors and viewing them as visual information extractors that encode different image representation spaces, we demonstrate preliminary inductive transfer learning potential in multiscale experiments that incorporate detail from the edge level up to the semantic level.
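The core idea of re-purposing one trained network's intermediate activations as multiscale features, without any retraining, can be sketched with forward hooks. This is an illustrative sketch only: the tiny three-stage CNN below is a hypothetical stand-in (the paper's setting would load an already-trained extractor), and the layer names "edge", "mid", and "semantic" are assumed labels mirroring the edge-level-to-semantic-level scales described in the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in backbone; in the paper's setting the weights would
# come from a network already trained on a source task and be left frozen.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # shallow: edge-level
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # intermediate
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # deep: semantic-level
)

# Capture activations at chosen depths with forward hooks -- no retraining,
# the same frozen extractor serves all downstream tasks.
features = {}

def save(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

backbone[2].register_forward_hook(save("edge"))
backbone[5].register_forward_hook(save("mid"))
backbone[8].register_forward_hook(save("semantic"))

image = torch.randn(1, 3, 64, 64)  # stand-in for a remote sensing tile
with torch.no_grad():
    backbone(image)

for name, f in features.items():
    print(name, tuple(f.shape))  # one feature map per scale
```

Each hooked layer yields a feature map at a different spatial resolution and semantic depth, which downstream tasks (superpixel mapping, segmentation, visualization) can consume without touching the extractor's weights.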
DOI: 10.48550/arxiv.1707.05683