Adaptive Linear Span Network for Object Skeleton Detection
Published in: IEEE Transactions on Image Processing, 2021, Vol. 30, pp. 5096-5108
Main authors:
Format: Article
Language: English
Abstract: Conventional networks for object skeleton detection are usually hand-crafted. Despite their effectiveness, hand-crafted network architectures lack a theoretical basis and require intensive prior knowledge to implement representation complementarity for objects/parts at different granularities. In this paper, we propose an adaptive linear span network (AdaLSN), driven by neural architecture search (NAS), to automatically configure and integrate scale-aware features for object skeleton detection. AdaLSN is formulated with the theory of linear span, which provides one of the earliest explanations for multi-scale deep feature fusion. AdaLSN is materialized by defining a mixed unit-pyramid search space, which goes beyond many existing search spaces that use only unit-level or pyramid-level features. Within the mixed space, we apply genetic architecture search to jointly optimize unit-level operations and pyramid-level connections for adaptive feature space expansion. AdaLSN substantiates its versatility by achieving a significantly better accuracy-latency trade-off than the state of the art. It also demonstrates general applicability to image-to-mask tasks such as edge detection and road extraction. Code is available at https://github.com/sunsmarterjie/SDL-Skeleton.
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2021.3078079
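
As a concrete illustration of the search procedure the abstract describes (genetic architecture search over a mixed unit-pyramid space), the sketch below evolves a genome that jointly encodes unit-level operations and pyramid-level connections. The encoding, the candidate operation list, and the surrogate fitness are all illustrative assumptions, not the authors' implementation; the actual code lives in the linked repository.

```python
import random

# Hypothetical encoding of a mixed unit-pyramid search space (an
# assumption, not the paper's exact genome): one candidate operation
# per backbone stage (unit level) and a binary upper-triangular mask
# of cross-stage connections (pyramid level).
UNIT_OPS = ["conv3x3", "conv5x5", "dilated3x3", "identity"]
NUM_STAGES = 5

def random_genome():
    units = [random.choice(UNIT_OPS) for _ in range(NUM_STAGES)]
    links = [[1 if j > i and random.random() < 0.5 else 0
              for j in range(NUM_STAGES)] for i in range(NUM_STAGES)]
    return {"units": units, "links": links}

def fitness(genome):
    # Placeholder surrogate: the real search would decode the genome
    # into a network and score validation accuracy (e.g., F-measure)
    # against latency on a skeleton-detection benchmark.
    op_score = sum(op != "identity" for op in genome["units"])
    link_score = sum(map(sum, genome["links"]))
    return op_score + 0.5 * link_score

def crossover(a, b):
    # Single-point crossover on unit operations; pyramid links are
    # inherited wholesale from one parent.
    cut = random.randrange(1, NUM_STAGES)
    units = a["units"][:cut] + b["units"][cut:]
    links = [row[:] for row in random.choice((a, b))["links"]]
    return {"units": units, "links": links}

def mutate(genome, p=0.1):
    child = {"units": list(genome["units"]),
             "links": [row[:] for row in genome["links"]]}
    for i in range(NUM_STAGES):
        if random.random() < p:          # unit-level mutation
            child["units"][i] = random.choice(UNIT_OPS)
        for j in range(i + 1, NUM_STAGES):
            if random.random() < p:      # pyramid-level mutation
                child["links"][i][j] ^= 1
    return child

def evolve(pop_size=20, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]      # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best unit ops:", best["units"])
    print("surrogate fitness:", fitness(best))
```

Under this toy surrogate the loop simply converges toward dense connectivity; the point is only to show how unit-level and pyramid-level genes can be optimized jointly in one population, rather than searching the two levels in separate spaces.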