Explicit information for category-orthogonal object properties increases along the ventral stream

Detailed description

Saved in:
Bibliographic details
Published in: Nature Neuroscience 2016-04, Vol. 19 (4), p. 613-622
Main authors: Hong, Ha; Yamins, Daniel L. K.; Majaj, Najib J.; DiCarlo, James J.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: This study shows that the amount of linearly decodable information for category-orthogonal object tasks (for example, position, scale, pose, perimeter and aspect ratio) increases up the ventral visual hierarchy, ultimately matching human levels in inferior temporal cortex. It also provides a computational model that explains how this pattern of information arises. Extensive research has revealed that the ventral visual stream hierarchically builds a robust representation for supporting visual object categorization tasks. We systematically explored the ability of multiple ventral visual areas to support a variety of 'category-orthogonal' object properties such as position, size and pose. For complex naturalistic stimuli, we found that the inferior temporal (IT) population encodes all measured category-orthogonal object properties, including those properties often considered to be low-level features (for example, position), more explicitly than earlier ventral stream areas. We also found that the IT population better predicts human performance patterns across properties. A hierarchical neural network model based on simple computational principles generates these same cross-area patterns of information. Taken together, our empirical results support the hypothesis that all behaviorally relevant object properties are extracted in concert up the ventral visual hierarchy, and our computational model explains how that hierarchy might be built.
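The notion of "linearly decodable information" used above can be illustrated with a minimal sketch (this is not the authors' code; the simulated population, property, and noise level are all hypothetical): simulate neural responses that carry a category-orthogonal property (here, horizontal position) linearly plus noise, fit a linear readout on training stimuli, and score it on held-out stimuli.

```python
# Hypothetical sketch of linear decodability of a category-orthogonal
# property from a simulated neural population (not the study's actual data
# or pipeline).
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_neurons = 500, 100
position = rng.uniform(-1.0, 1.0, size=n_stimuli)   # the property to decode
tuning = rng.normal(size=n_neurons)                 # each neuron's tuning weight

# Population responses: property encoded linearly, plus Gaussian noise.
responses = np.outer(position, tuning) + 0.5 * rng.normal(size=(n_stimuli, n_neurons))

# Train/test split over stimuli.
train, test = slice(0, 400), slice(400, 500)

# Fit a linear readout by least squares (column of ones = intercept term).
X_train = np.column_stack([responses[train], np.ones(400)])
coef, *_ = np.linalg.lstsq(X_train, position[train], rcond=None)

# Decode held-out positions and measure explained variance (R^2):
# higher R^2 means the property is more explicitly (linearly) available.
X_test = np.column_stack([responses[test], np.ones(100)])
pred = X_test @ coef
r2 = 1.0 - np.sum((position[test] - pred) ** 2) / np.sum(
    (position[test] - position[test].mean()) ** 2
)
print(f"held-out R^2: {r2:.2f}")
```

In the study's framing, the same kind of linear readout is trained on each visual area's population responses; an area where the readout performs better is said to carry the property more explicitly.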
ISSN: 1097-6256
eISSN: 1546-1726
DOI: 10.1038/nn.4247