Why concepts are (probably) vectors

Bibliographic Details
Published in: Trends in Cognitive Sciences, 2024-09, Vol. 28 (9), p. 844-856
Authors: Piantadosi, Steven T., Muller, Dyana C.Y., Rule, Joshua S., Kaushik, Karthikeya, Gorenstein, Mark, Leib, Elena R., Sanford, Emily
Format: Article
Language: English
Online access: Full text
Description
Summary: Modern language models and vector-symbolic architectures show that vector-based models are capable of handling the compositional, structured, and symbolic properties required for human concepts. Vectors are also able to handle key phenomena from psychology, including computation of features and similarities, reasoning about relations and analogies, and representation of theories. Language models show how vector representations of word and sentence semantics can interface between concepts and language, as seen in definitional theories of concepts or ad hoc concepts. The idea of Church encoding, from logic, allows us to understand how meaning can arise in vector-based or symbolic systems. By combining these recent computational results with classic findings in psychology, vector-based models provide a compelling account of human conceptual representation.

For decades, cognitive scientists have debated what kind of representation might characterize human concepts. Whatever the format of the representation, it must allow for the computation of varied properties, including similarities, features, categories, definitions, and relations. It must also support the development of theories, ad hoc categories, and knowledge of procedures. Here, we discuss why vector-based representations provide a compelling account that can meet all these needs while being plausibly encoded into neural architectures. This view has become especially promising with recent advances in both large language models and vector symbolic architectures. These innovations show how vectors can handle many properties traditionally thought to be out of reach for neural models, including compositionality, definitions, structures, and symbolic computational processes.
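The abstract's central claims about similarity, relations, and compositional structure have standard computational renderings. Below is a minimal sketch, not taken from the article, of how a vector symbolic architecture can encode and query a structured proposition using circular-convolution binding in the style of holographic reduced representations; the concept vectors here are random stand-ins for learned embeddings, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 1024  # high dimensionality keeps bound role/filler pairs nearly orthogonal

def random_vector(dim=DIM):
    """Random unit vector standing in for a learned concept embedding."""
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def cosine(a, b):
    """Similarity between two concept vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bind(a, b):
    """Bind a role to a filler via circular convolution (computed with FFTs)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, role):
    """Approximately recover the filler bound to `role` via circular correlation."""
    return np.real(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(role))))

# Roles and fillers (all hypothetical, randomly generated).
agent, action, patient = random_vector(), random_vector(), random_vector()
dog, bite, man = random_vector(), random_vector(), random_vector()

# Encode the structured proposition "dog bites man" as a single vector:
# each role is bound to its filler, and the pairs are superposed by addition.
proposition = bind(agent, dog) + bind(action, bite) + bind(patient, man)

# Query the structure: unbinding the agent role yields a noisy copy of `dog`,
# which nearest-neighbor similarity cleans up.
probe = unbind(proposition, agent)
lexicon = {"dog": dog, "bite": bite, "man": man}
print(max(lexicon, key=lambda w: cosine(probe, lexicon[w])))  # -> dog
```

The same machinery illustrates why compositionality is argued to be within reach of vectors: the bound pairs do not blend into one another, so a single fixed-width vector can carry recoverable role/filler structure, and the familiar similarity computations over concepts still apply to its parts.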
ISSN: 1364-6613
eISSN: 1879-307X
DOI:10.1016/j.tics.2024.06.011