Geometry-based anisotropy representation learning of concepts for knowledge graph embedding


Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023-09, Vol. 53 (17), pp. 19940-19961
Authors: Yu, Jibin; Zhang, Chunhong; Hu, Zheng; Ji, Yang; Fu, Dongjun; Wang, Xueyu
Format: Article
Language: English
Abstract: The entities in knowledge graphs are generally categorized into concepts and instances, where each concept represents the abstraction of a set of instances with common properties. Most previous knowledge graph embedding methods treat them in the same way, projecting both into a low-dimensional space as vector points without explicitly distinguishing them, and therefore ignore the potential specificity of concepts. Some recent studies address this problem by modeling each concept as a sphere rather than a vector point. However, the isotropy of the sphere limits its ability to model the semantic abstraction of concepts, as well as the complex relations between concepts and instances. To solve this problem, we propose to model concepts using anisotropic geometric shapes to enrich the representational power of concepts. Two algorithms, named TransEllipsoid and TransCuboid, are presented to project each concept as an ellipsoid and a cuboid in the embedding space, respectively. The anisotropy of a concept embedding is learned by allowing the lengths of the axes of the ellipsoid, or of the edges of the cuboid, to vary across dimensions. Experimental results on three real-world datasets show that modeling the anisotropy of concept embeddings significantly benefits not only the learned representations of concepts but also those of the corresponding instances. Visualization of the embedding results reveals human-intuitive relative positions between concepts and instances and provides potential interpretability for the transitivity of isA relations.
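For intuition only, the following is a minimal Python sketch of the core idea described in the abstract: a concept represented as an axis-aligned ellipsoid whose per-dimension semi-axis lengths encode anisotropy, with an instance point tested for membership via a normalized distance. All names are hypothetical and the paper's actual scoring and training procedure is not reproduced here.

```python
import numpy as np

# Hypothetical sketch in the spirit of TransEllipsoid: a concept is an
# axis-aligned ellipsoid with center c and per-dimension semi-axis lengths a
# (the anisotropy), rather than a single point or an isotropic sphere.
# The exact scoring function in the paper may differ.

rng = np.random.default_rng(0)
dim = 8

# Concept parameters (learned in practice): center and per-dimension semi-axes.
concept_center = rng.normal(size=dim)
concept_axes = np.abs(rng.normal(size=dim)) + 0.1  # keep axes strictly positive

# An instance is an ordinary point embedding.
instance = concept_center + 0.05 * rng.normal(size=dim)

def ellipsoid_score(point, center, axes):
    """Normalized squared distance; a value <= 1 means the point lies inside the ellipsoid."""
    return np.sum(((point - center) / axes) ** 2)

score = ellipsoid_score(instance, concept_center, concept_axes)
print("inside concept" if score <= 1.0 else "outside concept", score)
```

Because each semi-axis length can differ per dimension, the concept region can stretch or shrink along individual embedding dimensions, which is the anisotropy the abstract contrasts with sphere-based concept embeddings.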
ISSN: 0924-669X
eISSN: 1573-7497
DOI: 10.1007/s10489-023-04528-1