Relational grounded language learning
Main authors: | , , , |
---|---|
Format: | Conference proceedings |
Language: | eng |
Online access: | Order full text |
Abstract: | In the past, research on learning language models mainly
used syntactic information during the learning process, but in recent
years researchers have also begun to use semantic information. This
paper presents such an approach, where the input of our learning
algorithm is a dataset of pairs made up of sentences and the contexts
in which they are produced. The system we present is based on inductive
logic programming techniques that aim to learn a mapping between
n-grams and a semantic representation of their associated meaning.
Experiments have shown that such a mapping can be learned, and that it
later made it possible to generate relevant descriptions of images or
to learn the meaning of words without any linguistic resources. |
ISSN: | 0922-6389 |
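The abstract gives only a high-level view of the learning task: pairing sentences with the contexts in which they occur, then associating n-grams with elements of a semantic representation. As a rough illustration only (the paper uses inductive logic programming, which is not reproduced here), the following toy sketch scores n-gram/predicate associations by co-occurrence counts; all data, predicate names, and function names below are hypothetical.

```python
# Illustrative sketch only: a co-occurrence stand-in for the kind of
# n-gram -> semantic-predicate mapping the abstract describes. The
# paper's actual system is ILP-based; this is not that system.
from collections import defaultdict

def ngrams(tokens, n):
    """Return word n-grams (as tuples) from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def learn_mapping(pairs, n=1):
    """pairs: list of (sentence, set of context predicates).
    Returns, for each n-gram, counts of co-occurring predicates."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence, context in pairs:
        for gram in ngrams(sentence.split(), n):
            for pred in context:
                counts[gram][pred] += 1
    return counts

def best_meaning(counts, gram):
    """Predicate most frequently co-occurring with the given n-gram."""
    return max(counts[gram], key=counts[gram].get)

# Hypothetical sentence/context pairs (contexts as logical predicates).
pairs = [
    ("the red ball", {"color(red)", "shape(ball)"}),
    ("a red cube", {"color(red)", "shape(cube)"}),
    ("the blue ball", {"color(blue)", "shape(ball)"}),
]
counts = learn_mapping(pairs)
print(best_meaning(counts, ("red",)))  # -> color(red)
```

Here "red" co-occurs with `color(red)` in two contexts but with each shape predicate only once, so the count-based mapping singles out the color predicate; the real ILP system instead induces logical rules over such pairs.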