Towards automatic evaluation of learning object metadata quality
Saved in:
Main authors:
Format: Conference proceedings
Language: English
Online access: Full text
Abstract: Thanks to recent developments in the automatic generation of metadata and interoperability between repositories, the production, management and consumption of learning object metadata are vastly surpassing the human capacity to review or process these metadata. However, we need to make sure that the presence of some low-quality metadata does not compromise the performance of services that rely on that information. Consequently, there is a need for automatic assessment of the quality of metadata, so that tools or users can be alerted about low-quality instances. In this paper, we present several quality metrics for learning object metadata. We applied these metrics to a sample of records from a real repository and compared the results with the quality assessment given to the same records by a group of human reviewers. Through correlation and regression analysis, we found that one of the metrics, the text information content, could be used as a predictor of the human evaluation. While this metric is not a definitive measurement of the "real" quality of the metadata record, we present several ways in which it can be used. We also propose new research in other quality dimensions of the learning object metadata.
ISSN: 0302-9743
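The abstract names "text information content" as the metric that best predicted the human quality ratings. As a rough, hypothetical illustration of such a measure (not the implementation used in the paper), the Python sketch below scores a record's free-text fields by the average self-information, -log2 p(w), of their words, with word probabilities estimated from an assumed collection of records; the field names "title" and "description" and the smoothing constant are illustrative assumptions.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokenizer for free-text metadata fields."""
    return re.findall(r"[a-z]+", text.lower())

def build_word_probabilities(records):
    """Estimate word probabilities from the free-text fields of a record collection."""
    counts = Counter()
    for record in records:
        for field in ("title", "description"):  # assumed field names
            counts.update(tokenize(record.get(field, "")))
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def text_information_content(record, word_probs, smoothing=1e-6):
    """Average self-information, -log2 p(w), over the words of a record.

    Rare (more informative) words raise the score; repetitive boilerplate lowers it.
    """
    words = []
    for field in ("title", "description"):
        words.extend(tokenize(record.get(field, "")))
    if not words:
        return 0.0
    return sum(-math.log2(word_probs.get(w, smoothing)) for w in words) / len(words)

# Toy usage: a descriptive record scores higher than a boilerplate one.
records = [
    {"title": "Introduction to photosynthesis",
     "description": "Interactive simulation of light-dependent reactions in plant cells."},
    {"title": "Untitled resource",
     "description": "resource resource resource"},
]
probs = build_word_probabilities(records)
for r in records:
    print(r["title"], round(text_information_content(r, probs), 2))
```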