Utilizing context-relevant keywords extracted from a large collection of user-generated documents for music discovery



Bibliographic Details
Published in: Information Processing & Management, September 2017, Vol. 53 (5), pp. 1185-1200
Main Authors: Hyung, Ziwon; Park, Joon-Sang; Lee, Kyogu
Format: Article
Language: English
Online Access: Full Text
Description
Abstract:
• Our system develops generalized context-relevant music descriptors by extracting keywords from a large collection of user-generated documents.
• The music descriptors contain various context-relevant terms that could enhance semantic music search and discovery.
• We identified a correlation between the proposed music descriptors and conventional features such as acoustic features or lyrics.
• User studies confirm that the proposed method can be applied to semantic music search/discovery and context-aware music recommendation.

The contextual background of a user is one of the important criteria when deciding what music to listen to. In this paper, we propose a novel method to embed the user context for music search and retrieval. The proposed system extracts keywords from a large collection of documents written by users. Each of these documents contains a personal story about the writer's situation and/or mood, followed by a song request. We consider that there is a strong correlation between the story and the requested song. Therefore, by extracting keywords from these documents, it is possible to develop a list of terms that can generally be used to describe the user context when requesting a song, which may then be employed to represent a music item in a richer manner. Once each song is represented using the proposed context-relevant music descriptors, we perform Latent Dirichlet Allocation to retrieve similar music based on context similarity. Through a series of experiments, we identified a correlation between the proposed music descriptors and conventional approaches, such as acoustic features or lyrics. The identified correlation can be used to auto-tag songs that have no document association. We also qualitatively evaluated our system by comparing the performance of our proposed music descriptors with other conventional features for music retrieval. The results showed that the performance of the proposed music descriptors was competitive with conventional features, suggesting their potential use for describing music in semantic music search/retrieval.
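The retrieval idea in the abstract, representing each song by context-relevant keywords drawn from user stories, fitting Latent Dirichlet Allocation over those representations, and ranking songs by context similarity, can be sketched as follows. This is a minimal illustration only: the toy keyword corpus, the song names, the two-topic setting, and the use of cosine similarity over topic mixtures are assumptions for demonstration, not the authors' exact pipeline.

```python
# Hypothetical sketch: songs represented by context keywords extracted from
# user-generated request documents, LDA topic mixtures per song, and
# similarity-based retrieval over those mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy "context descriptors": keywords that might be extracted from the
# personal stories accompanying each song request (illustrative data).
songs = {
    "song_a": "rain night lonely breakup sad memories",
    "song_b": "gym workout energy running upbeat fast",
    "song_c": "rainy evening melancholy missing someone",
}
names = list(songs)

# Bag-of-words counts over the keyword documents.
counts = CountVectorizer().fit_transform(songs.values())

# Fit LDA and obtain a topic distribution per song.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)  # shape: (n_songs, n_topics)

# Retrieve music similar to song_a by cosine similarity of topic mixtures.
sims = cosine_similarity(topics[0:1], topics)[0]
ranking = sorted(zip(names, sims), key=lambda p: -p[1])
print(ranking)  # song_a ranks first (self-similarity)
```

In the paper's setting the descriptors come from a large real document collection rather than a toy dictionary, and the reported correlation with acoustic and lyric features is what allows songs with no associated documents to be auto-tagged before entering such a retrieval index.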
ISSN: 0306-4573, 1873-5371
DOI: 10.1016/j.ipm.2017.04.006