Automatically finding relevant citations for clinical guideline development


Detailed description

Saved in:
Bibliographic details
Published in: Journal of biomedical informatics 2015-10, Vol.57, p.436-445
Main authors: Bui, Duy Duc An; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
Format: Article
Language: eng
Subjects:
Online access: Full text
Description
Abstract:
Highlights:
• Automated citation finding can augment manual literature search.
• We built a gold standard with citations from 653 guideline recommendations.
• Compared with PubMed, the query expansion method improved recall with a non-significant loss in precision.
• The unsupervised citation ranking approach performed better than standard PubMed ranking and a machine learning classifier.

Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. In the age of information technology, the process of literature search is still conducted manually; it is therefore costly, slow, and subject to human error. In this research, we sought to improve the traditional search approach using innovative query expansion and citation ranking approaches. We developed a citation retrieval system composed of query expansion and citation ranking methods. The methods are unsupervised and easily integrated over the PubMed search engine. To validate the system, we developed a gold standard consisting of citations that were systematically searched and screened to support the development of cardiovascular clinical practice guidelines. The expansion and ranking methods were evaluated separately and compared with baseline approaches. Compared with the baseline PubMed expansion, the query expansion algorithm improved recall (80.2% vs. 51.5%) with a small loss in precision (0.4% vs. 0.6%). The algorithm could find all citations used to support a larger number of guideline recommendations than the baseline approach (64.5% vs. 37.2%, p
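As a rough illustration of how an unsupervised query-expansion-and-ranking pipeline could sit on top of PubMed, the sketch below uses the public NCBI E-utilities esearch/efetch endpoints. The expansion terms, the term-overlap ranking heuristic, and all function names are assumptions made for illustration; they are not the authors' algorithms or code.

# Hypothetical sketch: expand a recommendation's key terms into a PubMed query,
# retrieve candidate citations via NCBI E-utilities, and rank them with a simple
# unsupervised term-overlap score. Illustrative only; not the paper's method.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def expand_query(key_terms, synonyms):
    # OR each key term with its (assumed) synonyms, then AND the groups,
    # mimicking a query-expansion step expressed in PubMed query syntax.
    groups = []
    for term in key_terms:
        variants = [term] + synonyms.get(term, [])
        groups.append("(" + " OR ".join(f'"{v}"[tiab]' for v in variants) + ")")
    return " AND ".join(groups)

def search_pubmed(query, retmax=200):
    # esearch returns the PMIDs matching the expanded query.
    r = requests.get(f"{EUTILS}/esearch.fcgi",
                     params={"db": "pubmed", "term": query,
                             "retmax": retmax, "retmode": "json"})
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

def fetch_abstracts(pmids):
    # efetch in plain-text abstract format; records are crudely split on blank
    # lines, which is adequate for a sketch but not robust.
    r = requests.get(f"{EUTILS}/efetch.fcgi",
                     params={"db": "pubmed", "id": ",".join(pmids),
                             "rettype": "abstract", "retmode": "text"})
    r.raise_for_status()
    return dict(zip(pmids, r.text.split("\n\n\n")))

def rank_citations(recommendation_text, abstracts):
    # Unsupervised ranking stand-in: score each abstract by the fraction of
    # recommendation terms it shares (the paper's ranking model differs).
    rec_terms = set(recommendation_text.lower().split())
    def score(text):
        return len(rec_terms & set(text.lower().split())) / max(len(rec_terms), 1)
    return sorted(abstracts.items(), key=lambda kv: score(kv[1]), reverse=True)

if __name__ == "__main__":
    recommendation = "statin therapy for secondary prevention of myocardial infarction"
    synonyms = {"statin": ["HMG-CoA reductase inhibitor"],
                "myocardial infarction": ["heart attack"]}
    query = expand_query(["statin", "myocardial infarction"], synonyms)
    pmids = search_pubmed(query, retmax=50)
    ranked = rank_citations(recommendation, fetch_abstracts(pmids[:20]))
    for pmid, _ in ranked[:5]:
        print(pmid)

In a real pipeline the ranking step would use a stronger unsupervised relevance model than raw term overlap, but the overall flow (expand, search, fetch, rank) mirrors the architecture the abstract describes.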
ISSN:1532-0464
1532-0480
DOI:10.1016/j.jbi.2015.09.003