Testing the face validity and inter-rater agreement of a simple approach to drug-drug interaction evidence assessment
Published in: Journal of Biomedical Informatics, 2020-01, Vol. 101, Article 103355
Format: Article
Language: English
Online access: Full text
Highlights:
•We developed a novel instrument to assess drug-drug interaction (DDI) evidence.
•The simple instrument provided unambiguous criteria to filter DDI evidence.
•Evaluating evidence without consideration of clinical relevance reduces reliability.
•Discrepancies in evaluating in vitro data were common, suggesting a need for assistance.
•Participants expressed a need for assistance in considering risk-modifying factors.

Abstract:
Low concordance between drug-drug interaction (DDI) knowledge bases is a well-documented concern. One potential cause of this inconsistency is variability among drug experts in how they assess evidence about potential DDIs. In this study, we examined the face validity and inter-rater reliability of a novel DDI evidence evaluation instrument designed to be simple and easy to use.
A convenience sample of participants with professional experience evaluating DDI evidence was recruited. Participants independently evaluated pre-selected evidence items for 5 drug pairs using the new instrument. For each drug pair, participants labeled each evidence item as sufficient or insufficient to establish the existence of a DDI based on the evidence categories provided by the instrument. Participants also decided whether the overall body of evidence supported a DDI involving the drug pair. Agreement was computed at both the evidence item and drug pair levels. A cut-off of ≥ 70% was chosen as the threshold for percent agreement, while a coefficient > 0.6 was used as the cut-off for chance-corrected agreement. Open-ended comments were collected and coded to identify themes related to the participants’ experience using the novel approach.
The face validity of the new instrument was established by two rounds of evaluation involving a total of 6 experts. Fifteen experts agreed to participate in the reliability assessment, and 14 completed the study. Participant agreement on the sufficiency of 22 of the 34 evidence items (65%) did not exceed the a priori agreement threshold. Similarly, agreement on the sufficiency of evidence for 3 of the 5 drug pairs (60%) was poor. Chance-corrected agreement at the drug pair level further confirmed the poor inter-rater reliability of the instrument (Gwet’s AC1 = 0.24, Conger’s Kappa = 0.24). Participant comments suggested several possible reasons for the disagreements, including unaddressed subjectivity in assessing an evidence item’s type and study design, and an infeasible separation of evidence evaluation from the consideration of clinical relevance.
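The agreement statistics reported above can be reproduced from a subjects-by-raters matrix of binary verdicts. Below is a minimal Python sketch, not taken from the paper, that computes mean pairwise percent agreement and Gwet’s AC1 for two-category (sufficient/insufficient) ratings; the ratings matrix is hypothetical, the pairwise definition of percent agreement is an illustrative assumption, and Conger’s Kappa is omitted for brevity.

import numpy as np

def percent_agreement(ratings):
    # Mean pairwise agreement across subjects.
    # ratings: (n_subjects, n_raters) array of 0/1 labels
    # (1 = "sufficient", 0 = "insufficient").
    ratings = np.asarray(ratings)
    n_subjects, n_raters = ratings.shape
    pos = ratings.sum(axis=1)      # raters voting 1 on each subject
    neg = n_raters - pos           # raters voting 0 on each subject
    agreeing_pairs = pos * (pos - 1) + neg * (neg - 1)
    return (agreeing_pairs / (n_raters * (n_raters - 1))).mean()

def gwet_ac1(ratings):
    # Gwet's AC1 for two categories and multiple raters:
    # AC1 = (pa - pe) / (1 - pe), with pe = 2 * pi * (1 - pi),
    # where pi is the overall proportion of category-1 ratings.
    ratings = np.asarray(ratings)
    pa = percent_agreement(ratings)
    pi = ratings.mean()
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical verdicts: 5 drug pairs, each rated by 14 participants.
ratings = np.array([
    [1] * 12 + [0] * 2,
    [1] * 7 + [0] * 7,
    [0] * 10 + [1] * 4,
    [1] * 13 + [0] * 1,
    [0] * 8 + [1] * 6,
])
print(f"percent agreement: {percent_agreement(ratings):.2f}")  # vs. the 0.70 threshold
print(f"Gwet's AC1:        {gwet_ac1(ratings):.2f}")           # vs. the 0.60 threshold

With these made-up verdicts the script prints a percent agreement of about 0.62 and an AC1 of about 0.27, both below the study’s a priori thresholds, illustrating how a ratings matrix maps onto the pass/fail criteria described in the methods.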
ISSN: 1532-0464, 1532-0480
DOI: 10.1016/j.jbi.2019.103355