Non-standard diagnostic assessment reliability in psychiatry: a study in a Brazilian outpatient setting using Kappa



Bibliographic Details
Published in: European Archives of Psychiatry and Clinical Neuroscience, 2024-10, Vol. 274 (7), p. 1759-1770
Main authors: Rocha Neto, Helio G., Lessa, José Luiz Martins, Koiller, Luisa Mendez, Pereira, Amanda Machado, de Souza Gomes, Bianca Marques, Veloso Filho, Carlos Linhares, Telleria, Carlos Henrique Casado, Cavalcanti, Maria T., Telles-Correia, Diogo
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: The use of Structured Diagnostic Assessments (SDAs) is a solution for unreliability in psychiatry and the gold standard for diagnosis. However, except for studies between the 1950s and 1970s, reliability without the use of SDAs, i.e. with non-structured diagnostic assessments (NSDAs), is seldom tested, especially in non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries. We aimed to measure inter-examiner reliability with NSDAs for psychiatric disorders. We compared diagnostic agreement after a change of clinician in an academic outpatient setting. We used inter-rater Kappa across eight diagnostic groups: Depression (DD: F32, F33), Anxiety-Related Disorders (ARD: F40–F49, F50–F59), Personality Disorders (PD: F60–F69), Bipolar Disorder (BD: F30, F31, F34.0, F38.1), Organic Mental Disorders (Org: F00–F09), Neurodevelopmental Disorders (ND: F70–F99), and Schizophrenia Spectrum Disorders (SSD: F20–F29). Cohen's Kappa measured agreement between raters, and Bhapkar's test assessed whether any diagnostic group had a higher tendency to change after a new diagnostic assessment. We analyzed 739 reevaluation pairs from 99 subjects who attended IPUB's outpatient clinic. Overall inter-rater Kappa was moderate, and none of the groups showed a different tendency to change. NSDA evaluation was moderately reliable, but the absence of some prevalent hypotheses within the pairs raised concerns about NSDA sensitivity to certain diagnoses. Diagnostic momentum bias (that is, a tendency to keep the last diagnosis observed) may have inflated the observed agreement. This research was approved by IPUB's ethics committee, registered under CAAE 33603220.1.0000.5263 and UTN U1111-1260-1212.
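For orientation only, here is a minimal sketch of the two agreement statistics named in the abstract: Cohen's Kappa (chance-corrected inter-rater agreement) and Bhapkar's test of marginal homogeneity. This is not the authors' analysis code; the patient labels below are invented, and the sketch assumes scikit-learn's cohen_kappa_score and statsmodels' SquareTable.homogeneity(method="bhapkar").

```python
# A minimal, hypothetical sketch (not from the article) of the agreement
# statistics the abstract describes.
import pandas as pd
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import SquareTable

# Invented diagnostic groups assigned to the same patients by the previous
# clinician ("first") and the new clinician ("second").
first  = ["DD", "ARD", "SSD", "BD", "DD", "PD", "SSD", "ARD", "DD", "BD"]
second = ["DD", "ARD", "SSD", "DD", "DD", "PD", "SSD", "PD",  "DD", "BD"]

# Cohen's Kappa: observed agreement corrected for chance,
# kappa = (p_o - p_e) / (1 - p_e).
print(f"Cohen's Kappa: {cohen_kappa_score(first, second):.2f}")

# Bhapkar's test of marginal homogeneity asks whether any diagnostic group is
# systematically gained or lost on reevaluation (the role the abstract assigns
# to Bhapkar's test).
table = pd.crosstab(pd.Series(first, name="first"),
                    pd.Series(second, name="second"))
print(SquareTable(table, shift_zeros=True).homogeneity(method="bhapkar"))
```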
ISSN: 0940-1334, 1433-8491
DOI:10.1007/s00406-023-01730-7