Validity and Reproducibility of Immunohistochemical Scoring by Trained Non-Pathologists on Tissue Microarrays

Full Description

Bibliographic Details
Published in: Cancer Epidemiology, Biomarkers & Prevention, 2021-10, Vol. 30 (10), p. 1867-1874
Authors: Jenniskens, Josien C. A., Offermans, Kelly, Samarska, Iryna, Fazzi, Gregorio E., Simons, Colinda C. J. M., Smits, Kim M., Schouten, Leo J., Weijenberg, Matty P., van den Brandt, Piet A., Grabsch, Heike
Format: Article
Language: English
Online Access: Full text
Description
Abstract:
Background: Scoring of immunohistochemistry (IHC) staining is often done by non-pathologists, especially in large-scale tissue microarray (TMA)-based studies. Studies on the validity and reproducibility of scoring results from non-pathologists are limited. Therefore, our main aim was to assess interobserver agreement between trained non-pathologists and an experienced histopathologist for three IHC markers with different subcellular localization (nucleus/membrane/cytoplasm).
Methods: Three non-pathologists were trained in recognizing adenocarcinoma and in IHC scoring by a senior histopathologist. Kappa statistics were used to analyze interobserver and intraobserver agreement for 6,249 TMA cores from a colorectal cancer series.
Results: Interobserver agreement between non-pathologists (independently scored) and the histopathologist was "substantial" for the nuclear and membranous IHC markers (kappa(range) = 0.67-0.75 and kappa(range) = 0.61-0.69, respectively), and "moderate" for the cytoplasmic IHC marker (kappa(range) = 0.43-0.57). Scores of the three non-pathologists were also combined into a "combination score": if at least two non-pathologists independently assigned the same score to a core, this became the combination score. This increased agreement with the pathologist (kappa(nuclear) = 0.74; kappa(membranous) = 0.73; kappa(cytoplasmic) = 0.57). Interobserver agreement between non-pathologists was "substantial" (kappa(nuclear) = 0.78; kappa(membranous) = 0.72; kappa(cytoplasmic) = 0.61). Intraobserver agreement of non-pathologists was "substantial" to "almost perfect" (kappa(nuclear, range) = 0.83-0.87; kappa(membranous, range) = 0.75-0.82; kappa(cytoplasmic) = 0.69). Overall, agreement was lowest for the cytoplasmic IHC marker.
Conclusions: This study shows that adequately trained non-pathologists are able to generate reproducible IHC scoring results that are similar to those of an experienced histopathologist. A combination score of at least two non-pathologists yielded optimal results.
Impact: Non-pathologists can generate reproducible IHC results after appropriate training, making analyses of large-scale molecular pathological epidemiology studies feasible within an acceptable time frame.
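For readers who want to see how the agreement analysis works in practice, the Python sketch below computes unweighted Cohen's kappa between two raters and the majority-vote "combination score" described in the abstract. It is an illustrative sketch, not code from the study: the abstract does not specify the scoring scale, the kappa variant (weighted vs. unweighted), or how cores without a majority were handled, so the 3-level scores, the variable names, and the None fallback are assumptions.

    # Illustrative sketch (not from the paper): unweighted Cohen's kappa and
    # the majority-vote "combination score" from the abstract. The 3-level
    # scoring scale and all names below are assumptions for illustration.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Unweighted Cohen's kappa between two raters' categorical scores."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        categories = set(rater_a) | set(rater_b)
        # Observed agreement: fraction of cores given identical scores.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement under independence, from marginal frequencies.
        freq_a = Counter(rater_a)
        freq_b = Counter(rater_b)
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
        return (p_o - p_e) / (1 - p_e)

    def combination_score(s1, s2, s3):
        """Majority vote of three observers: if at least two assigned the
        same score to a core, that score is the combination score;
        otherwise None (assumed fallback, e.g. pathologist review)."""
        score, count = Counter([s1, s2, s3]).most_common(1)[0]
        return score if count >= 2 else None

    # Toy data: hypothetical 3-level IHC scores (0 = negative, 1 = weak,
    # 2 = strong) for five TMA cores.
    pathologist = [0, 1, 2, 2, 1]
    observer_1  = [0, 1, 2, 1, 1]
    observer_2  = [0, 1, 2, 1, 0]
    observer_3  = [0, 2, 2, 2, 1]

    combined = [combination_score(a, b, c)
                for a, b, c in zip(observer_1, observer_2, observer_3)]
    print("combination scores:", combined)

    # Kappa between the combination score and the pathologist, over cores
    # where a majority existed.
    pairs = [(p, c) for p, c in zip(pathologist, combined) if c is not None]
    kappa = cohens_kappa([p for p, _ in pairs], [c for _, c in pairs])
    print(f"Cohen's kappa vs. pathologist: {kappa:.2f}")

On this toy data the combination score disagrees with the pathologist on one of five cores, giving kappa = 0.69, i.e. "substantial" agreement on the conventional Landis-Koch scale used in the abstract.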
ISSN: 1055-9965 (print)
ISSN: 1538-7755 (online)
DOI: 10.1158/1055-9965.EPI-21-0295