Artificial intelligence for diagnosis and Gleason grading of prostate cancer: the PANDA challenge
van der Laak J; Corrado GS; Allan R; Nagpal K; Egevad L; Ström P; Chen PHC; Amin MB; Hulsbergen-van de Kaa C; Bulten W; Eklund M; & the PANDA challenge consortium; Pinckaers H; van der Kwast T; Ruusuvuori P; Vink R; Steiner DF; Mermel CH; Tsuzuki T; Peng L; Häkkinen T; Cai YN; Grönberg H; Delahunt B; Humphrey PA; Kartasalo K; Demkin M; Evans AJ; Dane S; Valkonen M; Litjens G; Samaratunga H; Tan F; van Boven H
NATURE PORTFOLIO
Permanent address of the publication:
https://urn.fi/URN:NBN:fi-fe2022081154004
Abstract
Through a community-driven competition, the PANDA challenge provides a curated, diverse dataset and a catalog of models for prostate cancer pathology, and represents a blueprint for evaluating AI algorithms in digital pathology.

Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to accelerate medical imaging innovations, but their impact is hindered by a lack of reproducibility and independent validation. With this in mind, we organized the PANDA challenge, the largest histopathology competition to date with 1,290 participating developers, to catalyze the development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted kappa; 95% confidence interval (CI), 0.840-0.884) and 0.868 (95% CI, 0.835-0.900) with expert uropathologists. Successful generalization across different patient populations, laboratories and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.
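The agreement figures above are quadratically weighted Cohen's kappa scores. As a minimal illustrative sketch (not the challenge's official evaluation code), such a score can be computed with scikit-learn's cohen_kappa_score; the grade labels below are assumed to be ISUP grade groups (0 = benign, 1-5 = increasing severity) and the values are made up for demonstration.

# Minimal sketch: quadratically weighted Cohen's kappa, the agreement
# metric reported in the abstract, computed with scikit-learn.
# The reference and algorithm grades below are illustrative only.
from sklearn.metrics import cohen_kappa_score

reference_grades = [0, 1, 1, 2, 3, 4, 5, 5, 2, 0]   # e.g. uropathologist reference
algorithm_grades = [0, 1, 2, 2, 3, 4, 5, 4, 2, 1]   # e.g. AI algorithm output

kappa = cohen_kappa_score(reference_grades, algorithm_grades, weights="quadratic")
print(f"Quadratically weighted kappa: {kappa:.3f}")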
Collections
- Self-archived publications [19207]