Highlights in Pathology - Digital Pathology and Machine Learning (November 2020)
Dr. Carlo Hojilla, Sinai Health System - Mount Sinai Hospital
1. Validation of a digital pathology system including remote review during the COVID-19 pandemic.
Hanna MG, Reuter VE, Ardon O, et al. Mod Pathol. 2020 Nov;33(11):2115-2127. doi: 10.1038/s41379-020-0601-5. Epub 2020 Jun 22. PMID: 32572154; PMCID: PMC7306935.
This is a recent randomized prospective validation study on the use of a digital pathology system for diagnostic review and reporting. In response to social distancing restrictions posed by the COVID-19 pandemic, Clinical Laboratory Improvement Amendments (CLIA) regulations in the US were relaxed to allow for the sign-out of cases from non-CLIA-certified facilities. This paved the way for a safer practice workflow by allowing pathologists to review and sign out cases remotely.
Staff pathologists received their normal daily workloads of small biopsies, large resections, and consultation cases, which were scanned as whole-slide images (WSI) for a total of 1196 slides. Trainees were also included in the study: they previewed the cases and prepopulated the pathology report with initial interpretations. Staff then reviewed the WSI, completed the report (without sign-out), and later finalized the case by reviewing the glass slides and signing it out within their expected turnaround times. The major diagnostic equivalency between glass and digital slide diagnoses (ie, on-site versus remote sign-out) was 100%.
The results not only demonstrate the feasibility of implementing digital solutions, but also offer an instructional template complete with a readiness assessment, quality control measures, recommended technical requirements, and a workflow that includes trainees. Applying this approach in Canada will require coordinated efforts between local hospital departments and their staff, as well as provincial and national organizations and/or regulatory bodies.
2. HistoQC: An Open-Source Quality Control Tool for Digital Pathology Slides.
Janowczyk A, Zuo R, Gilmore H, et al. JCO Clin Cancer Inform. 2019 Apr;3:1-7. doi: 10.1200/CCI.18.00157. PMID: 30990737; PMCID: PMC6552675.
Digital and computational pathology are highly dependent on good quality, standardized WSI.
The authors created and reported an open-source tool (HistoQC, available through GitHub) to address the painstaking quality control measures needed in adopting a digital pathology workflow.
The tool can detect pre-analytical histology artifacts (tissue folds, lifting, uneven staining, etc) as well as digitization artifacts (out-of-focus or blurry regions, color variation, etc).
When HistoQC was run on whole-slide images from The Cancer Genome Atlas (TCGA), its artifact detections showed very high agreement with individual pathologist reviewers, supporting its use for selecting high-quality WSIs.
This open-source, freely available tool is highly practical and provides an indispensable QC step for any digital workflow, whether in research, education, or clinical practice; a minimal example of the kind of check it automates is sketched below.
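For illustration only, the sketch below shows a crude tile-level tissue-content and focus screen, the kind of check HistoQC automates at scale. The thresholds, function names, and use of OpenCV here are assumptions for demonstration and do not reflect HistoQC's actual interface or algorithms.

```python
# Illustrative sketch only; HistoQC provides far richer, configurable pipelines.
import cv2
import numpy as np

def tile_passes_qc(tile_bgr: np.ndarray,
                   min_tissue_fraction: float = 0.1,
                   min_focus_score: float = 50.0) -> bool:
    """Flag a WSI tile as usable if it contains enough tissue and is in focus."""
    gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)

    # Crude tissue detector: non-background pixels are darker than blank glass.
    tissue_fraction = np.mean(gray < 220)

    # Variance of the Laplacian is a common sharpness proxy; low values suggest blur.
    focus_score = cv2.Laplacian(gray, cv2.CV_64F).var()

    return tissue_fraction >= min_tissue_fraction and focus_score >= min_focus_score

# Example: screen a single tile read from disk (file name is a placeholder).
tile = cv2.imread("example_tile.png")
if tile is not None:
    print("Tile usable:", tile_passes_qc(tile))
```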
3. GloFlow: Global Image Alignment for Creation of Whole Slide Images for Pathology from Video.
Krishna V, Joshi A, Bulterys PL, et al. 2020 Oct. arXiv:2010.15269v1.
This neat technical paper describes the use of video captured from glass slides as an alternative to whole-slide scanning.
Conventional slide scanning technology uses precise sub-micron motor stages to capture image tiles at a set magnification and stitch them together, a process that can be a bottleneck because it is time-consuming (despite recent speed advances) and expensive.
The proposed idea in this paper is that video frames gathered from a standard light microscope viewing glass slides can be stitched together in a very precise and fast way using computational approaches.
While their proof-of-principle simulation used frames sampled from a WSI as ground truth, it nonetheless presents an intriguing concept that could obviate the need for standalone digital scanners.
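To make the stitching idea concrete, the sketch below aligns two consecutive frames with generic feature matching and a homography, the basic operation any mosaicking pipeline builds on. This is not the GloFlow method itself (which relies on optical flow plus a global alignment step); the file names and parameters are placeholders.

```python
# Generic pairwise frame alignment with OpenCV; not the GloFlow algorithm.
import cv2
import numpy as np

def align_pair(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Estimate the homography mapping frame_b onto frame_a from matched keypoints."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # RANSAC discards mismatched keypoints before fitting the transform.
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)
    return H

frame_a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
if frame_a is not None and frame_b is not None:
    H = align_pair(frame_a, frame_b)
    # Warping frame_b into frame_a's coordinate system is the first step toward
    # composing a larger mosaic from consecutive video frames.
    mosaic = cv2.warpPerspective(frame_b, H, (frame_a.shape[1] * 2, frame_a.shape[0] * 2))
    mosaic[:frame_a.shape[0], :frame_a.shape[1]] = frame_a
```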
4. TissueWand, a Rapid Histopathology Annotation Tool.
Lindvall M, Sanner A, Petré F, et al. J Pathol Inform. 2020 Aug 21;11:27. doi: 10.4103/jpi.jpi_5_20. PMID: 33042606; PMCID: PMC7518350.
A machine learning (ML) algorithm is only as good as the dataset it has been trained and validated on, whether the learning is supervised or unsupervised.
One of the biggest roadblocks to the adoption of machine learning tools in pathology is the laborious creation of ground-truth data through manual annotation by pathologists (or, more often, trainees), especially in the early stages of ML training.
This study describes an iterative design process to create a prototype annotation tool to assist in polygon- or region-based image segmentation.
Preliminary user data suggest that the tool reduced annotation effort and was preferred over the manual method.
The true potential lies in an open-source tool with built-in supervised learning, so that annotation becomes more efficient and accurate over time. Of note, existing third-party digital pathology software (QuPath, Cytomine, Ilastik) offers similar assisted annotation features, but these are not stand-alone annotation tools. The kind of ground-truth mask that such annotations produce is sketched below.
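As a concrete illustration of what manual annotation yields, the sketch below rasterizes a polygon drawn by an annotator into a binary training mask. The coordinates, patch size, and function name are made up for demonstration and are not taken from the TissueWand paper.

```python
# Hedged sketch: convert a polygon annotation into a binary segmentation mask.
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(polygon_xy, patch_size=(512, 512)) -> np.ndarray:
    """Rasterize a list of (x, y) vertices into a binary mask for model training."""
    mask = Image.new("L", patch_size, 0)
    ImageDraw.Draw(mask).polygon(polygon_xy, outline=1, fill=1)
    return np.array(mask, dtype=np.uint8)

# Example: a roughly triangular annotated region on a 512x512 image patch.
annotation = [(100, 120), (400, 150), (260, 430)]
mask = polygon_to_mask(annotation)
print("Annotated pixels:", int(mask.sum()))
```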
5. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images.
Mahmood F, Borders D, Chen RJ, et al. IEEE Trans Med Imaging. 2020 Nov;39(11):3257-3267. doi: 10.1109/TMI.2019.2927182. PMID: 31283474.
Convolutional neural networks (CNN) are used in many nuclear segmentation-based computational pathology applications that include nuclear morphology analysis, cell classification, and cancer grading.
However, a CNN's performance is highly dependent on, and specific to, its training dataset.
For example, training datasets for breast and prostate nuclei are available, but CNNs trained on them show limited applicability and performance when tested on other tissue sites. Creating a generalizable CNN would otherwise require the tedious curation of many nuclei from many tissue sites.
The authors created a conditional generative adversarial network (GAN) pipeline to generate synthetic images of segmented nuclei for training. In a GAN, a generator and a discriminator iteratively challenge and "punish" each other for unrealistic or incorrect output, allowing the networks to learn without exhaustive manual labels.
This synthetic dataset was then used to train a nuclear segmentation CNN, which generalized well to nuclear segmentation in other tissue types; a simplified sketch of the adversarial training loop follows.
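The sketch below illustrates the adversarial training idea in miniature, with a toy segmentation "generator" trained against a discriminator that judges image-mask pairs. It is not the authors' architecture; the network sizes, losses, and the random tensors standing in for data are assumptions for demonstration only.

```python
# Toy conditional adversarial training step; not the published pipeline.
import torch
import torch.nn as nn

class Segmenter(nn.Module):          # generator: image -> nuclei mask
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):      # judges whether a mask "fits" its image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

G, D = Segmenter(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Random tensors stand in for image/mask pairs in this toy example.
images = torch.rand(4, 3, 64, 64)
real_masks = (torch.rand(4, 1, 64, 64) > 0.5).float()

# Discriminator step: reward real pairs, penalize generated ones.
fake_masks = G(images).detach()
d_loss = bce(D(images, real_masks), torch.ones(4, 1, 16, 16)) + \
         bce(D(images, fake_masks), torch.zeros(4, 1, 16, 16))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: produce masks the discriminator accepts as real.
g_loss = bce(D(images, G(images)), torch.ones(4, 1, 16, 16))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```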
6. Pathological Visual Question Answering.
He X, Cai Z, Wei W, et al. 2020 Oct. arXiv:2010.12435v1.
Can an artificial intelligence (AI) system be developed to pass the American Board of Pathology examination?
This is not just a silly question posed by the authors; rather, it is an important step toward AI-aided clinical decision support tools.
Visual Question Answering (VQA) is an AI task in which a system answers questions about the content of an image, moving beyond simple identification.
Essentially, a glass-slide examination is the type of VQA task this paper is trying to emulate.
The authors created the first ever pathology VQA dataset, pairing images with related open-ended questions and answers. They then validated their dataset by using different learning algorithms to ensure a ‘clean’ and representative VQA dataset for pathology images.
The logical extension of this project is the creation of a VQA dataset using WSI, which is the more practical and clinically relevant form of examination; a minimal illustration of the general VQA architecture follows.
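For orientation, the sketch below shows a generic VQA architecture in miniature: image features and an encoded question are fused and classified over a fixed answer vocabulary. It is not one of the models benchmarked in the paper, and all dimensions, names, and the random inputs are assumptions.

```python
# Generic VQA skeleton for illustration; not the paper's benchmarked models.
import torch
import torch.nn as nn

class TinyVQA(nn.Module):
    def __init__(self, vocab_size=5000, num_answers=500, embed_dim=128):
        super().__init__()
        # Image branch: a small CNN followed by global average pooling.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # Question branch: token embedding + LSTM over the question sequence.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        # Fusion of the two modalities + classifier over the answer vocabulary.
        self.classifier = nn.Linear(embed_dim * 2, num_answers)

    def forward(self, image, question_tokens):
        img_feat = self.cnn(image).flatten(1)               # (B, embed_dim)
        _, (h, _) = self.lstm(self.embed(question_tokens))  # h: (1, B, embed_dim)
        fused = torch.cat([img_feat, h.squeeze(0)], dim=1)
        return self.classifier(fused)                       # answer logits

model = TinyVQA()
image = torch.rand(2, 3, 224, 224)           # two image patches
question = torch.randint(0, 5000, (2, 12))   # two 12-token questions
logits = model(image, question)              # shape: (2, 500)
```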