Detection and Localization of Prostate Cancer Malignancies from mpMRI Using Deep Learning as a Step Closer to Clinical Utilization

In this section, we discuss the literature on prostate cancer (PCa) diagnosis from medical imaging, current cutting-edge research on the use of deep learning algorithms to automate the diagnostic process, and the challenges in bridging the gap between research and clinical use.

Background

Prostate cancer (PCa) has been widely studied because of its prevalence and high mortality rate in the United States. As with almost all types of cancer, early detection through periodic screening plays a fundamental role in reducing mortality and improving the quality of life of patients with PCa (https://www.cancer.org/cancer/prostate-cancer/about/key-statistics.html). In clinics, the diagnosis is made on the basis of screening tests and biopsies. Specifically, the biopsy serves as the gold standard for determining the malignancy of a suspicious lesion. However, the typical prostate biopsy procedure is invasive, and complications such as bleeding, dysuria, and infection have been reported. Disagreement between biopsy and screening tests has also been reported, although rarely [1].

Among detection techniques, multiparametric magnetic resonance imaging (mpMRI) has shown an increasing impact on clinical decision-making. Popular sequences include T2-weighted imaging (T2W), diffusion-weighted imaging (DWI), dynamic contrast-enhanced imaging (DCE), and MR spectroscopy. It has been reported that PCa lesions are best visualized when the sequences are interpreted together in a multiparametric format [2,3].

The clinical definition of malignancy correlates with a Gleason score of 7 (including 3 + 4) in histopathology [4], and its automatic detection from mpMRI is the objective of this work. Specifically, the techniques used to automate the screen-reading process are machine learning and computer vision methods customized to this particular task [5].

In recent years, deep learning models have made progress in making diagnoses from chest X-rays [6], mammography [7], and mpMRI [8,9].

However, small sample sizes remain one of the main challenges preventing deep learning algorithms from learning good patterns from limited data and labels. Data acquisition and sharing is a process governed by policy and regulation, including patient privacy and medical ethics. Data labeling at the patient level is labor-intensive and likewise restricted by regulations. While data collection is well underway at many institutions, efforts have been made to leverage sophisticated models pre-trained on large-scale data sets of natural images [10,11,12].
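Such transfer learning is commonly implemented by fine-tuning an ImageNet-pretrained backbone on the smaller medical data set. The sketch below is an illustration only and not the model used in this work; the ResNet-18 backbone, the frozen convolutional layers, and the binary head are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch (assumed setup, not the authors' exact model):
# start from an ImageNet-pretrained backbone and replace the classifier head
# with a binary output (benign vs. clinically significant lesion).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained convolutional layers so the small medical data set
# only has to fit the new classification head.
for param in backbone.parameters():
    param.requires_grad = False

backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Training would then proceed on mpMRI patches or slices resized to the
# backbone's expected input size (e.g., 224 x 224 with 3 channels).
```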

Previous Work

The data used in this study originated from the SPIE-AAPM-NCI Prostate MR Gleason Grade Group Challenge [13], whose objective was to develop quantitative mpMRI biomarkers for the determination of malignant lesions in patients with PCa. Patient data were anonymized beforehand by SPIE-AAPM-NCI and The Cancer Imaging Archive (TCIA).

Figure 1

Sample images of the 64 × 64 pixel cropped regions of the four modalities, T2, ADC, DWI, and K-trans, after resampling and registration. Lesions are malignant in PZ, benign in PZ, malignant in CG, and benign in CG from row 1 to row 4, respectively.

Each patient came with the four modalities shown in Fig. 1. Lesions exhibited hypointense signals on the apparent diffusion coefficient (ADC) map and T2-weighted images, and hyperintense signals on diffusion-weighted images with a low b-value (b = 50). The K-trans modality was not included in the input channels because cancer could not be visually differentiated from benign tissue on K-trans.
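For illustration, the sketch below shows one common way co-registered modalities can be stacked as input channels, with K-trans left out as described above; the array names, patch size, and channels-first layout are assumptions rather than the authors' pipeline.

```python
import numpy as np

# Hypothetical illustration of stacking co-registered modality patches
# (T2, ADC, DWI) into a multi-channel input; K-trans is omitted.
t2  = np.random.rand(64, 64).astype(np.float32)   # T2-weighted patch
adc = np.random.rand(64, 64).astype(np.float32)   # ADC map patch
dwi = np.random.rand(64, 64).astype(np.float32)   # DWI (low b-value) patch

# Stack along a channel axis -> shape (3, 64, 64), channels-first convention.
x = np.stack([t2, adc, dwi], axis=0)
print(x.shape)  # (3, 64, 64)
```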

In our previous work, we found that multimodal input contributed significantly to accurate classification. In most cases, the class activation map (CAM) [14] helped provide evidence of where the model was looking when making predictions. The center point of each potential lesion was provided by the challenge data set [13].
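As a rough illustration of how a CAM is computed, namely a weighted sum of the final convolutional feature maps using the classifier weights of the predicted class, the sketch below uses an ImageNet-pretrained ResNet-18 as a stand-in for the trained detector; the model, input size, and hook placement are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal CAM sketch in the spirit of Zhou et al. [14], using a pretrained
# ResNet-18 as a placeholder for the trained lesion classifier.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

features = {}
def hook(_module, _inputs, output):
    features["maps"] = output  # last conv feature maps, shape (1, 512, H, W)

model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)           # placeholder input slice
logits = model(x)
class_idx = logits.argmax(dim=1).item()

# CAM = weighted sum of the final feature maps, using the classifier weights
# of the predicted class (valid because ResNet ends in global average pooling).
weights = model.fc.weight[class_idx]                      # (512,)
cam = torch.einsum("c,chw->hw", weights, features["maps"][0])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```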

Moving one step closer to clinical utilization by relieving the workload of medical experts is the focus and contribution of this work. The contributions can be summarized as follows. (1) We fed the entire prostate gland (PZ and CG separately) into the classification models instead of a cropped region of interest (ROI). (2) As Fig. 1 shows, a lesion can present very different characteristics depending on whether it resides in the PZ or the CG; a further improvement over our previous work was therefore to train and test separate models for PZ and CG. (3) To verify the robustness of the trained models, we tested them on an independent cohort from our own institute. The experimental results showed that the trained PZ detector and CG detector were able to estimate the probability of malignancy for each slice and highlight suspicious slices within the sequence, despite the challenges posed by test samples acquired on different scanners with different parameters.
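The per-slice scoring described in contribution (3) could look roughly like the following sketch, where `pz_detector`, `cg_detector`, and the input volumes are hypothetical placeholders rather than the authors' code.

```python
import torch

# Hedged sketch of per-slice inference: two separately trained detectors
# (one for PZ, one for CG) score every slice of the gland, and the most
# suspicious slices are flagged for review.

def score_slices(detector: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """Return a malignancy probability for each slice of a (S, C, H, W) volume."""
    detector.eval()
    with torch.no_grad():
        logits = detector(volume)                    # (S, 2) slice-level logits
        probs = torch.softmax(logits, dim=1)[:, 1]   # probability of malignancy
    return probs

# Example usage (placeholders, not actual data):
# pz_probs = score_slices(pz_detector, pz_volume)
# cg_probs = score_slices(cg_detector, cg_volume)
# suspicious = (pz_probs > 0.5).nonzero(as_tuple=True)[0]  # flagged slice indices
```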
