
Deep Learning Classification on Early Detection of Pancreatic Cancer Using CT Scan



Bhagyashree Pramod Bendale

Research Scholar (CSE),

Department of Computer Science and Engineering,

MIT School of Computing,

MIT Art Design and Technology University,

Loni Kalbhor, Pune, Maharashtra, India.

bendale660@gmail.com

Dr. Sagar Tambe

Ph.D (CSE), Associate Professor,

Department of Computer Science and Engineering,

MIT School of Computing,

MIT Art Design and Technology University,

Loni Kalbhor, Pune, Maharashtra, India.

swatishirke@mituniversity.edu.in


 

 

 


Abstract

Pancreatic cancer arises in the pancreas, a gland located behind the stomach, and is very difficult to identify, which makes a computer-aided diagnosis (CAD) system necessary. Artificial intelligence (AI) can provide in-depth clinical data as well as accurate image assessment throughout an intervention. This work designs an optimal deep learning-based model for classifying pancreatic tumours and nontumors from CT images. The suggested approach applies adaptive window filtering (AWF) to reduce noise, followed by image segmentation, which divides an image into meaningful and distinct regions. Feature extraction with a UNet then produces a collection of feature vectors, and for classification, pathological grade classification models predict the grade of pancreatic ductal adenocarcinoma (PDAC). A number of simulations are conducted to confirm the technique's improved efficiency, and the outcomes are examined from various angles. A thorough comparative study shows that the ODLPTNTC method outperforms more recent techniques.

Keywords- Computed Tomography (CT); Adaptive Window Filtering (AWF); Cascade Forward Neural Network (CFNN); Classification; Machine Learning; Feature Extraction; Deep Learning.

I. Introduction

Pancreatic cancer, one of the most fatal types of cancer, has shown stagnant survival rates over the past few decades [1]. It is frequently found only at advanced stages, so early detection is a significant problem. Pancreatic cancer develops in the cells of the pancreas, an organ found behind the stomach, and is one of the most serious and deadly forms of cancer. It falls into two primary categories: exocrine tumours, which develop from the exocrine glands that make digestive enzymes and are especially prevalent (adenocarcinoma is the most common kind of exocrine pancreatic cancer), and neuroendocrine tumours, which arise from the pancreatic hormone-producing cells. Because pancreatic cancer may not initially exhibit any symptoms, it is sometimes referred to as a "silent" illness. As it progresses, however, symptoms including digestive issues, unexplained weight loss, changes in stool colour, and jaundice (yellowing of the skin and eyes) may appear. Ongoing research aims to enhance early detection techniques, create more effective cures, and comprehend the genetic and molecular causes of pancreatic cancer; novel medicines and strategies are being investigated in clinical trials to increase survival rates. The classification of pancreatic tumours using deep learning algorithms and computed tomography (CT) images is a subject of continuing investigation and growth in medical imaging and machine learning [2]. It is vital to highlight that deep learning algorithms have shown potential in clinical and scientific applications of medical image classification, including pancreatic tumour diagnosis, and can enhance both the effectiveness and efficiency of cancer diagnosis. However, meticulous validation, regulatory approval, and coordination with healthcare experts are necessary before such models can be used in clinical practice.
Currently, radiation treatment with MRI monitoring is used to reduce tumour size, but anatomical alterations such as those caused by respiration remain a challenge because of interpatient variability and infarction [2]. Accurately and quickly identifying a pancreatic tumour is difficult [3], which makes early recognition, prompt diagnosis, and timely treatment all the more crucial.

With the advancement of computer science and image processing technologies, detection and diagnosis using computer-aided diagnosis (CAD) systems has become more technologically sophisticated. Radiologists increasingly use CAD systems to enhance diagnostic precision, help with disease interpretation and detection, and lessen physician workload. Deep neural network (DNN) technology, developed recently, has expanded what is possible in health care. Because accurate pancreatic cancer diagnosis depends on segmentation, improving efficient treatment and testing CAD systems has become a significant focus, and a novel pancreatic segmentation mechanism is needed: segmentation of the pancreas in computed tomography (CT) remains an unsolved problem that this work addresses. CT is frequently used to diagnose and monitor pancreatic cancer (PC) patients; however, in up to 30% of cases, a patient receives an incorrect or delayed PC diagnosis. Image-guided therapy can provide accurate focus to enhance therapeutic possibilities, and artificial intelligence (AI) has the potential to deliver precise image analysis for operational purposes along with comprehensive diagnostic expertise [7]. Image diagnosis tasks in radiology, dermatology, and ophthalmology have successfully benefited from recent improvements. This work uses CT scan information to create an optimal deep learning-based pancreatic tumour and nontumor classifier. Adaptive window filtering (AWF) is included to reduce any noise that may be present. Feature extraction with a UNet then produces a collection of feature vectors, and for classification, the pathological grade of pancreatic ductal adenocarcinoma (PDAC) is predicted by pathological grade classification models.
A number of simulations are conducted to confirm the technique's improved efficiency, and the outcomes are examined from various angles. A thorough comparative study shows that the ODLPTNTC method outperforms more recent techniques.

The rest of the paper is organised as follows. Section II describes the design process. Section III presents the design and development of this study. Section IV contains the experimental results and discussion, and Section V concludes the paper.

II. Process for Design

A thorough analysis of the design process for pancreatic tumour classification models is provided in this section. Deep learning is a family of techniques that employ numerous layers to progressively extract features from the source data, with enhanced learning performance and wide application horizons [6]. Deep learning approaches are especially well adapted to problems involving natural language processing, damage assessment, motion detection, facial recognition, and transportation prediction.

1. Data gathering: Collect a sizeable dataset of CT scans of pancreatic tumours from both benign and malignant (cancerous) cases. Experienced radiologists should accurately label these images. Make sure the data is varied, covering various pancreatic tumour sizes, shapes, and locations.

2. Data preprocessing: Clean and prepare the data. Normalise the images so that pixel sizes and values are uniform. Increase the dataset's size and variability through data augmentation, which can include rotating, flipping, zooming, and changing brightness and contrast.
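The normalisation and augmentation operations above can be sketched in a few lines; this is a minimal pure-Python illustration (the function names and the [0, 255] intensity range are illustrative assumptions, not from the paper — real pipelines would use a library such as NumPy or torchvision):

```python
def normalize(image, lo=0.0, hi=255.0):
    """Scale raw pixel intensities into the [0, 1] range."""
    return [[(p - lo) / (hi - lo) for p in row] for row in image]

def hflip(image):
    """Horizontal flip: one of the simplest augmentations."""
    return [list(reversed(row)) for row in image]

def adjust_brightness(image, delta):
    """Shift all (normalised) intensities by delta, clamped to [0, 1]."""
    return [[min(1.0, max(0.0, p + delta)) for p in row] for row in image]
```

Each augmented copy counts as an extra training sample, which is how the dataset's size and variability increase without new scans.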

3. Division of data: Produce training, validation, and test sets from the dataset. A usual proportion is 70% for training, 15% for validation, and 15% for testing.
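A 70/15/15 split can be implemented as follows; this is an illustrative sketch (the function name `split_dataset` and the fixed seed are assumptions for reproducibility, not part of the paper's method):

```python
import random

def split_dataset(items, train=0.70, val=0.15, seed=0):
    """Shuffle, then slice into train/validation/test subsets
    (whatever remains after train + val becomes the test set)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

For patient data, the split would normally be done per patient rather than per image, so that scans from one patient never appear in both training and test sets.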

4. Selecting a deep learning model: Select the best deep learning architecture for image classification. Convolutional neural networks (CNNs) are frequently employed for image analysis.

5. Model architecture: Design the neural network. A CNN architecture for image classification typically consists of multiple convolutional layers, max-pooling layers, and fully connected layers.

6. Model training: Train the deep learning model on the training data using appropriate loss functions and optimizers. To prevent overfitting, use strategies such as early stopping.
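The early-stopping strategy mentioned above can be sketched as a small framework-agnostic helper (the class name, `patience`, and `min_delta` parameters are illustrative assumptions mirroring common deep learning practice, not details from the paper):

```python
class EarlyStopping:
    """Stop training when validation loss has not improved
    by at least min_delta for `patience` consecutive epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The training loop calls `step` once per epoch after evaluating on the validation set and breaks out when it returns True.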

7.  Validation and Hyperparameter Tuning: Validate the model's performance using the validation dataset. Fine-tune the model and hyperparameters as needed to improve performance.

8. Testing: Evaluate the completed model on the test set to get an unbiased assessment of its classification capabilities.

9. Interpretability and visualization: Implement techniques to make the model's decisions interpretable to medical professionals. This can include techniques like Grad-CAM for highlighting the regions in the CT scan that influenced the classification.

10. Clinical validation: Work together with medical experts to confirm the model's correctness and usefulness in a clinical setting. This stage is essential to verifying that the model can support real-world diagnoses.

11. Deployment: Integrate the model into clinical workflows once it has demonstrated accuracy and trustworthiness. This can entail creating an intuitive user interface that radiologists can use in their regular work.

12. Frequent updating and upkeep: Ongoing monitoring and upgrades are required to keep the model's performance current with changing medical knowledge and deep learning technology.

13. Ethics-related matters: Maintain compliance with healthcare laws, such as HIPAA in the US, and protect the confidentiality and security of patient data.

14. Medical research: Consider conducting clinical trials to determine how employing this technology may affect patient outcomes and healthcare practices.

15. Training efforts: Teach medical practitioners how to use AI-assisted diagnostic tools so they can adapt their workflow to accommodate the new technology.

III. Design and Development

1. Obtaining CT scan information: Data on pancreatic patients is gathered from the Central Person Registry (CPR) and the Danish National Patient Registry (DNPR) using demographic data. The DNPR covers roughly 229 million hospital diagnoses for 8.6 million individuals, averaging 26.7 diagnostic codes per patient.


Figure 1.  Tissue from a pancreatic cancer CT scan.


2. Adaptive window filtering: Adaptive window filtering is an approach used in image processing and computer vision to enhance images or remove noise by applying a filter whose parameters vary with the properties of the image [28]. The filter characteristics are adjusted in accordance with the local content of the image, which can enhance image quality.
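A simplified adaptive-window median filter illustrates the idea: the window around each pixel grows until its median is no longer an extreme value, and only impulse-like pixels are replaced. This is a hedged sketch of the general technique, not the paper's exact AWF algorithm (function names and the growth rule are illustrative assumptions):

```python
import statistics

def window(img, r, c, k):
    """Collect pixels in a (2k+1) x (2k+1) window clipped to image bounds."""
    rows, cols = len(img), len(img[0])
    return [img[i][j]
            for i in range(max(0, r - k), min(rows, r + k + 1))
            for j in range(max(0, c - k), min(cols, c + k + 1))]

def adaptive_median(img, k_max=2):
    """Adaptive-window median filter: grow the window while its median
    still looks like an impulse (equals the window min or max), then
    replace the pixel only if the pixel itself is an impulse."""
    out = [row[:] for row in img]
    for r in range(len(img)):
        for c in range(len(img[0])):
            for k in range(1, k_max + 1):
                w = window(img, r, c, k)
                med = statistics.median(w)
                if min(w) < med < max(w) or k == k_max:
                    if img[r][c] in (min(w), max(w)):
                        out[r][c] = med
                    break
    return out
```

Because the window size adapts per pixel, isolated noise spikes are smoothed while genuine edges in larger windows are left largely intact.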


3. Image segmentation: Image segmentation is a fundamental task in computer vision and image processing that involves dividing an image into meaningful and distinct regions or segments, often characterized by shared visual properties such as colour, intensity, texture, or other features. In this approach, the preprocessed image serves as input to the segmentation procedure, which identifies the damaged areas in the CT image. Convolutional neural networks (CNNs) have revolutionized image segmentation: architectures such as U-Net, FCN, and SegNet use CNNs to learn and predict pixel-wise segmentation masks, and they excel at many different segmentation tasks [27].
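Once a binary mask of candidate regions exists, the distinct segments can be separated by connected-component labelling. The following flood-fill sketch is an illustrative post-processing step (not part of the paper's pipeline) showing how "meaningful and distinct regions" become individually addressable:

```python
def label_regions(mask):
    """Label 4-connected foreground regions in a binary mask using a
    simple flood fill; returns a grid of region ids (0 = background)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                next_id += 1
                stack = [(r, c)]
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and mask[i][j] and not labels[i][j]):
                        labels[i][j] = next_id
                        stack += [(i + 1, j), (i - 1, j),
                                  (i, j + 1), (i, j - 1)]
    return labels
```

Each labelled region can then be measured (area, location) or cropped for downstream classification.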

4. A feature extraction approach using the UNet: The UNET is a U-shaped encoder-decoder network architecture consisting of four encoder units and four decoder units joined by a bridge.

Figure 2.  UNet Architecture [2].

A UNet-based feature extraction approach uses the UNet architecture to extract pertinent characteristics from input data, notably images. Its encoder-decoder layout with skip connections makes it useful for a variety of computer vision and image analysis tasks, since it can capture fine-grained details as well as multiscale information. The UNet is a convolutional neural network (CNN) architecture originally created for biomedical image segmentation that has subsequently found use in a number of other disciplines; it is renowned for its capacity to preserve overall context while capturing minute details in an image. UNet is particularly popular in tasks such as medical image analysis, computer vision, and remote sensing, where high-resolution feature maps are necessary [26].
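The shape bookkeeping of the four-level encoder/bridge/decoder with skip connections can be traced without any learned weights. The sketch below substitutes max-pooling and nearest-neighbour upsampling for the convolutions, purely to show how spatial resolution and channels flow through a UNet-style network (it is not a trainable model):

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 max-pool halves the spatial resolution."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    """Decoder step: nearest-neighbour upsampling doubles the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_shapes(x, depth=4):
    """Trace feature maps through a UNet-style encoder/decoder where each
    decoder unit concatenates the matching encoder output (skip link)."""
    skips = []
    for _ in range(depth):            # 4 encoder units
        skips.append(x)
        x = down(x)                   # x is now the "bridge" feature map
    for skip in reversed(skips):      # 4 decoder units
        x = np.concatenate([up(x), skip], axis=-1)
    return x
```

The output recovers the input's spatial resolution while its channel dimension accumulates the skip features — exactly the property that makes UNet features useful for extraction.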

5. Models for classifying pathological grades in PDAC: Pathological grade classification models for pancreatic ductal adenocarcinoma (PDAC) predict the severity or aggressiveness of the tumour. Pathological grading reveals the features of the tumour, crucial information that can influence therapy choices and forecast patient outcomes. Grading systems for PDAC frequently evaluate the degree of differentiation and other histological characteristics, such as tubular development, nuclear pleomorphism, and mitotic activity. The most widely used grading system for PDAC is the World Health Organisation (WHO) classification, which divides tumours into three grades: well-differentiated (Grade 1), moderately differentiated (Grade 2), and poorly differentiated (Grade 3). Because clinical samples in the extreme pathological differentiation classes occur infrequently in the hospital, the grades with few specimens were merged in this investigation, and every specimen was assigned one of two prediction labels: low grade or high grade. Undifferentiated, poorly differentiated, and moderate-lowly differentiated pathologies were defined as high grade; moderately differentiated, moderate-highly differentiated, and highly differentiated pathologies were defined as low grade. Based on the segmentation results, the lesion areas were extracted from the 3D segmentation, CT, and PET mask data, respectively.
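The two-label merging described above amounts to a simple mapping from differentiation pathology to a binary grade. A sketch of that mapping follows (the exact string labels are illustrative assumptions; the grouping itself follows the text):

```python
# Grouping described in the text: six differentiation pathologies
# merged into two prediction labels.
HIGH_GRADE = {"undifferentiated", "poorly differentiated",
              "moderate-lowly differentiated"}
LOW_GRADE = {"moderately differentiated", "moderate-highly differentiated",
             "highly differentiated"}

def binarize_grade(pathology):
    """Map a differentiation pathology to the 'low grade' / 'high grade'
    prediction label used for classification."""
    if pathology in HIGH_GRADE:
        return "high grade"
    if pathology in LOW_GRADE:
        return "low grade"
    raise ValueError(f"unknown pathology label: {pathology!r}")
```

Merging sparse extreme classes this way trades grading granularity for enough samples per label to train a reliable classifier.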

Figure 3.  Classification using PDAC [3].

 

6. Procedure for labelling PET/CT images: PET-CT stands for Positron Emission Tomography-Computed Tomography. Labelling PET-CT scans is a crucial step in many medical applications, including disease detection, diagnosis, and therapy planning; the procedure typically entails locating and labelling particular areas or structures in the images for further study. As a crucial stage in medical image analysis that supports research, diagnosis, and patient care, labelling PET-CT images requires extensive cooperation between medical specialists, annotators, and data scientists to ensure the reliability and clinical significance of the labelled data [29]. The analysis took into account factors such as the tumour's size and location, the maximum standard uptake value (SUVmax), the connection to surrounding tissue, the mean SUV of normal liver parenchyma, the tumour-to-normal-liver standard uptake value ratio (SUVR), the existence of distant metastases, the existence of lymph node metastases, and observations obtained at various points in time.

Figure 4.  Procedure of labelling PET/CT images [4].

7. Pathological grading prediction model: In the context of medical imaging and histology, a pathological grade prediction model is a machine learning or deep learning model created to identify the pathological grade of a given sample, frequently a tissue biopsy or medical image. Pathological grading is a required component of disease diagnosis in oncology, where it is used to assess the seriousness and aggressiveness of the disease. Pathological grading models are useful tools that can help doctors diagnose patients more precisely and consistently; to ensure their clinical value and safety, they should be created in close cooperation with domain specialists.

IV. Experimental Outcomes

We begin by gathering a large dataset of CT scans from individuals with benign and malignant (cancerous) pancreatic tumours. The performance of pancreatic tumour classification is examined in this section using the benchmark BioGPS dataset from [5]. Figure 5 displays a sample set of the CT scans that make up the dataset.

Figure 5.  Pancreatic tumours images [7].

Table 1 provides a thorough review of the proposed technique's comparative classification performance using various training sets.

Table 1: Comparative classification performance using various training sets.

Variable         | Accuracy | Sensitivity | Recall | Precision
Training Set 40  | 76.55    | 98.88       | 76.55  | 85.23
Training Set 50  | 78.65    | 98.2        | 78.65  | 89.9
Training Set 60  | 96.46    | 98.1        | 85.23  | 90.17
Training Set 70  | 95.41    | 98.01       | 86.56  | 91.56
Training Set 80  | 98.1     | 97.23       | 89.9   | 98.88
Training Set 90  | 98.01    | 96.46       | 90.17  | 92.22
Training Set 100 | 92.22    | 96.24       | 91.56  | 92.45
Training Set 20  | 96.24    | 95.41       | 92.22  | 93.54
Training Set 110 | 98.2     | 95.22       | 92.45  | 95.22
Training Set 120 | 93.54    | 93.54       | 93.54  | 95.41
Training Set 130 | 97.23    | 92.45       | 95.22  | 96.46
Training Set 140 | 95.22    | 92.22       | 95.41  | 95.41
Training Set 150 | 92.45    | 91.56       | 96.24  | 98.1
Training Set 160 | 85.23    | 90.17       | 96.46  | 98.01
Training Set 170 | 89.9     | 89.9        | 97.23  | 92.22
Training Set 180 | 90.17    | 86.56       | 98.01  | 96.46
Training Set 190 | 91.56    | 85.23       | 98.1   | 96.24
Training Set 200 | 98.88    | 78.65       | 98.2   | 95.41
Training Set 210 | 86.56    | 76.55       | 98.88  | 95.22
Training Set 220 | 96.24    | 95.41       | 92.22  | 93.54

Figure 6: Comparison of different segmentation examples.
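The metrics reported in Table 1 follow the standard confusion-matrix definitions. The sketch below shows how they are computed for binary labels (1 = tumour, 0 = nontumor); note that sensitivity and recall are by definition the same quantity:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, recall (sensitivity), and precision for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # = sensitivity
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }
```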

Performance of PDAC's pathological grade classification model: the PET/CT model achieved an AUC of 0.99 in the training cohort, 0.72 in the validation cohort, and 0.74 in the test cohort, compared with the clinical data model's AUC of 0.95 in the training cohort, 0.68 in the validation cohort, and 0.68 in the test cohort. To boost the model's efficiency and precision, we combined the clinical model with the PET/CT DL model to develop the PET/CT+Clinical information model.
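For reference, the AUC values compared above can be computed directly from model scores via the rank-sum (Mann-Whitney) formulation; the following is a minimal illustrative sketch, not the evaluation code used in the study:

```python
def auc(y_true, scores):
    """Area under the ROC curve: the fraction of (positive, negative)
    pairs ranked correctly, counting ties as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```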

V. Conclusion

In this work, a useful method is developed to identify pancreatic tumours and nontumors and to classify their presence. To our knowledge, this is the first study to use a DL method to predict the preoperative pathological grade of PDAC from PET/CT. Combining PET/CT characteristics with significant clinical data increased the model's predictive ability. Using persistent real-world records of disease courses and deep learning, we provide a method for forecasting the risk of a rare but particularly deadly malignancy. The suggested method includes several steps: preprocessing, segmentation, feature extraction, classification, and parameter optimisation. In the future, DL-based segmentation approaches may be developed to further enhance the technique's classification performance.

References

[1]      J. Behrmann, C. Etmann, T. Boskamp, R. Casadonte, J. Kriegsmann, and P. Maaβ, “Deep learning for tumor classification in imaging mass spectrometry,” Bioinformatics, vol. 34, no. 7, pp. 1215–1223, 2017.

[2]      https://medium.com/analytics-vidhya/what-is-unet-157314c87634

[3]      https://www.mdpi.com/2072-6694/12/12/3656

[4]      https://www.google.com/search?q=PET%E2%80%91CT+image+labeling+process&sca_esv=574726742&tbm=isch&sxsrf=AM9HkKmDSBbZcfuVHGu0MhUhHL4JS-EOEQ:1697698848144&source=lnms&sa=X&ved=2ahUKEwi1wZ2GxYGCAxWASWwGHej_AIYQ_AUoAXoECAEQAw&biw=1536&bih=739&dpr=1.25#imgrc=6J6i3xninQLPoM

[5]      W. Xuan and G. You, “Detection and diagnosis of pancreatic tumor using deep learning-based hierarchical convolutional neural network on the internet of medical things platform,” Future Generation Computer Systems, vol. 111, pp. 132–142, 2020.

[6]      M. M. Althobaiti, A. Almulihi, A. A. Ashour, R. F. Mansour, and D. Gupta, “Design of optimal deep learning-based pancreatic tumor and nontumor classification model using computed tomography scans,” Journal of Healthcare Engineering, vol. 2022, Article ID 2872461, 15 pages, 2022. https://doi.org/10.1155/2022/2872461

[7]      G. Zhang, C. Bao, Y. Liu, Z. Wang, L. Du, Y. Zhang, F. Wang, B. Xu, S. K. Zhou, and R. Liu, “18F-FDG-PET/CT-based deep learning model for fully automated prediction of pathological grading for pancreatic ductal adenocarcinoma before surgery,” EJNMMI Research, vol. 13, p. 49, 2023. https://doi.org/10.1186/s13550-023-00985-4

[8]      T. G. W. Boers, Y. Hu, E. Gibson et al., “Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans,” Physics in Medicine and Biology, vol. 65, no. 6, Article ID 065002, 2020.

[9]      W. Xuan and G. You, “Detection and diagnosis of pancreatic tumor using deep learning-based hierarchical convolutional neural network on the internet of medical things platform,” Future Generation Computer Systems, vol. 111, pp. 132–142, 2020.

[10]   H. Ma, Z.-X. Liu, J.-J. Zhang et al., “Construction of a convolutional neural network classifier developed by computed tomography images for pancreatic cancer diagnosis,” World Journal of Gastroenterology, vol. 26, no. 34, pp. 5156–5168, 2020.

[11]   Y. Luo, X. Chen, J. Chen et al., “Preoperative prediction of pancreatic neuroendocrine neoplasms grading based on enhanced computed tomography imaging: validation of deep learning with a convolutional neural network,” Neuroendocrinology, vol. 110, no. 5, pp. 338–350, 2020.

[12]   M. Fu, W. Wu, X. Hong et al., “Hierarchical combinatorial deep learning architecture for pancreas segmentation of medical computed tomography cancer images,” BMC Systems Biology, vol. 12, no. 4, pp. 56–127, 2018.

[13]   K. Men, X. Chen, Y. Zhang et al., “Deep deconvolutional neural network for target segmentation of nasopharyngeal cancer in planning computed tomography images,” Frontiers Oncology, vol. 7, p. 315, 2017.

[14]   L. Shen, W. Zhao, and L. Xing, “Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning,” Nature biomedical engineering, vol. 3, no. 11, pp. 880–888, 2019.

[15]   K. Dmitriev, A. E. Kaufman, A. A. Javed et al., “Classification of pancreatic cysts in computed tomography images using a random forest and convolutional neural network ensemble,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 150–158, Springer, Cham, December 2017.

[16]   K. Manabe, Y. Asami, T. Yamada, and H. Sugimori, “Improvement in the convolutional neural network for computed tomography images,” Applied Sciences, vol. 11, no. 4, p. 1505, 2021.

[17]   Y. Ding, Q. Zhu, Z. Xing, and L. Li, “An adaptive-fuzzy filter algorithm for vision preprocessing,” in Proceedings of the 2006 IEEE International Conference on Robotics and Biomimetics, pp. 578–582, IEEE, Kunming, China, December 2006.

[18]   J. N. Kapur, P. K. Sahoob, and A. K. C. Wongc, “A new method for gray-level picture thresholding using the entropy of the histogram,” Computer Vision, Graphics, and Image Processing, vol. 29, no. 3, pp. 273–285, 1985.

[19]   M. Li, Y. Li, Y. Chen, and Y. Xu, “Batch recommendation of experts to questions in community-based question-answering with a sailfish optimizer,” Expert Systems with Applications, vol. 169, Article ID 114484, 2021.

[20]   S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” in Proceedings of the Proc. 31st Conf. Neural Inf. Process. Syst., pp. 1–11, Long Beach, CA, USA, December 2017.

[21]   H.-C. Li, W.-Y. Wang, L. Pan, W. Li, Q. Du, and R. Tao, “Robust capsule network based on maximum correntropy criterion for hyperspectral image classification,” Ieee Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 738–751, 2020.

[22]   B. Warsito, R. Santoso, H. Suparti, and H. Yasin, “Cascade forward neural network for time series prediction,” Journal of Physics: Conference Series, vol. 1025, no. 1, p. 012097, 2018.

[23]   A. Fathy and H. Rezk, “Political optimizer based approach for estimating SOFC optimal parameters for static and dynamic models,” Energy, vol. 238, p. 122031, 2022.

[24]   K.-L. Liu, T. Wu, P.-T. Chen et al., “Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation,” The Lancet Digital Health, vol. 2, no. 6, pp. e303–e313, 2020.

[25]   K. Si, Y. Xue, X. Yu et al., “Fully end-to-end deep-learning based diagnosis of pancreatic tumors,” Theranostics, vol. 11, no. 4, pp. 1982–1990, 2021.

 

 

 

 

 

 

 

 

 

 
