
Table 3 Comparison of deep learning models for cancer detection

From: Unified deep learning models for enhanced lung cancer prediction with ResNet-50–101 and EfficientNet-B3 using DICOM images

Researcher

Area characterization

DL model

Dataset

Outcome

[26]

Digital Breast Tomosynthesis vs. Digital Mammography

Pretrained VGG16

BreastScreen Norway screening program

The rate of screen-detected breast cancer is comparable between digital breast tomosynthesis and digital mammography in a population-based screening program

[27]

Accurate pulmonary nodule detection

Convolutional Neural Networks (CNNs)

LIDC-IDRI dataset

Sensitivity of 92.7% with 1 false positive per scan and sensitivity of 94.2% with 2 false positives per scan for lung nodule detection on 888 scans. Use of thick Maximum Intensity Projection (MIP) images helps detect small pulmonary nodules (3 mm–10 mm) and reduces false positives

[32]

Pathogenesis of Oral Cancer

Not applicable (no deep learning model mentioned)

Not applicable (no dataset mentioned)

Review and discussion of key molecular concepts and selected biomarkers implicated in oral carcinogenesis, particularly oral squamous cell carcinoma, with a focus on dysregulation during different stages of oral cancer development and progression

[33]

Liquid Biopsies for BC

Not applicable

Meta-analysis of 69 studies

ctDNA mutation rates for TP53, PIK3CA, and ESR1: 38%, 27%, and 32% respectively

[34]

Assessment of smartphone-based Visual Inspection of the Cervix with Acetic Acid (VIA) in clinical settings

Not applicable

Data collected from 4,247 patients who underwent cervical cancer screening in rural Eswatini from September 1, 2016, to December 31, 2018

Initial VIA positivity rate increased from 16% to 25.1% after standard training, then dropped to an average of 9.7% during refresher training, increased again to an average of 9.6% before the start of mentorship, and dropped to an average of 8.3% in 2018

[35]

Healthcare and Deep Learning

Deep Learning (Artificial Neural Network)

Electronic Health Data—8000

Improved predictive performance and applications in various healthcare areas; accuracy: 97.5%

[36]

Computer-Aided Diagnosis (CAD) in Gastric Cancer

Not specified

Histopathological images of gastric cancer (GHIA)

Summarizes image preprocessing, feature extraction, segmentation, and classification techniques for future researchers

[37]

Tumor staging of non-small cell lung cancer (NSCLC) with detailed insights

Two-step deep learning model (autoencoder and CNN) for NSCLC staging

Training (n = 90), validation (n = 8), and test cohorts (n = 37, n = 26) from public-domain data (CPTAC and TCGA)

CPTAC test cohort:

Accuracy: 0.8649

Sensitivity: 0.8000

Specificity: 0.9412

AUC: 0.8206

TCGA test cohort:

Accuracy: 0.8077

Sensitivity: 0.7692

Specificity: 0.8462

AUC: 0.8343

[38]

Accurate detection and classification of breast cancer

Pa-DBN-BC (Deep Belief Network)

Whole-slide histopathology image dataset from four data cohorts

86% accuracy

[39]

Skin Cancer Diagnosis

U-Net and VGG19

ISIC 2016, ISIC 2017, ISIC 2018

Satisfactory results compared to the state of the art

[40]

Rectal Adenocarcinoma Survival Prediction

DeepSurv model (seven-layer neural network)

Patients with rectal adenocarcinoma from the SEER database

C-index: 0.824 (training cohort) and 0.821 (test cohort)

Factors influencing survival: age, gender, marital status, tumor grade, surgical status, and chemotherapy status; high consistency between predictions in the training and test cohorts

[41]

Prostate Cancer Diagnosis and Gleason Grading

Deep Residual Convolutional Neural Network

85 prostate core biopsy specimens digitized and annotated

Coarse-level accuracy: 91.5%, Fine-level accuracy: 85.4%

[42]

Tree-based BrT multiclassification model for breast cancer

Ensemble tree-based deep learning model

BreakHis dataset (pretraining), BCBH dataset

Classification accuracy of 87.50% to 100% for the four BrT subtypes

The proposed model outperforms the state of the art

[43]

Breast Cancer (BC)

Transfer Learning (TL)

MIAS dataset

80–20 split strategy:

Accuracy: 98.96%, Sensitivity: 97.83%, Specificity: 99.13%, Precision: 97.35%, F-score: 97.66%

AUC: 0.995

Tenfold cross-validation strategy:

Accuracy: 98.87%, Sensitivity: 97.27%, Specificity: 98.2%, Precision: 98.84%

F-score: 98.04%

AUC: 0.993

[44]

Screening for breast cancer with mammography

Deep learning and convolutional neural networks

Various datasets in digital mammography and digital breast tomosynthesis

AI algorithms show promise on retrospective datasets (AUC 0.91); further studies required to establish real-world screening impact

[45]

Breast Cancer Diagnosis

Statistical ML and Deep Learning

Various breast imaging datasets

Recommendations for future work

Accuracy 97%

[46]

Dermoscopic Expert

Hybrid Convolutional Neural Network (hybrid-CNN)

ISIC-2016, ISIC-2017, ISIC-2018

AUC of 0.96, 0.95, and 0.97; improved AUC by 10.0% and 2.0% for the ISIC-2016 and ISIC-2017 datasets, and 3.0% higher balanced accuracy for the ISIC-2018 dataset

[47]

Breast Cancer Classification

ResNet-50 pre-trained model

Histopathological images from Jimma University Medical Center and the 'BreakHis' and 'Zenodo' online datasets

96.75% accuracy for binary classification, 96.7% accuracy for benign sub-type classification, 95.78% accuracy for malignant sub-type classification, and 93.86% accuracy for grade identification

[48]

Cancer-Net SCa

Custom deep neural network designs

International Skin Imaging Collaboration (ISIC)

Improved accuracy compared to ResNet-50, reduced complexity, strong skin cancer detection performance, and enabled open-source use and development

[49]

Automating Medical Diagnosis

Transfer Learning, Image Classification, Object Detection, Segmentation, Multi-task Learning

Medical image data, skin lesion data, pressure ulcer segmentation data

Cervical cancer: Sensitivity +5.4%; Skin lesion: Accuracy +8.7%, Precision +28.3%, Sensitivity +39.7%; Pressure ulcer: Accuracy +1.2%, IoU +16.9%, Dice similarity +3.5%

[50]

Diagnostic accuracy of CNN for gastric cancer

Predicting invasion depth of gastric cancer

Convolutional Neural Network (CNN)

17 studies, 51,446 images, 174 videos, 5539 patients

Cancer diagnosis: Sensitivity: 89%, Specificity: 93%, LR+: 13.4, LR−: 0.11, AUC: 0.94

Invasion depth: Sensitivity: 82%, Specificity: 90%, LR+: 8.4, LR−: 0.20, AUC: 0.90

[51]

Image Quality Control for Cervical Precancer Screening

Deep learning ensemble framework

87,420 images from 14,183 patients across multiple cervical cancer studies

Achieved higher performance than standard approaches

[52]

Breast Cancer Diagnosis Using Deep Neural Networks

Convolutional Neural Networks (CNN)

Mammography and histopathologic images

Improved BC diagnosis with DL, used public and private datasets, pre-processing techniques, and neural network models, and identified research challenges for future developments

[53]

HPV Status Prediction in OPC, Survival Prediction in OPC

Ensemble Model

492 OPC Patient Database

HPV status prediction: AUC: 0.83, Accuracy: 78.7%

Survival prediction: AUC: 0.91, Accuracy: 87.7%

[54]

Pathology Detection Algorithm

YOLOv5 with an improved attention mechanism

Gastric cancer slice dataset

F1_score: 0.616, mAP: 0.611; Decision support for clinical judgment

[55]

Cervical Cancer (CC)

HSIC, RNN, LSTM, AFSA

Not mentioned

Risk scores for recurrent CC patients using the AFSA algorithm

[56]

Hepatocellular carcinoma (HCC)

Inception V3

H&E images from the Genomic Data Commons database

Matthews correlation coefficient; 96.0% accuracy for benign/malignant classification and 89.6% accuracy for tumor differentiation. Predicted the ten most commonly mutated genes (CTNNB1, FMN2, TP53, ZFX4) with AUCs from 0.71 to 0.89
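Several of the studies summarized above (e.g., [26] and [47]) fine-tune an ImageNet-pretrained CNN backbone such as VGG16 or ResNet-50 on a cancer imaging dataset. The sketch below illustrates that general transfer-learning setup only; it assumes PyTorch/torchvision (0.13+) and a hypothetical two-class image folder, and it is not the pipeline of any cited work or of this paper.

```python
# Illustrative sketch (not from the paper): fine-tuning an ImageNet-pretrained
# ResNet-50 for binary tumor classification. Paths and hyperparameters are
# placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing; histopathology patches or CT slices would
# first be converted to 3-channel RGB images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/<benign|malignant>/*.png
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet weights and replace the classification head with a 2-class layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # a few epochs only, for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```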