
Fig. 1

From: Universal adversarial attacks on deep neural networks for medical image classification


Vulnerability to nontargeted UAPs with \(p = 2\). Line plots of the fooling rate \(R_{f}\) for the Inception V3 model versus the perturbation magnitude \(\zeta\) for the skin lesion (a), OCT (b), and chest X-ray (c) image datasets. The legend labels indicate the image set used to compute \(R_{f}\); the additional label "(random)" indicates that random UAPs (random control perturbations) were used instead of the computed UAPs. Normalized confusion matrices for Inception V3 models attacked using UAPs on test images of the skin lesion (d), OCT (e), and chest X-ray (f) image datasets are also shown. \(\zeta = 4\%\) in (d) and (f); \(\zeta = 6\%\) in (e).
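As a point of reference for the quantities plotted in the figure, the sketch below illustrates one common way the fooling rate \(R_{f}\) and the perturbation magnitude \(\zeta\) are computed in the UAP literature: \(R_{f}\) as the fraction of images whose predicted label changes under the perturbation, and \(\zeta\) as the ratio of the UAP's \(L_2\) norm to the average \(L_2\) norm of the images. The function names and the exact definition of \(\zeta\) are illustrative assumptions, not taken from the article.

```python
import numpy as np

def fooling_rate(clean_preds: np.ndarray, adv_preds: np.ndarray) -> float:
    """Fraction of images whose predicted label changes after adding the UAP.
    `clean_preds` and `adv_preds` are integer label arrays of equal length."""
    return float(np.mean(clean_preds != adv_preds))

def scale_uap(uap: np.ndarray, images: np.ndarray, zeta: float) -> np.ndarray:
    """Rescale a UAP so that its L2 norm equals `zeta` (e.g. 0.04 for 4%)
    times the average L2 norm of the images -- an assumed, commonly used
    definition of the perturbation magnitude for p = 2."""
    avg_image_norm = np.mean([np.linalg.norm(x.ravel()) for x in images])
    target_norm = zeta * avg_image_norm
    return uap * (target_norm / np.linalg.norm(uap.ravel()))
```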
