Fig. 4 | BMC Medical Imaging

From: Universal adversarial attacks on deep neural networks for medical image classification
Targeted UAPs with \(p = 2\) against Inception V3 models and their adversarial images for the skin lesion (a), OCT (b), and chest X-ray (c) image datasets. \(\zeta = 2\%\) in a and c; \(\zeta = 6\%\) in b. Labels in brackets beside the images are the predicted classes. Original (clean) images were correctly classified into their actual labels, whereas the adversarial images were classified into the target classes. UAPs are displayed with enhanced contrast for visibility; in particular, each UAP is scaled to a maximum of 1 and a minimum of 0
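As a point of reference for how such a figure is typically produced, the sketch below shows how a precomputed UAP might be applied to images, how a perturbation ratio \(\zeta\) in the \(L_2\) sense could be computed (assuming \(\zeta\) is the ratio of the UAP's norm to the average image norm, a common convention in the UAP literature rather than something stated in this caption), and how the UAP can be min-max scaled to [0, 1] for display. Function names and array shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np


def apply_uap(images, uap):
    """Add a universal adversarial perturbation to a batch of images
    and clip back to the valid pixel range [0, 1]."""
    return np.clip(images + uap, 0.0, 1.0)


def perturbation_ratio(uap, images, p=2):
    """Ratio zeta of the UAP's L_p norm to the mean L_p norm of the images
    (an assumed convention for reporting UAP strength)."""
    uap_norm = np.linalg.norm(uap.ravel(), ord=p)
    mean_img_norm = np.mean([np.linalg.norm(x.ravel(), ord=p) for x in images])
    return uap_norm / mean_img_norm


def rescale_for_display(uap):
    """Min-max scale the UAP to [0, 1] so the small perturbation is visible,
    mirroring how the figure emphasizes the UAPs."""
    return (uap - uap.min()) / (uap.max() - uap.min() + 1e-12)


# Illustrative usage with random data standing in for real images and a UAP.
images = np.random.rand(8, 299, 299, 3)          # batch of images in [0, 1]
uap = 0.01 * np.random.randn(299, 299, 3)        # small universal perturbation
adversarial = apply_uap(images, uap)
print("zeta (L2) ≈", perturbation_ratio(uap, images, p=2))
```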
