Fig. 4
From: Universal adversarial attacks on deep neural networks for medical image classification

Targeted UAPs with \(p = 2\) against Inception V3 models, together with their adversarial images, for the skin lesion (a), OCT (b), and chest X-ray (c) image datasets. \(\zeta = 2\%\) in a and c; \(\zeta = 6\%\) in b. Labels in brackets beside the images are the predicted classes. Original (clean) images were correctly classified into their actual labels; adversarial images were classified into the target classes. UAPs are displayed with enhanced intensity for clarity; specifically, each UAP is rescaled to the range [0, 1].
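The rescaling mentioned in the caption can be sketched as a simple min-max normalization; the function name and array shapes below are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def rescale_for_display(uap: np.ndarray) -> np.ndarray:
    """Linearly rescale a perturbation to [0, 1] for visualization.

    Illustrative sketch: maps the smallest value to 0 and the largest to 1,
    so small-magnitude perturbations become visible as images.
    """
    lo, hi = float(uap.min()), float(uap.max())
    if hi == lo:  # constant perturbation: map to mid-gray
        return np.full_like(uap, 0.5, dtype=float)
    return (uap - lo) / (hi - lo)

# Example: a small perturbation with positive and negative values
rng = np.random.default_rng(0)
uap = rng.normal(scale=0.02, size=(4, 4))
vis = rescale_for_display(uap)
print(vis.min(), vis.max())  # 0.0 1.0
```

This display-only transform does not change the perturbation used in the attack itself; it only stretches its contrast so the structure of the UAP is visible alongside the clean and adversarial images.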