Fig. 2 From: Universal adversarial attacks on deep neural networks for medical image classification

Nontargeted UAPs with \(p = 2\) against Inception V3 models, together with the resulting adversarial images, for the skin lesion (a), OCT (b), and chest X-ray (c) datasets. The perturbation ratio is \(\zeta = 4\%\) in (a) and (c), and \(\zeta = 6\%\) in (b). Labels in brackets beside the images are the predicted classes; the original (clean) images are correctly classified into their actual labels. The UAPs are amplified for visibility: each is rescaled so that its values span a minimum of 0 and a maximum of 1.
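The rescaling described in the caption (mapping each UAP to the range 0 to 1 for display) corresponds to a standard min–max normalization. A minimal sketch, assuming NumPy arrays; the function name and the 299×299×3 Inception V3 input shape are illustrative, not taken from the paper:

```python
import numpy as np

def rescale_for_display(uap: np.ndarray) -> np.ndarray:
    """Min-max rescale a perturbation to span [0, 1], so that a
    small-amplitude UAP is visible when rendered as an image."""
    lo, hi = float(uap.min()), float(uap.max())
    if hi == lo:  # constant perturbation: nothing to stretch
        return np.zeros_like(uap, dtype=float)
    return (uap - lo) / (hi - lo)

# Example: a small random perturbation at Inception V3 input size
uap = np.random.uniform(-0.02, 0.02, size=(299, 299, 3))
vis = rescale_for_display(uap)  # values now lie in [0, 1]
```

Note that this rescaling is only for visualization; the perturbation actually added to the images keeps its original (much smaller) amplitude, controlled by \(\zeta\).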