Good evening!
I completed my first assessment with a score of 12 out of 18 (with a "bad label" flag), saved my code locally in a text file, and tried running it again after booting a new environment, but it no longer works. The grading now comes back either as 0/18 with "bad label", or 0/18 with "bad image", "bad label", and "bad size". I retried everything from scratch, but it still doesn't work.
I successfully generated an adversarial image that was classified as a laptop using a targeted attack. However, when I saved the image and reloaded it for verification, the model labeled it as cock again (the original prediction before the attack).
Here’s what I did:
- I used an iterative attack, gradually increasing epsilon and alpha until the image was misclassified as a laptop.
- The model identified it as a laptop in real time during the attack process.
- I saved the adversarial image (evasion_image.png).
- Upon reloading the saved image and running inference again, the model reverted to classifying it as cock instead of a laptop.
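For reference, the iterative targeted attack above looks roughly like the sketch below. This is a minimal stand-in, not my actual code: the "model" is a toy logistic classifier with an analytic gradient so the sketch is self-contained, and the names (`predict`, `pgd_towards_target`), step size `alpha`, and budget `eps` are placeholders. In the real attack the gradient would come from the network via autograd.

```python
import numpy as np

# Toy stand-in for a classifier: logistic regression with fixed weights.
# In a real attack the gradient comes from the network via autograd;
# here it is analytic so the sketch stays self-contained.
rng = np.random.default_rng(0)
w = rng.normal(size=64)                            # toy model weights
x = np.clip(rng.normal(0.5, 0.1, size=64), 0, 1)   # "image" in [0, 1]

def predict(img):
    """Probability of the *target* class under the toy model."""
    return 1.0 / (1.0 + np.exp(-w @ img))

def pgd_towards_target(img, eps=0.05, alpha=0.01, steps=200):
    """Iterative targeted attack: step along the gradient that raises
    the target-class probability, project the total perturbation back
    into an eps-ball around the original, and stay in valid pixel range."""
    adv = img.copy()
    for _ in range(steps):
        p = predict(adv)
        grad = (1.0 - p) * w                       # d/d(adv) of log p(target)
        adv = adv + alpha * np.sign(grad)          # signed gradient step
        adv = np.clip(adv, img - eps, img + eps)   # project into eps-ball
        adv = np.clip(adv, 0.0, 1.0)               # valid pixel range
    return adv

adv = pgd_towards_target(x)
```

With a real network, `grad` would be the gradient of the target-class loss with respect to the input pixels, but the epsilon-ball projection and pixel-range clipping work the same way.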
Possible issues I considered:
Image Preprocessing: I checked if resizing/cropping altered the pixel values upon saving and reloading.
Precision Loss: PNG is lossless but stores 8-bit integers, so saving a float image quantizes every pixel to the nearest 1/255. Could that rounding (or JPG's lossy compression, which is worse) be enough to undo the adversarial effect?
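To check the precision-loss theory without touching disk, you can simulate the 8-bit round-trip in memory. A minimal sketch, assuming the adversarial image is a float array in [0, 1] (the helper name `png_roundtrip` and the 0.001 perturbation size are my own for illustration):

```python
import numpy as np

def png_roundtrip(img):
    """Simulate saving a [0,1] float image to 8-bit PNG and reloading:
    each pixel is quantized to the nearest of 256 levels."""
    as_uint8 = np.round(img * 255.0).astype(np.uint8)
    return as_uint8.astype(np.float64) / 255.0

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, size=(8, 8))

# Sub-quantum perturbation, smaller than half an 8-bit step (1/510 ~ 0.00196):
delta = 0.001 * rng.choice([-1.0, 1.0], size=img.shape)
adv = np.clip(img + delta, 0.0, 1.0)

restored = png_roundtrip(adv)
# Quantization moves each pixel by at most half a step, but a perturbation
# below that threshold can be partially or fully rounded away, which is
# exactly the "label reverts after reload" symptom.
```

If running inference on the round-tripped array flips the label back, the fix is to craft the attack in quantized space, e.g. round the adversarial image to uint8 after each attack step (or at least re-verify the prediction on the rounded image before saving), and to avoid JPG entirely.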
Has anyone encountered this issue before? Any tips on preserving adversarial modifications through file saving/loading?
Thanks in advance!