Hello, I am working on the Exploring Adversarial Machine Learning course, but I am finding it a bit challenging and the allotted course time insufficient.
I have been able to complete all of the assessments except one: I am not able to pass the one on Poisoning. Unfortunately, the course requires a score of 90 out of 100 to receive the certificate, and even though I have what would normally be a sufficient score to pass, that means I must pass the Poisoning assessment.
I am wondering if I can receive any additional tips on completing the Poisoning assessment, whether you would consider lowering the score needed to receive the certificate, and/or whether I can receive additional course time.
For the Poisoning assessment, I believe I have the code correct; I am just struggling with tuning the parameters for Witches' Brew, such as epochs, trials, epsilon, and percent poisoned. Unfortunately, the lab doesn't provide values that pass and even says "You may never land on a successful poisoning and that is okay!" If you want to complete the assessment, however, it is not okay to not have a successful poisoning.
I will note that it takes a while to run a few trials with 75 epochs.
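In case it helps to compare notes, here is roughly the sweep I have been running. This is a minimal sketch that assumes the lab uses ART's GradientMatchingAttack (its Witches' Brew implementation) with an already-wrapped PyTorch `classifier` and the lab's `x_train`/`y_train`/`x_trigger`/`y_trigger` arrays; every parameter value below is just a guess of mine, not a setting known to pass the grader.

```python
from art.attacks.poisoning import GradientMatchingAttack

# Assumes `classifier` is the lab's ART-wrapped PyTorch model and that
# x_train, y_train, x_trigger, y_trigger are prepared as in the lab notebook.
# Every value below is a guess to sweep over, not a known-passing setting.
for percent_poison in (0.01, 0.05, 0.10):
    for epsilon in (8 / 255, 16 / 255, 32 / 255):
        attack = GradientMatchingAttack(
            classifier,
            percent_poison=percent_poison,
            epsilon=epsilon,
            max_trials=4,    # more trials improve the odds but multiply runtime
            max_epochs=75,   # the 75 epochs mentioned above
            verbose=1,
        )
        x_poison, y_poison = attack.poison(x_trigger, y_trigger, x_train, y_train)
        # ...retrain on (x_poison, y_poison) and check whether the trigger flips
```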
I will also point out that the Poisoning assessment appears to currently have a bug, although I was able to work around it. cifar.py is located in the 8_course_asssessment folder and, on line 227, refers to model_path = "models/cifar10-resnet18-pytorch-notebook.pth", but the models folder does not exist in the 8_course_asssessment folder. I was able to work around this by moving it over from the labs.
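In case anyone wants to script that workaround, something like the following worked for me. The source path here is a guess; point it at wherever the labs keep their models/ directory in your environment.

```python
import shutil
from pathlib import Path

# cifar.py (line 227) hard-codes "models/cifar10-resnet18-pytorch-notebook.pth",
# so the models/ folder must sit next to it inside 8_course_asssessment/.
# NOTE: the source path below is a guess; adjust it to wherever the labs
# keep their models/ directory in your environment.
src = Path("labs/models")
dst = Path("8_course_asssessment/models")
if src.exists() and not dst.exists():
    shutil.copytree(src, dst)
print("model present:", (dst / "cifar10-resnet18-pytorch-notebook.pth").exists())
```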
I have not been able to get any credit for the Poisoning assessment. I am still struggling with it and always get 0 out of 16. Let me know if you ever have any success.
Your unmodified image looks correct; I can see the frog in it. You need to get the modified image to look like a cat while maintaining the frog label. You just showed the modified image.
Hi, I'm also stuck on assessment 6: Poisoning. I've written code that creates a data poisoning attack on a CIFAR-10 image dataset. It:
- Loads and normalizes the CIFAR-10 dataset
- Identifies all frog images (class 6) and cat images (class 3) in the training data
- Subtly modifies each frog image by blending it with a cat image (80% frog, 20% cat)
- Keeps all the original labels unchanged
The result is a “clean-label” poisoning attack where frog images still look like frogs to humans but contain subtle cat-like features. When a model is trained on this poisoned dataset, it misclassifies real frogs as cats during inference, despite the training labels remaining accurate.
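For reference, the core blending step in my code looks roughly like this. It's a minimal NumPy sketch under my own assumptions: `x_train` is float32 images scaled to [0, 1] with shape (N, 32, 32, 3), `y_train` is a 1-D integer label array, and the frog-to-cat pairing simply cycles through the cat images.

```python
import numpy as np

FROG, CAT = 6, 3
BLEND = 0.20  # 20% cat, 80% frog

# Assumes x_train is float32 in [0, 1] with shape (N, 32, 32, 3) and
# y_train is a 1-D integer label array; the frog-to-cat pairing below
# simply cycles through the cat images, which is an arbitrary choice.
frog_idx = np.where(y_train == FROG)[0]
cat_idx = np.where(y_train == CAT)[0]

x_poisoned = x_train.copy()
for i, f in enumerate(frog_idx):
    c = cat_idx[i % len(cat_idx)]
    x_poisoned[f] = (1.0 - BLEND) * x_train[f] + BLEND * x_train[c]

# Clean-label: every frog keeps its original frog label.
y_poisoned = y_train.copy()
```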
Upon grading this, I get a "bad data" response; is there any way to get a more descriptive output?
I've been trying to do as you mentioned above, but I'm a bit lost here because I don't know what the expected answer is. My prompts either produce the color without the name, or the name without the color. Based on the grading result, it looks like it is expecting both the name and the color…
I’ll keep trying and keep you posted. Thanks again!
You’re on the right track! Just to give you another nudge, the answer doesn’t have to be exactly “blue”; it can indeed be a shade of blue as well. Also, make sure that the name “Danny Shaffer” is clearly present in your prompt.