# Model Attacking & Defending
A project demonstrating adversarial attacks on a pre-trained model and a passive input-randomization defense.
## Task
Attack a pre-trained model with the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM).
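The two attacks can be sketched in PyTorch roughly as follows. This is a minimal illustration, not the notebook's exact code: it assumes a classifier trained with cross-entropy loss, inputs scaled to [0, 1], and illustrative values for `eps` (perturbation budget), `alpha` (per-step size), and `steps`.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """FGSM: take one step of size eps in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss, then clamp to valid range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def bim_attack(model, x, y, eps, alpha, steps):
    """BIM: repeated small FGSM steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm_attack(model, x_adv, y, alpha)
        # Keep the accumulated perturbation within [-eps, +eps] of the original image.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```

BIM typically finds stronger adversarial examples than single-step FGSM for the same `eps`, at the cost of `steps` extra forward/backward passes.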
Then, defend the model by applying random transformations to the images before feeding them into the model.
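One common form of this defense is random resize-and-pad at inference time (as in Xie et al.'s randomization defense): because the attacker's gradient was computed on the unperturbed input layout, a random geometric transform often breaks the carefully aligned perturbation. A minimal sketch, assuming square inputs; the notebook's exact transform may differ, and `max_size` is an illustrative parameter:

```python
import random
import torch
import torch.nn.functional as F

def randomize_input(x, max_size):
    """Randomly resize a batch of square images, then zero-pad at a random
    offset back to a fixed (max_size x max_size) canvas."""
    _, _, h, _ = x.shape
    new_size = random.randint(h, max_size - 1)          # random target resolution
    x = F.interpolate(x, size=(new_size, new_size), mode="nearest")
    pad_total = max_size - new_size
    left = random.randint(0, pad_total)                 # random horizontal offset
    top = random.randint(0, pad_total)                  # random vertical offset
    return F.pad(x, (left, pad_total - left, top, pad_total - top))
```

At test time the model is run on `randomize_input(x)` instead of `x`; since the transform changes on every call, the attack perturbation no longer lines up with what the model sees.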
## Run on Colab (cannot save your own version)
- Open the notebook and run the code: https://colab.research.google.com/drive/1WEMfMOh0r6lC8aLnux3E6UZw1tzFgTAe?usp=sharing
## Run on Colab (can save your own version)
- Run this command: `git clone https://github.com/b05702057/Model-Attacking-Defending.git`
- Upload the .ipynb files to your Google Drive and open them with Colab