Comments (1)
Hey Abhishek, thanks for reaching out! Sorry for the delay; to give your question a proper response, I went ahead and tried it out (FGSM training with an L2 perturbation), since it wasn't in the main experiments of the paper.
Long story short, it seems to work in much the same way as in the Linf case. A few specifics:
- I took an L2 adversary with radius eps=128/255 and ran FGSM adversarial training with step size 1.25*eps (the same step-size multiplier used in Linf training).
- Random initialization is again key to getting competitive results! I sampled a random unit direction vector (drawing from a standard normal distribution and normalizing to unit L2 norm) and a random radius (sampled uniformly from [0, eps]); see the first sketch below. I did try another random initialization (e.g. this one https://github.com/MadryLab/robustness/blob/master/robustness/attack_steps.py#L134), but it was not diverse enough and catastrophically overfit.
- The final model gets 69.56% robust accuracy on a ResNet18, which seems comparable to other results I've seen at this threshold. This is with respect to a single-restart PGD adversary with 50 iterations and step size 3/255 (second sketch below).
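
In case it's useful, here's roughly what that initialization and FGSM step look like in PyTorch. To be clear, this is a minimal sketch rather than the exact code from the repo: the helper names are mine, and I'm assuming inputs scaled to [0,1] (the actual training code clamps against normalized per-channel limits instead):

```python
import torch
import torch.nn.functional as F

def l2_random_init(X, eps):
    # Random direction: sample from a standard normal, then normalize
    # each example's perturbation to unit L2 norm.
    delta = torch.randn_like(X)
    flat = delta.view(delta.size(0), -1)
    flat = flat / flat.norm(dim=1, keepdim=True)
    # Random radius: sample uniformly from [0, eps] per example.
    r = torch.rand(X.size(0), device=X.device).view(-1, *([1] * (X.dim() - 1)))
    return flat.view_as(X) * (r * eps)

def fgsm_l2_delta(model, X, y, eps=128/255, alpha=1.25 * 128/255):
    # One FGSM step for an L2 adversary: start from the random init and
    # move alpha along the L2-normalized gradient direction.
    delta = l2_random_init(X, eps).requires_grad_(True)
    loss = F.cross_entropy(model(X + delta), y)
    loss.backward()
    g = delta.grad
    g_norm = g.view(g.size(0), -1).norm(dim=1).view(-1, *([1] * (X.dim() - 1)))
    delta = delta.detach() + alpha * g / (g_norm + 1e-10)
    # Project back onto the L2 ball of radius eps.
    d_norm = delta.view(delta.size(0), -1).norm(dim=1).view(-1, *([1] * (X.dim() - 1)))
    delta = delta * torch.clamp(eps / (d_norm + 1e-10), max=1.0)
    # Keep X + delta inside the valid pixel range.
    return (X + delta).clamp(0, 1) - X
```

The training update itself is unchanged from the Linf version: compute `delta` with this single step and take a standard optimizer step on the loss at `model(X + delta)`.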
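
And the evaluation attack, again as a rough sketch under the same assumptions (reusing `l2_random_init` from above): single-restart L2 PGD with 50 iterations and step size 3/255.

```python
def pgd_l2_delta(model, X, y, eps=128/255, alpha=3/255, iters=50):
    # Iterated version of the step above: 50 normalized-gradient steps,
    # projecting back onto the eps L2 ball after each one.
    delta = l2_random_init(X, eps)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(X + delta), y)
        loss.backward()
        g = delta.grad
        g_norm = g.view(g.size(0), -1).norm(dim=1).view(-1, *([1] * (X.dim() - 1)))
        delta = delta.detach() + alpha * g / (g_norm + 1e-10)
        d_norm = delta.view(delta.size(0), -1).norm(dim=1).view(-1, *([1] * (X.dim() - 1)))
        delta = delta * torch.clamp(eps / (d_norm + 1e-10), max=1.0)
        delta = (X + delta).clamp(0, 1) - X
    return delta
```

Robust accuracy is then just the fraction of examples for which `model(X + delta).argmax(1) == y`.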
Related Issues (20)
- torch.where API in MNIST and CIFAR10, ImageNet configuration files
- Overwrite of variable i in nested for loop
- Reproduce results
- indices
- About PGD evaluation
- Why do we need to do clamp(delta, lower_limit - X, upper_limit - X)?
- Reproduce the results of Free adversarial training
- invalid key "/xff" when loading model
- When computing the perturbation, do we need to set model.eval()?
- Parameter settings on CIFAR-100
- adversarial attack
- Reproduce the result of CIFAR-10 from the default setting
- facing "nan" values during training the model
- reproduce problem of imagenet on default set
- Can't reproduce MNIST results using current codes
- Inconsistent clamping behaviour between CIFAR and MNIST fgsm implementations
- Parameters of training
- Why not using clean samples during training?
- Include python/pytorch version for MNIST reproducibility
- Probable gradient accumulation bug in mnist_train.py