Comments (9)
Hi! Yes, there is a wrapper for Keras models called KerasClassifier. Please take a look at the examples folder for its usage.
from adversarial-robustness-toolbox.
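As a sketch of what such a wrapper provides: ART's attacks are written against a small framework-agnostic interface (predictions plus gradients of the loss with respect to the input), and the Keras wrapper fills in that interface for a compiled Keras model. The toy class below is illustrative only (a hand-rolled linear softmax model, not ART's actual API), but it shows why one generic attack implementation can then drive any wrapped model:

```python
import numpy as np

class LinearSoftmaxClassifier:
    """Toy stand-in for what a framework wrapper exposes to the attacks.
    (Method names are illustrative, not ART's real signatures.)"""

    def __init__(self, W):           # W: (features, classes)
        self.W = W

    def predict(self, x):
        z = x @ self.W
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def loss_gradient(self, x, y_onehot):
        # d(cross-entropy)/dx = (softmax(xW) - y) W^T
        return (self.predict(x) - y_onehot) @ self.W.T

def fgsm(clf, x, y_onehot, eps):
    # An attack written against this small interface works regardless of
    # the underlying framework -- that is the role the wrapper plays.
    return x + eps * np.sign(clf.loss_gradient(x, y_onehot))
```

Any model object providing those two methods could be fed to `fgsm` unchanged; the real wrapper does the same job for Keras by delegating the gradient computation to the backend.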
Thank you. I figured it out by looking at the example and was able to implement the same attack on my model. Apart from FGSM, do other attacks such as DeepFool and Carlini take comparatively a very long time?
FGSM should run much faster than Carlini or DeepFool. Is that consistent with the results you are getting? Could you provide some info about your experiments? How big is your model (number of layers)? How many classes does it have? You could also try NewtonFool or JSMA (SaliencyMapMethod).
Well, I am working with MNIST digits and MNIST fashion. The model is a simple neural network: an input layer, 2 hidden layers (512 neurons each), and an output layer (10 classes). The FGSM attack is very quick, taking barely 10 seconds, and accuracy falls to 6%; DeepFool takes around 4 minutes and accuracy drops to around 1%. These are approximate times. NewtonFool has not finished even after running for around 50 minutes.
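The timing gap is inherent to the algorithms: FGSM takes a single gradient-sign step per batch, while DeepFool runs an iterative loop per sample, relinearizing the model at each step until the predicted label flips. A minimal sketch of that loop for a toy binary linear scorer (illustrative only; ART's implementation handles multiclass deep networks):

```python
import numpy as np

def deepfool_binary(x, w, b=0.0, overshoot=0.02, max_iter=50):
    # DeepFool-style loop for a binary linear scorer f(x) = x.w + b:
    # repeatedly take the minimal step toward the decision boundary until
    # the sign of f flips. For a deep net each iteration costs a fresh
    # gradient evaluation per sample -- hence minutes, not seconds.
    x_adv = x.copy()
    orig = np.sign(x @ w + b)
    for _ in range(max_iter):
        f = x_adv @ w + b
        if np.sign(f) != orig:
            break
        # Minimal L2 step to the hyperplane, slightly overshot to cross it.
        r = -(abs(f) + 1e-4) * np.sign(f) * w / (w @ w)
        x_adv = x_adv + (1 + overshoot) * r
    return x_adv
```

For a truly linear model the loop terminates after one step; the many-step behavior (and the runtime you observed) comes from the repeated relinearization a nonlinear network requires.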
Thank you for your feedback! We're actually looking into optimizing the attacks; the slow ones will do much better soon. For FGSM and DeepFool, your results match what I would expect. Do you have some stats for Carlini and JSMA as well?
For the JSMA attack, with theta and gamma set to 0.5, the accuracy on the first 100 samples falls to 3%. This takes around 10 minutes, which is comparable to the JSMA attack in CleverHans and Foolbox. Testing on all samples takes too long for me and would most probably throw an out-of-memory error, as I encountered with CleverHans. I haven't tried C&W yet; are other attacks such as Universal Perturbation and Virtual Adversarial optimized?
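For reference on what those two parameters control: JSMA perturbs a few features (pixels) at a time by theta, and gamma caps the fraction of features it may modify in total; features are ranked by a saliency map built from the class-wise Jacobian. A toy version of the saliency computation (increasing-feature variant, illustrative rather than ART's exact code):

```python
import numpy as np

def jsma_saliency(jac, target):
    # jac: (classes, features) Jacobian dF_c/dx_i at the current input.
    # A feature is salient for the attack if increasing it raises the
    # target-class score (alpha > 0) while lowering the combined score of
    # all other classes (beta < 0); otherwise its saliency is zero.
    alpha = jac[target]
    beta = jac.sum(axis=0) - alpha
    return np.where((alpha > 0) & (beta < 0), alpha * np.abs(beta), 0.0)
```

The attack then perturbs the highest-saliency features by theta and recomputes the map, which is why the per-sample cost grows with input dimension.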
Thanks for the info! I would not expect an out-of-memory exception for JSMA in ART; we don't compute it the same way as CleverHans.
Universal perturbation and virtual adversarial are essentially iterative methods, so they are computationally expensive in most cases. Moreover, virtual adversarial is not an attack per se; it's a method for obtaining adversarial samples to be used in adversarial training, i.e., to make your model more robust. Universal perturbation takes another attack as a parameter (DeepFool by default, but you can use any other), so its performance essentially depends on the attack you choose. You can give it a try with DeepFool or FGSM.
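A rough sketch of why universal perturbation costs whatever its inner attack costs: it loops over the data and, for every sample that the shared perturbation does not yet fool, it invokes the per-sample attack and folds the resulting delta into the shared vector, projected back onto the norm ball. Toy version with a DeepFool-like linear step as the inner attack; all numbers and the linear model here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# One blob of points, all scored negative by the linear model f(x) = x.w,
# so a single shared direction can fool every sample.
w = np.array([1.0, 1.0])
X = rng.normal(-1.0, 0.3, size=(100, 2))

def pred(x, v):
    return np.sign((x + v) @ w)

v = np.zeros(2)
eps = 3.0                                  # inf-norm budget for v
for _ in range(5):                         # a few passes over the data
    for x in X:
        if pred(x, v) == pred(x, 0.0):     # not fooled yet -> run inner attack
            f = (x + v) @ w
            delta = -(abs(f) + 1e-2) * np.sign(f) * w / (w @ w)
            v = np.clip(v + 1.02 * delta, -eps, eps)  # project to the ball

fooling_rate = np.mean(pred(X, v) != pred(X, 0.0))
```

Each "not fooled yet" branch is one full run of the inner attack on a real model, which is why pairing universal perturbation with FGSM is much cheaper than pairing it with DeepFool or C&W.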
I tried JSMA. It didn't throw an OOM error, as you had suggested. It takes almost 3 hours on my system. I got an accuracy of 1.46% with theta and gamma set to 0.5. So far I have been able to replicate results similar to CleverHans for FGSM, DeepFool, and JSMA.
Is the C&W attack targeted by default? (I am trying an untargeted attack.)
It throws an error every time I set the targeted parameter to False.
Thanks for the update on your results!
C&W is targeted by default. If you get an error in the untargeted case, please create a separate issue for it with more information (stack trace, version info, minimal working code example). Thanks.
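One practical note on that distinction: the labels passed to an attack's `generate` call mean opposite things in the two modes. Untargeted attacks move *away* from the given (true or predicted) class, while targeted attacks move *toward* a chosen target class, often the least-likely one. A small sketch of preparing both label sets (plain NumPy; the `one_hot` helper is ours, not ART's):

```python
import numpy as np

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.0,  0.2]])

preds = logits.argmax(axis=1)    # current predictions: move AWAY (untargeted)
targets = logits.argmin(axis=1)  # least-likely classes: move TOWARD (targeted)

def one_hot(idx, n_classes):
    out = np.zeros((len(idx), n_classes))
    out[np.arange(len(idx)), idx] = 1.0
    return out

y_untargeted = one_hot(preds, 3)
y_targeted = one_hot(targets, 3)
```

Feeding targeted-style labels to an attack configured as untargeted (or vice versa) is a common source of confusing errors, so it is worth checking which convention the attack instance expects.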