Comments (3)
Hi @akshaykulkarni07, thanks for sharing the details. I ran your model with the config file, and the results are actually as expected.
The problem is that the number of unstable neurons in your model is quite large. Although our verifier supports running on this model, the output bounds are extremely loose (as you can see, the initial bound can be around -1e13, whereas it should usually be larger than -1,000) 😂
The solution is to reduce the model size by decreasing its depth and width, or to train the model with verified training (IBP or CROWN-IBP) or adversarial training to reduce the number of unstable neurons.
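To make "unstable neurons" concrete: a ReLU neuron is unstable for a given input region if its pre-activation interval straddles zero, so the verifier must relax it. Below is a minimal sketch, not alpha-beta-CROWN's actual code, that counts such neurons with naive interval bound propagation (IBP); `count_unstable_relu` is a hypothetical helper name, and it assumes a plain `nn.Sequential` of `Linear` and `ReLU` layers only.

```python
import torch
import torch.nn as nn

def count_unstable_relu(model, x, eps):
    """Count ReLU neurons whose pre-activation interval [lb, ub]
    straddles zero under naive IBP, for input box [x - eps, x + eps].
    Illustrative sketch only: assumes model is nn.Sequential of
    nn.Linear and nn.ReLU layers."""
    lb, ub = x - eps, x + eps
    unstable = 0
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Propagate the box through an affine layer: center moves
            # through the layer, radius is scaled by |W|.
            mid, rad = (lb + ub) / 2, (ub - lb) / 2
            mid = mid @ layer.weight.T + layer.bias
            rad = rad @ layer.weight.T.abs()
            lb, ub = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            # Unstable: lower bound negative, upper bound positive.
            unstable += int(((lb < 0) & (ub > 0)).sum().item())
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return unstable
```

The larger this count, the more branching the verifier needs, which is why certified training (which drives pre-activation intervals away from zero) helps.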
Please let me know if you have any questions!
from alpha-beta-crown.
Could you please give us the model, configuration file, command, and log files so we can provide more help? Thanks.
# This is an example configuration file that contains most useful parameter settings.
model:
  name: partial(ResNet18, in_planes=64)
  path: checkpoints/GAT_CIFAR10_ResNet18.pkl
data:
  dataset: CIFAR
  mean: [0.0, 0.0, 0.0]  # Mean for normalization.
  std: [1.0, 1.0, 1.0]  # Std for normalization.
  start: 0  # First example to verify in dataset.
  end: 1000  # Last example to verify in dataset.
specification:
  norm: .inf  # Linf norm (can also be 2 or 1).
  epsilon: 0.00784313725490196  # epsilon=2./255.
attack:  # Currently attack is only implemented for Linf norm.
  pgd_steps: 100  # Increase for a stronger attack. A PGD attack will be used before verification to filter out non-robust data examples.
  pgd_restarts: 30  # Increase for a stronger attack.
solver:
  batch_size: 8  # Number of subdomains to compute in parallel in bound solver. Decrease if you run out of memory.
  crown:
    batch_size: 8
    max_crown_size: 8
  alpha-crown:
    iteration: 100  # Number of iterations for alpha-CROWN optimization. Alpha-CROWN is used to compute all intermediate layer bounds before branch and bound starts.
    lr_alpha: 0.1  # Learning rate for alpha in alpha-CROWN. The default (0.1) is typically ok.
  beta-crown:
    lr_alpha: 0.01  # Learning rate for optimizing the alpha parameters. The default (0.01) is typically ok, but you can tune it to get a better lower bound.
    lr_beta: 0.05  # Learning rate for optimizing the beta parameters. The default (0.05) is typically ok, but you can tune it to get a better lower bound.
    iteration: 20  # Number of iterations for beta-CROWN optimization. 20 is often sufficient; 50 or 100 can also be used.
bab:
  timeout: 120  # Timeout threshold for branch and bound. Increase for verifying more points.
  branching:  # Parameters for branching heuristics.
    reduceop: min  # Reduction function for the branching heuristic scores, min or max. Using max can be better on some models.
    method: kfsb  # babsr is fast but less accurate; fsb is slow but most accurate; kfsb is usually a good balance.
    candidates: 3  # Number of candidates to consider in fsb and kfsb. More leads to slower but better branching. 3 is typically good enough.
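As a quick arithmetic check on the epsilon entry above (since mean is 0 and std is 1, the perturbation is in unnormalized [0, 1] pixel scale, so epsilon is meant to be 2/255):

```python
# Verify that the configured epsilon matches 2/255 in [0, 1] pixel scale.
eps = 2 / 255
assert abs(eps - 0.00784313725490196) < 1e-15
```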
This is the configuration file. The model is the standard ResNet18 and the pretrained checkpoint is from here. The command I run is python abcrown.py --config cifar_resnet_GAT.yaml, where cifar_resnet_GAT.yaml contains the config provided above.
As for the logs, I have the terminal output saved to a text file, but it is very long (about 2.8 million lines when evaluating 1000 samples of CIFAR10), so it would help if you could point to the specific output you want to see. Generally, I see
Result: unknown in 121.7578 seconds
after every sample gets processed.
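A log that large is easiest to inspect by extracting just the per-sample verdict lines. A sketch, assuming the log is saved as run.log (a placeholder name) and that every sample ends with a line of the form shown above:

```shell
# Count how many samples ended with each verdict (unknown / safe / unsafe ...).
grep -oE 'Result: [a-z-]+' run.log | sort | uniq -c
```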