Comments (12)
You may also convert maxpool to relu:
https://github.com/Verified-Intelligence/alpha-beta-CROWN/blob/main/vnncomp_scripts/maxpool_to_relu.py
from alpha-beta-crown.
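The script linked above presumably rewrites each max as a ReLU expression; the underlying identity max(x, y) = ReLU(x - y) + y is easy to check. A minimal sketch of the identity (not of the script's actual implementation):

```python
import torch
import torch.nn.functional as F

def max_via_relu(x, y):
    # max(x, y) = ReLU(x - y) + y, elementwise
    return F.relu(x - y) + y

a = torch.randn(16)
b = torch.randn(16)
assert torch.allclose(max_via_relu(a, b), torch.maximum(a, b))

# A 2x2 max pool over {p, q, r, s} then reduces to three pairwise maxes:
p, q, r, s = torch.randn(4)
pooled = max_via_relu(max_via_relu(p, q), max_via_relu(r, s))
assert torch.isclose(pooled, torch.stack([p, q, r, s]).max())
```

Applying this repeatedly replaces every max-pooling window with a small tree of ReLU operations, which verifiers handle natively.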
You may also convert maxpool to relu: https://github.com/Verified-Intelligence/alpha-beta-CROWN/blob/main/vnncomp_scripts/maxpool_to_relu.py
Thanks, I will try it. By the way, what is the current level of support for maxpooling layers in the current version? Can multiple maxpooling layers be supported now?
It is supported. However, max-pooling is not a verification-friendly layer and should be avoided when designing a verification-aware network architecture. Use average pooling if possible.
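One way to follow this advice on an existing model is to swap the pooling layers in place. The helper below is hypothetical (it is not part of alpha-beta-CROWN) and assumes plain kernel_size/stride/padding with default dilation and ceil_mode:

```python
import torch
import torch.nn as nn

def replace_maxpool_with_avgpool(module):
    # Hypothetical helper: recursively swap every nn.MaxPool2d for an
    # nn.AvgPool2d with the same kernel/stride/padding. Dilation and
    # ceil_mode are assumed to be at their defaults.
    for name, child in module.named_children():
        if isinstance(child, nn.MaxPool2d):
            setattr(module, name, nn.AvgPool2d(
                child.kernel_size, stride=child.stride, padding=child.padding))
        else:
            replace_maxpool_with_avgpool(child)
    return module

net = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.MaxPool2d(2))
net = replace_maxpool_with_avgpool(net)
assert not any(isinstance(m, nn.MaxPool2d) for m in net.modules())
out = net(torch.randn(1, 1, 8, 8))  # spatial shapes are unchanged: (1, 8, 3, 3)
```

Note the swapped model computes a different function, so it should be retrained (or at least re-evaluated) before verification.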
Maybe you can add the --conv_mode matrix flag to your command.
Generally, using --conv_mode matrix is strongly discouraged because it is very inefficient for convolutional networks. You are likely to run out of memory.
File "/Users/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/optimized_bounds.py", line 1054, in init_slope
  node.init_opt_parameters(start_nodes)
File "/Users/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/operators/pooling.py", line 50, in init_opt_parameters
  self.alpha[ns] = torch.empty(
TypeError: empty(): argument 'size' must be tuple of ints, but found element of type torch.Size at pos 2
Please give us the complete stack trace, along with the complete program, model, and instructions, so we can help you more. @shizhouxing It might be related to the part that creates the optimization variables, which is what you are working on right now. We definitely need to clean up this part in the next release.
Please post your model and code here so we can provide more help. Thanks.
Maybe you can add the --conv_mode matrix flag to your command.
My model is as follows:
class MModel(nn.Module):
    def __init__(self):
        super(MModel, self).__init__()
        self.features = self._make_layer()
        self.linear1 = nn.Linear(512, 100)
        self.dropout = nn.Dropout(0.5)
        self.linear2 = nn.Linear(100, 10)
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.xavier_normal_(m.weight, gain=1)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self):
        layers = []
        in_planes = 1
        planes = 32
        kernel_size = [3, 2]
        pool_size = [2, 2]
        padding = [1, 1]
        stride = [1, 2]
        for i in range(3):
            if i != 0:
                in_planes = 32
                kernel_size = [3, 1]
                pool_size = [2, 1]
                padding = [1, 0]
                stride = [1, 1]
            layers += [nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)]
            layers += [nn.BatchNorm2d(planes)]
            for j in range(4):
                layers += [nn.Conv2d(planes, planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)]
                layers += [nn.BatchNorm2d(planes)]
                layers += [nn.ReLU(inplace=True)]
            layers += [nn.MaxPool2d(pool_size, stride=pool_size)]
        planes = 16
        kernel_size = [3, 1]
        pool_size = [2, 1]
        padding = [1, 0]
        stride = [2, 1]
        layers += [nn.Conv2d(32, planes, kernel_size=1, bias=False)]
        layers += [nn.BatchNorm2d(planes)]
        layers += [nn.ReLU(inplace=True)]
        for k in range(2):
            layers += [nn.Conv2d(planes, planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)]
            layers += [nn.BatchNorm2d(planes)]
            layers += [nn.ReLU(inplace=True)]
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.features(x)
        out = out.view(out.size(0), -1)
        out = F.relu(self.linear1(out))
        out = self.dropout(out)
        out = self.linear2(out)
        return out
Adding the --conv_mode matrix flag seems to have fixed my previous problem, but there is a new error:
File "/Users/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/optimized_bounds.py", line 1054, in init_slope
  node.init_opt_parameters(start_nodes)
File "/Users/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/operators/pooling.py", line 50, in init_opt_parameters
  self.alpha[ns] = torch.empty(
TypeError: empty(): argument 'size' must be tuple of ints, but found element of type torch.Size at pos 2
I saw you mention in a previous issue that only one max-pooling layer is currently supported. Is the code failing because my network contains three max-pooling layers?
Is it possible to change the max pooling to average pooling in your model? Max pooling yields a looser bound relaxation since it is a non-linear layer.
I tried this approach, but a new problem emerged.
/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/bound_general.py:944: UserWarning: Creating an identity matrix with size 65536x65536 for node BoundBatchNormalization(name="/127"). This may indicate poor performance for bound computation. If you see this message on a small network please submit a bug report.
sparse_C = self.get_sparse_C(
After the above message, the program gets stuck and is then killed.
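For scale: the 65536 x 65536 matrix mentioned in that warning alone needs about 16 GiB in float32, which is consistent with the process being killed for lack of memory.

```python
# Memory needed for a dense 65536 x 65536 identity matrix in float32.
n = 65536
bytes_needed = n * n * 4            # 4 bytes per float32 element
print(bytes_needed / 2**30, "GiB")  # -> 16.0 GiB
```

This is why the warning recommends treating such a matrix as a sign of poor bound-computation performance.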
Thanks for your reply.
My model's structure is:
class MModel(nn.Module):
    def __init__(self):
        super(MModel, self).__init__()
        self.features = self._make_layer()
        self.linear1 = nn.Linear(512, 100)
        self.dropout = nn.Dropout(0.5)
        self.linear2 = nn.Linear(100, 10)
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.xavier_normal_(m.weight, gain=1)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self):
        layers = []
        in_planes = 1
        planes = 32
        kernel_size = [3, 2]
        pool_size = [2, 2]
        padding = [1, 1]
        stride = [1, 2]
        for i in range(3):
            if i != 0:
                in_planes = 32
                kernel_size = [3, 1]
                pool_size = [2, 1]
                padding = [1, 0]
                stride = [1, 1]
            layers += [nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)]
            layers += [nn.BatchNorm2d(planes)]
            for j in range(4):
                layers += [nn.Conv2d(planes, planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)]
                layers += [nn.BatchNorm2d(planes)]
                layers += [nn.ReLU(inplace=True)]
            layers += [nn.MaxPool2d(pool_size, stride=pool_size)]
        planes = 16
        kernel_size = [3, 1]
        pool_size = [2, 1]
        padding = [1, 0]
        stride = [2, 1]
        layers += [nn.Conv2d(32, planes, kernel_size=1, bias=False)]
        layers += [nn.BatchNorm2d(planes)]
        layers += [nn.ReLU(inplace=True)]
        for k in range(2):
            layers += [nn.Conv2d(planes, planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)]
            layers += [nn.BatchNorm2d(planes)]
            layers += [nn.ReLU(inplace=True)]
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.features(x)
        out = out.view(out.size(0), -1)
        out = F.relu(self.linear1(out))
        out = self.dropout(out)
        out = self.linear2(out)
        return out
and the config is:
model:
  name: MModel
  path: models/MModel_ckpt.pth
data:
  dataset: MM
  # mean: [0.4914, 0.4822, 0.4465]  # Mean for normalization.
  # std: [0.2471, 0.2435, 0.2616]  # Std for normalization.
  start: 0
  end: 40
specification:
  norm: .inf
  epsilon: 0.00784313725490196
attack:
  pgd_steps: 100
  pgd_restarts: 30
  pgd_order: skip
solver:
  batch_size: 2048
  alpha-crown:
    iteration: 100
    lr_alpha: 0.1
  beta-crown:
    lr_alpha: 0.01
    lr_beta: 0.05
    iteration: 20
timeout: 120
branching:
  reduceop: min
  method: kfsb
  candidates: 3
The shape of the data in the dataset is:
[1,1,1024,2]
The command is:
python abcrown.py --config exp_configs/MModel.yaml --conv_mode patches
The complete stack trace is:
Traceback (most recent call last):
  File "/alpha-beta-CROWN-main/complete_verifier/abcrown.py", line 655, in <module>
    main()
  File "/alpha-beta-CROWN-main/complete_verifier/abcrown.py", line 543, in main
    incomplete_verifier(model_ori, x, data_ub=data_max, data_lb=data_min, vnnlib=vnnlib)
  File "/alpha-beta-CROWN-main/complete_verifier/abcrown.py", line 75, in incomplete_verifier
    _, global_lb, _, _, _, mask, lA, lower_bounds, upper_bounds, pre_relu_indices, slope, history, attack_images = model.build_the_model(
  File "/alpha-beta-CROWN-main/complete_verifier/beta_CROWN_solver.py", line 1068, in build_the_model
    lb, ub, aux_reference_bounds = self.net.init_slope(
  File "/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/optimized_bounds.py", line 1054, in init_slope
    node.init_opt_parameters(start_nodes)
  File "/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/operators/pooling.py", line 50, in init_opt_parameters
    self.alpha[ns] = torch.empty(
TypeError: empty(): argument 'size' must be tuple of ints, but found element of type torch.Size at pos 2
When using --conv_mode matrix, as you say, it may easily run out of memory. Are there any papers describing this parameter? I would like to know how the patches mode increases efficiency.
Besides, are there any papers explaining why max-pooling is not a verification-friendly layer?
Thank you all for your help!
Is there any news about this issue? I have a similar problem.
@nbdyn @yusiyoh If you are still working on this, could you please send all the files necessary for me to test your models? For example, I currently don't have the MM dataset or the checkpoint to run your model. Also, could you try our latest code to see whether the issue persists?
When using --conv_mode matrix, as you say, it may easily run out of memory. Are there any papers describing this parameter? I would like to know how the patches mode increases efficiency.
I don't think we have a paper for it. It was introduced in auto_LiRPA. Basically, for a convolution, an output neuron only depends on a patch of the input, not every element of the input. Based on this property, the patches mode tracks only the necessary dependencies, which significantly reduces the memory cost.
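The locality property described above is easy to verify directly: for a 3x3 convolution without padding, output neuron (0, 0) reads only the 3x3 input patch in the top-left corner, so changing the rest of the input leaves it untouched. A sketch of the property (not of the patches implementation itself):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, kernel_size=3, bias=False)  # 3x3 kernel, no padding
x = torch.randn(1, 1, 8, 8)
y = conv(x)

# Perturb everything outside the 3x3 patch that output (0, 0) depends on.
x2 = x.clone()
x2[..., 3:, :] = torch.randn(1, 1, 5, 8)
x2[..., :, 3:] = torch.randn(1, 1, 8, 5)
y2 = conv(x2)

# Output neuron (0, 0) is unchanged, while other outputs change.
assert torch.allclose(y[..., 0, 0], y2[..., 0, 0])
assert not torch.allclose(y, y2)
```

The patches mode exploits exactly this structure, storing per-neuron dependencies as small patches instead of one dense matrix over all input elements.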
Besides, are there any papers explaining why max-pooling is not a verification-friendly layer?
I am not aware of such a paper. But max pooling is nonlinear and hard to bound tightly, while average pooling is simply linear.
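The difference is easy to see numerically: average pooling commutes with addition (it is a linear map), while max pooling only satisfies a subadditive inequality, which is why any linear relaxation of it must be looser. A minimal sketch:

```python
import torch
import torch.nn.functional as F

# Average pooling is linear: pool(a + b) == pool(a) + pool(b).
torch.manual_seed(0)
a = torch.randn(1, 1, 4, 4)
b = torch.randn(1, 1, 4, 4)
assert torch.allclose(F.avg_pool2d(a + b, 2),
                      F.avg_pool2d(a, 2) + F.avg_pool2d(b, 2))

# Max pooling is not: max(a + b) <= max(a) + max(b), with equality
# only when the argmax positions happen to coincide.
a = torch.tensor([[[[1., 0.], [0., 0.]]]])  # window max at top-left
b = torch.tensor([[[[0., 1.], [0., 0.]]]])  # window max at top-right
assert F.max_pool2d(a + b, 2).item() == 1.0                     # max(a + b) = 1
assert (F.max_pool2d(a, 2) + F.max_pool2d(b, 2)).item() == 2.0  # max(a) + max(b) = 2
```

A linear layer can be propagated through bound computations exactly, whereas each max-pooling window needs its own convex relaxation.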