
odconv's People

Contributors

chaoli-ai, yaoanbang


odconv's Issues

training error

Hello, I would like to ask you a question. During training I get this error: TypeError: __init__() got an unexpected keyword argument 'reduction'. I can't think of a solution; do you have any insight?
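For what it's worth, this error simply means the ODConv2d constructor in the code version being run does not declare a `reduction` parameter (e.g. an older copy of odconv.py). A minimal reproduction with a hypothetical stub, not the real class:

```python
# Minimal reproduction of this class of error (ODConv2d here is a
# hypothetical stub): passing a keyword argument that __init__ does
# not declare raises exactly this TypeError.
class ODConv2d:
    def __init__(self, in_planes, out_planes, kernel_size):
        self.shape = (in_planes, out_planes, kernel_size)

try:
    ODConv2d(64, 64, 3, reduction=0.0625)  # 'reduction' not accepted
except TypeError as e:
    print(e)
```

Comparing the constructor signature in your local odconv.py against the call site is usually enough to spot the mismatch.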

Error while using torchsummary

Thanks for your novel work!
But I ran into errors while using torchsummary's summary() to count the parameters of ODConv.
The problem seems to be caused by the output of self.attention(x):

File "/workspace/PanoFormer/PanoFormer/network/SphereConv2d.py", line 345, in _forward_impl_common
  channel_attention, filter_attention, spatial_attention, kernel_attention = self.attention(x)
File "/opt/conda/envs/panoformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1547, in _call_impl
  hook_result = hook(self, args, result)
File "/opt/conda/envs/panoformer/lib/python3.8/site-packages/torchsummary/torchsummary.py", line 22, in hook
  summary[m_key]["output_shape"] = [
File "/opt/conda/envs/panoformer/lib/python3.8/site-packages/torchsummary/torchsummary.py", line 23, in <listcomp>
  [-1] + list(o.size())[1:] for o in output
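A plausible mechanism (an assumption, not verified against the repo): torchsummary's hook runs `[-1] + list(o.size())[1:]` for every element of a module's output, so it breaks whenever self.attention returns something that is not a Tensor (e.g. a constant attention value in some configurations):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the attention module: a 4-tuple output in
# which some elements are plain floats, which have no .size().
class AttentionLike(nn.Module):
    def forward(self, x):
        s = torch.sigmoid(x)
        return s, s, 1.0, 1.0

out = AttentionLike()(torch.randn(2, 4))
shapes = [list(o.size()) if torch.is_tensor(o) else None for o in out]
print(shapes)  # [[2, 4], [2, 4], None, None]
```

Tools that only hook tensors (e.g. parameter counting via `sum(p.numel() for p in model.parameters())`) sidestep this entirely.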

Batch_size

batch_size=4 runs fine, but batch_size=1 fails. What is the reason?

I would like to ask how to use the model

I downloaded the model and got an 'archive' folder. When I try to load it with torch.load, I get the error '_pickle.UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified.' I want a pre-trained backbone (the best .pth file) to use in my other tasks. What should I do?
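A sketch of the intended load path. The 'archive' folder is just the internal layout of the zip file that torch.save produces; unzipping the .pth and unpickling its contents manually is what triggers the persistent_load error. Pass the original .pth path (or a buffer) straight to torch.load:

```python
import io
import torch
import torch.nn as nn

# Save/load round trip in torch's zip checkpoint format. The buffer
# stands in for the downloaded .pth file.
model = nn.Conv2d(3, 8, 3)
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)
state = torch.load(buf, map_location="cpu")
model.load_state_dict(state)
print(sorted(state.keys()))  # ['bias', 'weight']
```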

Encountered an issue in the Attention part of odconv.py

When I try to apply ODConv to YOLOv7, the following error appears:

File "D:\YOLOv7-ODConv\models\ODConv.py", line 81, in forward
  x = self.bn(x)
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
  return forward_call(*input, **kwargs)
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\modules\batchnorm.py", line 182, in forward
  self.eps,
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\functional.py", line 2448, in batch_norm
  _verify_batch_size(input.size())
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\functional.py", line 2416, in _verify_batch_size
  raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 16, 1, 1])

I am relatively new to this field; I'd be glad to elaborate, but I have no idea what else to report.
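The input size torch.Size([1, 16, 1, 1]) in the message explains the failure: the attention branch global-average-pools to [N, C, 1, 1], so with batch size 1 BatchNorm2d sees a single value per channel and cannot compute batch statistics in training mode. A minimal reproduction:

```python
import torch
import torch.nn as nn

# BatchNorm2d over a [1, 16, 1, 1] input: one value per channel, so
# training-mode batch statistics are undefined.
bn = nn.BatchNorm2d(16)
x = torch.randn(1, 16, 1, 1)

bn.train()
try:
    bn(x)
except ValueError as e:
    print("train mode:", e)

bn.eval()          # eval mode uses running statistics instead
y = bn(x)
print("eval mode:", tuple(y.shape))  # eval mode: (1, 16, 1, 1)
```

Common workarounds are a batch size greater than 1 during training, calling model.eval() for inference, or swapping the attention's BatchNorm for GroupNorm/LayerNorm (the last one changes the authors' design).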

1D convolution level

Thank you very much for the proposed multi-dimensional dynamic convolution, which is very innovative. One question: could you provide a 1D version of ODConv for reference?
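As a rough starting point only (not the authors' code; every name below is made up), a kernel-wise-only dynamic 1D conv in the spirit of ODConv might look like this; the channel, filter, and spatial attentions of the full design would still need to be ported:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: kernel_num candidate 1D kernels mixed per sample by a softmax
# attention, assuming groups=1 and an odd kernel size.
class DynConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, k, kernel_num=4):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(1e-2 * torch.randn(kernel_num, out_ch, in_ch, k))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(in_ch, kernel_num))

    def forward(self, x):                          # x: [N, C, L]
        n, c, l = x.shape
        a = F.softmax(self.attn(x), dim=1)         # [N, kernel_num]
        w = torch.einsum("nk,koil->noil", a, self.weight)   # [N, O, I, k]
        # batch the per-sample kernels via the grouped-conv trick
        out = F.conv1d(x.reshape(1, n * c, l), w.reshape(-1, c, self.k),
                       padding=self.k // 2, groups=n)
        return out.reshape(n, -1, out.size(-1))

y = DynConv1d(8, 16, 3)(torch.randn(2, 8, 50))
print(tuple(y.shape))  # (2, 16, 50)
```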

kernel_num = 4 raises an input-size mismatch

Configuration: in_channels=128, out_channels=128, kernel_size=1, stride=1, padding=0, dilation=1, groups=1.
With kernel_num=1 the model trains normally; with kernel_num=4 it raises:

    output = output.view(batch_size, self.out_planes, output.size(-2), output.size(-1))
RuntimeError: shape '[16, 128, 160, 160]' is invalid for input of size 3276800

In the debugger, the input x has shape [16, 128, 160, 160] and becomes [1, 2048, 160, 160] after the reshape.
self.weight has shape [4, 128, 128, 1, 1].

aggregate_weight has shape [128, 128, 1, 1].

After F.conv2d, output has shape [1, 128, 160, 160],

which does not match the shape required by the next step, output.view(batch_size, self.out_planes, output.size(-2), output.size(-1)).
Where is the problem?
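For reference, the grouped-conv batching that the view expects can be sketched as follows (spatial size reduced to 8×8 for brevity; the shapes are the point). With per-sample kernels, x becomes [1, N*C, H, W] and the aggregated weight must keep the batch dimension, [N*out_planes, C//groups, kh, kw], convolved with groups = N * groups. An aggregate_weight of [128, 128, 1, 1] has lost that batch dimension, which is why the output comes back with batch 1 and the view fails:

```python
import torch
import torch.nn.functional as F

# Shape sketch of batching 16 per-sample 1x1 kernels through one
# grouped conv2d call.
n, c_in, c_out = 16, 128, 128
x = torch.randn(n, c_in, 8, 8).reshape(1, n * c_in, 8, 8)
weight = torch.randn(n * c_out, c_in, 1, 1)   # per-sample kernels stacked
out = F.conv2d(x, weight, groups=n)
out = out.view(n, c_out, out.size(-2), out.size(-1))
print(tuple(out.shape))  # (16, 128, 8, 8)
```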

How to solve this problem: conv2d() received an invalid combination of arguments

TypeError: conv2d() received an invalid combination of arguments - got (Tensor, weight=Tensor, bias=NoneType, stride=float, padding=int, dilation=int, groups=int), but expected one of:

  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (Tensor, weight=Tensor, !bias=NoneType!, !stride=float!, !padding=int!, !dilation=int!, groups=int)
  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (Tensor, weight=Tensor, !bias=NoneType!, !stride=float!, !padding=int!, !dilation=int!, groups=int)

The offending call:

output = F.conv2d(x, weight=self.weight.squeeze(dim=0), bias=None, stride=self.stride, padding=self.padding,
dilation=self.dilation, groups=self.groups)
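The `!stride=float!` marker in the message is the giveaway: F.conv2d accepts only int (or tuple-of-int) stride/padding/dilation, so a float stride — e.g. produced by a division somewhere upstream — triggers exactly this TypeError. Casting it to int fixes the call:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)

try:
    F.conv2d(x, w, bias=None, stride=1.0, padding=1)   # float stride
except TypeError as e:
    print(type(e).__name__)

# int stride: the call succeeds
out = F.conv2d(x, w, bias=None, stride=int(1.0), padding=1)
print(tuple(out.shape))  # (1, 4, 8, 8)
```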

About update_temperature!

It seems you did not implement the update_temperature function in this code version, right?
The paper updates T from 30 to 1, i.e. 30, 27, 24, ..., 3, 1, 1, 1, ..., 1.
But the Attention class in your code fixes self.temperature = 1.0.
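A sketch of the annealing schedule described above (an assumption about the intended training recipe, not the authors' code): decrease the softmax temperature by 3 per epoch from 30, then hold it at 1.

```python
# Linear temperature annealing: 30, 27, ..., 3, then clamped at 1.
def temperature(epoch, t0=30.0, step=3.0, t_min=1.0):
    return max(t0 - step * epoch, t_min)

print([temperature(e) for e in range(12)])
# [30.0, 27.0, 24.0, 21.0, 18.0, 15.0, 12.0, 9.0, 6.0, 3.0, 1.0, 1.0]
```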

Hello. First, thanks for your earlier answer. Calling ODConv now produces a channel-count error, as follows:

File "E:\paper_code\paper_3\Paper_6cls\model\module.py", line 272, in forward
return self._forward_impl(x)
File "E:\paper_code\paper_3\Paper_6cls\model\module.py", line 250, in _forward_impl_common
channel_attention, filter_attention, spatial_attention, kernel_attention = self.attention(x)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "E:\paper_code\paper_3\Paper_6cls\model\module.py", line 212, in forward
x = self.bn(x)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\modules\batchnorm.py", line 140, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\functional.py", line 2144, in batch_norm
_verify_batch_size(input.size())
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\functional.py", line 2111, in _verify_batch_size
raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 16, 1, 1])

How should I solve this? Thanks.

MAdds

Dear authors,
Can you tell me how you calculate MAdds?
There is a lot of conflicting information online about how to compute this.

Thank you very much!
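One common convention (an assumption here; the paper's exact counting rule is not stated in this thread) counts one multiply-add as a single operation, giving for a conv layer:

```python
# MAdds of a conv layer under the "one multiply-add = one op" convention:
# output_h * output_w * out_ch * (in_ch // groups) * kh * kw.
def conv_madds(h_out, w_out, c_in, c_out, kh, kw, groups=1):
    return h_out * w_out * c_out * (c_in // groups) * kh * kw

# e.g. a 3x3 conv, 64 -> 64 channels, on a 56x56 output map
print(conv_madds(56, 56, 64, 64, 3, 3))  # 115605504
```

Note that some tools report FLOPs = 2 × MAdds, which accounts for much of the confusion online.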

After replacing Conv2d with ODConv2d, how to fuse conv and BN?

My network is structured as Conv-BN-ReLU.
I replaced nn.Conv2d with ODConv2d and want to fuse the conv and batch norm at inference time.

After switching to ODConv, how can I fuse the Conv and BN layers at inference time as before, in CNNs with Conv-BN-ReLU/SiLU/GELU blocks?
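For the plain Conv-BN case, the standard folding is below (a sketch; because ODConv's effective kernel is input-dependent, BN can at best be folded into each static candidate kernel, which this does not attempt):

```python
import torch
import torch.nn as nn

# Fold an eval-mode BatchNorm2d into a static Conv2d:
# w' = w * gamma / sqrt(var + eps),  b' = beta + (b - mean) * gamma / sqrt(var + eps)
def fuse_conv_bn(conv, bn):
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    scale = bn.weight / (bn.running_var + bn.eps).sqrt()
    fused.weight.data = conv.weight * scale.reshape(-1, 1, 1, 1)
    b = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = bn.bias + (b - bn.running_mean) * scale
    return fused

conv, bn = nn.Conv2d(3, 8, 3, padding=1, bias=False), nn.BatchNorm2d(8)
conv.eval(); bn.eval()
fused = fuse_conv_bn(conv, bn)
x = torch.randn(1, 3, 8, 8)
print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))  # True
```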

Calling convention

Can ODConv be called the same way as functional.conv2d?

Hello, a deepcopy error occurs

y = _reconstruct(x, memo, *rv)

File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 153, in deepcopy
y = copier(memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\site-packages\torch\tensor.py", line 55, in deepcopy
raise RuntimeError("Only Tensors created explicitly by the user "
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment
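A minimal reproduction of this error class (an assumption about the cause, not a diagnosis of the YOLOv6 code): keeping a non-leaf tensor — one produced by an operation on a Parameter, still attached to the autograd graph — as a module attribute makes copy.deepcopy fail. Detaching it, or registering it as a buffer, fixes the copy:

```python
import copy
import torch
import torch.nn as nn

# A module caching a non-leaf tensor: w * 2 requires grad and is not a
# graph leaf, so Tensor's deepcopy protocol rejects it.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(4))
        self.cache = self.w * 2          # non-leaf tensor attribute

m = M()
try:
    copy.deepcopy(m)
except RuntimeError as e:
    print("deepcopy failed:", e)

m.cache = m.cache.detach()               # detached tensors are leaves
copy.deepcopy(m)
print("ok after detach")
```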

about bias

Bias is set to None in your code. Is this the same setting as in the paper's experiments? I couldn't find a corresponding description in the paper. If so, could you briefly explain the reason? Thanks!

output = F.conv2d(x, weight=aggregate_weight, bias=None, stride=self.stride, padding=self.padding,
dilation=self.dilation, groups=self.groups)

model weight

Could you provide a download link on Baidu Cloud for the model weights? Thank you very much!

release code

Hi, great job!
I want to know when you will release your code and models.
I have reproduced your method but got inferior results compared with DyConv; maybe some implementation details are missing. Looking forward to your release!
Thanks!
