
spike-element-wise-resnet's People

Contributors

fangwei123456


spike-element-wise-resnet's Issues

About _C_neuron.ParametricLIF_hard_reset_fptt_with_grad

class ParametricLIFMultiStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x_seq, v, v_threshold, v_reset, alpha, detach_reset, grad_surrogate_function_index, reciprocal_tau, detach_input):
        if v_reset is None:
            raise NotImplementedError

        spike_seq, v_next, grad_s_to_h, grad_v_to_h, grad_h_to_rtau = _C_neuron.ParametricLIF_hard_reset_fptt_with_grad(x_seq, v, v_threshold, v_reset, alpha, detach_reset, grad_surrogate_function_index, reciprocal_tau, detach_input)
        ctx.save_for_backward(grad_s_to_h, grad_v_to_h, grad_h_to_rtau)
        ctx.reciprocal_tau = reciprocal_tau
        ctx.detach_input = detach_input

        return spike_seq, v_next

Hello, I have two questions I hope you can help me with:

  1. In this code, what is the internal data-processing flow of _C_neuron.ParametricLIF_hard_reset_fptt_with_grad, and what do the variables it returns mean?
  2. While debugging I found that the x_seq passed to _C_neuron.ParametricLIF_hard_reset_fptt_with_grad has shape [T,B,C,H,W]. Suppose the network is two layers: conv1-LIFnode1-conv2-LIFnode2. My understanding of the usual data flow is that the data for each time step, [1,B,C,H,W], passes through conv1-LIFnode1-conv2-LIFnode2 in order. But the current implementation seems to push all of the data [T,B,C,H,W] through conv1-LIFnode1 first, and only then through conv2-LIFnode2. Is my understanding correct, and is there any difference between the two orderings? (See the sketch below.)
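
The difference between the two orderings can be checked directly: a convolution is stateless and a neuron's output at time t depends only on its inputs up to time t, so step-by-step and layer-by-layer propagation give the same result. Below is a minimal sketch (using the pure-Python clock_driven.neuron.IFNode rather than the CUDA kernel, and toy shapes that are not from the repository) illustrating this:

import torch
import torch.nn as nn
from spikingjelly.clock_driven import neuron, functional

T, B, C, H, W = 4, 2, 3, 8, 8
x_seq = torch.rand(T, B, C, H, W)

conv1 = nn.Conv2d(C, C, 3, padding=1, bias=False)
conv2 = nn.Conv2d(C, C, 3, padding=1, bias=False)
sn1, sn2 = neuron.IFNode(), neuron.IFNode()

# Step-by-step: each time step flows through the whole network before the next one.
out_step = torch.stack([sn2(conv2(sn1(conv1(x_seq[t])))) for t in range(T)])

# Reset the membrane potentials so both runs start from the same initial state.
functional.reset_net(nn.Sequential(sn1, sn2))

# Layer-by-layer (what the multi-step kernels do): the whole [T, B, C, H, W] sequence
# passes through conv1 + sn1 first, and only then through conv2 + sn2.
h_seq = torch.stack([sn1(conv1(x_seq[t])) for t in range(T)])
out_layer = torch.stack([sn2(conv2(h_seq[t])) for t in range(T)])

print(torch.equal(out_step, out_layer))  # expected: True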

Computing GSOPs

Hi, I saw in your rebuttal on OpenReview that you provided an energy-consumption calculation. Could you explain how the GSOPs of an SNN are computed, and whether they are related to the FLOPs of an ANN with the same architecture? Thanks!
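
One estimate that is common in the SNN literature (not necessarily the exact procedure used in the rebuttal) relates the two quantities as SyOPs ≈ T × firing_rate × FLOPs: the ANN FLOPs of a layer, scaled by the mean spike rate of the layer's input and the number of time steps. A rough sketch with a hypothetical convolution layer:

import torch

def conv_flops(in_c, out_c, k, out_h, out_w):
    # multiply-accumulate count of a k x k convolution producing an out_h x out_w map
    return in_c * out_c * k * k * out_h * out_w

def estimated_syops(spike_seq, flops):
    # spike_seq: [T, B, ...] spike tensor feeding the layer
    firing_rate = spike_seq.float().mean().item()  # spikes per neuron per time step
    T = spike_seq.shape[0]
    return T * firing_rate * flops

spikes = (torch.rand(4, 1, 64, 56, 56) < 0.1).float()  # toy input with ~10% firing rate
print(estimated_syops(spikes, conv_flops(64, 64, 3, 56, 56)) / 1e9, "GSOPs (rough estimate)")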

About grad function

Dear author,
What kind of gradient (surrogate) function is used in SEW's cext_neuron.MultiStepIFNode? Sigmoid?
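
For reference, in the pure-Python clock_driven API the surrogate gradient is passed explicitly (the CUDA cext neuron selects it by an index instead, as the grad_surrogate_function_index argument above suggests). This sketch only shows how a surrogate such as ATan or Sigmoid would be specified, not which one the paper actually used:

from spikingjelly.clock_driven import layer, neuron, surrogate

sn1 = layer.MultiStepContainer(
    neuron.IFNode(surrogate_function=surrogate.ATan(), detach_reset=True)
)
# surrogate.Sigmoid() could be substituted for surrogate.ATan() here.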

Training time on ImageNet

Thanks for your great work!

Could you please tell me how long it takes to train a model on ImageNet using 8 GPUs?

Does the code support single-step inference?

Hello. After reading the code, I noticed that inference uses the multi-step mode. Can I use
from spikingjelly.clock_driven import functional
functional.set_step_mode(net, 's')
to conveniently switch the network to single-step inference?
If not, could you suggest a workable way to run inference in single-step mode? Thank you!
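
If set_step_mode is not available in the spikingjelly version this repository targets, one possible workaround is sketched below, under the assumptions that the model's forward accepts a [T, N, C, H, W] sequence directly and returns a [N, num_classes] tensor for it (this is not a supported API of the repository): keep the multi-step network but feed it one frame at a time as a length-1 sequence.

import torch
from spikingjelly.clock_driven import functional

@torch.no_grad()
def stepwise_logits(net, x_seq):
    # x_seq: [T, N, C, H, W]
    functional.reset_net(net)
    out = 0.
    for t in range(x_seq.shape[0]):
        out = out + net(x_seq[t:t + 1])  # a sequence of length 1 at every call
    functional.reset_net(net)
    return out / x_seq.shape[0]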

SEW ResNet training

Hi, thanks for open-sourcing this great library. I'm using version 0.0.0.0.12 to run some experiments with SEW ResNet.
I followed the training code from https://spikingjelly.readthedocs.io/zh_CN/0.0.0.0.12/clock_driven_en/16_train_large_scale_snn.html
The only change is the line below, i.e. using sew_resnet instead of spiking_resnet:
net = sew_resnet.multi_step_sew_resnet18(T=4, pretrained=True, multi_step_neuron=neuron.MultiStepIFNode, surrogate_function=surrogate.ATan(), detach_reset=True, backend='cupy', cnf='ADD')

  1. Are the default parameters good for SEW ResNet? The demo trains a spiking ResNet, so I'm not sure the same parameters work well for SEW ResNet. Is anything missing for SEW ResNet?
  2. I noticed that both spiking_resnet and sew_resnet have a pretrained argument. Does loading a pretrained DNN ResNet really help? I set it to True but see no observable benefit; accuracy still starts from 0 during training. I would expect some accuracy degradation, but not that much. Is that normal? (A quick sanity check is sketched below.)

Thanks!
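
Regarding point 2, assuming the pretrained flag loads the torchvision ResNet-18 weights into layers with matching names (an assumption based on what the flag suggests, not a statement about the repository's implementation), one quick sanity check is to compare the first convolution kernel with torchvision's:

import torch
import torchvision
import sew_resnet
from spikingjelly.clock_driven import neuron, surrogate

# arguments mirror the call above, minus backend='cupy'
snn = sew_resnet.multi_step_sew_resnet18(
    T=4, pretrained=True, multi_step_neuron=neuron.MultiStepIFNode,
    surrogate_function=surrogate.ATan(), detach_reset=True, cnf='ADD')
ann = torchvision.models.resnet18(pretrained=True)
# attribute names are assumed to mirror torchvision's ResNet (conv1, layer1, ...)
print(torch.allclose(snn.conv1.weight, ann.conv1.weight))  # True if the weights were loaded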

The test acc1 on DVS Gesture is only 80%.

Hello, I trained the model on DVS Gesture, but the test acc1 is around 80%, which is much lower than the accuracy in your paper. I used Adam with lr=0.001 and left the other settings at their defaults.
python train.py --tb --amp --output-dir ./logs --model SEWResNet --connect_f ADD --device cuda:0 --lr-step-size 64 --epoch 192 --T_train 12 --T 16 --data-path ./dataset --lr 0.001 --adam
The training loss and other results are as follows:
[screenshots of the training loss and test accuracy curves]

RuntimeError: Trying to backward through the graph a second time

I got an error during backpropagation when training with SEW-ResNet as the backbone. The error is as follows:
File "E:\SEW-RESNet\utils\utils_fit.py", line 66, in fit_one_epoch
loss.backward()
File "F:\Python39\lib\site-packages\torch_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "F:\Python39\lib\site-packages\torch\autograd_init_.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "F:\Python39\lib\site-packages\torch\autograd\function.py", line 253, in apply
return user_fn(self, *args)
File "F:\Python39\lib\site-packages\spikingjelly\clock_driven\neuron_kernel.py", line 351, in backward
h_seq, spike_seq = ctx.saved_tensors
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
After adding retain_graph=True, the problem with the missing saved values still occurs.
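
A common cause of this error with stateful SNN layers (a suggestion, not a diagnosis of this particular project) is that the neurons' membrane potentials from the previous batch are still attached to the old, already-freed graph. Resetting the network after every optimizer step usually resolves it; a self-contained toy loop:

import torch
import torch.nn as nn
from spikingjelly.clock_driven import neuron, functional

net = nn.Sequential(nn.Linear(10, 5), neuron.IFNode())
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
criterion = nn.MSELoss()

for _ in range(3):
    x, y = torch.rand(4, 10), torch.rand(4, 5)
    optimizer.zero_grad()
    loss = criterion(net(x), y)
    loss.backward()
    optimizer.step()
    functional.reset_net(net)  # without this, the next backward tries to reach the freed graph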

Will you adapt this code to the latest SpikingJelly?

I'm sorry, but some of this code is not compatible with the latest SpikingJelly framework. For example,

from spikingjelly.cext.neuron import MultiStepParametricLIFNode

is no longer valid; I found the replacement in activation_based.neuron.ParametricLIFNode. Would you consider giving the code long-term support by updating it? I would appreciate it.
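
For reference, a sketch of the import migration described above (module paths in recent spikingjelly releases; exact keyword arguments may differ between versions, so treat this as an assumption rather than a drop-in replacement):

# old (the spikingjelly version this repository targets):
#   from spikingjelly.cext.neuron import MultiStepParametricLIFNode
# newer releases:
from spikingjelly.activation_based import neuron, functional

sn = neuron.ParametricLIFNode(detach_reset=True)
functional.set_step_mode(sn, 'm')  # multi-step mode replaces the separate MultiStep* classes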

Problem setting up spikingjelly in a conda environment

(This is more a suggestion than an issue :)

Thanks a lot for sharing your code.

However, there is no guide on how to set up a conda environment to run the code. I ran into many problems installing the specific spikingjelly version mentioned in readme.md, including: python setup.py install requires python=3.8; the install fails when torch is 1.4/1.8/1.12; and train.py complains that tensorboard is not available.

Could you please provide more details on the dependencies?

// After spending a lot of time on it, I finally arrived at a working setup for my machine (Linux x64): python=3.8, pytorch=1.6.0, cudatoolkit=10.2, tensorboard=1

Question about sew block

Hello, during my experiments I found that the output of a SEW block is not only 0 or 1 (i.e. a pulse) but can also be a positive integer such as 2, 3, 4, 5, etc. This is indeed what the method itself implies, but these values will then be multiplied in the convolution with the next layer. Doesn't this violate the original intention of spiking neural networks, which is to transmit information as binary pulses? Will SEW ResNet still be efficient when implemented on an actual neuromorphic chip? Could it be considered an ANN rather than an SNN?
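
For what it's worth, the non-binary values follow directly from the ADD connect function: the SEW block sums two binary spike tensors element-wise, so a single block already yields values in {0, 1, 2}, and stacking blocks lets the residual path accumulate larger integers. A tiny illustration (a sketch, not the repository's code):

import torch

spike_out = torch.tensor([0., 1., 1., 0.])       # output of the block's last spiking layer
spike_identity = torch.tensor([1., 1., 0., 0.])  # spikes on the identity/shortcut path
print(spike_out + spike_identity)                # tensor([1., 2., 1., 0.]) -> no longer binary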

How to use pretrained weights?

As the weights were saved via state_dict:

import torch
import spiking_resnet

net = spiking_resnet.spiking_resnet34()
net.load_state_dict(torch.load("spiking_resnet_34_checkpoint_319.pth")['model'])
To set up the environment:

  1. Use an Anaconda virtual environment, to avoid permission and dependency problems and to keep this specific version of spikingjelly isolated in its own environment.

  2. git clone https://github.com/fangwei123456/spikingjelly.git

  3. cd spikingjelly

  4. git reset --hard 2958519df84ad77c316c6e6fbfac96fb2e5f59a3 # because the code was written for an older version of spikingjelly

  5. python setup.py install

If you don't get any errors after running the commands above, you're fine.
You must run these steps if you want to train from scratch or reproduce the results.

If you get the following Error message:

  from spikingjelly.cext import neuron as cext_neuron
  File "/home/Desktop/SEW_2/spikingjelly/spikingjelly/cext/neuron.py", line 5, in <module>
    import _C_neuron
ModuleNotFoundError: No module named '_C_neuron'

Then you should do the following steps:

  1. Replace cext_neuron.MultiStepIFNode with:
#from spikingjelly.cext import neuron as cext_neuron
from spikingjelly.clock_driven import neuron, layer, surrogate
#self.sn1 = cext_neuron.MultiStepIFNode(detach_reset=True)
self.sn1 = layer.MultiStepContainer(neuron.IFNode(detach_reset=True, surrogate_function=surrogate.ATan()))

Because I think "git clone"(setup.py) installs without CUDA Extension by default.

CUDA_HOME is None. Install Without CUDA Extension
running install
running bdist_egg
running egg_info
creating spikingjelly.egg-info
writing spikingjelly.egg-info/PKG-INFO
writing dependency_links to spikingjelly.egg-info/dependency_links.txt
......

.....
Using /home/anaconda3/envs/sew/lib/python3.9/site-packages/typing_extensions-4.1.1-py3.9.egg
Searching for six==1.16.0
Best match: six 1.16.0
Processing six-1.16.0-py3.9.egg
six 1.16.0 is already the active version in easy-install.pth

Using /home/anaconda3/envs/sew/lib/python3.9/site-packages/six-1.16.0-py3.9.egg
Finished processing dependencies for spikingjelly==0.0.0.0.4
  2. Make the following change in surrogate.py:
#self.register_buffer('alpha', torch.tensor(alpha, dtype=torch.float)) 
self.alpha = alpha

If you follow these steps, you will be able to use the pre-trained weights.
Thank you, Dr. Wei, for your help and time.

I'm writing this up to save others' time, and Dr. Wei's time, in the future :)
Hope this helps.

P.S:
If the weights had instead been saved as

torch.save({
    'net': model_without_ddp,
    'model': model_without_ddp.state_dict(),
    'optimizer': optimizer.state_dict(),
    'lr_scheduler': lr_scheduler.state_dict(),
    'epoch': epoch,
    'args': args,
    'max_test_acc1': max_test_acc1,
    }, check_point_max_path)

net = torch.load("spiking_resnet_34_checkpoint_319.pth")['net']

Wouldn't we have the above issues? Thank you

spikingjelly (installation question)

Hello, where should I download the spikingjelly package? Does it need to be downloaded into the configured Anaconda environment, or can it go into any folder?

A question about the derivative plots

Hello Dr. Fang, I have a question about your paper. In the figure below, block 11 should be the first block of stage 2, right? The input of that first block, i.e. S11, goes through a downsample path before being added to the output, and that downsample path contains a convolution layer. Why, then, is the derivative with respect to S11 the same as the derivative with respect to S12? Isn't there still a convolution layer in between? I'm not sure what I'm missing and would appreciate a hint. Also, if the g function were replaced by OR, would that help gradient propagation?
[figure referenced in the question above]

Problems compiling the .cu files

E:\NVIDIACUDA\CUDA\bin\nvcc --generate-dependencies-with-compile --dependency-output C:\Users\hp\Desktop\spikingjelly\build\temp.win-amd64-cpython-38\Release\Users\hp\Desktop\spikingjelly\spikingjelly\cext\csrc\neuron\neuron_backward_kernel.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -DWITH_CUDA -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\TH -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\THC -IE:\NVIDIACUDA\CUDA\include -IC:\Users\hp\anaconda3\envs\torch1.12\include -IC:\Users\hp\anaconda3\envs\torch1.12\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" -c C:\Users\hp\Desktop\spikingjelly\spikingjelly\cext\csrc\neuron\neuron_backward_kernel.cu -o C:\Users\hp\Desktop\spikingjelly\build\temp.win-amd64-cpython-38\Release\Users\hp\Desktop\spikingjelly\spikingjelly\cext\csrc\neuron\neuron_backward_kernel.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -use_fast_math -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=C_neuron -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75
FAILED: C:/Users/hp/Desktop/spikingjelly/build/temp.win-amd64-cpython-38/Release/Users/hp/Desktop/spikingjelly/spikingjelly/cext/csrc/neuron/neuron_backward_kernel.obj
C:/Users/hp/anaconda3/envs/torch1.12/lib/site-packages/torch/include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
c:\users\hp\anaconda3\envs\torch1.12\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:/Users/hp/anaconda3/envs/torch1.12/lib/site-packages/torch/include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
c:\users\hp\anaconda3\envs\torch1.12\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
E:/NVIDIACUDA/CUDA/include\thrust/detail/config/cpp_dialect.h:118: warning: Thrust requires at least MSVC 2019 (19.20/16.0/14.20). MSVC 2017 is deprecated but still supported. MSVC 2017 support will be removed in a future release. Define THRUST_IGNORE_DEPRECATED_CPP_DIALECT to suppress this message.
C:/Users/hp/anaconda3/envs/torch1.12/lib/site-packages/torch/include\c10/core/SymInt.h(84): warning #68-D: integer conversion resulted in a change of sign

c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\cast.h(1429): error: too few arguments for template template parameter "Tuple"
detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]"
(1507): here

c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\cast.h(1503): error: too few arguments for template template parameter "Tuple"
detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]"
(1507): here

2 errors detected in the compilation of "C:/Users/hp/Desktop/spikingjelly/spikingjelly/cext/csrc/neuron/neuron_backward_kernel.cu".
neuron_backward_kernel.cu
[3/3] E:\NVIDIACUDA\CUDA\bin\nvcc --generate-dependencies-with-compile --dependency-output C:\Users\hp\Desktop\spikingjelly\build\temp.win-amd64-cpython-38\Release\Users\hp\Desktop\spikingjelly\spikingjelly\cext\csrc\neuron\neuron_forward_kernel.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -DWITH_CUDA -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\TH -IC:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\THC -IE:\NVIDIACUDA\CUDA\include -IC:\Users\hp\anaconda3\envs\torch1.12\include -IC:\Users\hp\anaconda3\envs\torch1.12\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" -c C:\Users\hp\Desktop\spikingjelly\spikingjelly\cext\csrc\neuron\neuron_forward_kernel.cu -o C:\Users\hp\Desktop\spikingjelly\build\temp.win-amd64-cpython-38\Release\Users\hp\Desktop\spikingjelly\spikingjelly\cext\csrc\neuron\neuron_forward_kernel.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -use_fast_math -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=C_neuron -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75
FAILED: C:/Users/hp/Desktop/spikingjelly/build/temp.win-amd64-cpython-38/Release/Users/hp/Desktop/spikingjelly/spikingjelly/cext/csrc/neuron/neuron_forward_kernel.obj
C:/Users/hp/anaconda3/envs/torch1.12/lib/site-packages/torch/include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
c:\users\hp\anaconda3\envs\torch1.12\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:/Users/hp/anaconda3/envs/torch1.12/lib/site-packages/torch/include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
c:\users\hp\anaconda3\envs\torch1.12\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
E:/NVIDIACUDA/CUDA/include\thrust/detail/config/cpp_dialect.h:118: warning: Thrust requires at least MSVC 2019 (19.20/16.0/14.20). MSVC 2017 is deprecated but still supported. MSVC 2017 support will be removed in a future release. Define THRUST_IGNORE_DEPRECATED_CPP_DIALECT to suppress this message.
C:/Users/hp/anaconda3/envs/torch1.12/lib/site-packages/torch/include\c10/core/SymInt.h(84): warning #68-D: integer conversion resulted in a change of sign

c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\cast.h(1429): error: too few arguments for template template parameter "Tuple"
detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]"
(1507): here

c:\users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\include\pybind11\cast.h(1503): error: too few arguments for template template parameter "Tuple"
detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]"
(1507): here

2 errors detected in the compilation of "C:/Users/hp/Desktop/spikingjelly/spikingjelly/cext/csrc/neuron/neuron_forward_kernel.cu".
neuron_forward_kernel.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\utils\cpp_extension.py", line 1808, in _run_ninja_build
subprocess.run(
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File ".\setup.py", line 54, in
setup(
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_init_.py", line 87, in setup
return distutils.core.setup(**attrs)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\core.py", line 185, in setup
return run_commands(dist)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\dist.py", line 973, in run_commands
self.run_command(cmd)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\dist.py", line 992, in run_command
cmd_obj.run()
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools\command\develop.py", line 34, in run
self.install_for_development()
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools\command\develop.py", line 114, in install_for_development
self.run_command('build_ext')
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\dist.py", line 992, in run_command
cmd_obj.run()
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools\command\build_ext.py", line 79, in run
_build_ext.run(self)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\command\build_ext.py", line 346, in run
self.build_extensions()
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\utils\cpp_extension.py", line 765, in build_extensions
build_ext.build_extensions(self)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\command\build_ext.py", line 466, in build_extensions
self._build_extensions_serial()
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\command\build_ext.py", line 492, in _build_extensions_serial
self.build_extension(ext)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools\command\build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\setuptools_distutils\command\build_ext.py", line 547, in build_extension
objects = self.compiler.compile(
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\utils\cpp_extension.py", line 738, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\utils\cpp_extension.py", line 1487, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "C:\Users\hp\anaconda3\envs\torch1.12\lib\site-packages\torch\utils\cpp_extension.py", line 1824, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

Derivative computation again (it's me again)

My hook is written as below; I'm not sure whether it is the same as yours. gradIn[0][1] should be the gradient of the loss L with respect to the block input; I take its 2-norm, divide by the size of the tensor, and append the result to grad for plotting.

import numpy as np

def hook(module, gradIn, gradOut):
    shape = gradIn[0][1].shape
    num = shape[0] * shape[1] * shape[2] * shape[3]
    grad.append(np.linalg.norm(gradIn[0][1].detach().clone().cpu().double()) / num)
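
For completeness, here is a self-contained toy (a hypothetical module, not the SEW-ResNet model, and using register_full_backward_hook in place of the older register_backward_hook) showing how a hook of this kind can be attached and removed:

import numpy as np
import torch
import torch.nn as nn

grad = []

def norm_hook(module, grad_input, grad_output):
    g = grad_input[0]  # gradient of the loss w.r.t. the block's input
    grad.append(np.linalg.norm(g.detach().cpu().numpy()) / g.numel())

block = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.ReLU())
handle = block.register_full_backward_hook(norm_hook)

x = torch.rand(1, 3, 8, 8, requires_grad=True)
block(x).sum().backward()
handle.remove()
print(grad)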

md5 error for gesture_mapping.csv when loading the DVS Gesture dataset

I have already downloaded DvsGesture.tar.gz to the server and extracted it, but when the code loads the dataset it reports that the md5 of gesture_mapping.csv is wrong. The md5 check of DvsGesture.tar.gz itself passes, and I have not modified the extracted csv file. Has anyone run into this problem? How should I fix it? Thanks.
[screenshot of the md5 error]
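
One way to debug this is to compute the md5 of the extracted file yourself and compare it with the value the dataset loader expects (the path below is hypothetical; adjust it to your extract directory):

import hashlib

def md5_of(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

print(md5_of('./dataset/extract/DvsGesture/gesture_mapping.csv'))  # hypothetical path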
