misaogura / flashtorch
Visualization toolkit for neural networks in PyTorch! Demo
Home Page: https://youtu.be/18Iw4qYqfPo
License: MIT License
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
I am calling my custom CNN model and getting this error: TypeError: 'module' object is not callable
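A hedged illustration of one common cause of this TypeError (not necessarily the reporter's exact situation): Backprop needs an instantiated nn.Module, and passing the imported Python module (the .py file) instead produces exactly this message. MyCNN here is a hypothetical stand-in for the custom model.

import torch.nn as nn
from flashtorch.saliency import Backprop

class MyCNN(nn.Module):              # hypothetical stand-in for the custom model
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        x = self.conv(x).mean(dim=(2, 3))   # global average pool over H and W
        return self.fc(x)

# import my_cnn                      # e.g. importing the file that defines MyCNN
# backprop = Backprop(my_cnn)        # TypeError: 'module' object is not callable

model = MyCNN()                      # instantiate the network first
backprop = Backprop(model)           # Backprop expects a callable nn.Module instance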
Thanks for sharing such great work. I wonder whether it is possible to adapt the code to support the detection models from torchvision?
Hi, I was interested in visualizing a CNN I created. I was following the example provided but when I try to use my model instead of a pretrained model from torchvision I get:
File "cnn_layer_visualization.py", line 82, in <module>
gradients = backprop.calculate_gradients(input_, target_class)
File "C:\Users\Mike\Anaconda3\lib\site-packages\flashtorch\saliency\backprop.py", line 106, in calculate_gradients
output.backward(gradient=target)
File "C:\Users\Mike\Anaconda3\lib\site-packages\torch\tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\Mike\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: invalid gradient at index 0 - expected type torch.cuda.FloatTensor but got torch.FloatTensor
I looked through the source code a bit and saw that Backprop moves the network to cuda (if available), regardless of whether I move it there myself or not. Is this possibly a bug? I've had a bit of trouble with CUDA in the past, so it might be a problem on my end.
Thanks for your time, if there's any other information I can provide please let me know. Cheers.
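A hedged workaround sketch for the device mismatch above, assuming the model has been moved to the GPU and reusing the names input_, target_class and backprop from the traceback:

import torch

# Move the input to the same device as the model before computing gradients,
# since the error reports a CUDA model receiving a plain CPU FloatTensor.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
input_ = input_.to(device)
gradients = backprop.calculate_gradients(input_, target_class)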
The calculate_gradients function does not take in the use_gpu flag. Although the flag is accepted by the visualize function, it is not passed on to calculate_gradients.
Thanks a lot for developing the fantastic FlashTorch package!
I have a question about the inputs. My input contains two parts:
The additional node is fed into the middle layers with torch.cat.
In such case, can I use FlashTorch to obtain my saliency maps? Thanks a lot!
My network does binary classification for the detection of 11 different classes, e.g. it predicts whether or not there are apples, oranges, pineapples, and pears in the image I give it. So the output is a binary vector of length 11.
Can I use this project without modifying it?
I have checked and top_label will always be 9, so what I put for target_label will be ignored. I get the error
The predicted class index 9 does not equal the target class index 3. Calculating the gradient w.r.t. the predicted class.
So in my example, if the network outputs a binary vector of length 4 corresponding to whether or not apple, orange, pineapple, and pear are in the image, how can I make it so that when I set target = 3 the code will show the gradient corresponding to the task of detecting pineapples?
Also, I am using modified ResNet-18 (transfer learned).
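For a multi-label model like this, one hedged workaround (outside FlashTorch's Backprop, which compares the target against the argmax of the output) is to backpropagate from the single logit you care about. A sketch, assuming model is the modified ResNet-18 and image is a preprocessed (1, 3, H, W) tensor:

import torch

image = image.clone().requires_grad_()       # the input must require gradients
output = model(image)                        # shape (1, 4): apple, orange, pineapple, pear

target = 2                                   # pineapple, with 0-based indexing
output[0, target].backward()                 # gradient of that one logit w.r.t. the input

saliency = image.grad.abs().max(dim=1)[0]    # max over colour channels -> (1, H, W)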
Hi, can I compute the saliency maps for a group of images at the same time, and how?
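As far as I can tell, Backprop is written around a single (1, C, H, W) input, so a hedged sketch is to loop over the batch and collect one map per image; batch, backprop and target_class here are assumed names, not FlashTorch API:

saliency_maps = []
for i in range(batch.shape[0]):                        # batch: (N, 3, H, W)
    single = batch[i:i + 1].clone().requires_grad_()   # keep the leading batch dimension
    saliency_maps.append(backprop.calculate_gradients(single, target_class))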
Describe the bug
When testing the visualization tool using a custom pre-trained CNN that performs binary classification, the error "AttributeError: 'NoneType' object has no attribute 'shape'" crops up when trying to visualize an image from the dataset used to train the model.
To Reproduce
Steps to reproduce the behavior:
Load image using custom transformation:
from torchvision import transforms

transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
that pertains to the CNN model developed.
Load the image and apply the transformation:
import torch
from flashtorch.utils import load_image
from flashtorch.saliency import Backprop

backprop = Backprop(n)                     # n is the custom pre-trained CNN
image = load_image("img0042.png")
i = transform(image)
i = torch.unsqueeze(i, 0)                  # add the batch dimension
target_class = 0
gradients = backprop.calculate_gradients(i, target_class)
(general summary of the code)
AttributeError Traceback (most recent call last)
in
1 from torch.autograd import Variable
----> 2 gradients=backprop.calculate_gradients(i,0)
~\Anaconda3\lib\site-packages\flashtorch\saliency\backprop.py in calculate_gradients(self, input_, target_class, take_max, guided, use_gpu)
118 # Calculate gradients of the target class output w.r.t. input_
119
--> 120 output.backward(gradient=target)
121
122 # Detach the gradients from the graph and move to cpu
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
164 products. Defaults to False
.
165 """
--> 166 torch.autograd.backward(self, gradient, retain_graph, create_graph)
167
168 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101
~\Anaconda3\lib\site-packages\flashtorch\saliency\backprop.py in _record_gradients(module, grad_in, grad_out)
212 def _register_conv_hook(self):
213 def _record_gradients(module, grad_in, grad_out):
--> 214 if self.gradients.shape == grad_in[0].shape:
215 self.gradients = grad_in[0]
216
AttributeError: 'NoneType' object has no attribute 'shape'
Environment (please complete the following information):
I tried generating a saliency map using the following code:
backprop = Backprop(net)
target_class = 4
backprop.visualize(image, target_class, guided=True)
My net is a standard PyTorch neural network, and the image is a (1,3,84,84) image. However, I get the following error:
File "visualizations.py", line 89, in test
backprop.visualize(image, target_class, guided=True)
File "/home/user/.local/lib/python3.6/site-packages/flashtorch/saliency/backprop.py", line 168, in visualize
guided=guided)
File "/home/user/.local/lib/python3.6/site-packages/flashtorch/saliency/backprop.py", line 120, in calculate_gradients
output.backward(gradient=target)
File "/home/user/.local/lib/python3.6/site-packages/torch/tensor.py", line 150, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/user/.local/lib/python3.6/site-packages/torch/autograd/init.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
File "/home/user/.local/lib/python3.6/site-packages/flashtorch/saliency/backprop.py", line 214, in _record_gradients
if self.gradients.shape == grad_in[0].shape:
AttributeError: 'NoneType' object has no attribute 'shape'
Do I need to turn the image into a parameter with requires_grad = True?
Is this usually done by the load_image function? I found that this function was re-sizing my image to 224x224. Thanks for your help!
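A hedged sketch that may address both reports above: the backward hook compares shapes against grad_in, and the input needs to require gradients for anything to be recorded. The README-style pipeline uses flashtorch.utils.apply_transforms (which, I believe, also resizes to 224x224, possibly explaining the re-sizing mentioned above); calling requires_grad_() explicitly is harmless if the transform already set it. backprop and target_class are taken from the snippets above.

from flashtorch.utils import load_image, apply_transforms

image = load_image("img0042.png")       # PIL image
input_ = apply_transforms(image)        # (1, 3, 224, 224) tensor
input_ = input_.requires_grad_()        # make sure gradients w.r.t. the input exist
gradients = backprop.calculate_gradients(input_, target_class)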
Thanks so much for sharing your work and making it so easy...in theory...to use. However, I haven't been able to get it to work by running your Colab example(s).
Describe the bug
The two middle images involving gradients for each example, "Gradients across RGB channels" and "Max Gradients", appear as a uniform color. The RGB one is all grey and the Max one is all purple.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Gradient images should show content, such as colored pixels around the owl's eyes as in your Medium post. That is not what running the Colab demo gives me. See my sample screenshot below.
Environment (please complete the following information):
Colab. Whatever OS that's running.
Brand new installation of FlashTorch via !pip install flashtorch in the notebook. Looks like 0.1.3.
Looks like Torch 1.6
Additional context
From pip (after I re-ran it a second time):
Collecting flashtorch
Downloading https://files.pythonhosted.org/packages/de/cb/482274e95812c9a17bd156956bef80a8e2683a2b198a505fb922f1c01a71/flashtorch-0.1.3.tar.gz
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from flashtorch) (3.2.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from flashtorch) (1.18.5)
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from flashtorch) (7.0.0)
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (from flashtorch) (1.6.0+cu101)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (from flashtorch) (0.7.0+cu101)
Collecting importlib_resources
Downloading https://files.pythonhosted.org/packages/ba/03/0f9595c0c2ef12590877f3c47e5f579759ce5caf817f8256d5dcbd8a1177/importlib_resources-3.0.0-py2.py3-none-any.whl
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->flashtorch) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->flashtorch) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->flashtorch) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->flashtorch) (1.2.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch->flashtorch) (0.16.0)
Requirement already satisfied: zipp>=0.4; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from importlib_resources->flashtorch) (3.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib->flashtorch) (1.15.0)
Building wheels for collected packages: flashtorch
Building wheel for flashtorch (setup.py) ... done
Created wheel for flashtorch: filename=flashtorch-0.1.3-cp36-none-any.whl size=26248 sha256=95cdabc7c3dbc87e25d0795eb47d01971668600bdc58cd558b670c5e7ce0b725
Stored in directory: /root/.cache/pip/wheels/03/6d/b1/2d3c5987b69e900fcceceeef39d3ed92dfe46ba1359b9c79f8
Successfully built flashtorch
Installing collected packages: importlib-resources, flashtorch
Successfully installed flashtorch-0.1.3 importlib-resources-3.0.0
Does this support Vision Transformers (ViT)? Do you know of any repos or papers on feature visualization (activation maximization) on ViTs?
Hi, I am trying to modify this repo to make it work on 3D images (64 by 64 by 64).
I have a couple of questions.
1, Why did you only take the first Conv2d layer here?
2, I noticed that if I reset backprop = Backprop(model) and call backprop.visualize again, _record_gradients(module, grad_in, grad_out) is called twice. If I repeat this, _record_gradients(module, grad_in, grad_out) is called 3 times, and the count continues to grow each time I reset backprop = Backprop(model). Do you know why? (See the sketch after this list.)
3, I had to remove the line self.model.eval() in the initialisation of Backprop to get values for the gradients; otherwise the gradient was zero everywhere.
Thanks! Please let me know if you know what happened.
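Regarding question 2, a hedged illustration of the accumulation (not FlashTorch's actual code): hooks registered on the underlying model persist across Backprop instances, so each new wrapper adds another hook unless the old handles are removed.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())    # stand-in for the real model

calls = []
def record_gradients(module, grad_in, grad_out):
    calls.append(1)                                      # count hook invocations

handles = []
for _ in range(3):                                       # mimic three "resets"
    handles.append(model[0].register_backward_hook(record_gradients))

out = model(torch.randn(1, 3, 8, 8, requires_grad=True)).sum()
out.backward()
print(len(calls))                                        # -> 3: one call per registered hook

for h in handles:
    h.remove()                                           # removing handles restores a single-hook setup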
Hi! Is a user API guide for the flashtorch toolkit available? As a first-time user, I don't know which modules and functions there are. Thanks.
I am using a custom pretrained fully connected PyTorch model which was trained on CIFAR10 using a GPU. While trying to use backprop.visualize, I am getting the following error.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-24-554f7ff5427b> in <module>()
8 # Ready to roll!
9
---> 10 backprop.visualize(owl, target_class, guided=True, use_gpu=True)
8 frames
/usr/local/lib/python3.6/dist-packages/flashtorch/saliency/backprop.py in visualize(self, input_, target_class, guided, use_gpu, figsize, cmap, alpha, return_output)
166 gradients = self.calculate_gradients(input_,
167 target_class,
--> 168 guided=guided)
169 max_gradients = self.calculate_gradients(input_,
170 target_class,
/usr/local/lib/python3.6/dist-packages/flashtorch/saliency/backprop.py in calculate_gradients(self, input_, target_class, take_max, guided, use_gpu)
87 # Get a raw prediction value (logit) from the last linear layer
88
---> 89 output = self.model(input_)
90
91 # Don't set the gradient target if the model is a binary classifier
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
/content/archs/cifar10/fc1.py in forward(self, x)
18 def forward(self, x):
19 x = torch.flatten(x, 1)
---> 20 x = self.classifier(x)
21 return x
22 #====================================
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input)
90 def forward(self, input):
91 for module in self._modules.values():
---> 92 input = module(input)
93 return input
94
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in forward(self, input)
85
86 def forward(self, input):
---> 87 return F.linear(input, self.weight, self.bias)
88
89 def extra_repr(self):
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1368 if input.dim() == 2 and bias is not None:
1369 # fused op is marginally faster
-> 1370 ret = torch.addmm(bias, input, weight.t())
1371 else:
1372 output = input.matmul(weight.t())
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm
My model:
import torch
import torch.nn as nn

class fc1(nn.Module):
    def __init__(self, num_classes=10, hidden=300):
        super(fc1, self).__init__()
        self.classifier = nn.Sequential(
            nn.Linear(3*32*32, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 100),
            nn.ReLU(inplace=True),
            nn.Linear(100, num_classes),
        )

    def forward(self, x):
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
Loaded the model using the following lines:
from archs.cifar10 import fc1
import torch
model = torch.load("/content/epoch_198.pth.tar")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
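A hedged workaround sketch, reusing owl, target_class, backprop and device from the snippets above: the traceback shows a CUDA model receiving a CPU tensor, and in this version visualize's use_gpu flag does not appear to reach calculate_gradients, so move the input to the model's device first.

# Move the input tensor onto the same device as the model before visualizing.
owl = owl.to(device)
backprop.visualize(owl, target_class, guided=True, use_gpu=True)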
AttributeError Traceback (most recent call last)
in ()
4 input_=apply_transforms(image)
5 target_class=24
----> 6 backprop.visualize(input,target_class,guided=True)
1 frames
/usr/local/lib/python3.6/dist-packages/flashtorch/saliency/backprop.py in calculate_gradients(self, input_, target_class, take_max, guided, use_gpu)
84 self.model.zero_grad()
85
---> 86 self.gradients = torch.zeros(input_.shape)
87
88 # Get a raw prediction value (logit) from the last linear layer
AttributeError: 'function' object has no attribute 'shape'
How can I fix it?
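A hedged reading of the snippet in this traceback: line 4 assigns the preprocessed tensor to input_, but line 6 passes input, which is Python's built-in function, hence 'function' object has no attribute 'shape'. A minimal fix sketch, reusing the names from the traceback:

input_ = apply_transforms(image)
target_class = 24
backprop.visualize(input_, target_class, guided=True)   # pass input_, not the built-in input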
Should I use pip or pip3 to install?
Is this code based on Python 2 or 3?
Describe the bug
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\HUAJI0~1\AppData\Local\Temp\pip-install-uc9zsivx\flashtorch\setup.py", line 5, in
long_description = fh.read()
UnicodeDecodeError: 'gbk' codec can't decode byte 0x9d in position 5424: illegal multibyte sequence
To Reproduce
pip install flashtorch
Environment (please complete the following information):
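A hedged guess at the cause and fix: setup.py reads the long description from README.md with the platform default codec (GBK on a Chinese-locale Windows install), and a UTF-8 byte in the README then fails to decode. A sketch of the usual remedy in setup.py:

# Open the README with an explicit encoding instead of the platform default.
with open("README.md", encoding="utf-8") as fh:
    long_description = fh.read()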
fix
readme.md line 101: change “see” to "see".
I have downgraded to PyTorch 0.4.0, yet every time I run from flashtorch.saliency import Backprop I get the errors below
from flashtorch.saliency import Backprop
Traceback (most recent call last):
File "/home/farshid/anaconda3/envs/pytorchgpu/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2961, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
from flashtorch.saliency import Backprop
File "/home/farshid/anaconda3/envs/pytorchgpu/lib/python3.5/site-packages/flashtorch/saliency/init.py", line 1, in
from .backprop import *
File "/home/farshid/anaconda3/envs/pytorchgpu/lib/python3.5/site-packages/flashtorch/saliency/backprop.py", line 99
f'The predicted class index {top_class.item()} does not' +
^
SyntaxError: invalid syntax
I followed the tutorial on the README page, and the visualizations are not displaying. I am using a pretrained model, and I use a tensor with requires_grad=True as the image. My code doesn't run into any errors. My code is below:
backprop = Backprop(net)
backprop.visualize(image, target_class, guided = True)
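One hedged guess, if this is being run as a plain script rather than in a notebook with an inline backend: the figure may simply never be shown. A minimal sketch:

import matplotlib.pyplot as plt

backprop = Backprop(net)
backprop.visualize(image, target_class, guided=True)
plt.show()    # force the matplotlib window to appear outside notebook environments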
Hello,
does flashtorch work on ensemble models? I passed the output of one neural network to another and got an error.
I used
g_ascent.visualize(model.encoder[0], title='conv');
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Environment (please complete the following information):
Additional context
Add any other context about the problem here.
Problem Faced:
Backprop visualize doesn't consider models with grayscale input.
Description:
Refer to this line:
flashtorch/flashtorch/saliency/backprop.py
Line 222 in ba5e3db
My current workaround:
I have a copy of the backprop class with the in_channels set to 1 for my grayscale image use case.
Possible Fix:
Would it make sense to remove the in_channels == 3 check, since the first conv is where the input image is supplied and should be a good place to register the hook at? Or am I missing something important here?
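A hedged sketch of that idea, mirroring the hook code visible in the tracebacks above (names and internals may differ between FlashTorch versions): subclass Backprop and register the hook on the first Conv2d found, without requiring in_channels == 3.

import torch.nn as nn
from flashtorch.saliency import Backprop

class GrayscaleBackprop(Backprop):
    def _register_conv_hook(self):
        def _record_gradients(module, grad_in, grad_out):
            if self.gradients.shape == grad_in[0].shape:
                self.gradients = grad_in[0]

        for _, module in self.model.named_modules():
            if isinstance(module, nn.Conv2d):         # no in_channels == 3 check
                module.register_backward_hook(_record_gradients)
                break                                  # only the first conv layer, as in the original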
Image Handling Notebook link is broken.
Hi, thanks for your work. I am trying to use the "Activation maximization" notebook with my own custom model. The difference is that my model takes an input of (56, 56, 24). The gradient ascent function accepts my model without an error. I perform my own transformation on a randomly generated input, and the input shape is torch.Size([1, 24, 56, 56]).
When I call visualize function giving it any intermediate layer and filter, it throws below error:
TypeError Traceback (most recent call last)
in
----> 1 g_ascent.visualize(layer2_0_conv2, title='layer1_0_conv2')
~/.local/lib/python3.6/site-packages/flashtorch/activmax/gradient_ascent.py in visualize(self, layer, filter_idxs, lr, num_iter, num_subplots, figsize, title, return_output)
210 num_iter,
211 len(filter_idxs),
--> 212 title=title)
213
214 if return_output:
~/.local/lib/python3.6/site-packages/flashtorch/activmax/gradient_ascent.py in _visualize_filters(self, layer, filter_idxs, num_iter, num_subplots, title)
347 standardize_and_clip(output[-1],
348 saturation=0.15,
--> 349 brightness=0.7)))
350
351 plt.subplots_adjust(wspace=0, hspace=0); # noqa
~/.local/lib/python3.6/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
1563 def inner(ax, *args, data=None, **kwargs):
1564 if data is None:
-> 1565 return func(ax, *map(sanitize_sequence, args), **kwargs)
1566
1567 bound = new_sig.bind(ax, *args, **kwargs)
~/.local/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
356 f"%(removal)s. If any parameter follows {name!r}, they "
357 f"should be pass as keyword, not positionally.")
--> 358 return func(*args, **kwargs)
359
360 return wrapper
~/.local/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
356 f"%(removal)s. If any parameter follows {name!r}, they "
357 f"should be pass as keyword, not positionally.")
--> 358 return func(*args, **kwargs)
359
360 return wrapper
~/.local/lib/python3.6/site-packages/matplotlib/axes/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
5624 resample=resample, **kwargs)
5625
-> 5626 im.set_data(X)
5627 im.set_alpha(alpha)
5628 if im.get_clip_path() is None:
~/.local/lib/python3.6/site-packages/matplotlib/image.py in set_data(self, A)
697 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
698 raise TypeError("Invalid shape {} for image data"
--> 699 .format(self._A.shape))
700
701 if self._A.ndim == 3:
TypeError: Invalid shape (56, 56, 24) for image data
Can anyone help me with this, please?
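The underlying constraint is that matplotlib's imshow only accepts (H, W), (H, W, 3) or (H, W, 4) arrays, so a 24-channel optimised input cannot be displayed directly. A hedged sketch, assuming output is the result of calling g_ascent.visualize with return_output=True and that its last element is the optimised input of shape (1, 24, 56, 56):

import matplotlib.pyplot as plt

optimised = output[-1].detach().cpu().squeeze(0)             # -> (24, 56, 56), assumed shape
plt.imshow(optimised.mean(dim=0).numpy(), cmap='viridis')    # average over the 24 channels -> (56, 56)
plt.show()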
Can you please modify the code so it can also work with 3D convolutions?
RuntimeError Traceback (most recent call last)
in
9 # Ready to roll!
10
---> 11 backprop.visualize(owl, target_class, guided=True)
~\Downloads\flashtorch-master\flashtorch\saliency\backprop.py in visualize(self, input_, target_class, guided, use_gpu, figsize, cmap, alpha, return_output)
180 # (title, [(image1, cmap, alpha), (image2, cmap, alpha)])
181 ('Input image',
--> 182 [(format_for_plotting(denormalize(input_)), None, None)]),
183 ('Gradients across RGB channels',
184 [(format_for_plotting(standardize_and_clip(gradients)),
~\Downloads\flashtorch-master\flashtorch\utils\__init__.py in denormalize(tensor)
117
118 for channel, mean, std in zip(denormalized[0], means, stds):
--> 119 channel.mul_(std).add_(mean)
120
121 return denormalized
RuntimeError: Output 0 of UnbindBackward is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
If so, are there any requirements for the model?
This is a really great project! I'm trying to load my own model, which is a ResNet-152 model trained on my own data. This error appears to be caused by using an older version of PyTorch. I updated PyTorch in my environment, but still get this error. Any ideas?
Thanks,
Bob
AttributeError Traceback (most recent call last)
in
6 # Calculate the gradients of each pixel w.r.t. the input image
7
----> 8 gradients = backprop.calculate_gradients(input_, target_class)
9
10 # Or, take the maximum of the gradients for each pixel across colour channels.
~/anaconda3/envs/fastai/lib/python3.7/site-packages/flashtorch/saliency/backprop.py in calculate_gradients(self, input_, target_class, take_max, guided, use_gpu)
77 # Get a raw prediction value (logit) from the last linear layer
78
---> 79 output = self.model(input_)
80
81 # Don't set the gradient target if the model is a binary classifier
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487
488 def register_forward_hook(self, hook):
--> 489 r"""Registers a forward hook on the module.
490
491 The hook will be called every time after :func:forward
has computed an output.
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
90 def forward(self, input):
91 for module in self._modules.values():
---> 92 input = module(input)
93 return input
94
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487
488 def register_forward_hook(self, hook):
--> 489 r"""Registers a forward hook on the module.
490
491 The hook will be called every time after :func:forward
has computed an output.
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
90 def forward(self, input):
91 for module in self._modules.values():
---> 92 input = module(input)
93 return input
94
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487
488 def register_forward_hook(self, hook):
--> 489 r"""Registers a forward hook on the module.
490
491 The hook will be called every time after :func:forward
has computed an output.
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/activation.py in forward(self, input)
48
49 def forward(self, input):
---> 50 return F.threshold(input, self.threshold, self.value, self.inplace)
51
52 def extra_repr(self):
~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
533 tracing_state.pop_scope()
534 tracing_state._traced_module_stack.pop()
--> 535 return result
536
537 def call(self, *input, **kwargs):
AttributeError: 'ReLU' object has no attribute 'threshold'
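This particular AttributeError is commonly seen when a whole pickled model is loaded into a different PyTorch version than the one it was saved with, which matches the version-mismatch suspicion above. A hedged sketch of the usual workaround: rebuild the architecture in the current environment and load only the weights. The checkpoint path and class count below are hypothetical.

import torch
from torchvision import models

# Rebuild the architecture with the current PyTorch/torchvision, then load weights.
model = models.resnet152(num_classes=10)                 # replace 10 with your own number of classes
state_dict = torch.load("my_resnet152_weights.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()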
Describe the bug
When running the Google Colab version, Part 4 (Visualize Saliency Maps) throws up the following error:
RuntimeError Traceback (most recent call last)
in ()
9 # Ready to roll!
10
---> 11 backprop.visualize(owl, target_class, guided=True, use_gpu=True)
1 frames
/usr/local/lib/python3.6/dist-packages/flashtorch/utils/__init__.py in denormalize(tensor)
117
118 for channel, mean, std in zip(denormalized[0], means, stds):
--> 119 channel.mul_(std).add_(mean)
120
121 return denormalized
RuntimeError: Output 0 of UnbindBackward is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Environment (please complete the following information):
Additional context
Add any other context about the problem here.
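A hedged workaround sketch based on the loop visible in the traceback above: do the denormalisation out of place (on a detached copy) instead of mutating views produced by iterating over the tensor. The means and stds are assumed to be the usual ImageNet statistics; this is not FlashTorch's own fix, just one way to avoid the in-place error.

import torch

def denormalize_out_of_place(tensor,
                             means=(0.485, 0.456, 0.406),
                             stds=(0.229, 0.224, 0.225)):
    # Broadcasting over (1, 3, H, W) avoids the in-place edits on unbind views
    # that newer PyTorch versions reject for tensors that require gradients.
    means_t = torch.tensor(means).view(1, 3, 1, 1)
    stds_t = torch.tensor(stds).view(1, 3, 1, 1)
    return tensor.detach().cpu() * stds_t + means_t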