pair-code / saliency

Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).

Home Page: https://pair-code.github.io/saliency/

License: Apache License 2.0

Jupyter Notebook 97.64% Python 2.36% Shell 0.01%
machine-learning deep-learning deep-neural-networks tensorflow convolutional-neural-networks saliency-map object-detection image-recognition ig-saliency smoothgrad

saliency's Introduction

Saliency Library

Updates

🔴   Now framework-agnostic! (Example core notebook)  🔴

🔗   For further explanation of the methods and more examples of the resulting maps, see our Github Pages website  🔗

If upgrading from an older version, update old imports to import saliency.tf1 as saliency. We provide wrappers to make the framework-agnostic version compatible with TF1 models. (Example TF1 notebook)

🔴   Added Performance Information Curve (PIC) - a human-independent metric for evaluating the quality of saliency methods. (Example notebook)  🔴

Saliency Methods

This repository contains code for the following saliency techniques:

  • Vanilla Gradients
  • Guided Backpropagation
  • Integrated Gradients
  • Blur IG*
  • Guided IG*
  • XRAI*
  • Grad-CAM
  • SmoothGrad* (can augment any of the gradient-based methods above)

*Developed by PAIR.

This list is by no means comprehensive. We are accepting pull requests to add new methods!

Evaluation of Saliency Methods

The repository provides an implementation of the Performance Information Curve (PIC) - a human-independent metric for evaluating the quality of saliency methods (paper, poster, code, notebook).

Download

# To install the core subpackage:
pip install saliency

# To install core and tf1 subpackages:
pip install saliency[tf1]

or for the development version:

git clone https://github.com/pair-code/saliency
cd saliency

Usage

The saliency library has two subpackages:

  • core uses a generic call_model_function which can be used with any ML framework.
  • tf1 accepts input/output tensors directly, and sets up the necessary graph operations for each method (see the import sketch below).
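
Both subpackages expose the same method classes; a minimal import sketch, matching the imports used in the examples below:

# Framework-agnostic API (TF2, PyTorch, JAX, etc., via call_model_function):
import saliency.core as saliency

# Or, for TF1 graph/session models:
# import saliency.tf1 as saliency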

Core

Each saliency mask class extends from the CoreSaliency base class. This class contains the following methods:

  • GetMask(x_value, call_model_function, call_model_args=None): Returns a mask of the shape of the non-batched x_value, computed by the saliency technique.
  • GetSmoothedMask(x_value, call_model_function, call_model_args=None, stdev_spread=.15, nsamples=25, magnitude=True): Returns a mask of the shape of the non-batched x_value, smoothed with the SmoothGrad technique.

The visualization module contains two methods for saliency visualization:

  • VisualizeImageGrayscale(image_3d, percentile): Marginalizes across the absolute value of each channel to create a 2D single-channel image, and clips the image at the given percentile of the distribution. This method returns a 2D tensor normalized between 0 and 1.
  • VisualizeImageDiverging(image_3d, percentile): Marginalizes across the value of each channel to create a 2D single-channel image, and clips the image at the given percentile of the distribution. This method returns a 2D tensor normalized between -1 and 1 where zero remains unchanged.

If the sign of the value given by the saliency mask is not important, then use VisualizeImageGrayscale, otherwise use VisualizeImageDiverging. See the SmoothGrad paper for more details on which visualization method to use.
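
For instance, a minimal usage sketch (mask_3d stands for any 3D mask returned by the methods above; the 99th percentile is just an illustrative value):

import saliency.core as saliency

# mask_3d: an (H, W, C) numpy array returned by GetMask or GetSmoothedMask.
grayscale_2d = saliency.VisualizeImageGrayscale(mask_3d, percentile=99)  # values in [0, 1]
diverging_2d = saliency.VisualizeImageDiverging(mask_3d, percentile=99)  # values in [-1, 1]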

call_model_function

call_model_function is how we pass inputs to a given model and receive the outputs necessary to compute saliency masks. The description of this method and expected output format is in the CoreSaliency description, as well as separately for each method.

Examples

This example iPython notebook showing these techniques is a good starting place.

Here is a condensed example of using IG+SmoothGrad with TensorFlow 2:

import numpy as np
import saliency.core as saliency
import tensorflow as tf

...

# call_model_function construction here.
def call_model_function(x_value_batched, call_model_args=None, expected_keys=None):
    images = tf.convert_to_tensor(x_value_batched)
    with tf.GradientTape() as tape:
        tape.watch(images)
        output_layer = model(images)[:, target_class_idx]  # model and target class set up above
    grads = np.array(tape.gradient(output_layer, images))
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads}

...

# Load data.
image = GetImagePNG(...)

# Compute IG+SmoothGrad.
ig_saliency = saliency.IntegratedGradients()
smoothgrad_ig = ig_saliency.GetSmoothedMask(
    image, call_model_function, call_model_args=None)

# Compute a 2D tensor for visualization.
grayscale_visualization = saliency.VisualizeImageGrayscale(
    smoothgrad_ig)

TF1

Each saliency mask class extends from the TF1Saliency base class. This class contains the following methods:

  • __init__(graph, session, y, x): Constructor of the SaliencyMask. This can modify the graph, or sometimes create a new graph. Often this will add nodes to the graph, so it shouldn't be called continuously. y is the output tensor to compute saliency masks with respect to; x is the input tensor, with the outermost dimension being the batch size.
  • GetMask(x_value, feed_dict): Returns a mask of the shape of the non-batched x_value, computed by the saliency technique.
  • GetSmoothedMask(x_value, feed_dict): Returns a mask of the shape of the non-batched x_value, smoothed with the SmoothGrad technique.

The visualization module contains two visualization methods:

  • VisualizeImageGrayscale(image_3d, percentile): Marginalizes across the absolute value of each channel to create a 2D single-channel image, and clips the image at the given percentile of the distribution. This method returns a 2D tensor normalized between 0 and 1.
  • VisualizeImageDiverging(image_3d, percentile): Marginalizes across the value of each channel to create a 2D single-channel image, and clips the image at the given percentile of the distribution. This method returns a 2D tensor normalized between -1 and 1 where zero remains unchanged.

If the sign of the value given by the saliency mask is not important, then use VisualizeImageGrayscale, otherwise use VisualizeImageDiverging. See the SmoothGrad paper for more details on which visualization method to use.

Examples

This example iPython notebook showing these techniques is a good starting place.

Another condensed example, using GuidedBackprop with SmoothGrad in TensorFlow 1:

from saliency.tf1 import GuidedBackprop
from saliency.tf1 import VisualizeImageGrayscale
import tensorflow.compat.v1 as tf

...
# Tensorflow graph construction here.
y = logits[5]
x = tf.placeholder(...)
...

# Compute guided backprop.
# NOTE: This creates another graph that gets cached, try to avoid creating many
# of these.
guided_backprop_saliency = GuidedBackprop(graph, session, y, x)

...
# Load data.
image = GetImagePNG(...)
...

smoothgrad_guided_backprop = guided_backprop_saliency.GetSmoothedMask(
    image, feed_dict={...})

# Compute a 2D tensor for visualization.
grayscale_visualization = VisualizeImageGrayscale(
    smoothgrad_guided_backprop)

Conclusion/Disclaimer

If you have any questions or suggestions for improvements to this library, please contact the owners of the PAIR-code/saliency repository.

This is not an official Google product.

saliency's People

Contributors

amitani, bwedin, craymichael, davedgd, david-haber, dsmilkov, gkapish, nalinimsingh, ruthcfong, sherryy, tolga-b, vsubhashini, wistanmarch


saliency's Issues

Supported data/modalities

Thank you for the great repository! From the very cool examples there, I saw that the package supports 2D images. I was wondering if it also offers methods for 3D image data, video, audio, text, tabular, and/or time-series data. I haven’t seen documentation of such inputs.
And does it also contain methods to evaluate XAI output (e.g., measures of faithfulness, robustness, localization, etc.)?
Thanks!

lack of Guided Backpropagation

Hi,
in the example-pytorch.ipynb, it is said that Guided Backpropagation is implemented in the notebook, but I can't find it.

saliency for reconstruction task

Thanks for the great implementation! I am wondering what would be different if I'd like to investigate the saliency map of a reconstruction task. In a classification task you can select one neuron, but for a reconstruction task the output is the whole image; is it possible to visualize it in a similar way?
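
One common approach, sketched below under the assumption of a TF2 model that outputs an image (not an official answer from the library): reduce the reconstructed output to a scalar inside call_model_function, e.g. the total intensity or a reconstruction error, and take gradients of that scalar with respect to the input.

import numpy as np
import tensorflow as tf
import saliency.core as saliency

def call_model_function(x_value_batched, call_model_args=None, expected_keys=None):
    images = tf.convert_to_tensor(x_value_batched)
    with tf.GradientTape() as tape:
        tape.watch(images)
        reconstruction = model(images)  # hypothetical model that outputs an image
        # Reduce the whole output image to one scalar per example, e.g. the total
        # intensity, or a reconstruction error such as mean((reconstruction - images) ** 2).
        scalar_output = tf.reduce_sum(reconstruction, axis=[1, 2, 3])
    grads = np.array(tape.gradient(scalar_output, images))
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads}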

The returned masks were all nan

After predicting class 237 correctly, it failed because the returned masks were all NaN.

# Compute the vanilla mask and the smoothed mask.
vanilla_mask_3d = gradient_saliency.GetMask(im, feed_dict={neuron_selector: prediction_class})

return self.session.run(self.gradients_node, feed_dict=feed_dict)[0]

values of vanilla_mask_3d:
[[[nan nan nan]
[nan nan nan]
[nan nan nan]
...
[nan nan nan]
[nan nan nan]
[nan nan nan]]

Examples_core.ipynb doesn't work

Hi, I am running the core example code for Vanilla Gradient & SmoothGrad on a Kaggle notebook, and I get the error below:


ValueError Traceback (most recent call last)
in
6 ShowImage(im_orig)
7
----> 8 _, predictions = model(np.array([im]))
9 prediction_class = np.argmax(predictions[0])
10 call_model_args = {class_idx_str: prediction_class}

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in call(self, *args, **kwargs)
996 inputs = self._maybe_cast_inputs(inputs, input_list)
997
--> 998 input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
999 if eager:
1000 call_fn = self.call

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
272 ' is incompatible with layer ' + layer_name +
273 ': expected shape=' + str(spec.shape) +
--> 274 ', found shape=' + display_shape(x.shape))
275
276

ValueError: Input 0 is incompatible with layer model_1: expected shape=(None, 224, 224, 3), found shape=(1, 224, 224, 4)


Here is the kaggle notebook with the error.

The only lines I have modified in the example are:

# From our repository.
import saliency.core as saliency

and

# Load the image
im_orig = LoadImage('./doberman.png')

as

# From our repository.
try:
    import saliency.core as saliency
except:
    ! pip install saliency
    import saliency.core as saliency

and

# Load the image
im_orig = LoadImage('../input/saliency-imgs/doberman.png')

Also, I have seen there was an issue (#5) created for the code examples to be adapted to TF2. But since the current title of the examples states "for TF2 and other frameworks" I assume the code was finally adapted, and this issue isn't relevant anymore for the problem I am describing.

Thank you.

Saliency for regression task

Dear authors,
first of all, thanks for publishing this code and for your amazing paper.
I would like to use your technique to inspect the behavior of a neural network used for 6DoF pose regression from a single RGB image. In particular, I would like to get visual insight into which pixels the network considers most important for the localization task.
In your paper, as well as in the others you cite, the focus is on the classification problem. Hence, I was wondering how to adapt this visualization technique to the case of regression.

Thanks in advance for your help.

Batched method for getting the mask

Is there a way to get the mask for batched images?

Currently, the methods GetMask and GetSmoothedMask work with only one image as input.
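
Since the public GetMask/GetSmoothedMask calls are per-example, a simple (if not maximally efficient) workaround is to loop over the batch; a sketch, reusing the names from the core example above:

import numpy as np

# images_batch: (N, H, W, C) numpy array; gradient_saliency, call_model_function
# and call_model_args are set up as in the core example.
masks = np.stack([
    gradient_saliency.GetMask(image, call_model_function, call_model_args)
    for image in images_batch
])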

Call_model_function

I tried to define the call_model_function in another file and import it into my main file. This resulted in a lot of errors. Has anybody tried the same and found a solution to the problem?

Noise in the normalised space?

Hey guys,

Thanks for providing the implementations of these famous/helpful algorithms. I really appreciate it.

I was looking to use SmoothGrad in my project. On inspecting your code, I find it weird that you are adding noise (approx. range -1.3 to 1.5 with the default settings) in the normalized image space rather than adding it to the unnormalized image and then normalizing it (which was my initial understanding). Going by your code, it would mean that the distribution of the model input has been completely changed (roughly -2.3 to 2.5), which seems weird to me.

It would be really helpful if you could elaborate on it and confirm that this is correct.

Thanks,
Naman

What about tensorflow 2.0

I am trying to apply saliency to a model created with TensorFlow 2.0 using the built-in Keras. Since we don't have sessions in TF 2.0, where should I start?

why divide by 255 in the LoadImage function

Dear Authors

This is a very cool repo. I am trying to use it with my own checkpoint and images.
I looked at your Examples_tf1.ipynb, and the LoadImage function divides the image values by 255. I'm wondering why: is it because the input images were divided by 255 when the Inception checkpoint was trained?

Gradient Problems

I'm trying to call:

gradient_saliency = saliency.GradientSaliency()
vanilla_mask_3d = gradient_saliency.GetMask(im, call_model_function, call_model_args)

but I'm stuck in a loop where I can't solve the problem.

If I setup my input like this:
im = img.unsqueeze(dim=0).to("cuda").requires_grad_(True)
Then I get
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.

But if I change it to
im = img.unsqueeze(dim = 0).to(device)
I get
RuntimeError: One of the differentiated Tensors does not require grad

It makes sense that it requires the gradient given its task, but why does it try to call .numpy() without doing a .detach()? Maybe I'm setting up something else wrong, but it seems to be isolated to this specific section.
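
For what it's worth, GetMask expects a plain non-batched numpy array and leaves tensor handling to call_model_function; a hedged sketch of that pattern for PyTorch, assuming an NCHW model (model, device, im_numpy and prediction_class stand in for your own objects, and the 'class_idx' key is just an illustrative name):

import numpy as np
import torch
import saliency.core as saliency

def call_model_function(x_value_batched, call_model_args=None, expected_keys=None):
    # The library hands us a plain numpy batch (N, H, W, C); convert it here.
    images = torch.tensor(x_value_batched, dtype=torch.float32,
                          device=device, requires_grad=True)
    model_in = images.permute(0, 3, 1, 2)              # to (N, C, H, W) for the model
    target_class_idx = call_model_args['class_idx']    # illustrative key name
    output = model(model_in)[:, target_class_idx]
    grads = torch.autograd.grad(output.sum(), images)[0]   # gradients w.r.t. the (N, H, W, C) leaf
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads.detach().cpu().numpy()}

# Pass a plain (H, W, C) numpy array, not a CUDA tensor with requires_grad:
mask = saliency.GradientSaliency().GetMask(
    im_numpy, call_model_function, call_model_args={'class_idx': prediction_class})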

Remove empty dictionary as a default argument in GetMask

The GetMask function currently uses an empty dictionary as the default argument for feed_dict. This is potentially dangerous, as the shared default object may be mutated during a run of GetMask. We should change the default value to None and only pass the feed_dict argument when the user specifies it.
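
The usual Python fix, as a standalone sketch of the pattern (hypothetical function names, not the library's actual code):

# Dangerous: the single default dict is shared by every call and can be mutated.
def get_mask(x_value, feed_dict={}):
    feed_dict['x'] = x_value          # silently edits the shared default
    return feed_dict

# Fix: default to None and build a fresh dict per call.
def get_mask_fixed(x_value, feed_dict=None):
    feed_dict = dict(feed_dict or {})
    feed_dict['x'] = x_value
    return feed_dict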

Adding new attribution method

Hi PAIR-code/saliency team,

Thanks for your great work and contribution to the community! Your package helped a lot for my research.

I am just wondering if you would be interested in our new attribution method, which improves the quality of IG-based methods (e.g., IG, GIG, and BlurIG). The paper has been accepted to and will be presented at CVPR 2023. Here is the link if you are interested: https://arxiv.org/abs/2303.14242

Since I used your package heavily during my research, I am happy to share/contribute our method if you are interested.

I have already read the contributing guidelines and submitted the CLA. Since the code I have is based on different IG-based methods, I am just wondering how I can start contributing.

We are open to any suggestions, instructions, and discussion. Again, thanks for your package and the great work!

Audio Implementation

In your paper, you mention it is possible to apply this to audio. I am confused about how to convert the audio to an image. Do I plot the spectrogram first and then pass it to the model, or pass the audio through the model and then plot the spectrogram?

NoneType in vanilla_mask_3d

Hi,
I would like to create the saliency map for a DNN that I use for classification of 4 types of images. I manage to calculate the gradient_saliency but when I try to calculate any of the masks e.g vanilla_mask_3d I get the following error:

vanilla_mask_3d = gradient_saliency.GetMask(im, feed_dict={neuron_selector: prediction_class})

Traceback (most recent call last):
File "<ipython-input-19-7508dfaae246>", line 1, in <module>
    vanilla_mask_3d = gradient_saliency.GetMask(im, feed_dict={neuron_selector: prediction_class})

File "/usr/local/lib/python2.7/dist-packages/saliency/base.py", line 97, in GetMask return self.session.run(self.gradients_node, feed_dict=feed_dict)[0]

File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 889, in run run_metadata_ptr)

File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1105, in _run self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)

File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 414, in __init__ self._fetch_mapper = _FetchMapper.for_fetch(fetches)

File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 231, in for_fetch (fetch, type(fetch)))

TypeError: Fetch argument None has invalid type <type 'NoneType'>

Could you please tell me where this error might be coming from, since I don't seem to have a NoneType argument in my prediction_class.

Cheers,
Pouyan

GPU for Examples_core.ipynb

Dear PAIR code group,

Thank you for sharing this great work. I really like it.

I am using the Examples_core.ipynb now. Is there a way to use the GPU in this torch implementation? The CPU one is a bit slow.

Thank you for your help.

Best Wishes,

Zongze

Example code doesn't work

Hi. I tried to play with the same code, Inception model graph, and test image (doberman.png) that you provided in the examples section as an iPython notebook. But I got an error while making the prediction:

ValueError: Cannot feed value of shape (1, 252, 261, 4) for Tensor 'Placeholder:0', which has shape '(?, 299, 299, 3)'

Do you have any idea how to get the example code to work?
Thanks.

# Boilerplate imports.
import tensorflow as tf
import numpy as np
import PIL.Image
from matplotlib import pylab as P
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pickle
import os
from subprocess import call
from tensorflow.contrib.slim.python.slim.nets import inception_v3
import saliency

slim = tf.contrib.slim

if not os.path.exists('models/research/slim'):
  call(["git", "clone", "https://github.com/tensorflow/models/"])
old_cwd = os.getcwd()
os.chdir('models/research/slim')
os.chdir(old_cwd)

# Boilerplate methods.
def ShowImage(im, title='', ax=None):
  if ax is None:
    P.figure()
  P.axis('off')
  im = ((im + 1) * 127.5).astype(np.uint8)
  P.imshow(im)
  P.title(title)

def ShowGrayscaleImage(im, title='', ax=None):
  if ax is None:
    P.figure()
  P.axis('off')

  P.imshow(im, cmap=P.cm.gray, vmin=0, vmax=1)
  P.title(title)

def ShowDivergingImage(grad, title='', percentile=99, ax=None):
  if ax is None:
    fig, ax = P.subplots()
  else:
    fig = ax.figure

  P.axis('off')
  divider = make_axes_locatable(ax)
  cax = divider.append_axes('right', size='5%', pad=0.05)
  im = ax.imshow(grad, cmap=P.cm.coolwarm, vmin=-1, vmax=1)
  fig.colorbar(im, cax=cax, orientation='vertical')
  P.title(title)

def LoadImage(file_path):
  im = PIL.Image.open(file_path)
  im = np.asarray(im)
  return im / 127.5 - 1.0

if not os.path.exists('inception_v3.ckpt'):
  call(["curl", "-O", "http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz"])
  call(["tar", "-xvzf", "inception_v3_2016_08_28.tar.gz"])

ckpt_file = './inception_v3.ckpt'

graph = tf.Graph()

with graph.as_default():
  images = tf.placeholder(tf.float32, shape=(None, 299, 299, 3))

  with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    _, end_points = inception_v3.inception_v3(images, is_training=False, num_classes=1001)

    # Restore the checkpoint
    sess = tf.Session(graph=graph)
    saver = tf.train.Saver()
    saver.restore(sess, ckpt_file)

  # Construct the scalar neuron tensor.
  logits = graph.get_tensor_by_name('InceptionV3/Logits/SpatialSqueeze:0')
  neuron_selector = tf.placeholder(tf.int32)
  y = logits[0][neuron_selector]

  # Construct tensor for predictions.
  prediction = tf.argmax(logits, 1)

# Load the image
im = LoadImage('./doberman.png')

# Show the image
ShowImage(im)

# Make a prediction.
prediction_class = sess.run(prediction, feed_dict = {images: [im]})[0]

print("Prediction class: " + str(prediction_class))  # Should be a doberman, class idx = 237

Cannot use example code on different image

I'm trying to run Examples.ipynb. It works with the provided doberman image, but gives an error when I load my own image. I reshape my image to (229,299,3) using

def LoadImage(file_path):  
  im = PIL.Image.open(file_path)  
  im = im.resize((229,229))
  im = np.asarray(im)
  print(im.shape)
  return im / 127.5 - 1.0

Then, when I run the block called Load an image and infer, I get the following error (I print out the im shape first):

(229, 229, 3)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-35f925d7617e> in <module>
      7 
      8 # Make a prediction.
----> 9 prediction_class = sess.run(prediction, feed_dict = {images: [im]})[0]
     10 
     11 print("Prediction class: " + str(prediction_class))  # Should be a doberman, class idx = 237

/usr/local/anaconda3/envs/cs230/lib/python3.7/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    927     try:
    928       result = self._run(None, fetches, feed_dict, options_ptr,
--> 929                          run_metadata_ptr)
    930       if run_metadata:
    931         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/anaconda3/envs/cs230/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1126                              'which has shape %r' %
   1127                              (np_val.shape, subfeed_t.name,
-> 1128                               str(subfeed_t.get_shape())))
   1129           if not self.graph.is_feedable(subfeed_t):
   1130             raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (1, 229, 229, 3) for Tensor 'Placeholder:0', which has shape '(?, 299, 299, 3)'

The shapes appear to match, so it's unclear why it's not working.
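
One thing to check (a guess based on the shapes in the traceback): the feed is (1, 229, 229, 3) while the placeholder expects 299x299, so the resize target may simply be off by a digit:

im = im.resize((299, 299))   # 299, not 229, to match the Placeholder shape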

Citing PAIR-code/saliency

I'm using the XRAI implementation from this repository for a project I am working on, and I would like to cite this library. Is there a particular citation I should use? I didn't find anything in the README or via GitHub search. The repository has been very helpful, thank you!

Problems implementing with my own model

I'm looking for some help please.

I have my own model for which I'd like to implement the saliency techniques here.... and thanks for supplying the code examples and documentation, it's really useful. Recreating your core example works flawlessly in Google Colab, however I now want to use my own model and images.

Firstly, here is my code. Notable things: I dropped the PreprocessImage function and reused the code I wrote when creating the model; in call_model_function() I changed model(images) to model.predict(images) because the code failed there, and I assumed my implementation should mirror the prediction part of the code I already had working, but I'm unsure if that's correct. Finally, I had to add im = im.reshape(256,256,1) right before the gradient_saliency = saliency.GradientSaliency() part because it complained that my image was the wrong shape, which makes no sense to me since model.predict already worked prior to that.

Code:

model = tf.keras.models.load_model('/content/drive/MyDrive/model.h5')

class_idx_str = 'class_idx_str'
def call_model_function(images, call_model_args=None, expected_keys=None):
    target_class_idx =  call_model_args[class_idx_str]
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        if expected_keys==[saliency.base.INPUT_OUTPUT_GRADIENTS]:
            tape.watch(images)
            _, output_layer = model.predict(images)  # I changed this to predict
            output_layer = output_layer[:,target_class_idx]
            gradients = np.array(tape.gradient(output_layer, images))
            return {saliency.base.INPUT_OUTPUT_GRADIENTS: gradients}
        else:
            conv_layer, output_layer = model(images)
            gradients = np.array(tape.gradient(output_layer, conv_layer))
            return {saliency.base.CONVOLUTION_LAYER_VALUES: conv_layer,
                    saliency.base.CONVOLUTION_OUTPUT_GRADIENTS: gradients}

file_path = ('/content/drive/MyDrive/dataset/01215.jpg')
im = cv.cvtColor(cv.imread(file_path), cv.COLOR_BGR2GRAY)
im = np.asarray(im, dtype=np.float32)
im = cv.resize(im, (img_height, img_width))
im = np.expand_dims(im, axis=0)
print(f'im expand: {im.shape}')

predictions = model.predict(im, batch_size=1, verbose='silent')
prediction_class = np.argmax(predictions[0])
call_model_args = {class_idx_str: prediction_class}
print(f'call_model_args: {call_model_args}')

class_names = ['active','inactive']
prediction = class_names[np.argmax(predictions)]
percentage = int(np.max(predictions, axis=-1)*100)
print(f'{file_path} - {prediction} - {percentage}% - Prediction class: {str(prediction_class)}')

im = im.reshape(256,256,1) # I did this because the subsequent code complained about the shape. (Which makes no sense to me as model.predict already worked above?)

Output:

im expand: (1, 256, 256)
call_model_args: {'class_idx_str': 1}
/content/drive/MyDrive/dataset/01215.jpg - inactive - 98% - Prediction class: 1

So this is great and works as expected so far. Now I try to run the next part of the core example code and I get an error that I do not understand.

# Construct the saliency object. This alone doesn't do anything.
gradient_saliency = saliency.GradientSaliency()

# Compute the vanilla mask and the smoothed mask.
vanilla_mask_3d = gradient_saliency.GetMask(im, call_model_function, call_model_args)
smoothgrad_mask_3d = gradient_saliency.GetSmoothedMask(im, call_model_function, call_model_args)

# Call the visualization methods to convert the 3D tensors to 2D grayscale.
vanilla_mask_grayscale = saliency.VisualizeImageGrayscale(vanilla_mask_3d)
smoothgrad_mask_grayscale = saliency.VisualizeImageGrayscale(smoothgrad_mask_3d)

# Set up matplot lib figures.
ROWS = 1
COLS = 2
UPSCALE_FACTOR = 10
P.figure(figsize=(ROWS * UPSCALE_FACTOR, COLS * UPSCALE_FACTOR))

# Render the saliency masks.
ShowGrayscaleImage(vanilla_mask_grayscale, title='Vanilla Gradient', ax=P.subplot(ROWS, COLS, 1))
ShowGrayscaleImage(smoothgrad_mask_grayscale, title='SmoothGrad', ax=P.subplot(ROWS, COLS, 2))

LookupError Traceback (most recent call last)
in
3
4 # Compute the vanilla mask and the smoothed mask.
----> 5 vanilla_mask_3d = gradient_saliency.GetMask(im, call_model_function, call_model_args)
6 smoothgrad_mask_3d = gradient_saliency.GetSmoothedMask(im, call_model_function, call_model_args)
7

3 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gradients_util.py in _GradientsHelper(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients, src_graph)
637 grad_fn = func_call.python_grad_func
638 else:
--> 639 raise LookupError(
640 "No gradient defined for operation"
641 f"'{op.name}' (op type: {op.type}). "

LookupError: No gradient defined for operation'IteratorGetNext' (op type: IteratorGetNext). In general every operation must have an associated @tf.RegisterGradient for correct autodiff, which this op is lacking. If you want to pretend this operation is a constant in your program, you may insert tf.stop_gradient. This can be useful to silence the error in cases where you know gradients are not needed, e.g. the forward pass of tf.custom_gradient. Please see more details in https://www.tensorflow.org/api_docs/python/tf/custom_gradient.

I'm not great at writing code, so I've clearly done something stupid; can anyone assist me with what the problem might be, please? I'm also open to pointers on any other aspect. If it helps, this is how I created the model:

model = tf.keras.Sequential([
  tf.keras.layers.Rescaling(1./255),
  tf.keras.layers.Conv2D(32, 3, activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.Conv2D(32, 3, activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.Conv2D(32, 3, activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(2,activation='softmax') #activation change
])
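
One likely issue (a hedged guess, not a verified fix): model.predict runs through a dataset iterator and returns a numpy array, so there is nothing for the GradientTape to differentiate, and this Sequential model has a single output rather than the (conv_layer, output_layer) pair the copied example unpacks. A sketch of call_model_function adjusted for a single-output Keras model:

import numpy as np
import tensorflow as tf
import saliency.core as saliency

class_idx_str = 'class_idx_str'

def call_model_function(images, call_model_args=None, expected_keys=None):
    target_class_idx = call_model_args[class_idx_str]
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        output_layer = model(images, training=False)   # call the model, not model.predict
        output_layer = output_layer[:, target_class_idx]
    gradients = np.array(tape.gradient(output_layer, images))
    return {saliency.INPUT_OUTPUT_GRADIENTS: gradients}

# GetMask expects a single un-batched image, e.g. shape (256, 256, 1),
# so reshape to (H, W, 1) rather than (1, H, W) before calling it.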

RuntimeWarning: Invalid value encountered in percentile

I have the following problem when attempting to calculate vanilla gradients:

C:\Users\z003zv1a\AppData\Roaming\Python\Python36\site-packages\numpy\lib\function_base.py:3652: RuntimeWarning: Invalid value encountered in percentile
  interpolation=interpolation)
C:\Users\z003zv1a\AppData\Roaming\Python\Python36\site-packages\numpy\core\fromnumeric.py:83: RuntimeWarning: invalid value encountered in reduce
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py:405: UserWarning: Warning: converting a masked element to nan.
  dv = (np.float64(self.norm.vmax) -
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py:406: UserWarning: Warning: converting a masked element to nan.
  np.float64(self.norm.vmin))
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py:412: UserWarning: Warning: converting a masked element to nan.
  a_min = np.float64(newmin)
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py:417: UserWarning: Warning: converting a masked element to nan.
  a_max = np.float64(newmax)
C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\colors.py:916: UserWarning: Warning: converting a masked element to nan.
  dtype = np.min_scalar_type(value)
C:\Users\z003zv1a\AppData\Roaming\Python\Python36\site-packages\numpy\ma\core.py:718: UserWarning: Warning: converting a masked element to nan.
  data = np.array(a, copy=False, subok=subok)

In Examples_pytorch.ipynb file unable to find XRAI mask for other classes by replacing call_model_args

Hi,

Firstly thanks for this great repo.

In the Examples_pytorch.ipynb file, if I change call_model_args = {class_idx_str: prediction_class}
to call_model_args = {class_idx_str: <ANY_OTHER_CLASS>}, the XRAI mask still remains that of the Doberman (i.e. class 236). What do I do to see the visualization of any other class in the Doberman image? I had, for instance, set ANY_OTHER_CLASS to 562 (fountain) and was hoping to see a different gradient image for that class. But I still see the Doberman.

Could you please help?
Thanks,

module 'saliency.core' has no attribute 'GuidedIG'

Hi,
I am running the Example_pytorch.ipynb in the branch pytorch-notebook.
When I run the Guided IG cell, the following error is shown:
module 'saliency.core' has no attribute 'GuidedIG'

How to solve this? Thanks for any suggestion.

Use saliency code for other checkpoints

Dear authors,

Thank you for the comprehensive code examples!
I would like to modify your code to use my own Inception v3 checkpoints. However, when I save the checkpoints in TensorFlow I get three files (.meta, .graph and .index) instead of one .ckpt file, and I was wondering if you could help me modify your example code accordingly.

Thank you!
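
For what it's worth, those files are the standard TF1 checkpoint format, and tf.train.Saver restores from their common prefix rather than from a single file; a sketch (with a hypothetical path), adapted from the Inception v3 example above:

import tensorflow.compat.v1 as tf

with graph.as_default():             # graph built as in the Inception v3 example
    saver = tf.train.Saver()
    sess = tf.Session(graph=graph)
    # Pass the common prefix of my_model.ckpt.meta / .index / .data-00000-of-00001:
    saver.restore(sess, '/path/to/my_model.ckpt')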

AttributeError: 'MaskedConstant' object has no attribute '_fill_value'


AttributeError Traceback (most recent call last)
d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\formatters.py in call(self, obj)
330 pass
331 else:
--> 332 return printer(obj)
333 # Finally look for special method names
334 method = get_real_method(obj, self.print_method)

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\pylabtools.py in (fig)
235
236 if 'png' in formats:
--> 237 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
238 if 'retina' in formats or 'png2x' in formats:
239 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
119
120 bytes_io = BytesIO()
--> 121 fig.canvas.print_figure(bytes_io, **kw)
122 data = bytes_io.getvalue()
123 if fmt == 'svg':

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, **kwargs)
2206 orientation=orientation,
2207 dryrun=True,
-> 2208 **kwargs)
2209 renderer = self.figure._cachedRenderer
2210 bbox_inches = self.figure.get_tightbbox(renderer)

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\backends\backend_agg.py in print_png(self, filename_or_obj, *args, **kwargs)
505
506 def print_png(self, filename_or_obj, *args, **kwargs):
--> 507 FigureCanvasAgg.draw(self)
508 renderer = self.get_renderer()
509 original_dpi = renderer.dpi

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
428 if toolbar:
429 toolbar.set_cursor(cursors.WAIT)
--> 430 self.figure.draw(self.renderer)
431 finally:
432 if toolbar:

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
53 renderer.start_filter()
54
---> 55 return draw(artist, renderer, *args, **kwargs)
56 finally:
57 if artist.get_agg_filter() is not None:

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
1293
1294 mimage._draw_list_compositing_images(
-> 1295 renderer, self, artists, self.suppressComposite)
1296
1297 renderer.close_group('figure')

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
136 if not_composite or not has_images:
137 for a in artists:
--> 138 a.draw(renderer)
139 else:
140 # Composite any adjacent images together

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
53 renderer.start_filter()
54
---> 55 return draw(artist, renderer, *args, **kwargs)
56 finally:
57 if artist.get_agg_filter() is not None:

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\axes_base.py in draw(self, renderer, inframe)
2397 renderer.stop_rasterizing()
2398
-> 2399 mimage._draw_list_compositing_images(renderer, self, artists)
2400
2401 renderer.close_group('axes')

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
136 if not_composite or not has_images:
137 for a in artists:
--> 138 a.draw(renderer)
139 else:
140 # Composite any adjacent images together

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
53 renderer.start_filter()
54
---> 55 return draw(artist, renderer, *args, **kwargs)
56 finally:
57 if artist.get_agg_filter() is not None:

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\image.py in draw(self, renderer, *args, **kwargs)
546 else:
547 im, l, b, trans = self.make_image(
--> 548 renderer, renderer.get_image_magnification())
549 if im is not None:
550 renderer.draw_image(gc, l, b, im)

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\image.py in make_image(self, renderer, magnification, unsampled)
772 return self._make_image(
773 self._A, bbox, transformed_bbox, self.axes.bbox, magnification,
--> 774 unsampled=unsampled)
775
776 def _check_unsampled_image(self, renderer):

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\image.py in _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification, unsampled, round_to_pixel_border)
368 # old versions of numpy do not work with np.nammin
369 # and np.nanmax as inputs
--> 370 a_min = np.ma.min(A).astype(scaled_dtype)
371 a_max = np.ma.max(A).astype(scaled_dtype)
372 # scale the input data to [.1, .9]. The Agg

d:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\numpy\ma\core.py in astype(self, newtype)
3203 output._mask = self._mask.astype([(n, bool) for n in names])
3204 # Don't check _fill_value if it's None, that'll speed things up
-> 3205 if self._fill_value is not None:
3206 output._fill_value = _check_fill_value(self._fill_value, newtype)
3207 return output

AttributeError: 'MaskedConstant' object has no attribute '_fill_value'

<matplotlib.figure.Figure at 0x3828a8d0>

cmake (0.6.0)
cycler (0.10.0)
gym (0.7.3, c:\users\rye
matplotlib (2.1.1)
nltk (3.2.4)
numpy (1.11.1)
olefile (0.44)
opencv-python (3.1.0.0)
pandas (0.19.2)
Pillow (4.0.0)
pip (9.0.1)
protobuf (3.2.0)
pydicom (0.9.9)
pyglet (1.2.4)
pyparsing (2.2.0)
python-dateutil (2.6.1)
pytz (2017.3)
requests (2.13.0)
saliency (0.0.2)
setuptools (20.10.1)
six (1.11.0)
tensorflow (0.12.1)
wheel (0.29.0)
