
bayes-mil's Issues

Calculating patch-level metrics such as FROC

Hi,

Great repo and paper - I really enjoyed reading it!

You have calculated some patch-level localisation metrics in the paper, and I would like to do the same on an algorithm I am working on. Can you provide insight into how you calculated those? Specifically, the patch-level FROC and precision.

Thank you!
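As a starting point (this is a hedged sketch, not the authors' evaluation script): a patch-level FROC curve plots sensitivity against the average number of false-positive patches per slide while sweeping a score threshold. The function below assumes parallel lists of per-patch scores, binary labels, and slide IDs, and simplifies lesion-level matching to patch-level hits.

```python
# Hypothetical FROC sketch; `scores`, `labels`, and `slide_ids` are
# assumed parallel per-patch lists. Not the paper's exact protocol.
def froc_points(scores, labels, slide_ids, thresholds):
    n_slides = len(set(slide_ids))
    n_pos = sum(labels)
    points = []
    for t in thresholds:
        # patches predicted positive at this threshold
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sensitivity = tp / n_pos if n_pos else 0.0
        avg_fp_per_slide = fp / n_slides
        points.append((avg_fp_per_slide, sensitivity))
    return points

pts = froc_points(
    scores=[0.9, 0.8, 0.2, 0.1],
    labels=[1, 0, 1, 0],
    slide_ids=["a", "a", "b", "b"],
    thresholds=[0.5],
)
print(pts)  # [(0.5, 0.5)]
```

The CAMELYON-style FROC score is then typically the mean sensitivity at a fixed set of average-false-positive rates; whether the paper uses that exact convention would need confirmation from the authors.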

question about datasets

What great work! Thank you for open-sourcing it. I have a question about your datasets: what does the shape_dict() function do?

Question for applying the method to tumor subtyping task

While attempting to utilize this code for a subtyping task involving 3-class classification, I encountered an issue indicating that the variables prior_mu and prior_logvar are only configured for 2-class classification tasks. Since the experimental results presented in the paper are primarily focused on CAMELYON datasets, I would like to inquire whether the authors have explored applying their method to subtyping tasks as well.

Is kl_div used for all instances when applying Slide-Dependent Patch Regularizer ?

Hello, authors. I think your work is a solid contribution to WSI analysis.

Here is my question encountered when reading the code.

For the network probabilistic_MIL_Bayes_enc, a KL loss, i.e., the Slide-Dependent Patch Regularizer as stated in the paper, is computed by the class method kl_logistic_normal and returned as kl_div (refer to code).

From my understanding, kl_div is a tensor with the same shape as mu or logvar, i.e., torch.Size([N]), right? But when this KL loss is used for the backward pass and network optimization, ONLY its first element, i.e., the attention-score distribution parameter of the first instance, is involved in the loss backward, as written in code.

If the above is right, I think the class method kl_logistic_normal should look like:

def kl_logistic_normal(self, mu_pr, mu_pos, logvar_pr, logvar_pos):
    # KL( N(mu_pos, exp(logvar_pos)) || N(mu_pr, exp(logvar_pr)) ),
    # with logvar interpreted as log-variance
    kl_loss = (logvar_pr - logvar_pos) / 2. \
        + (torch.exp(logvar_pos) + (mu_pr - mu_pos) ** 2) / (2. * torch.exp(logvar_pr)) \
        - 0.5
    return kl_loss.mean()

Look forward to your reply!
Thx!
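For reference (independent of the repository's code), the standard closed-form KL between two Gaussians parameterised by log-variance can be checked in pure Python: it should be 0 for identical distributions and 0.5 for a unit-variance pair whose means differ by 1.

```python
import math

def kl_normal(mu_pr, mu_pos, logvar_pr, logvar_pos):
    # KL( N(mu_pos, exp(logvar_pos)) || N(mu_pr, exp(logvar_pr)) )
    return ((logvar_pr - logvar_pos) / 2.
            + (math.exp(logvar_pos) + (mu_pr - mu_pos) ** 2)
            / (2. * math.exp(logvar_pr))
            - 0.5)

print(kl_normal(0., 0., 0., 0.))  # 0.0
print(kl_normal(1., 0., 0., 0.))  # 0.5
```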

prior distribution initialization and sdpr math question

Hi,

Thank you for the great paper and publishing the code.
While reading through the repository, I was investigating the slide dependent patch regularizer and have a few questions.

First, it seems that the mean and variance for the prior distributions are set as regular torch tensors. Should these not be parameters of the network? Looking at the paper, as well as the repository, I believe these should be weights instead of regular tensors.

Second, if these are not supposed to be parameters, but fixed tensors, could you explain why you chose these specific values, i.e. a mean of [-5, 0] and variance of [-1, 3]? I also noticed that the means and variances have different values for the subtyping case.

self.prior_mu = torch.tensor([-5., 0.])
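If the prior statistics were meant to be learned rather than fixed, the usual pattern would be to register them as nn.Parameter so the optimiser updates them; a hedged sketch of that variant (not the repository's code):

```python
import torch
import torch.nn as nn

class PriorHolder(nn.Module):
    """Minimal illustration: learnable variant of the fixed prior tensors."""
    def __init__(self):
        super().__init__()
        # wrapping in nn.Parameter makes these show up in .parameters()
        # and therefore receive gradient updates
        self.prior_mu = nn.Parameter(torch.tensor([-5., 0.]))
        self.prior_logvar = nn.Parameter(torch.tensor([-1., 3.]))

m = PriorHolder()
print(len(list(m.parameters())))  # 2
```

Plain torch.tensor attributes, by contrast, are invisible to the optimiser (and to .to(device) / state_dict unless registered as buffers), which may or may not be intentional here.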

Finally, I was a bit confused by the notation in the paper regarding the Bayesian framework of the sdpr. Specifically, you state that the prior network is defined as $P(a_{k} | Y)$ and variational posterior $q(a_{k}| \mu, \sigma)$.

This makes me wonder what we are originally trying to solve. When I try writing it out, I get:

$$P(a_{k} | Y) = \frac{P(Y|a_{k})P(a_{k})}{P(Y)}$$

However, I now have $P(a_{k} | Y)$ as the posterior instead of the prior. Could you explain from which posterior I should start so that $P(a_{k} | Y)$ acts as a prior, and how this matches the likelihood given in the paper?
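For reference (my reading of the setup, not necessarily the authors' intended derivation): in standard variational inference one lower-bounds the slide-level evidence, and the KL term between a variational posterior $q$ and a prior falls out of the ELBO:

$$\log P(Y) \ge \mathbb{E}_{q(a_{k}|\mu,\sigma)}\left[\log P(Y \mid a_{k})\right] - \mathrm{KL}\left(q(a_{k}|\mu,\sigma)\,\|\,P(a_{k})\right)$$

Under this reading, $P(a_{k} | Y)$ would be a conditional (slide-dependent) prior over attention scores rather than the Bayesian posterior, which may be the source of the notational confusion.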
