
boundary-loss's People

Contributors

akamojo, hkervadec

boundary-loss's Issues

Distances on GT boundaries

Thank you for your loss function; it exceeded all my expectations. I applied it to the problem of pulmonary nodule segmentation, implementing the loss function in the Keras framework.

You reset the distances for pixels that are on the edges of the GT:

    res[c] = distance(negmask) * negmask - (distance(posmask) - 1) * posmask

Thus, these pixels are not included in the loss function. Would it be better to take these pixels into account in the loss function?

    res[c] = distance(negmask) * negmask - distance(posmask) * posmask
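For a concrete comparison, here is a toy 1-D example (my own illustration, not from the repository) showing that the original formula zeroes out the GT boundary pixels while the proposed variant assigns them -1:

    import numpy as np
    from scipy.ndimage import distance_transform_edt as distance

    posmask = np.array([0, 0, 1, 1, 1, 0, 0], dtype=bool)
    negmask = ~posmask

    # Original: GT boundary pixels get 0 and do not contribute to the loss.
    original = distance(negmask) * negmask - (distance(posmask) - 1) * posmask
    # Proposed: GT boundary pixels get -1 and are rewarded like interior ones.
    proposed = distance(negmask) * negmask - distance(posmask) * posmask

    print(original)  # [ 2.  1.  0. -1.  0.  1.  2.]
    print(proposed)  # [ 2.  1. -1. -2. -1.  1.  2.]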

test problem

Earlier I used only the Dice loss: at epoch 87 the validation loss was 0.181 and the test Dice was 0.45; after training for more epochs, the validation loss was 0.160 but the test Dice was only 0.39.
Then I used your boundary loss. Because I use early stopping, it trained for 93 epochs; the validation loss was 0.166, but the test Dice was only 0.35.
Is the training set overfitting? I use 250 CT scans, augmented to 2124.
Or is the model too simple? I used a U-Net, but it did not train well: the training loss stopped decreasing after only the first epoch, and I don't know why.
Can you give me some advice? Thanks!

Assert error when constructing dist maps of ground truth

Thanks for your great work. I really enjoy it.

I am trying to modify your code to work on my project, which is a binary segmentation problem (C = 2). When I attempt to transform the loaded image of shape (W, H) into C distance maps of shape (C, W, H), I get an assertion error.

I followed your implementation below:

    dist_map_transform = transforms.Compose([
        lambda img: np.array(img)[np.newaxis, ...],
        lambda nd: torch.tensor(nd, dtype=torch.int64),
        partial(class2one_hot, C=n_class),
        itemgetter(0),
        lambda t: t.cpu().numpy(),
        one_hot2dist,
        lambda nd: torch.tensor(nd, dtype=torch.float32)
    ])

My code for transforming the loaded image into distance maps is as follows (mask is an (H, W) numpy array):

mask_tensor = torch.tensor(mask, dtype=torch.int64)
mask_onehot = class2one_hot(mask_tensor, 2)
mask_distmap = one_hot2dist(mask_onehot.cpu().numpy())

Corresponding error is:

assert one_hot(torch.Tensor(seg), axis=0)
AssertionError

I found this error happens in one_hot2dist. It seems something is wrong with mask_onehot, but I used your code without any modification. Why would this happen? Could you please give me some suggestions? Thanks a lot; I really appreciate it.
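For comparison, here is how I would follow the dist_map_transform pipeline step by step (adding a batch dimension first and dropping it before one_hot2dist, as the itemgetter(0) step does); I am not sure whether this is the intended usage:

    mask_tensor = torch.tensor(mask, dtype=torch.int64)[None, ...]  # add batch dim: (1, H, W)
    mask_onehot = class2one_hot(mask_tensor, 2)                     # (1, 2, H, W)
    mask_distmap = one_hot2dist(mask_onehot[0].cpu().numpy())       # drop batch dim: (2, H, W)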

Best

train unet problem

I use a U-Net and the loss is only the generalized Dice loss.
I found the loss first decreases but then quickly increases, as in the picture below.
image
I tried training it longer; eventually the loss increased to 1, and I don't know why.

How to apply to 'instance' segmentation in neuron segmentation?

Hi @HKervadec, thanks for sharing.
I want to apply your code to the EM dataset from ISBI 2012 (the website is here).
I find the main difference between the EM dataset and the WMH dataset is the density of the segments.
The code in your project calculates the boundary loss for each class:

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt as distance
# one_hot() is an assertion helper from the repository's utils

def one_hot2dist(seg: np.ndarray) -> np.ndarray:
    assert one_hot(torch.Tensor(seg), axis=0)
    C: int = len(seg)

    res = np.zeros_like(seg)
    for c in range(C):
        posmask = seg[c].astype(np.bool)

        if posmask.any():
            negmask = ~posmask
            res[c] = distance(negmask) * negmask - (distance(posmask) - 1) * posmask
    return res

I think distance(distance_transform_edt) will get 1 for all membrane (black region) in EM datasets, so is there any chance to rewrite this function to suit for cell segmentation?
That is C in your script is number of neurons instead of class.
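One direction I am considering (my own sketch, not from the repository): label the connected components and build one channel per instance before applying the same signed-distance construction:

    import numpy as np
    from scipy.ndimage import label
    from scipy.ndimage import distance_transform_edt as distance

    def instances2dist(binary_seg: np.ndarray) -> np.ndarray:
        """One signed distance map per connected component (instance)."""
        labelled, n_instances = label(binary_seg)
        res = np.zeros((n_instances, *binary_seg.shape), dtype=np.float64)
        for i in range(1, n_instances + 1):
            posmask = labelled == i
            negmask = ~posmask
            res[i - 1] = distance(negmask) * negmask - (distance(posmask) - 1) * posmask
        return res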

Question about the optional argument resolution in the dist_map_transform function

Hi! Many thanks for sharing your code!
I am quite confused about the resolution argument of the dist_map_transform function, and I am not sure whether I should specify it in my code. Currently I feed a 512x512 label image (.png) into dist_map_transform to compute the distance map; do I need to specify the resolution argument (and if yes, [1, 1] or [1, 1, 1])? Or should I just set it to None? If I change the resolution of the label image to 300x200, do I need to change the resolution argument, and by what rule?
Thanks!
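For context: judging from the one_hot2dist signature quoted in a later issue here, resolution is forwarded to scipy's distance_transform_edt as its sampling argument, i.e. the per-axis pixel spacing (two values for a 2D image). A small sketch of what sampling changes:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    mask = np.zeros((4, 4), dtype=bool)
    mask[0, 0] = True
    # sampling is the physical size of one step along each axis;
    # [1, 1] means isotropic unit pixels (equivalent to leaving it as None).
    print(distance_transform_edt(~mask, sampling=[1, 1])[0, 2])    # 2.0
    print(distance_transform_edt(~mask, sampling=[1, 0.5])[0, 2])  # 1.0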

Loss for 3D Segmentation

Hey, can you tell me what to alter in class2one_hot, one_hot2dist and SurfaceLoss to make them work for 3D segmentation tasks?
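For reference, the README (quoted in a later issue here) changes the einsum to einsum("bkxyz,bkxyz->bkxyz", pc, dc) for 3D, and scipy's distance_transform_edt inside one_hot2dist is already n-dimensional. A hedged sketch of a 3D surface loss, assuming (B, K, X, Y, Z) inputs:

    import torch
    from torch import Tensor, einsum

    class SurfaceLoss3D:
        """Sketch only: probs are softmax outputs and dist_maps are the
        precomputed signed distance maps, both of shape (B, K, X, Y, Z)."""
        def __init__(self, idc):
            self.idc = idc  # indices of the foreground classes

        def __call__(self, probs: Tensor, dist_maps: Tensor) -> Tensor:
            pc = probs[:, self.idc, ...].type(torch.float32)
            dc = dist_maps[:, self.idc, ...].type(torch.float32)
            return einsum("bkxyz,bkxyz->bkxyz", pc, dc).mean()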

Thanks

Surface loss in keras-tensorflow

Hi!
Could you help me with the implementation of the surface loss function in Keras and TensorFlow?
Thank you very much in advance!

My best wishes.

Does einsum really make the code easier to understand?

Hey,

I see that in the code for the loss you use einsum: multipled = einsum("bkwh,bkwh->bkwh", pc, dc).

Also in the README, you explain that for 3D we need to change it to multipled = einsum("bkxyz,bkxyz->bkxyz", pc, dc).

What is the point of using einsums if the underlying op is just pc * dc?
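(For what it's worth, a quick check confirms that this subscript pattern is just an element-wise product:)

    import torch
    from torch import einsum

    pc, dc = torch.rand(2, 3, 4, 5), torch.rand(2, 3, 4, 5)
    assert torch.allclose(einsum("bkwh,bkwh->bkwh", pc, dc), pc * dc)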

IoU and Dice score estimation in multiclass segmentation

Hello and thanks a lot for sharing your code!
I find this whole paper pretty awesome; however, I am facing an issue when trying to use the SurfaceLoss in my architecture, and after a few days of debugging I am completely desperate...

I implemented the losses the way it is done in issue #6 and they do run well. Your GDL gives me results similar to another implementation I have, so I'm pretty certain it works fine (and it gives me a good confusion matrix and good IoU and Dice score estimates). However, the moment I add the surface loss (e.g. total_loss = (alpha*region_loss) + ((1-alpha) * surface_loss)), my confusion matrix is all zeros (except for the background class, which I do not use); from there on, my IoU is 0 and the Dice score is 0 as well. I call the GDL on all 8 of my classes (0 to 7) and the surface loss on classes 1 to 7.

def SurfaceLoss(probs, dist_maps, idc):
    assert simplex(probs)
    assert not one_hot(dist_maps)

    pc = probs[:, idc, ...].type(torch.float32)
    dc = dist_maps[:, idc, ...].type(torch.float32)

    print("pc ", pc.shape)
    print("dc ", dc.shape)

    multipled = einsum("bcwh,bcwh->bcwh", pc, dc)

    loss = multipled.mean()
    return loss

def GeneralizedDice(probs, target, idc):
    assert simplex(probs) and simplex(target)

    pc = probs[:, idc, ...].type(torch.float32)
    tc = target[:, idc, ...].type(torch.float32)

    w = 1 / ((einsum("bcwh->bc", tc).type(torch.float32) + 1e-10) ** 2)
    intersection = w * einsum("bcwh,bcwh->bc", pc, tc)
    union = w * (einsum("bcwh->bc", pc) + einsum("bcwh->bc", tc))

    divided = 1 - 2 * (einsum("bc->b", intersection) + 1e-10) / (einsum("bc->b", union) + 1e-10)
    loss = divided.mean()
    return loss

class Combined(nn.Module):
    def __init__(self, **kwargs):
        super(Combined, self).__init__()
        # self.idc is used to filter out some classes of the target mask. Use fancy indexing
        self.idc = kwargs["idc"]

    def forward(self, probs: Tensor, target: Tensor, onehot_labels: Tensor, dist_maps: Tensor) -> Tensor:
        outputs_softmaxes = F.softmax(probs, dim=1)

        # with torch.no_grad():
        onehot_labels = cuda(onehot_labels)
        dist_maps = cuda(dist_maps)

        region_loss = GeneralizedDice(probs=outputs_softmaxes, target=onehot_labels, idc=[0, 1, 2, 3, 4, 5, 6, 7])
        surface_loss = SurfaceLoss(probs=outputs_softmaxes, dist_maps=dist_maps, idc=[1, 2, 3, 4, 5, 6, 7])

        alpha = 0.80
        total_loss = (alpha * region_loss) + ((1 - alpha) * surface_loss)
        return total_loss

My validation code (confusion matrix + IoU and Dice score estimation) is basically this code here: https://github.com/ternaus/robot-surgery-segmentation/blob/master/validation.py

Generalized Dice on classes 0 to 7:

confusion_one
[[ 1118977 361517 5204866 34883107 86448949 375518847 222144 40271]
[ 34672 10748 238810 9134 3131142 5310977 1017 3465]
[ 117454 6785 495962 6441 1472272 3218903 2485 2626]
[ 253567 133000 1007755 22668 12158524 21612781 6985 35023]
[ 0 0 0 0 0 0 0 0]
[ 45668 14682 268096 10774 3597826 10969731 888 6391]
[ 16966 16472 119121 9718 4741829 16902867 486 10611]
[ 0 0 0 0 0 0 0 0]]

confusion_two
[[ 10748 238810 9134 3131142 5310977 1017 3465]
[ 6785 495962 6441 1472272 3218903 2485 2626]
[ 133000 1007755 22668 12158524 21612781 6985 35023]
[ 0 0 0 0 0 0 0]
[ 14682 268096 10774 3597826 10969731 888 6391]
[ 16472 119121 9718 4741829 16902867 486 10611]
[ 0 0 0 0 0 0 0]]

iou {'iou_4': 0.0, 'iou_1': 0.0012108741637217233, 'iou_5': 0.17717714705689105, 'iou_2': 0.07251695213631425, 'iou_6': 2.228082374314263e-05, 'iou_7': 0.0, 'iou_3': 0.0006474203165053652}
dice {'dice_5': 0.3010203646916845, 'dice_6': 4.4560654638193385e-05, 'dice_7': 0.0, 'dice_1': 0.0024188194414750566, 'dice_4': 0.0, 'dice_3': 0.0012940028692635529, 'dice_2': 0.1352276101405575}
Valid loss: 1.0000, average IoU: 0.0359, average Dice: 0.0629

This is the confusion matrix with the combined GDL + surface losses; as you can see, it is all zeros:

confusion_one
[[503798678 0 0 0 0 0 0 0]
[ 8739965 0 0 0 0 0 0 0]
[ 5322928 0 0 0 0 0 0 0]
[ 35230303 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0]
[ 14914056 0 0 0 0 0 0 0]
[ 21818070 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0]]

confusion_two
[[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]]

"confusion_one" is the confusion matrix before trimming the background class and "confusion_two" is afterwords. The trimming operation is done by line 56 in the validation.py file above ( confusion_matrix = confusion_matrix[1:, 1:] # exclude background).

Please, oh please, if you have any ideas why this is happening, do tell me.
P.S. It also happens when I run SurfaceLoss on all classes (including class 0, the background), so the number of classes is unrelated (and, yes, I am aware that the surface loss should not be run on the background class).

Negative loss values

From my understanding, the loss values can be negative if the predicted boundary is smaller than or equal to, and enclosed by, the real boundary; but in terms of optimisation, would it be better to use the absolute value of the surface loss?
If this understanding is incorrect, the negatives could be due to implementation errors when I converted it to TF.

When combining it with other losses, does it matter if the surface loss or the overall loss is negative?

Heterogeneous resolution yields non-zero boundary.

Hi, when the resolution for the distance maps is heterogeneous, the formula in one_hot2dist yields non-zero values on the boundaries. For example:

import numpy as np
from scipy.ndimage import distance_transform_edt as eedt

array = np.zeros((5, 5))
array[1:4, 1:4] = 1
resolution = (1, 0.5)

mask = array.astype(np.bool)
eedt(~mask, resolution) * ~mask - (eedt(mask, resolution) - 1) * mask

Here the boundaries will be 0.5 instead of 0. Maybe eroding one of the masks could help generalize:

import numpy as np
from typing import Sequence
from scipy.ndimage import distance_transform_edt as eedt
from scipy.ndimage import binary_erosion

def signed_eedt(array: np.ndarray, resolution: Sequence[float]) -> np.ndarray:
    if not array.any():
        return np.zeros_like(array)

    mask = array.astype(np.bool)
    positive = eedt(~mask, resolution) * ~mask
    negative = eedt(binary_erosion(mask), resolution) * mask

    return positive - negative

Is multiplication by negmask in one_hot2dist() irrelevant?

First of all, thank you for your work and for making the code publicly available. I find it very interesting and I have a small question about the implementation of one_hot2dist():

def one_hot2dist(seg: np.ndarray, resolution: Tuple[float, float, float] = None,
                 dtype=None) -> np.ndarray:
    assert one_hot(torch.tensor(seg), axis=0)
    K: int = len(seg)

    res = np.zeros_like(seg, dtype=dtype)
    for k in range(K):
        posmask = seg[k].astype(np.bool)

        if posmask.any():
            negmask = ~posmask
            res[k] = eucl_distance(negmask, sampling=resolution) * negmask \
                - (eucl_distance(posmask, sampling=resolution) - 1) * posmask
        # The idea is to leave blank the negative classes
        # since this is one-hot encoded, another class will supervise that pixel

    return res

In the following, for the sake of simplicity I'll define euc_neg = eucl_distance(negmask, sampling=resolution) and euc_pos = eucl_distance(posmask, sampling=resolution).

The result of eucl_distance(negmask, sampling=resolution) is multiplied by negmask, but this seems unnecessary: negmask is zero exactly where euc_neg is zero, and one otherwise, so euc_neg is left unchanged. The multiplication does seem needed for the other term, in order to remove the -1 values on the pixels that belong to the k-th class.

Am I getting something wrong or could the multiplication by negmask be removed without consequence?
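(A quick numerical check of this claim, using the same construction:)

    import numpy as np
    from scipy.ndimage import distance_transform_edt as eucl_distance

    posmask = np.zeros((5, 5), dtype=bool)
    posmask[1:4, 1:4] = True
    negmask = ~posmask

    euc_neg = eucl_distance(negmask)
    # euc_neg is already zero wherever negmask is zero, so the mask is a no-op:
    assert np.array_equal(euc_neg * negmask, euc_neg)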

The multiplication by negmask admittedly isn't a very computationally intensive operation, so removing it wouldn't change much; I'm more concerned about making sure I understood the loss definition and implementation correctly.

Thanks again for the great work.

Dice loss works, Generalized Dice not?

Hello,
first of all, thank you very much for this great method and these scripts. It really works well for my unbalanced dataset (histology, with small tumor formations vs. huge areas of normal tissue).
For me, the normal Dice loss works, whereas the generalized Dice loss does not, with the same settings.
Is there any difference regarding the GT preparation?
Thank you very much in advance.
With kind regards

Is negative boundary loss a problem?

I notice the distance map inside the ground-truth contour is negative, and during training my boundary loss is negative (around -6). Is the boundary loss optimized towards zero or towards -inf by the torch Adam optimizer?

An issue in Keras implementation

Hello, how are you?
Thanks for contributing this project.
I am introducing the boundary loss into my Keras project for binary segmentation.
I used keras_loss.py without any modification, but the loss has negative values.

image

How should I understand this?
Thanks

change of sign in surface loss

I have used the surface loss mentioned here: #14 (comment).

I am using a dataset where the images have two different labels, so I trained two models, one per label. For the first label, the surface loss is positive in all cases, whereas for the second label, the surface loss is negative. How can I solve this problem?

How to apply the boundary loss to 3D images both efficiently and correctly?

Hi, thanks for sharing your code. I am trying to use the boundary loss for 3D (really high-resolution) image segmentation, but I have problems implementing the loss function both efficiently and correctly. For 3D image segmentation, a popular approach is to train the networks on image patches. Oftentimes, the training samples include image patches that belong entirely to the background, and for these samples a naive generalization of your implementation gives SDMs that are all 0s (using the Keras version of the loss function). To me, this does not make sense: even if these samples do not contain any foreground voxels, the SDM should not be all 0s in reality. I think it makes more sense to calculate the SDM on the entire image rather than on image patches. What do you think about this problem?

Also, I found it pretty time-consuming to calculate the SDM in 3D. How can the time efficiency be improved?

Thanks

Plug-in and play into own pytorch problems?

How easy is it to incorporate your boundary loss into our own work?

I have a true segmentation mask of Nx224x224 and predicted of Nx4x224x224.

Using the code from the utils and losses, would this work with my own dataset, or does anything need to be modified?

Surface loss is usually very small compared with Dice

Hey, thanks for sharing the code for your work! I'm working on my own simplified implementation of this for the case where only one class is present.

However, I'm getting stuck on one issue: given that the predictions are mostly zeros, wouldn't the surface loss always be very small, since it is defined as the mean of the element-wise product between the prediction and the GT distance map, and this product would mostly consist of 0s? So unless alpha is very, very small (or just 0), the overall loss is mostly dominated by the Dice loss, and the surface loss does very little to influence learning?

I hope this is clear, sorry for the formatting, I'm writing from my phone.

How to apply to text segmentation?

Thanks for your sharing.

I believe surface loss will also play a role in text detection models and would like to try to use it in text segmentation models.

I noticed that in probs2class you use probs.argmax(dim=1) to get the class, but I only have one feature map, where the part above a certain threshold is the text area. How do I use the surface loss? Can I einsum the foreground distance map and the logits map directly?

If I do this, I find that when the predicted text area is smaller than the real text area, the loss does not change, because the value of the text area in dist_map is 0. Will this cause the predictions to become too small?

WMH dataset

Hello, I want to reproduce your results on the WMH dataset, but I have been waiting for a response from the official email of the MICCAI 2017 challenge for two weeks. I wonder if you could share the download link?

Question on SurfaceLoss

Thanks for sharing a very good idea. I am interested in the surface loss function.

(1): In SurfaceLoss(), dist_maps is a Tensor, while dist_maps comes from def one_hot2dist(seg: np.ndarray) -> np.ndarray:, whose input and output are ndarrays. How do I convert between the formats? This question has confused me for several days. I want to rewrite SurfaceLoss in Keras; the following is my rewritten code:

import numpy as np
from scipy.ndimage import distance_transform_edt as distance

def one_hot2dist(seg: np.ndarray) -> np.ndarray:
    C: int = len(seg)
    res = np.zeros_like(seg)
    for c in range(C):
        posmask = seg[c].astype(np.bool)
        if posmask.any():
            negmask = ~posmask
            res[c] = distance(negmask) * negmask - (distance(posmask) - 1) * posmask
    return res

def SurfaceLoss(y_true,y_pred):            # y_true, y_pred: tensorflow tensor
    with tf.Session() as sess:
        y_true_Numpy = y_true.eval()
    dist_maps = one_hot2dist(y_true_Numpy)
    dist_maps_tensor = tf.convert_to_tensor(dist_maps)
    #assert simplex(y_pred)
    #assert not one_hot(dist_maps)
    loss =  K.sum(dist_maps_tensor * y_pred)      
    return loss

If I use generalized_dice_loss as the cost, my code is OK, but if I use SurfaceLoss, there is an error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'conv2d_19_target' with dtype float and shape [?,?,?,?]
	 [[node conv2d_19_target (defined at /home/mx/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517)  = Placeholder[dtype=DT_FLOAT, shape=[?,?,?,?], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
	 [[{{node conv2d_19_target/_103}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4_conv2d_19_target", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

---> I guess the problem is a dimension problem with the output. I tried to modify the code, but the problem persists. Could you help me?

(2): Your final loss is formula (6) in your MIDL paper, combining generalized_dice_loss and the surface loss, but I did not find final_loss in your main.py. How is it computed?

(3): In formula (6), alpha is dynamic and related to the epoch. How is alpha generated? I didn't find the definition of alpha in your project. (A sketch follows after question (4).)

(4): For two-class segmentation, the activation function of the final layer can be softmax or sigmoid. What is the difference between them? When I used generalized_dice_loss as the cost in my code, sigmoid performed better than softmax, and I have no idea why. Looking forward to your reply.
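Regarding question (3): my reading of the paper is that alpha starts at 1 (all weight on the regional loss) and shifts by 0.01 per epoch towards the boundary loss; a sketch of that schedule (my paraphrase, please correct me if wrong):

    def alpha_schedule(epoch: int) -> float:
        # Start with all weight on the regional (GDL) term and move 0.01 per
        # epoch towards the boundary term, keeping at least 0.01 on GDL.
        return max(1.0 - 0.01 * epoch, 0.01)

    # total_loss = alpha * generalized_dice_loss + (1 - alpha) * surface_loss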

error in SurfaceLoss?

Hi there,
the signature of the call method for the SurfaceLoss looks like this:

https://github.com/LIVIAETS/surface-loss/blob/108bd9892adca476e6cdf424124bc6268707498e/losses.py#L80

meaning that the loss expects the arguments to be in the following order: softmax_probabilities, distance_maps, anything
However, all losses in main.py are called with

https://github.com/LIVIAETS/surface-loss/blob/108bd9892adca476e6cdf424124bc6268707498e/main.py#L101

where the order is softmax_probabilities, labels, distance_maps.
The latter is consistent with the other losses in losses.py.
Could it be that there is a mistake?
Thank you very much for sharing your code :-)
Best,
Fabian

Surface loss Keras

Hi!

I have a question about the surface loss in Keras.

When I do model.compile(optimizer=opt, loss=...), should I pass only the surface loss in the loss parameter, or the surface loss plus some other loss function?

Thank you very much for your attention!

Boundary loss did not work

Hi, I performed some experiments with boundary loss on my semantic segmentation task:

  • exp1. With the setting 1 * celoss + 1 * boundaryloss and training from scratch, the boundary loss decreased quickly from 7 to -6 within a few thousand iterations, while the CE loss decreased very slowly and converged at a high value. The final IoU metric dropped 3~4 points compared with using the CE loss alone.
  • exp2. With the setting 1 * celoss + 0.1 * boundaryloss and training from scratch, the boundary loss again decreased quickly, from 0.7 to -0.5 within a few thousand iterations, and the CE loss decreased more quickly to the expected lower convergence point. The final IoU metric did not drop.
  • exp3. With the setting 1 * celoss + 0.5 * boundaryloss and finetuning from a well-trained model, the boundary loss was around -3 from the very beginning and did not decrease as training proceeded, which indicates that the boundary loss for a model well trained by the CE loss may already be saturated, and may further imply that the boundary loss will not work in my task.

Some information on my task:

  • image semantic segmentation for driving scene.
  • 10+ classes.
  • severe class imbalance, with 99% vs. 1% of pixels for the most and least frequent classes.
  • 3000 images with 720 * 1080 resolution.

Here are my questions:

  1. Is the indication/implication in exp3 reasonable?
  2. Could you provide some suggestions which may be useful for making the boundaryloss work?

Plus, the following is a visualization of my implementation of the distance map:

following is gt_mask
xx2
following is distance map
dist2

how to avoid empty foreground (null values of the softmax probabilities almost everywhere) when using SurfaceLoss alone

hi, LIVIAETS,
thank you so much for sharing your code.
I thoroughly read your paper and tried to run your code with only the surface loss, but I met the same problem of obtaining an empty foreground. Of course, the relatively low gradient is one cause. I am wondering whether you have ideas on avoiding this other than combining the surface loss with another loss. Besides, I noticed that you consider the probabilities of all pixels in the predicted probability map, which is a little different from the mathematical analysis, where only G\S and S\G are considered. Probably this problem could be solved if we only considered the probabilities corresponding to G\S and S\G.

How to use sigmoid as the activation function in a binary classification segmentation task

Hi, thanks for your great work.
I'm confused about how to use it in a binary classification segmentation task with only foreground and background.
According to the readme, the dist_label should have shape [batch_size, 2, h, w], but when I use sigmoid as the activation function, the logits have shape [batch_size, 1, h, w], so the channel dimensions do not match. Is there something I got wrong?
Hoping for your reply, thanks.
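One workaround I am considering (my own sketch, not from the repository): expand the sigmoid output into two pseudo-softmax channels so it matches the two-channel distance maps:

    import torch

    logits = torch.randn(4, 1, 256, 256)  # hypothetical network output, (B, 1, H, W)
    probs_fg = torch.sigmoid(logits)
    # stack background and foreground so the channels sum to 1 per pixel: (B, 2, H, W)
    probs = torch.cat([1 - probs_fg, probs_fg], dim=1)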

Question on make file

Hi @HKervadec ,
Thanks for the great work.

Would it be possible for you to write a make file for this public colon CT dataset?

This task is also highly imbalanced, but the dataset is much easier to access: anyone can download it directly. Although the ground truth of the test set is not available, we can do cross-validation with the 126 training cases. If you do not have time to train the model, I will be happy to do that.

I'm not familiar with make. If writing the makefile is time-consuming, please let me know.

ISLES 2018

Hello!

I downloaded the training data set from the ISLES2018 official website and found that it included:

  • CT,
  • CT_4DPWI,
  • CT_CBF,
  • CT_CBV,
  • CT_MTT,
  • CT_Tmax, and
  • OT.

There is no corresponding DWI data, but I have seen real DWI data paired with CT in your results. May I ask where to obtain the DWI data? Is it possible to share the dataset?

Thank you!

Visualizing distance map from surface loss

The first image is the binary image and the second is the distance map produced when I run your distance-map code used to calculate the surface loss. Is this how you intended it to look, or am I making some mistake?

1) image
2) image

Below is the code I used from your repository:

 import numpy as np
 from scipy.ndimage import distance_transform_edt as distance

 # a_new: the one-hot ground truth of shape (C, H, W)
 C = len(a_new)
 res = np.zeros_like(a_new)
 for c in range(C):
     posmask = a_new[c].astype(np.bool)

     if posmask.any():
         negmask = ~posmask
         res[c] = distance(negmask) * negmask - (distance(posmask) - 1) * posmask

Help: is this loss function the same as yours? Thanks very much.

I mean, is it the same idea but not the same implementation?
Please refer to: https://github.com/yiskw713/boundary_loss_for_remote_sensing.

import torch
import torch.nn as nn
import torch.nn.functional as F
# one_hot here is a helper that converts (N, H, W) labels into (N, C, H, W) one-hot

class BDLoss(nn.Module):
    """Boundary Loss proposed in:
    Alexey Bokhovkin et al., Boundary Loss for Remote Sensing Imagery Semantic Segmentation
    https://arxiv.org/abs/1905.07852
    """

    def __init__(self, theta0=3, theta=5):
        super().__init__()

        self.theta0 = theta0
        self.theta = theta

    def forward(self, pred, gt):
        """
        Input:
            - pred: the output from model (before softmax)
                    shape (N, C, H, W)
            - gt: ground truth map
                    shape (N, H, W)
        Return:
            - boundary loss, averaged over mini-batch
        """

        n, c, _, _ = pred.shape

        # softmax so that predicted map can be distributed in [0, 1]
        pred = torch.softmax(pred, dim=1)

        # one-hot vector of ground truth
        one_hot_gt = one_hot(gt, c)

        # boundary map
        gt_b = F.max_pool2d(1 - one_hot_gt, kernel_size=self.theta0, stride=1, padding=(self.theta0 - 1) // 2)
        gt_b -= 1 - one_hot_gt

        pred_b = F.max_pool2d(1 - pred, kernel_size=self.theta0, stride=1, padding=(self.theta0 - 1) // 2)
        pred_b -= 1 - pred

        # extended boundary map
        gt_b_ext = F.max_pool2d(gt_b, kernel_size=self.theta, stride=1, padding=(self.theta - 1) // 2)

        pred_b_ext = F.max_pool2d(pred_b, kernel_size=self.theta, stride=1, padding=(self.theta - 1) // 2)

        # reshape
        gt_b = gt_b.view(n, c, -1)
        pred_b = pred_b.view(n, c, -1)
        gt_b_ext = gt_b_ext.view(n, c, -1)
        pred_b_ext = pred_b_ext.view(n, c, -1)

        # Precision, Recall
        P = torch.sum(pred_b * gt_b_ext, dim=2) / (torch.sum(pred_b, dim=2) + 1e-7)
        R = torch.sum(pred_b_ext * gt_b, dim=2) / (torch.sum(gt_b, dim=2) + 1e-7)

        # Boundary F1 Score
        BF1 = 2 * P * R / (P + R + 1e-7)

        # summing BF1 Score for each class and average over mini-batch
        loss = torch.mean(1 - BF1)

        return loss

Create dist_map for image segmentation mask as label.

Hello,

I put the relevant code below to show how I preprocess my data.
While loading the data, I have a preprocessing step where I inserted your code following the given usage example;
it works as follows:

    def preprocess(self, sample):
        #my code  - usual preprocess

        if self.transform is not None:
            sample = self.transform(sample)
        image, mask = sample

        #  loss code: at this point, mask is of shape(3,H,W) and values are 0 or 1
        #  each of the channels represent the pixel class(r,g,b)

        disttransform = dist_map_transform([1,1,1], 3) # k=3 for 3 classes
        
        dist_map_tensor = disttransform(mask)

        return image, mask ,dist_map_tensor

It is not working well and I have many problems, but just as a beginning: is this the right configuration for my case?
Example problems:

  1. in gt_transform, class2one_hot returns 5 dims: res.shape == torch.Size([1, 3, 3, 320, 320])
  2. after all that, when computing the loss with the dist map, the assertion from the simplex function does not hold:
    torch.allclose(_sum, _ones)

I need some help with the configuration.
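For reference, one cause I suspect (my own guess): dist_map_transform expects a 2D map of class indices rather than a (3, H, W) binary mask, so converting first might look like this:

    import torch

    # mask: (3, H, W) tensor with 0/1 entries, one channel per class
    class_index_map = mask.argmax(dim=0)           # (H, W), values in {0, 1, 2}
    disttransform = dist_map_transform([1, 1], 3)  # 2D resolution, 3 classes
    dist_map_tensor = disttransform(class_index_map)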

Thanks!

Can this loss be used for multi-label classification?

Hello, thanks for sharing your code. I want to know whether this boundary loss can be used for multi-label classification. I want to segment the lung lobes and divide them into 5 parts. In the segmentation result, the segmentation boundary is a little different from the label boundary, and I want the segmentation boundary to be as close to the label boundary as possible. Could the boundary loss you proposed improve my multi-label classification?

Can the surface loss be understood in the following inefficient but simple way?

Thanks for sharing the code.
I really enjoy your nice idea and reading your paper,
but I'm confused about the implementation of the surface loss.
In section 2 of the paper, it says that the "boundary loss in Eq. (5) is the sum of linear functions of the regional softmax probability outputs of the network".

Without regard to the efficiency of the code, can the surface loss be understood in the following way?

# inputs (2D/3D):
    # ground_truth
    # cnn_prediction: softmax output
# outputs: surface_loss

# precompute the level set function
gt_onehot = class2one_hot(ground_truth)
gt_dist = one_hot2dist(gt_onehot)

def surface_loss(cnn_prediction, gt_dist):
    cnn_pred_onehot = class2one_hot(cnn_prediction)
    loss = np.mean(cnn_pred_onehot * gt_dist)
    return loss

Looking forward to your reply.
Best regards,
Edward

Could you write the code more clearly?

Thanks for sharing a very good idea. I am looking at the surface loss function
https://github.com/LIVIAETS/surface-loss/blob/108bd9892adca476e6cdf424124bc6268707498e/losses.py#L74

I have some questions:

  1. How do you generate dist_maps from the binary image? I did not find it in your project. The paper mentions distance_transform_edt; do you need to normalize the level set to 0-1? Otherwise the distance map values may be 0 to 10 or 100, which is too high.

  2. If I do 3-class segmentation, so C = 3, then I must compute the distance map of each class (background (class 0), class 1, and class 2) and store it as BxCxHxW. Is that right?

  3. This coding style is not typical PyTorch. For example, multipled = einsum("bcwh,bcwh->bcwh", pc, dc) can be simplified to multipled = pc * dc, and intersection: Tensor = w * einsum("bcwh,bcwh->bc", pc, tc) to intersection = w * (pc * tc).sum(dim=(2, 3))

  4. Why do we need to one-hot the distance map? I think the distance map has size BxCxHxW and the prediction also has size BxCxHxW, so they can be multiplied directly.

https://github.com/LIVIAETS/surface-loss/blob/108bd9892adca476e6cdf424124bc6268707498e/losses.py#L82
Thanks so much

About the calculation of dist_map

According to your suggestion, I read the GT image, but the pixel values of the loaded image are in [0, 255], while generating the dist_map expects pixel values in [0, 1]. How do I handle the calculation?

self.dist = dist_map_transform([1, 1], 2)
image = Image.open(self.images[index]).convert('RGB')
# image = np.array(image)
gt = Image.open(self.gts[index]).convert('L')

boundary = self.dist(gt)

The GT image is 24-bit color.
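A possible fix (my own sketch, adapted to the snippet above): binarize the label to {0, 1} before the transform:

    import numpy as np
    from PIL import Image

    gt = Image.open(self.gts[index]).convert('L')
    gt_binary = (np.array(gt) > 127).astype(np.uint8)  # map {0, 255} -> {0, 1}
    boundary = self.dist(gt_binary)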

InvalidArgumentError: required broadcastable shapes at loc(unknown) [Op:Mul]

The code mostly works fine, but when I use tf.distribute.MirroredStrategy() for multiple GPUs, sometimes the shape of y_true is (0, 400, 400, 2), and calc_dist_map_batch then returns an array with shape (0,), which causes an error at multipled = y_pred * y_true_dist_map:

InvalidArgumentError: required broadcastable shapes at loc(unknown) [Op:Mul]

from keras import backend as K
import numpy as np
import tensorflow as tf
from scipy.ndimage import distance_transform_edt as distance


def calc_dist_map(seg):
    res = np.zeros_like(seg)
    posmask = seg.astype(np.bool)

    if posmask.any():
        negmask = ~posmask
        res = distance(negmask) * negmask - (distance(posmask) - 1) * posmask

    return res


def calc_dist_map_batch(y_true):
    y_true_numpy = y_true.numpy()
    return np.array([calc_dist_map(y)
                     for y in y_true_numpy]).astype(np.float32)


def surface_loss_keras(y_true, y_pred):
    y_true_dist_map = tf.py_function(func=calc_dist_map_batch,
                                     inp=[y_true],
                                     Tout=tf.float32)
    multipled = y_pred * y_true_dist_map
    return K.mean(multipled)



shape = (0,400,400,2)
y_pred = tf.ones(shape)
y_true = tf.ones(shape)

surface_loss_keras(y_true, y_pred)

To fix this, I need to reshape the numpy array:

...
    return np.array([calc_dist_map(y)
                     for y in y_true_numpy]).reshape(y_true.shape).astype(np.float32)
...

The corrected code is:

from keras import backend as K
import numpy as np
import tensorflow as tf
from scipy.ndimage import distance_transform_edt as distance


def calc_dist_map(seg):
    res = np.zeros_like(seg)
    posmask = seg.astype(np.bool)

    if posmask.any():
        negmask = ~posmask
        res = distance(negmask) * negmask - (distance(posmask) - 1) * posmask

    return res


def calc_dist_map_batch(y_true):
    y_true_numpy = y_true.numpy()
    return np.array([calc_dist_map(y)
                     for y in y_true_numpy]).reshape(y_true.shape).astype(np.float32)


def surface_loss_keras(y_true, y_pred):
    y_true_dist_map = tf.py_function(func=calc_dist_map_batch,
                                     inp=[y_true],
                                     Tout=tf.float32)
    multipled = y_pred * y_true_dist_map
    return K.mean(multipled)



shape = (0,400,400,2)
y_pred = tf.ones(shape)
y_true = tf.ones(shape)

surface_loss_keras(y_true, y_pred)

TensorFlow version: 2.4.1

Negative loss and extreme cases.

Hi, thanks for sharing your repository.

I went over the paper and I'm stuck at line 208 of utils.py

res[c] = distance(negmask) * negmask - (distance(posmask) - 1) * posmask

I noted that if the ground truth is all 1s, it results in an irreversible negative loss (the edt function returns distances from the matrix edges in the all-1s case). It might be a good idea to hardwire a loss of 0 in such cases.

Do you think it would be wise to weight by the maximum distance possible for a particular image, that is, sqrt((h-1)**2 + (w-1)**2)? This ensures that the loss function is always capped at 1.

Finally, does the surface loss function provide a safeguard against gradient ascent (i.e. negative loss)? I noticed that it tends to go negative very quickly in the early epochs.

How to properly perform data augmentation on cached distance maps

Hi Hoel,

Thanks very much for posting your work on the boundary loss! I really appreciate what you've done. However, I have a question with regard to data augmentation. I am currently working on 3D organ segmentation, so computing the distance transforms on the fly in the data loader, as in the 2D case, would wreak havoc on I/O. For 3D, the best way to compute this loss function is to pre-compute all of the signed distance maps for each volume and cache them so that we can load them quickly inside the data loader. I do see a related issue open here that talks about this in some detail: #29. As mentioned by the author of the related issue, when it comes to data augmentation I don't see any choice but to create the signed distance maps from the augmented volumes on the fly to calculate them accurately. However, in the related issue you mentioned that you can perform the augmentation on the cached signed distance maps, but I do not see how this can be applied properly, as the values in the signed distance maps are not actually intensities. In other words, caching the signed distance maps and then performing augmentation on them does not operate in the same way as augmenting the input volumes, which directly deal with intensities.

To be more specific, suppose we have loaded in a batch of patches corresponding to original input volume, ground truth masks and distance maps. If we do augmentation on these input volume patches and if they are subject to some affine transformation, the distance maps will naturally be affected as the affine transformation on the input volume will affect the placement of the voxels which will now affect the distances of these voxels to the ground truth surfaces from the masks. The distance maps of course are not intensities or colours, so simply applying the same augmentation steps to the distance maps would treat them as actual intensities or colours so the final values after you run through the augmentation would not actually be the distances to the ground truth surfaces.

As I currently understand it, the signed distance maps are computed on the fly from the resulting augmented volumes; the augmentation of the input volumes is done here: https://github.com/LIVIAETS/boundary-loss/blob/master/dataloader.py#L320. Noting the related issue, how would you properly perform augmentation on the distance maps so that the values correctly capture the distances to the ground-truth surfaces?

Thanks!

Generalized Dice loss never decreases

Hello, my project uses your losses; I use the generalized Dice loss in my segmentation. Some context: my dataset has been normalized, the network has batch-norm layers, the last layer of the network is a conv, the input of the GDL is the output of a softmax, and the lr is 5e-5. Neither the Dice loss nor the generalized Dice loss will decrease; if I replace the loss function with cross-entropy, it goes down. I looked for some possible causes, but nothing worked. You have rich experience; do you have any suggestions? Thank you for your open-source work.

Is it necessary to calculate the background-class loss for multi-class segmentation?

The content of your work is great and gave me a lot of ideas, but there are some questions I haven't figured out. For multi-class segmentation, such as the ACDC dataset, do your Dice loss and boundary loss compute the losses for all four classes 0, 1, 2, 3? Does excluding the loss of class 0 (the background) give a better result? What is the reasoning behind your choice?
