
Comments (24)

ma1112 avatar ma1112 commented on May 27, 2024 8

@vijayanand-Git

I encountered the same problem that the loss is stuck in the margin value.
Then I tried to tune parameters including learning rate, batch size and even data normalization, finally the loss converged.
Also, I modified the "batch_hard_triplet_loss function" as follows:
[image: modified batch_hard_triplet_loss function]

you can have a try....

As @vijayanand-Git pointed out, the loss function introduced in this repository cannot be applied as-is in a Keras environment. A small enhancement is needed: in the answer above, that is the added line labels = tf.squeeze(y_true, axis=-1).

In Keras, the default shape for y_true is (batch_size, 1), while omoindrot's code is intended to be used with labels of shape (batch_size,). The difference may seem minimal, but TensorFlow (and NumPy) functions behave very differently on objects of these two shapes. So one should flatten the y_true tensor before applying the triplet loss function defined here to it.

To elaborate a bit more on the expected shapes of the y_pred and y_true tensors in Keras, and on how a loss function like this can work in Keras: the purpose of the loss function is to produce a number (the loss) that can later be backpropagated through the network. To my understanding, the y_pred tensor does not have to have the same shape as y_true, as long as the loss function can compute the loss from these two tensors, whatever their shapes. It is true that many conventional loss functions expect the two shapes to match, but I see no reason one could not define a loss function that expects the two tensors to have different shapes.
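To make the idea concrete, here is a compact sketch of how the squeeze fits in. This is an illustrative re-implementation of batch-hard triplet loss, not the repo's exact code; in practice you would wrap the repo's own batch_hard_triplet_loss the same way:

```python
import tensorflow as tf

def _pairwise_squared_distances(embeddings):
    """Squared L2 distance between every pair of rows, shape (B, B)."""
    dot = tf.matmul(embeddings, embeddings, transpose_b=True)
    sq_norms = tf.linalg.diag_part(dot)
    dists = tf.expand_dims(sq_norms, 0) - 2.0 * dot + tf.expand_dims(sq_norms, 1)
    return tf.maximum(dists, 0.0)  # clamp tiny negatives caused by rounding

def batch_hard_triplet_loss(labels, embeddings, margin=0.5):
    """Batch-hard triplet loss; labels has shape (B,), embeddings (B, D)."""
    labels = tf.convert_to_tensor(labels)
    dists = _pairwise_squared_distances(embeddings)
    same = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
    same_f = tf.cast(same, tf.float32)
    not_self = 1.0 - tf.eye(tf.shape(labels)[0])
    # Hardest positive: largest distance among same-label pairs (excluding self).
    hardest_pos = tf.reduce_max(dists * same_f * not_self, axis=1)
    # Hardest negative: smallest distance among different-label pairs; push
    # same-label entries out of the running by adding each row's max distance.
    max_per_row = tf.reduce_max(dists, axis=1, keepdims=True)
    hardest_neg = tf.reduce_min(dists + max_per_row * same_f, axis=1)
    return tf.reduce_mean(tf.maximum(hardest_pos - hardest_neg + margin, 0.0))

def keras_triplet_loss(y_true, y_pred):
    """Keras-compatible wrapper: flatten the (batch_size, 1) labels first."""
    labels = tf.squeeze(y_true, axis=-1)
    return batch_hard_triplet_loss(labels, y_pred)
```

With a wrapper like this you can pass keras_triplet_loss directly to model.compile(loss=...), while keeping y_train as plain integer class ids.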

For those who are still looking for a working example in Keras, I created a notebook that shows how omoindrot 's triplet loss function can be used with Keras, check it out here: https://github.com/ma1112/keras-triplet-loss

from tensorflow-triplet-loss.

ChristieLin avatar ChristieLin commented on May 27, 2024 4

@vijayanand-Git

I encountered the same problem that the loss is stuck in the margin value.
Then I tried to tune parameters including learning rate, batch size and even data normalization, finally the loss converged.
Also, I modified the "batch_hard_triplet_loss function" as follows:
[image: modified batch_hard_triplet_loss function]

you can have a try....


Cong222 avatar Cong222 commented on May 27, 2024 2

I can't find the way.
I gave up.


omoindrot avatar omoindrot commented on May 27, 2024

This usually means that all the embeddings have collapsed on a single point.

One solution that might work is to lower your learning rate so that this collapse doesn't happen.
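A quick way to check whether this has happened (an illustrative diagnostic, not part of the repo) is to look at the mean pairwise distance of a batch of embeddings:

```python
import tensorflow as tf

def embedding_spread(embeddings):
    """Mean pairwise L2 distance in a batch of embeddings, shape (B, D).

    A value near zero means the embeddings have collapsed onto
    (almost) a single point."""
    diffs = tf.expand_dims(embeddings, 0) - tf.expand_dims(embeddings, 1)
    dists = tf.norm(diffs, axis=-1)
    n = tf.cast(tf.shape(embeddings)[0], tf.float32)
    return tf.reduce_sum(dists) / (n * (n - 1.0))  # diagonal is zero
```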


Cong222 avatar Cong222 commented on May 27, 2024

Thanks, but I lowered my learning rate to 1e-6 and the problem persists. Maybe my learning rate needs to be even lower? My dataset is CIFAR-10, and the net is AlexNet.


Cong222 avatar Cong222 commented on May 27, 2024

I realize the problem now.
I use your loss function in Keras, but a Keras loss function is expected to return a tensor of shape [batch_size, 1], whereas your function returns a scalar tensor.
That is where the problem comes from.
Do you have any suggestions for handling this?


omoindrot avatar omoindrot commented on May 27, 2024

You could just duplicate the loss so that it has the right shape:

loss = ...  # scalar
loss = tf.ones([batch_size, 1]) * loss


Cong222 avatar Cong222 commented on May 27, 2024

No, I have tested that.
It does not fix the problem.
But thanks.


qingchenwuhou avatar qingchenwuhou commented on May 27, 2024

@Cong222 Hi, how did you solve the problem that the triplet loss is a scalar tensor, which is inconsistent with the loss shape Keras expects?


virgile-blg avatar virgile-blg commented on May 27, 2024

Hello, I think the problem mainly comes from the fact that in Keras, any custom loss should be designed this way:

"The function should take the following two arguments:
y_true: True labels. TensorFlow/Theano tensor.
y_pred: Predictions. TensorFlow/Theano tensor of the same shape as y_true."

This is in practice impossible for any embedding learning task, but maybe there could be a workaround for it...


vijayanand-Git avatar vijayanand-Git commented on May 27, 2024

@Cong222 did you find a way to use the triplet loss in Keras? I have the same issue with the loss value.


swpucl avatar swpucl commented on May 27, 2024

@Cong222 I met the same problem, but after setting a lower learning rate the loss converged.
I'd also like to ask: how do you calculate mAP on CIFAR-10?


vijayanand-Git avatar vijayanand-Git commented on May 27, 2024

@vijayanand-Git

I encountered the same problem that the loss is stuck in the margin value.
Then I tried to tune parameters including learning rate, batch size and even data normalization, finally the loss converged.
Also, I modified the "batch_hard_triplet_loss function" as follows:
[image: modified batch_hard_triplet_loss function]

you can have a try....

Thank you @ChristieLin. Changing the learning rate worked for me.


xiaomingdaren123 avatar xiaomingdaren123 commented on May 27, 2024

Epoch 1/60
97/97 [==============================] - 24s - loss: 1.0072 - mAP: 0.1649 - val_loss: 0.9624 - val_mAP: 0.1296
Epoch 2/60
97/97 [==============================] - 22s - loss: 1.0060 - mAP: 0.1959 - val_loss: 0.9647 - val_mAP: 0.0784
Epoch 3/60
97/97 [==============================] - 21s - loss: 1.0051 - mAP: 0.2268 - val_loss: 0.9851 - val_mAP: 0.1536
Epoch 4/60
97/97 [==============================] - 21s - loss: 1.0051 - mAP: 0.1650 - val_loss: 0.9519 - val_mAP: 0.1808
Epoch 5/60
97/97 [==============================] - 21s - loss: 1.0034 - mAP: 0.2474 - val_loss: 0.9696 - val_mAP: 0.3072
Epoch 6/60
97/97 [==============================] - 21s - loss: 1.0025 - mAP: 0.2577 - val_loss: 0.9895 - val_mAP: 0.3584
Epoch 7/60
97/97 [==============================] - 21s - loss: 1.0044 - mAP: 0.2990 - val_loss: 0.9717 - val_mAP: 0.5392
Epoch 8/60
97/97 [==============================] - 21s - loss: 1.0007 - mAP: 0.2784 - val_loss: 0.9902 - val_mAP: 0.4096

Hi, I found something wrong with the loss value.
The loss value barely changes during model training.
And the loss value changes when I change the margin value: the loss is approximately equal to the margin.

I also hit the same issue: the loss value is approximately equal to the margin. I found that the distances are close to 0, and I don't know what causes this. Does the output of the network need to be L2-normalized, and what is the role of L2 normalization?


Cong222 avatar Cong222 commented on May 27, 2024

#18 (comment)
Hey, just Google how to calculate mAP; you can find it.
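For anyone landing here, one common formulation of retrieval mAP over embeddings looks roughly like this (a sketch; exact mAP definitions vary between papers):

```python
import numpy as np

def retrieval_map(embeddings, labels):
    """Mean average precision: each item queries all others, ranked by L2 distance."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    aps = []
    for i in range(len(labels)):
        dists = np.linalg.norm(embeddings - embeddings[i], axis=1)
        order = np.argsort(dists)
        order = order[order != i]                 # drop the query itself
        rel = (labels[order] == labels[i]).astype(float)
        if rel.sum() == 0:
            continue                              # no other item shares this label
        prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))
```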


parthnatekar avatar parthnatekar commented on May 27, 2024

@Cong222 @ChristieLin Can you elaborate on how you used this loss function with Keras, given the incompatible y_true and y_pred shapes?


TuanAnhNguyen14111998 avatar TuanAnhNguyen14111998 commented on May 27, 2024

Hello, I have a similar problem. I use transfer learning on VGGFace with Keras, combined with triplet loss, and val_loss always stays at 0.500. Because there is too much training data, I store it in an ".h5" file and read one batch at a time during training, via a data generator that returns batch_x and batch_y, using model.fit_generator. The problem is that val_loss never drops below 0.500. My learning rate is 0.001.
I followed @omoindrot's and @ChristieLin's instructions, but it still doesn't work in my case. Do you have any ideas for solving this?
Should I change the learning rate, and if so, how should I change it? Thank you!


aknakshay avatar aknakshay commented on May 27, 2024

I am facing a similar problem with my model: training loss is stuck at the margin even with a very low learning rate. Is there any solution yet?


shanmukh05 avatar shanmukh05 commented on May 27, 2024

Adding labels = tf.squeeze(y_true, axis=-1) worked for me; thanks @ma1112 for the detailed explanation.


JJKK1313 avatar JJKK1313 commented on May 27, 2024

But there are no labels in triplet loss; there are only the embeddings and the margin.
Which value did you choose for y_true, then?


ma1112 avatar ma1112 commented on May 27, 2024

But there are no labels in triplet loss; there are only the embeddings and the margin.
Which value did you choose for y_true, then?

When using triplet loss, labels help the algorithm determine which pairs are positive and which pairs are negative, by inspecting whether the labels for two training examples are the same or not.

Two training examples with the same label are considered a positive pair and will have their embeddings close together in the embedding space.
Two training examples with different labels are considered a negative pair and will have their embeddings far away.

So the only important concept around labels is that they should be the same for every example from a given class and they should be different for examples from different classes. Keeping that in mind you can use any numeric value as a label.

Particularly, if your dataset has N different classes, you can use label 1 for examples belonging to the first class, 2 for examples belonging to the second class, ..., N for examples belonging to the N-th class.
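In code, the only thing the mining logic derives from labels is a pair of equality masks, roughly like this (illustrative values):

```python
import tensorflow as tf

labels = tf.constant([7, 7, 3, 3])  # any numeric ids work; only equality matters
same = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
not_self = tf.logical_not(tf.eye(4, dtype=tf.bool))
positive_pairs = tf.logical_and(same, not_self)  # same label, different example
negative_pairs = tf.logical_not(same)            # different labels
```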


JJKK1313 avatar JJKK1313 commented on May 27, 2024

@ma1112 thanks for the explanation, but if I understand you correctly, your samples are combinations of pairs, not triplets?
My samples are built from 3 images each: (anchor, positive, negative), all three in one sample. Is that incorrect, or less preferred for some reason? I'm asking because I'm trying to improve my failing model.


ma1112 avatar ma1112 commented on May 27, 2024

@JJKK1313 Sorry for the confusing answer, let me elaborate further.

If you wish to use the triplet loss implementation found in this repo, your samples should be individual samples, just as if you trained a network without triplet loss. E.g. in the case of the MNIST dataset, which contains 60,000 grayscale images of handwritten digits, each of size 28x28, you can use the dataset as-is to train a network with the triplet loss algorithm, so your input tensor has a size of 60000x28x28x1. (Note that you should keep labels as integers from 0 to 9 when working with triplet loss, whereas if you were to use softmax activation + cross-entropy loss, you'd one-hot encode the labels.)

That is because the triplet loss implementation found in this repo implements online triplet mining, and picks the best triplets from a batch of images during the time the model is being trained. As triplets are created on-the-fly, the algorithm needs to know whether for a given anchor another sample is negative or positive. Hence you need to have labels for online triplet mining.

And you are quite right: if you were to use a model with offline triplet mining, i.e. if you fed the network triplets of samples during training, then you would not need to pass labels to the network. However, in that case you could not use the triplet loss function in this repo, and your model would probably be worse than one with online triplet mining.
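Put together, a Keras training setup with online mining can be sketched like this (the architecture, margin, and learning rate are placeholders; batch_hard_triplet_loss is assumed to be importable from this repo):

```python
import tensorflow as tf
# from model.triplet_loss import batch_hard_triplet_loss  # provided by this repo

# The model maps each image to an embedding; labels stay plain integers.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64),
    # L2-normalising the embeddings bounds the pairwise distances.
    tf.keras.layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1)),
])

def loss_fn(y_true, y_pred):
    labels = tf.squeeze(y_true, axis=-1)  # Keras-style (B, 1) labels -> (B,)
    return batch_hard_triplet_loss(labels, y_pred, margin=0.5)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=loss_fn)
# model.fit(x_train, y_train, batch_size=64)  # y_train: integer class ids 0..9
```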


JJKK1313 avatar JJKK1313 commented on May 27, 2024

Ohhhhhh nnooowww I got it! Thank you very much for the explanation @ma1112!!

