
Comments (9)

lucastabelini avatar lucastabelini commented on July 17, 2024

Yes, what takes the longest to converge is the polynomial coefficients. The performance is not bad at earlier epochs; if I remember correctly, they are only a few accuracy points below the final accuracy. The training loss is around 0.05 in the final epoch. The exact values can be seen in the file log.txt inside each experiment's directory in the Google Drive link I provide in the README.md.

from polylanenet.

haopo2005 avatar haopo2005 commented on July 17, 2024

I've checked the tusimple_full log. After 2695 epochs, the training loss is around 0.04-0.07 and the poly loss is around 0.02-1.4988.
I'd like to know:

  • Is there a big visual difference between epoch 2695 and epoch 50?
  • How do you choose the best model when the tusimple_full log has no validation loss/score?
  • Is there any advice on the learning policy so that training needs fewer epochs?

Also, my dataset has about 10 times as many samples as TuSimple, so training takes a very long time.
I'm not sure whether I'm on the right track. The lane lines are almost all wrong at first glance,
but I can see them getting better with more training iterations.

Besides, I also have questions about transfer learning.
I've fine-tuned the pre-trained tusimple_full model (changing the maximum lane number and adding a classification layer).
Do I need to freeze the parameters of the earlier layers and fine-tune only the last layer?
Currently, no parameters are frozen, and the training loss seems no different from training from scratch (EfficientNet-b0 pretrained on ImageNet).


lucastabelini avatar lucastabelini commented on July 17, 2024
  1. Since "big visual difference" is quite subjective, I think it would be easier if you visualize both results and compare them yourself.
  2. In our paper, we simply chose the last one. Without a validation set there's not much you can do.
  3. Unfortunately, no.

In our case, we found that even after all those epochs the model had not stopped learning; that is, the accuracy was still increasing. As for transfer learning, we did not freeze any parameters; we trained the whole model. I would say that at first the loss will be no different from training from scratch, but the model should converge faster.
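For reference, the freezing strategy asked about above would look roughly like the following in PyTorch. This is a hypothetical sketch with a tiny stand-in model (PolyLaneNet's actual backbone is an EfficientNet-b0), just to show the `requires_grad` mechanism:

```python
import torch.nn as nn

# Hypothetical sketch (not PolyLaneNet's actual code): freezing the
# "backbone" so that only the head is fine-tuned. A tiny stand-in
# model keeps the example self-contained.
backbone = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
head = nn.Linear(8, 4)
model = nn.Sequential(backbone, head)

# Freeze the backbone parameters; gradients will then only flow
# into the head (8*4 weights + 4 biases = 36 parameters).
for p in backbone.parameters():
    p.requires_grad = False

num_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(num_trainable)  # → 36
```

As the reply above notes, the authors trained the whole model instead, so freezing is optional and mainly trades accuracy for speed.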


haopo2005 avatar haopo2005 commented on July 17, 2024

https://github.com/lucastabelini/PolyLaneNet/blob/master/lib/datasets/lane_dataset.py#L71
This line may be buggy.
Since you can't guarantee that the points' y-coordinates are sorted, the line's endpoints will not be its upper and lower bounds, especially in turnaround and sharp-curve cases.


lucastabelini avatar lucastabelini commented on July 17, 2024

In the three datasets used in that work, that assumption held. For other datasets, it may not. That problem should be easy to fix, though: if you change that line to use min/max instead, you should be fine, I think.
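The min/max fix suggested above could be sketched as follows. This is a hypothetical illustration, not the repository's actual code; `lane` is assumed to be a list of (x, y) points, as in TuSimple-style annotations:

```python
# Hypothetical sketch of the suggested fix: compute a lane's vertical
# extent from all of its points instead of assuming the first and last
# points are the endpoints (which fails when the y-coordinates are
# not sorted, e.g. on sharp curves).
def lane_y_bounds(lane):
    ys = [y for _, y in lane]
    return min(ys), max(ys)

# Sharp-curve example where the last point is not the lowest one:
lane = [(10, 200), (30, 260), (60, 240)]
print(lane_y_bounds(lane))  # → (200, 260)
```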


haopo2005 avatar haopo2005 commented on July 17, 2024

Hi,
I've tried running inference with the epoch-2695 TuSimple model on the TuSimple dataset,
and I'm confused as to why the upper bound extends beyond the horizon line.
Is there any post-processing step to handle this?


haopo2005 avatar haopo2005 commented on July 17, 2024

(image attachment)


lucastabelini avatar lucastabelini commented on July 17, 2024

Did you run the inference without calculating the loss? The model predicts an upper limit (the horizon line) only for the first lane, and when the loss is calculated, that limit is copied to the other lanes. This happens here. That copy should be made in the model itself, so that calculating the loss is not necessary for inference, but it is not a major issue, as it can be easily fixed: you just need to do the same thing that line does in the loss function.
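The copy step described above could be done as a small post-processing function like the one below. This is a hypothetical sketch: the row layout `[conf, lower, upper, coefficients...]` is an assumption for illustration, not PolyLaneNet's exact output format:

```python
import numpy as np

# Hypothetical sketch of the described post-processing: copy the first
# lane's predicted upper limit (the horizon line) to every other lane
# before decoding, so inference no longer depends on the loss code.
# Assumed row layout per lane: [conf, lower, upper, a, b, c, d].
def share_upper_limit(pred):
    pred = pred.copy()
    pred[:, 2] = pred[0, 2]  # broadcast the first lane's upper limit
    return pred

pred = np.array([
    [0.9, 0.95, 0.40, 0.0, 0.0, 0.0, 0.5],  # first lane defines the horizon
    [0.8, 0.90, 0.10, 0.0, 0.0, 0.0, 0.6],  # upper limit gets overwritten
])
out = share_upper_limit(pred)
print(out[1, 2] == out[0, 2])  # → True
```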


haopo2005 avatar haopo2005 commented on July 17, 2024

ok, thank you so much...

