
Comments (8)

okanlv commented on May 13, 2024

yolov3 uses the 'steps' policy to adjust the learning rate. At the end of training, lr = 0.00001, so it should converge at this learning rate using SGD. Why did you try the polynomial policy?

from yolov3.

glenn-jocher commented on May 13, 2024

Ohhhhh. I read about the polynomial lr curve in the v2 paper and thought it was carried over to v3. I'll implement the steps policy from the cfg file instead.

But something is odd. I thought yolov3 was trained for 160 epochs, but maybe not. In yolov3.cfg, batch = 16 (the batch size, I think) and max_batches = 500200. trainvalno5k.txt has 117264 images in it, or 117264 / 16 = 7329 batches/epoch, so 500200 / 7329 = 68 epochs. Do you think this means yolov3 is fully trained in 68 epochs?
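A quick sanity check of that arithmetic (the values are read from yolov3.cfg and trainvalno5k.txt as quoted above):

```python
# Estimate total training epochs from the darknet cfg values.
batch_size = 16        # batch= in yolov3.cfg
max_batches = 500200   # max_batches= in yolov3.cfg
num_images = 117264    # image count in trainvalno5k.txt

batches_per_epoch = num_images // batch_size  # 7329
epochs = max_batches / batches_per_epoch      # ~68.25
print(batches_per_epoch, round(epochs))       # 7329 68
```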


okanlv commented on May 13, 2024

Could you point out where the authors specified the epoch number in the yolov3 paper (or somewhere else)? I might have missed that.


glenn-jocher commented on May 13, 2024

Section 3 of the yolov2 paper (aka yolo "9000") has many training details. The v3 paper is completely missing these details, which is why everyone is so confused translating it to pytorch. I think I finally found the right loss function to use, though: my latest commit can continue training at lr = 1e-5 without performance losses, I think. I haven't tested a full epoch yet, but the first ~2000 batches show stable P and R values. The main change I made was to merge the obj and noobj confidence loss terms. I think you or @ydixon recommended the same change a few days ago. I'm hoping this is the missing link.

https://pjreddie.com/media/files/papers/YOLO9000.pdf
"Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.1, polynomial rate decay with a power of 4, weight decay of 0.0005 and momentum of 0.9 using the Darknet neural network framework [13]."


glenn-jocher commented on May 13, 2024

Ah, I forgot to mention, in the spirit of this issue, I've implemented the correct yolov3 step lr policy now. It assumes 68 total epochs, with 0.1x lr drops at 80% and 90% completion, just like the cfg.

yolov3/train.py

Lines 106 to 114 in 7416c18

# Update scheduler (manual)
if epoch < 54:
    lr = 1e-3
elif epoch < 61:
    lr = 1e-4
else:
    lr = 1e-5
for g in optimizer.param_groups:
    g['lr'] = lr

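For reference, the same step policy could also be expressed with PyTorch's built-in MultiStepLR instead of a manual loop. This is only a sketch, not the repo's actual code; the model and optimizer below are throwaway placeholders:

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

# Stand-in model/optimizer for illustration only; the real training
# loop in train.py is different.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# 0.1x drops at epochs 54 and 61 (~80% and ~90% of 68 total epochs),
# matching the manual schedule above.
scheduler = MultiStepLR(optimizer, milestones=[54, 61], gamma=0.1)

for epoch in range(68):
    # ... one epoch of training would go here ...
    scheduler.step()

print(optimizer.param_groups[0]['lr'])  # ~1e-5 after both drops
```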

okanlv commented on May 13, 2024

Ahh, they probably did not use the same training config in yolov3. I hope the training converges with the new loss term. Btw, you referenced the training of the classification network, not the detection network. The detection training in yolov2 should be:

"We train the network for 160 epochs with a starting learning rate of 10^-3, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.0005 and momentum of 0.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC."
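That schedule is simple enough to write down directly. A sketch of the quoted yolov2 detection schedule (the function name is made up for illustration, not code from either repo):

```python
def yolov2_detection_lr(epoch, base_lr=1e-3, milestones=(60, 90)):
    """Quoted yolov2 schedule: start at 1e-3, divide by 10 at epochs 60 and 90."""
    drops = sum(epoch >= m for m in milestones)  # milestones already passed
    return base_lr * 10.0 ** -drops

# Epochs 0-59 -> 1e-3, epochs 60-89 -> 1e-4, epoch 90 onward -> 1e-5.
```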


okanlv commented on May 13, 2024

It seems fine to schedule the learning rate by the total number of epochs. You probably already know this, but darknet schedules the learning rate by the total number of batches processed during training. I am not sure which is the better practice, although both methods give the same result for the standard .cfg file.
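The two are indeed equivalent here: the standard yolov3.cfg sets steps=400000,450000 (counted in batches), and dividing by batches per epoch recovers the epoch thresholds used in train.py. A quick check, assuming those cfg values:

```python
# Convert darknet's batch-based lr steps into epoch thresholds.
batch_size = 16
num_images = 117264
batches_per_epoch = num_images // batch_size  # 7329

steps = [400000, 450000]  # steps= in yolov3.cfg, in batches seen
step_epochs = [s // batches_per_epoch for s in steps]
print(step_epochs)  # [54, 61]
```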


glenn-jocher commented on May 13, 2024

@okanlv yes darknet tracks total batches, with 16 images per batch. I tracked the epochs instead. There's probably not much effect one way or the other.

