
Comments (12)

HarukiYqM commented on June 25, 2024

Use --resume, and you may have to manually adjust the epoch number and the learning rate.


cheun726 commented on June 25, 2024

Is '--resume' the right flag? How do I set the value of this parameter and the learning rate? Could you be more specific? Thank you.


HarukiYqM commented on June 25, 2024

For example, suppose you want to train for 100 epochs with learning rate 1e-4 for the first 50 epochs and 5e-5 for the rest, and your training stops at epoch 53. To resume, add --resume 53, change --epochs from 100 to 47, and set --lr to 5e-5.
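A minimal sketch of the corresponding command, assuming the EDSR-style main.py used by this repo (the --load flag and the model flags shown here are illustrative assumptions):

```bash
# Schedule: 100 epochs, lr 1e-4 for the first 50 and 5e-5 afterwards.
# Training stopped at epoch 53, so 47 epochs remain, all at lr 5e-5:
python main.py --model NLSN --scale 2 --patch_size 96 \
    --epochs 47 --lr 5e-5 --resume 53 \
    --load NLSN_x2   # --load reopens the saved experiment dir (assumption)
```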


cheun726 commented on June 25, 2024

thank you so much


cheun726 commented on June 25, 2024

How do I set the value of --resume to model_latest? As shown below:
[screenshot]


cheun726 commented on June 25, 2024

Why should the --data_range parameter be set to 801-900 during testing? As shown below:
[screenshot]


HarukiYqM commented on June 25, 2024

To recover from the latest checkpoint, you have to set --resume to the latest epoch number.

This parameter does nothing when testing on benchmarks. Feel free to remove it.
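For reference, a benchmark test run might look like the sketch below, with --data_range simply omitted (the flags follow the EDSR-style trainer; the checkpoint path is a placeholder):

```bash
# Evaluate a trained model on the Set5 benchmark; no --data_range needed.
python main.py --model NLSN --scale 2 --data_test Set5 \
    --pre_train experiment/NLSN_x2/model/model_latest.pt --test_only
```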


cheun726 commented on June 25, 2024

thank you very much


cheun726 commented on June 25, 2024

During training, why does the model with fewer parameters report that GPU memory is insufficient, while the model with more parameters does not? As shown below:
[screenshot: 2021-08-25 164802]


HarukiYqM commented on June 25, 2024

Memory usage and parameter count are not directly related. For example, a larger input requires more memory to store the activation maps. --patch_size is the output patch size, so if you want to train an X4 model, set the patch size to 192 (48*4) to keep the input the same as X2.
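As a sketch of that relationship (flag names follow the EDSR-style trainer; the LR input patch is --patch_size divided by --scale):

```bash
# X2 model: 96 / 2 = 48-pixel LR input patches.
python main.py --model NLSN --scale 2 --patch_size 96

# X4 model: keep the same 48-pixel LR input, so 48 * 4 = 192.
python main.py --model NLSN --scale 4 --patch_size 192
```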


cheun726 commented on June 25, 2024

Hi, is there any code in the program to compute FLOPs? Thanks.
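The codebase itself may not include a FLOPs counter; one common approach is a third-party profiler such as thop. A minimal sketch, with a placeholder conv net standing in for the NLSN model built by the repo's own loader:

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# Placeholder model; substitute the NLSN network constructed by the repo.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1),
)

x = torch.randn(1, 3, 48, 48)  # one 48x48 LR input patch
macs, params = profile(model, inputs=(x,))
print(f"MACs: {macs / 1e9:.3f} G, params: {params / 1e6:.3f} M")
# Note: thop reports multiply-accumulates; FLOPs is roughly 2 * MACs.
```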

