
shape-adaptor's People

Contributors

lorenmt


shape-adaptor's Issues

ShapeAdaptor issue

Hi, I cloned your project and ran model_training.py, but an error occurred.

Here is the traceback:
Traceback (most recent call last):
  File "model_training.py", line 421, in <module>
    main()
  File "model_training.py", line 357, in main
    if ShapeAdaptor.current_dim_true < args.limit_dim:
AttributeError: 'function' object has no attribute 'current_dim_true'

ShapeAdaptor is defined in model_list, but it is a function rather than a class. Please check whether the code is correct.
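For reference, the check at line 357 only makes sense if ShapeAdaptor is a class that keeps class-level state; a plain function has no such attribute, which is exactly the AttributeError above. Below is a minimal sketch, assuming a class with a class attribute named after the one in the traceback; the resizing logic is a simplified placeholder and not the repository's actual implementation.

import torch
import torch.nn.functional as F

class ShapeAdaptor(torch.nn.Module):
    # model_training.py reads ShapeAdaptor.current_dim_true directly on the
    # class, so this state must live on a class, not on a plain function.
    current_dim_true = 0.0

    def __init__(self, alpha_init=0.0):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        # Learned scaling factor in (0.5, 1.0); simplified placeholder logic.
        s = 0.5 + 0.5 * torch.sigmoid(self.alpha)
        ShapeAdaptor.current_dim_true = x.shape[-1] * s.item()
        out_size = max(1, int(round(x.shape[-1] * s.item())))
        return F.interpolate(x, size=out_size, mode='bilinear',
                             align_corners=False)

# With state on the class, the check from the traceback works:
# if ShapeAdaptor.current_dim_true < args.limit_dim: ...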

Question about default weight decay in model_training_imagenet.py

Thank you for the nice work and code :)
I have a quick question.
According to the paper, it seems like weight decay is set to 5e-4 for VGG & ResNet50 regardless of the dataset, while the default value in model_training_imagenet.py is 1e-4.

I guess the value reported in the paper is the correct one, but I just wanted to make sure.
Which value is correct?

Thanks a lot
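For what it's worth, the two values only differ in where weight decay enters the optimizer; a minimal sketch of the two settings in a standard SGD setup (the model here is a placeholder, not the repository's code):

import torch

model = torch.nn.Linear(10, 10)  # placeholder model

# Value reported in the paper for VGG / ResNet-50:
paper_opt = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

# Default found in model_training_imagenet.py according to this issue:
repo_opt = torch.optim.SGD(model.parameters(), lr=0.1,
                           momentum=0.9, weight_decay=1e-4)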

resnet shape-adaptor

Hi @lorenmt,

When I use the shape adaptor in ResNet-50, I see that each shape adaptor is inserted at the end of a bottleneck block, which adds many extra 1x1 convolution operations to the ResNet. I think part of the performance gain comes from these extra convolutions. Is that true?

[screenshot attached]
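For clarity, a rough sketch of the structure the question refers to: a bottleneck block followed by a shape adaptor, where the identity branch needs a 1x1 projection once channels or spatial size change. Names and details are illustrative only, not the repository's exact code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckWithAdaptor(nn.Module):
    # Illustrative only: if the shape adaptor forces a projection on the
    # identity branch, that projection is an extra 1x1 convolution relative
    # to a plain ResNet-50 bottleneck.
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, mid_ch, 1, bias=False)   # 1x1
        self.conv2 = nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False)
        self.conv3 = nn.Conv2d(mid_ch, out_ch, 1, bias=False)  # 1x1
        self.proj = nn.Conv2d(in_ch, out_ch, 1, bias=False)    # extra 1x1 projection
        self.alpha = nn.Parameter(torch.zeros(1))               # shape-adaptor weight

    def forward(self, x):
        out = self.conv3(F.relu(self.conv2(F.relu(self.conv1(x)))))
        out = out + self.proj(x)
        # Shape adaptor at the end of the block: learned spatial rescaling.
        s = 0.5 + 0.5 * torch.sigmoid(self.alpha)
        size = max(1, int(round(out.shape[-1] * s.item())))
        return F.interpolate(out, size=size, mode='bilinear',
                             align_corners=False)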

Question about reimplementation

@lorenmt Hi, is it possible to use the following arguments to reproduce the AutoSC mode on ImageNet?

python model_training.py --dataset imagenet --mode autosc --network resnet --lr 0.1 --batch_size 256 --input_dim 224 --epochs 100

I tried, but the initial accuracy is very low, and the GPU memory usage across the 8 GPUs is highly unbalanced. Thank you.
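One possible cause of the unbalanced memory is single-process multi-GPU training (e.g. nn.DataParallel), which concentrates activations on GPU 0. A minimal DistributedDataParallel sketch, one process per GPU, usually balances memory; the model and loader here are placeholders, and the repository may already provide something equivalent.

# Launch with: torchrun --nproc_per_node=8 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend='nccl')
    local_rank = int(os.environ['LOCAL_RANK'])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 10).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])

    # Build the ImageNet loader with a DistributedSampler and train as usual;
    # each of the 8 processes then holds only its own activations.

if __name__ == '__main__':
    main()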

GMACs for ImageNet dataset?

In the paper, it seems that FLOPs are not reported for most settings (especially for the ImageNet dataset).
However, a FLOPs comparison seems essential for follow-up studies.

Rather than re-running all the experiments, it would be very helpful to have a record of the FLOPs.
Are FLOPs recorded for each setting?
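Until per-setting numbers are published, one workaround is to measure MACs directly on each released architecture with a profiler such as thop; the model below is only a placeholder for the searched network.

import torch
import torchvision
from thop import profile  # pip install thop

model = torchvision.models.resnet50()   # placeholder; substitute the searched model
dummy = torch.randn(1, 3, 224, 224)

macs, params = profile(model, inputs=(dummy,))
print(f'MACs: {macs / 1e9:.3f} G | Params: {params / 1e6:.3f} M')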

Some questions about the experiments

Hi

I have some questions about the experimental results.

  1. I wonder whether the trained network will maintain the same performance if one branch of the shape adaptor is removed.

  2. In the ImageNet experiment, the search space of the shape adaptor is 0.5~1.0, which is larger than the original setting in ResNet-50. Is the improvement from 77.18±0.04 to 78.74±0.12 obtained because of this larger search space?

  3. I am also curious: if I remove one branch of the shape adaptor and retrain the searched architecture, can I get the same performance as before?

  4. I am interested in AutoSC. In the MobileNetV2 experiments, the AutoSC result is larger in both Params and MACs, so the performance is better. Can AutoSC outperform the original setting when it has fewer Params and fewer MACs?

I tried to run model_training.py with distributed data parallel training without changing any hyperparameters. I trained VGG-16 on 8 P100 GPUs and tested on the validation dataset. The test accuracy is not as good as a VGG-16 trained from scratch. I am wondering whether the results reported in the paper are obtained after retraining?

2020-09-13 07:08:44,075:INFO:Epoch:199 step:1200 lr:6.168375916970619e-06 loss:2.165580 acc:0.520218 fps:178.7379645282133
2020-09-13 07:08:49,295:INFO:Epoch:199 step:1210 lr:6.168375916970619e-06 loss:2.165537 acc:0.520227 fps:196.1497854095874
2020-09-13 07:08:55,025:INFO:Epoch:199 step:1220 lr:6.168375916970619e-06 loss:2.165494 acc:0.520237 fps:178.72407086182486
2020-09-13 07:09:00,241:INFO:Epoch:199 step:1230 lr:6.168375916970619e-06 loss:2.165451 acc:0.520246 fps:196.33716160952534
2020-09-13 07:09:06,005:INFO:Epoch:199 step:1240 lr:6.168375916970619e-06 loss:2.165404 acc:0.520256 fps:177.68041829805352
2020-09-13 07:09:11,207:INFO:Epoch:199 step:1250 lr:6.168375916970619e-06 loss:2.165362 acc:0.520264 fps:196.83474920999691

EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1114 0.7423 || TEST [LOSS|ACC.]: 1.8600 0.5766 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.5766369700431824
Total training takes 137928.8354 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1098 0.7434 || TEST [LOSS|ACC.]: 1.8378 0.5805 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.5811771154403687
Total training takes 137928.9651 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1087 0.7443 || TEST [LOSS|ACC.]: 1.8509 0.5730 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.5729699730873108
Total training takes 137929.0455 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1104 0.7426 || TEST [LOSS|ACC.]: 1.8716 0.5697 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.5728103518486023
Total training takes 137929.0534 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1038 0.7453 || TEST [LOSS|ACC.]: 1.8506 0.5799 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.5799017548561096
Total training takes 137929.0631 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1137 0.7435 || TEST [LOSS|ACC.]: 1.9037 0.5683 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.569606363773346
Total training takes 137929.1492 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1027 0.7443 || TEST [LOSS|ACC.]: 1.8845 0.5733 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
TOP: 0.5741389989852905
Total training takes 137929.1914 seconds.
EPOCH: 0199 ITER: 250400 | TRAIN [LOSS|ACC.]: 1.1115 0.7428 || TEST [LOSS|ACC.]: 1.8278 0.5795 || MACs 7.477G Params 15.236M
s(alpha) = [array([0.99647926, 0.9964938 , 0.93286162, 0.55869093])] | current shape = [32, 31, 31, 31, 30, 30, 30, 27, 27, 27, 15, 15, 15]
