Comments (11)
Which dataset are you trying to train on?
I don't have a good machine. I just want to run a very basic CIFAR experiment to figure out this codebase. Also, can I run it across a few Jetson Nanos, say 20, in parallel?
Thanks!!
4 GB should be OK to run CIFAR. If not, you could decrease init_channels or layers in https://github.com/D-X-Y/GDAS/blob/master/scripts-cnn/train-cifar.sh#L36.
The code supports GPU parallelism.
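For example, a sketch of the adjusted invocation (the --init_channels and --layers flags are confirmed by the log below; the ellipsis stands for the script's other flags, left unchanged, and 8/8 are only illustrative values smaller than the script's defaults):

# Shrink the network to fit a small GPU (values are illustrative).
python ./exps-cnn/train_base.py \
  ... \
  --init_channels 8 --layers 8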
Thanks. I tried
--init_channels 1 --layers 2 \
but it does not work:
.....
Train model from scratch without pre-trained model or snapshot
==>>[2019-08-12-16:41:37] [Epoch=000/600] [Need: 00:00:00] LR=0.0250 ~ 0.0250, Batch=96
THCudaCheck FAIL file=/media/nvidia/WD_BLUE_2.5_1TB/pytorch-v1.1.0/aten/src/THCUNN/generic/SpatialAveragePooling.cu line=184 error=7 : too many resources requested for launch
Traceback (most recent call last):
File "./exps-cnn/train_base.py", line 89, in
main()
File "./exps-cnn/train_base.py", line 84, in main
main_procedure(config, args.dataset, args.data_path, args, genotype, args.init_channels, args.layers, None, log)
File "/home/nvidia/ros/nas/GDAS/exps-cnn/train_utils.py", line 96, in main_procedure
train_acc1, train_acc5, train_los = _train(train_loader, model, criterion, optimizer, 'train', epoch, config, args.print_freq, log)
File "/home/nvidia/ros/nas/GDAS/exps-cnn/train_utils.py", line 148, in _train
loss.backward()
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/autograd/init.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuda runtime error (7) : too many resources requested for launch at /media/nvidia/WD_BLUE_2.5_1TB/pytorch-v1.1.0/aten/src/THCUNN/generic/SpatialAveragePooling.cu:184
I also tried
--init_channels 1 --layers 1 \
but that does not work either:
...
Train model from scratch without pre-trained model or snapshot
==>>[2019-08-12-16:43:46] [Epoch=000/600] [Need: 00:00:00] LR=0.0250 ~ 0.0250, Batch=96
Traceback (most recent call last):
File "./exps-cnn/train_base.py", line 89, in
main()
File "./exps-cnn/train_base.py", line 84, in main
main_procedure(config, args.dataset, args.data_path, args, genotype, args.init_channels, args.layers, None, log)
File "/home/nvidia/ros/nas/GDAS/exps-cnn/train_utils.py", line 96, in main_procedure
train_acc1, train_acc5, train_los = _train(train_loader, model, criterion, optimizer, 'train', epoch, config, args.print_freq, log)
File "/home/nvidia/ros/nas/GDAS/exps-cnn/train_utils.py", line 138, in _train
logits, logits_aux = model(inputs)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/nvidia/ros/nas/GDAS/lib/nas/CifarNet.py", line 81, in forward
logits_aux = self.auxiliary_head(s1)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/nvidia/ros/nas/GDAS/lib/nas/CifarNet.py", line 24, in forward
x = self.classifier(x.view(x.size(0),-1))
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 1406, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [96 x 6912], m2: [768 x 10] at /media/nvidia/WD_BLUE_2.5_1TB/pytorch-v1.1.0/aten/src/THC/generic/THCTensorMathBlas.cu:268
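The numbers in that last line pin down the failure: 6912 = 768 x 3 x 3, so with layers = 1 the feature map reaching the auxiliary head's classifier is 3x3 rather than the 1x1 that its Linear(768, 10) layer was sized for. A minimal standalone reproduction of just the mismatch, with shapes copied from the log (the tensor is synthetic):

import torch
import torch.nn as nn

# Shapes from the log: a batch of 96, and a 768-channel 3x3 map
# reaching a classifier built for a flat 768-dim input.
x = torch.randn(96, 768, 3, 3)
classifier = nn.Linear(768, 10)   # m2: [768 x 10]
flat = x.view(x.size(0), -1)      # m1: [96 x 6912], since 768*3*3 = 6912
print(flat.shape)                 # torch.Size([96, 6912])
# classifier(flat)                # raises the same size-mismatch RuntimeError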
Please try with init_channels >= 2 and layers >= 2.
With init_channels = 2 and layers = 2, it fails with the following:
==>>[2019-08-12-18:51:54] [Epoch=000/600] [Need: 00:00:00] LR=0.0250 ~ 0.0250, Batch=96
THCudaCheck FAIL file=/media/nvidia/WD_BLUE_2.5_1TB/pytorch-v1.1.0/aten/src/THCUNN/generic/SpatialAveragePooling.cu line=184 error=7 : too many resources requested for launch
Traceback (most recent call last):
File "./exps-cnn/train_base.py", line 89, in
main()
File "./exps-cnn/train_base.py", line 84, in main
main_procedure(config, args.dataset, args.data_path, args, genotype, args.init_channels, args.layers, None, log)
File "/home/nvidia/ros/nas/GDAS/exps-cnn/train_utils.py", line 96, in main_procedure
train_acc1, train_acc5, train_los = _train(train_loader, model, criterion, optimizer, 'train', epoch, config, args.print_freq, log)
File "/home/nvidia/ros/nas/GDAS/exps-cnn/train_utils.py", line 148, in _train
loss.backward()
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/nvidia/.virtualenvs/py36/lib/python3.6/site-packages/torch/autograd/init.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuda runtime error (7) : too many resources requested for launch at /media/nvidia/WD_BLUE_2.5_1TB/pytorch-v1.1.0/aten/src/THCUNN/generic/SpatialAveragePooling.cu:184
It seems to be a hardware problem. I can run it successfully on my GPU.
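For what it's worth, cuda runtime error 7 is cudaErrorLaunchOutOfResources: a kernel was launched asking for more threads or registers per block than the device provides, which can differ between a Jetson Nano and a desktop GPU even when memory is plentiful. A quick way to compare the two devices, using only the standard torch.cuda API:

import torch

# Print the device properties that bound what a kernel launch may request.
props = torch.cuda.get_device_properties(0)
print(props.name)
print("compute capability:", f"{props.major}.{props.minor}")
print("total memory (MiB):", props.total_memory // 2**20)
print("multiprocessors:", props.multi_processor_count)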
So are you able to try the 4 GB GPU memory situation on your GPU?
Which file(s) do I need to look into to debug the resources issue?
I think the Jetson may be able to run it in parallel... so how do I make it run in a parallel (cluster) fashion?
Thanks!
I did not have a 4 GB GPU memory situation, but I checked the GPU memory usage, which is lower than 4 GB. The code uses GPU parallelism by default. Regarding the resources issue, I'm not familiar with how to debug that.
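For reference, the tracebacks above pass through torch/nn/parallel/data_parallel.py, so the built-in parallelism is single-node nn.DataParallel. A minimal sketch of that pattern (the model here is a stand-in, not the repo's network):

import torch
import torch.nn as nn

# nn.DataParallel splits each input batch across the visible GPUs of
# one machine and gathers the outputs; it does not span machines.
model = nn.Linear(32, 10)        # stand-in for the CIFAR model
if torch.cuda.is_available():
    model = nn.DataParallel(model).cuda()

Running across 20 separate Nanos would instead require multi-node tooling such as torch.distributed (e.g., DistributedDataParallel), which these scripts do not appear to set up.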
Thanks. I can look into it. There might be some differences between the Jetson Nano and a typical GPU in hardware architecture, but it is certainly interesting to compare...
No worries. Sorry, I'm not familiar with the Jetson Nano.
I will temporarily close this issue; please feel free to reopen it if you want.