Comments (5)
Hi @rogerzico
Since I had so many problems running RETURNN, I also had to try different configurations to find the best match. So far my current configuration is working fine. The system config is as follows:
GPU= GTX 1080 TI
Theano Version: 0.9.0 (this version is highly recommended)
Important: don't forget to edit your .theanorc file:
[global]
floatX=float32
device = cpu
[lib]
cnmem = 0.8 (this is important: don't set this value to 1, or your model most likely won't start running; it's a bug. Try starting from 0.6 or 0.7 and, if the model works, raise the value. This is the fraction of your GPU's memory that Theano will use; if you don't set a value, Theano tries to use the whole memory, which will most likely crash before the model starts.)
gcc version = 4.8.5 (this is also important for Theano compatibility; use an old gcc in general)
CUDA version = use CUDA 8.0 with cuDNN 5110; it is the best match for RETURNN
Nvidia driver version: 390.77 (this is not really important; just go with whatever version gets installed while you are installing CUDA 8.0)
h5py = version is not important; just use pip install h5py
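To put the cnmem fraction above into concrete numbers (rough arithmetic; the GTX 1080 Ti has roughly 11 GiB of memory):

```python
# Rough arithmetic: how much GPU memory each cnmem fraction would
# reserve on a GTX 1080 Ti (roughly 11 GiB, i.e. 11264 MiB).
total_mib = 11 * 1024  # 11 GiB expressed in MiB
for frac in (0.6, 0.7, 0.8):
    print(f"cnmem = {frac}: ~{frac * total_mib:.0f} MiB reserved by Theano")
```

Leaving some headroom below 1.0 is what keeps cuDNN and the driver from running out of memory at startup.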
I hope this will help you run your model. If you try this and it doesn't work, let me know; maybe I can help.
from returnn.
Hi Prolaser,
Congrats on getting the full IAM dataset running and converging.
I was trying the same thing but with very different results.
Do you mind sharing your software configuration here? Such as:
Theano version,
Theano GPU package version
NVidia driver version
NVidia cuda, cudnn library version
As for the number of epochs, I asked the same question and was told about 20-40, which is consistent with their publication.
Thanks
Hi prolaser,
I don't remember all the details, but I think having low cost at epoch 30 is nothing unusual.
To test the accuracy, use config_fwd to create an h5 file for the validation set and then try the decode script: https://github.com/rwth-i6/returnn/blob/master/demos/mdlstm/IAM/decode.py
Thanks for your reply. I used config_fwd on my test set, ran the decoder, and I have to say I got pretty good results. I did not measure the accuracy exactly, but I would say it is around 90%. Now I am trying to improve the accuracy on my test set; I was thinking of using dictionaries or a language model. Do you have any suggestions?
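One lightweight option, short of a full language model, is to snap each decoded word to the closest entry in a word list. A minimal sketch using Python's standard difflib (the lexicon and words here are made-up examples, not from the IAM setup):

```python
import difflib

def snap_to_lexicon(words, lexicon, cutoff=0.6):
    # Replace each decoded word with its closest lexicon entry,
    # or keep it unchanged if nothing is similar enough.
    out = []
    for word in words:
        match = difflib.get_close_matches(word, lexicon, n=1, cutoff=cutoff)
        out.append(match[0] if match else word)
    return out

lexicon = ["the", "quick", "brown", "fox", "jumps"]
print(snap_to_lexicon(["teh", "quikc", "fox"], lexicon))
# -> ['the', 'quick', 'fox']
```

A word-level n-gram language model rescoring the decoder's hypotheses would likely help more, but lexicon snapping like this is a cheap first step.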
Thank you
Hi @prolaser,
Thanks for the detailed description of your machine's configuration.
I also used Theano 0.9, but when I ran the IAM demo, I couldn't get past the Theano sandbox test.
I tracked down the Python code; it is under theano/sandbox/cuda/tests/.
I think what it's trying to test is the summation of 1000 floating-point values, comparing the results (a so-called reduction test?).
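For intuition about why such a reduction test compares results rather than expecting exact matches: floating-point addition is not associative, so summing the same values in a different order (sequentially on the CPU vs. a tree-shaped reduction on the GPU) can give slightly different answers. A tiny pure-Python illustration (not the actual sandbox test):

```python
# Floating-point addition is not associative: the order of summation
# changes the result. Python floats are 64-bit; the effect is the
# same (and larger) for the GPU's float32.
vals = [1e16, 1.0, -1e16]
left_to_right = (vals[0] + vals[1]) + vals[2]  # the 1.0 is absorbed by 1e16
reordered = (vals[0] + vals[2]) + vals[1]      # cancel first, then add 1.0
print(left_to_right, reordered)  # -> 0.0 1.0
```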
I can't remember my cuda cudnn version, but I will definitely try yours:
"Cuda version = use cuda 8.0 with cudnn 5110 is the best match for RETURNN"
Thanks again for your help!
Roger