Comments (9)

jakezhaojb commented on July 23, 2024

@vineetjohn Good point! I used PyTorch 0.3.1. I'm adding this to the README.

jakezhaojb commented on July 23, 2024

Hmm, could you try running it with Python 3?

vineetjohn commented on July 23, 2024

I've run into the same issue.
Python 3.5.2
torch==0.4.1

Training...     
Traceback (most recent call last):
  File "train.py", line 574, in <module>
    train_ae(1, train1_data[niter], total_loss_ae1, start_time, niter)                                                       
  File "train.py", line 400, in train_ae
    output = autoencoder(whichdecoder, source, lengths, noise=True)
  File "/home/v2john/.pyenv/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__                   
    result = self.forward(*input, **kwargs)
  File "/home/v2john/ARAE/yelp/models.py", line 143, in forward
    hidden = self.encode(indices, lengths, noise)                                                                            
  File "/home/v2john/ARAE/yelp/models.py", line 160, in encode
    batch_first=True)
  File "/home/v2john/.pyenv/lib/python3.5/site-packages/torch/onnx/__init__.py", line 67, in wrapper                         
    if not might_trace(args):
  File "/home/v2john/.pyenv/lib/python3.5/site-packages/torch/onnx/__init__.py", line 141, in might_trace
    first_arg = args[0]                                                                                                      
IndexError: tuple index out of range

Python 3 clearly isn't the fix. It seems like something about the PyTorch + ONNX interop is broken.
Is there a specific version of PyTorch that's needed to run this?
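
The traceback bottoms out in torch/onnx/__init__.py, where might_trace reads first_arg = args[0] from the positional-argument tuple. A plausible trigger, assuming models.py calls pack_padded_sequence with keyword arguments only (an assumption about the call site, not verified here), is that this tuple is empty on PyTorch 0.4.x. A minimal sketch of that pattern and a positional-argument workaround:

import torch
from torch.nn.utils.rnn import pack_padded_sequence

embeddings = torch.randn(2, 5, 8)   # batch x max_len x emb_dim
lengths = [5, 3]                    # sequence lengths, sorted descending

# Keyword-only call: on PyTorch 0.4.x the ONNX wrapper inspects args[0],
# which raises IndexError: tuple index out of range when args is empty.
# packed = pack_padded_sequence(input=embeddings, lengths=lengths, batch_first=True)

# Passing the tensor positionally keeps the positional-args tuple non-empty.
packed = pack_padded_sequence(embeddings, lengths, batch_first=True)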

vineetjohn commented on July 23, 2024

@jiwoongim

You can try using my forked version of the repository to see if it fixes the issue for you.
I've verified that it works with Python 3.5.2 and PyTorch 0.4.1:
https://github.com/vineetjohn/arae

I haven't identified the actual problem yet, but I've added a workaround that avoids dealing with ONNX altogether. The pack_padded_sequence function in torch.nn.utils.rnn seems to be buggy.
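
For reference, one way such a workaround can look (this only illustrates the general idea of skipping pack_padded_sequence; it is not necessarily what the fork does) is to run the LSTM over the padded batch and gather each sequence's last valid step:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
embeddings = torch.randn(2, 5, 8)        # batch x max_len x emb_dim
lengths = torch.tensor([5, 3])           # true length of each sequence

output, _ = lstm(embeddings)             # batch x max_len x hidden, padding included
# Gather the hidden state at each sequence's last valid timestep
# instead of relying on packing to stop at the right place.
idx = (lengths - 1).view(-1, 1, 1).expand(-1, 1, output.size(2))
last_hidden = output.gather(1, idx).squeeze(1)   # batch x hidden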

jakezhaojb commented on July 23, 2024

Can you guys try Python 3.6? @jiwoongim @vineetjohn

rainyrainyguo commented on July 23, 2024

@jiwoongim
You can try using my forked version of the repository; I have resolved the issue by making several changes to the original code.
I have verified that it works with Python 3.6.5 and PyTorch 0.4.1:
https://github.com/rainyrainyguo/ARAE

vineetjohn commented on July 23, 2024

@jakezhaojb

This doesn't look like a Python version issue.
The named arguments used in this project are inconsistent with those accepted by PyTorch 0.4.1.

You should consider adding the version of PyTorch used for your experiments to the project README.
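
Until the README is updated, a pinned dependency based on the 0.3.1 version mentioned above would make the working setup explicit, e.g. in a requirements.txt (the file name and exact pin here are illustrative, not taken from the repo):

torch==0.3.1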

dangvanthin commented on July 23, 2024

@rainyrainyguo
I have run your forked version on Python 3.6.5 with PyTorch 0.4.1 (cuDNN 7.1.3, CUDA Toolkit 8.0) and I get the following error:
Training ....
run_oneb.py:256: UserWarning: torch.nn.utils.clip_grad_norm is now deprecated in favor of torch.nn.utils.clip_grad_norm_.
run_oneb.py:259: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
run_oneb.py:263: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
| epoch 1 | 0/ 765 batches | ms/batch 0.61 | loss 0.05 | ppl 1.05 | acc 0.00
Traceback (most recent call last):
  File "run_oneb.py", line 102, in <module>
    exec(open("train.py").read())
  File "<string>", line 434, in <module>
  File "<string>", line 395, in train
  File "<string>", line 324, in train_gan_d
  File "/home/thindv/anaconda3/envs/ARAE/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/thindv/anaconda3/envs/ARAE/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: invalid gradient at index 0 - expected shape [] but got [1]

Can you give me some advice?
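
That RuntimeError typically means backward() is being handed a gradient tensor of shape [1] while the loss is a 0-dim scalar under PyTorch 0.4, which matches the 0-dim tensor warnings above. Assuming train_gan_d passes a pre-built one/mone tensor of shape [1] to backward (an assumption based on the error, not checked against the code), a minimal reproduction and fix looks like:

import torch

w = torch.randn(3, requires_grad=True)
loss = w.sum()                  # 0-dim tensor in PyTorch >= 0.4

one = torch.FloatTensor([1])    # shape [1], as older code often built it
# loss.backward(one)            # RuntimeError: invalid gradient ... expected shape [] but got [1]

loss.backward(torch.ones(()))   # a 0-dim gradient matches the 0-dim loss
# (or simply loss.backward(); for mone, use -torch.ones(()))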

V-Enzo commented on July 23, 2024

Hi @dangvanthin, I ran into the same problem. Do you have a solution yet? Thank you.
