Comments (9)
@yaorong0921
Hello!
Thanks for replying. Sorry, I am still adjusting this code because my later experiments with it revealed some issues.
I am currently checking things like the loss function and how it handles batches in this model, in accordance with the ConvLSTM implementation in TensorFlow.
Also, I was wrong about the bias: the model already adds a bias here:
self.Wxi = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
from convolutional_lstm_pytorch.
By the way, here is the result of the code after the x = input[step] change.
I trained on a moving-squares dataset adapted from the Keras ConvLSTM2D example code here.
After 1 epoch * 5000 batches * 6 sequences per batch, here is a random result and the ground truth:
So great it worked!! Cheers to the author!
I'll try adding bias to ConvLSTMCell sometime later.
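For reference, here is a minimal sketch of how a moving-squares dataset in the spirit of the Keras ConvLSTM2D example could be generated. The function name, shapes, and all parameters (number of squares, velocities, sizes) are assumptions for illustration, not the exact Keras example code:

```python
import numpy as np

def moving_squares(n_samples=100, n_frames=15, size=40, rng=None):
    """Generate sequences of frames containing small squares moving in
    straight lines, loosely following the Keras ConvLSTM2D example
    (the exact parameters here are guesses)."""
    rng = np.random.default_rng(rng)
    # Shape: [samples, frames, height, width, channels]
    movies = np.zeros((n_samples, n_frames, size, size, 1), dtype=np.float32)
    for i in range(n_samples):
        for _ in range(rng.integers(3, 8)):            # a few squares per movie
            x, y = rng.integers(5, size - 5, size=2)   # start position
            dx, dy = rng.integers(-1, 2, size=2)       # constant velocity
            w = rng.integers(2, 4)                     # half-width of the square
            for t in range(n_frames):
                # Clip the center so the slice stays inside the frame.
                cx = int(np.clip(x + dx * t, w, size - w))
                cy = int(np.clip(y + dy * t, w, size - w))
                movies[i, t, cx - w:cx + w, cy - w:cy + w, 0] = 1.0
    return movies
```

Transposing the result to [sequence, bsize, channel, x, y] would match the input shape discussed below in this thread.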
@mikumeow
Hi,
could you please share your code that works on the Keras example?
Many thanks :-)
@mikumeow I get a similar problem to yours, about the same x being reused at every step (the sequence dimension is missing). I think your method should be right; I will try it and report back. It doesn't lack bias, though. Also, ConvLSTM doesn't seem to need an explicit step parameter (it can be taken from x.size()[0]).
@mikumeow is it appropriate to loop over layers within the loop over timesteps?
I think iterating over timesteps seems reasonable
@mikumeow is it appropriate to loop over layers within the loop over timesteps?
It seems OK, since any hidden state is independent of future hidden states, so there is no need to compute the hidden states for the entire time loop in advance. @mikumeow also mentioned that the loss descends well with this code once he made the x = input[step] change.
The first problem is that in ConvLSTM.forward, the code uses the same x = input at every timestep.
I think the input shape of the forward function should be changed to
[sequence, bsize, channel, x, y] instead of the original
[bsize, channel, x, y], and the x = input line should be changed to
x = input[step] so each step sees a different frame.
I am still studying whether it's appropriate to loop over layers within the loop over timesteps, but after training your current code (with the change mentioned above), I get decent outcomes.
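To make the suggested change concrete, here is a minimal self-contained sketch of a ConvLSTM whose forward takes [sequence, bsize, channel, x, y] and indexes one frame per timestep, with the layer loop nested inside the timestep loop. The cell below is a simplified stand-in (single fused gate convolution, no peepholes), not the repository's actual ConvLSTMCell, and all names are illustrative:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Simplified stand-in cell: maps (x, h, c) -> (h, c)."""
    def __init__(self, input_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # One conv computes all four gates at once (bias=True adds bias terms).
        self.conv = nn.Conv2d(input_channels + hidden_channels,
                              4 * hidden_channels, kernel_size,
                              padding=padding, bias=True)
        self.hidden_channels = hidden_channels

    def forward(self, x, h, c):
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class ConvLSTM(nn.Module):
    def __init__(self, input_channels, hidden_channels):
        super().__init__()
        channels = [input_channels] + hidden_channels
        self.cells = nn.ModuleList(
            ConvLSTMCell(channels[i], channels[i + 1])
            for i in range(len(hidden_channels)))

    def forward(self, input):
        # input: [sequence, bsize, channel, x, y]
        seq_len, bsize, _, height, width = input.size()
        states = [None] * len(self.cells)
        outputs = []
        for step in range(seq_len):
            x = input[step]            # the key change: one frame per timestep
            for i, cell in enumerate(self.cells):
                if states[i] is None:  # lazy zero-init of (h, c) per layer
                    states[i] = (x.new_zeros(bsize, cell.hidden_channels, height, width),
                                 x.new_zeros(bsize, cell.hidden_channels, height, width))
                h, c = cell(x, *states[i])
                states[i] = (h, c)
                x = h                  # this layer's output feeds the next layer
            outputs.append(x)
        return torch.stack(outputs)    # [sequence, bsize, hidden, x, y]
```

Nesting layers inside timesteps works because layer i at step t only needs layer i-1 at step t and layer i at step t-1, both already computed.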
The second problem is that in ConvLSTMCell there are no biases. For example, in
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)
it should arguably be something like
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)
But I don't know whether such constants would affect the backward phase.
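One way to answer the backward-phase question: if the bias is registered as an nn.Parameter, autograd computes its gradient like any other weight; a plain constant tensor would simply not be updated but would not break backward. Here is a hedged sketch of a single gate with an explicit Bci term. The names (Wxi, Whi, Wci, Bci) mirror the discussion above, but the shapes and bias=False choices are assumptions, not the repository's code:

```python
import torch
import torch.nn as nn

class BiasedGate(nn.Module):
    """Sketch of one ConvLSTM gate with an explicit learnable bias term."""
    def __init__(self, input_channels, hidden_channels, kernel_size, shape):
        super().__init__()
        padding = kernel_size // 2
        # With bias=False on the convs, Bci below is this gate's only bias.
        self.Wxi = nn.Conv2d(input_channels, hidden_channels, kernel_size,
                             padding=padding, bias=False)
        self.Whi = nn.Conv2d(hidden_channels, hidden_channels, kernel_size,
                             padding=padding, bias=False)
        # Peephole weight and bias as nn.Parameter so autograd updates them.
        self.Wci = nn.Parameter(torch.zeros(1, hidden_channels, *shape))
        self.Bci = nn.Parameter(torch.zeros(1, hidden_channels, 1, 1))

    def forward(self, x, h, c):
        return torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)
```

Note that since nn.Conv2d already defaults to bias=True, keeping the conv biases (as the repository does for Wxi) makes a separate Bci redundant rather than wrong.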
P.S. I'm a beginner myself, so maybe I'm wrong. Please reply :)
Hi, I agree with your point about the lack of bias.
But I am only a beginner with ConvLSTM; I understand the principle but cannot yet use it. Since you have successfully used the author's convolutional_lstm_pytorch, could you please send me the code behind this successful prediction image (adapted from Keras)?
I would be very grateful, because learning ConvLSTM is really painful.
Could you please send me the code behind this successful prediction image (adapted from Keras)? Thank you.
Related Issues (20)
- using for custom dataset HOT 1
- Is there any experiments results provided?
- Found the code is much slower than Keras counterpart (takes 2-3 times longer time). Do you know why?
- What's the shape of input? HOT 1
- I think forward code is wrong
- The shape of the output
- Why Wci, Wcf, Wco are Variables rather than nn.Parameters HOT 3
- Where is the sequence length HOT 1
- why hidden_channels % 2 == 0 ?
- Error in backward HOT 1
- Peephole connections (Wci, Wcf, Wco) gradient update HOT 1
- Why self.num_features=4 in line 15?
- How can I use this module for predicting the moving mnist like the paper? HOT 1
- about concat HOT 5
- about forward HOT 1
- RuntimeError: Jacobian mismatch for output 0 with respect to input 0 HOT 3
- Questions about input and step HOT 2
- How to do the entire sequence all at once?
- Why Wci,Wcf,Wco should be initialized at the beginning of each batch? HOT 6