Comments (4)
I think the back-propagation also has to deal with this addition, probably by adding a reverse tanh calculation. But because I don't fully understand the back-propagation yet, I can't pinpoint exactly what to do. Funny that the overall loss is already better even without the back-prop fix.
I also tried replacing these lines in the back-propagation (located at the beginning of the function top_diff_is):
ds = self.state.o * top_diff_h + top_diff_s
do = self.state.s * top_diff_h
I changed them to (adding np.tanh around both s values):
ds = self.state.o * top_diff_h + np.tanh(top_diff_s)
do = np.tanh(self.state.s) * top_diff_h
This resulted in an even better loss of 4.26917706433e-07, but I am skeptical about the correctness here.
Anyway, I am only mentioning this for people who want to add the tanh for a performance improvement; I am not saying it should be added to the code. The code is simpler without the tanh, which makes it easier to understand for learning purposes.
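For completeness, here is a minimal sketch of what the matching backward pass would look like if the forward pass were changed to h = o * tanh(s). The chain rule puts the tanh derivative on the state, not on the gradient; the standalone function signature below is hypothetical (the real top_diff_is reads these values from self.state):

import numpy as np

def top_diff_is_with_output_tanh(o, s, top_diff_h, top_diff_s):
    # Forward pass assumed: h = o * tanh(s).
    # Chain rule: dL/ds = dL/dh * o * tanh'(s) + top_diff_s,
    # with tanh'(s) = 1 - tanh(s)^2.
    tanh_s = np.tanh(s)
    ds = o * top_diff_h * (1. - tanh_s ** 2) + top_diff_s
    do = tanh_s * top_diff_h
    return ds, do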
The second.
Quote from the paper:
"It is customary that the internal state first be run through a tanh
activation function, as this gives the output of each cell the same dynamic
range as an ordinary tanh hidden unit. However, in other neural network
research, rectified linear units, which have a greater dynamic range, are
easier to train. Thus it seems plausible that the nonlinear function on the
internal state might be omitted."
But with the current example code, adding the tanh seems to give a better result. Still, both results are quite accurate:
With tanh (100 iterations), loss: 6.31438767294e-07
Without tanh (100 iterations), loss: 2.61076356822e-06
(Note: do not confuse this tanh with the tanh at the input, a.k.a. LstmState.g.)
tl;dr: Both with and without tanh() are possible.
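For reference, the two variants compared above differ only in how the cell output h is formed from the internal state; a minimal sketch (the use_tanh flag is hypothetical, just to show both forms side by side):

import numpy as np

def cell_output(s, o, use_tanh=True):
    # s: internal cell state, o: output gate activation (numpy arrays).
    # With use_tanh this is the "customary" form from the quote, h = o * tanh(s);
    # without it, the nonlinearity on the internal state is omitted: h = o * s.
    return o * np.tanh(s) if use_tanh else o * s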
Hello. I got a lot out of reading your comment, but I have a question here: if we add the tanh, the first line should be:
ds = self.state.o * top_diff_h * (1 - np.tanh(self.state.s) ** 2) + top_diff_s
I think so. Welcome to discuss.
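One way to check which expression is right is a finite-difference test of the proposed derivative of h = o * tanh(s) with respect to s (leaving out the additive top_diff_s term, which does not involve the tanh). A small self-contained sketch with made-up values:

import numpy as np

np.random.seed(0)
s, o, top_diff_h = np.random.randn(5), np.random.randn(5), np.random.randn(5)

# Analytic gradient via the chain rule: dL/ds = dL/dh * o * (1 - tanh(s)^2)
ds_analytic = o * top_diff_h * (1. - np.tanh(s) ** 2)

# Numerical gradient of L = top_diff_h . (o * tanh(s))
eps = 1e-6
ds_numeric = np.zeros_like(s)
for i in range(len(s)):
    s_plus, s_minus = s.copy(), s.copy()
    s_plus[i] += eps
    s_minus[i] -= eps
    ds_numeric[i] = (np.dot(top_diff_h, o * np.tanh(s_plus))
                     - np.dot(top_diff_h, o * np.tanh(s_minus))) / (2 * eps)

print(np.max(np.abs(ds_analytic - ds_numeric)))  # agrees to roughly 1e-10

The tanh must be applied to the state self.state.s, not to the gradient top_diff_s, which is what the check above confirms.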