
Comments (4)

ScottMackay2 commented on July 22, 2024

I think the back-propagation also has to account for this addition, probably by adding the derivative of tanh somewhere. But because I don't fully understand the back-propagation yet, I can't pinpoint exactly what to change. Funnily enough, the overall loss is already better even without the back-prop fix.

I also tried replacing these lines in the back-propagation (located at the beginning of the function top_diff_is):
ds = self.state.o * top_diff_h + top_diff_s
do = self.state.s * top_diff_h

I changed them to (adding np.tanh around both s values):
ds = self.state.o * top_diff_h + np.tanh(top_diff_s)
do = np.tanh(self.state.s) * top_diff_h

This resulted in an even better loss of 4.26917706433e-07, but I am skeptical about its correctness.

Anyway, I am only mentioning this for people who want to add the tanh for the accuracy improvement; I am not saying it should be added to the code. The code is simpler without the tanh, which makes it easier to understand for learning purposes.
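
For anyone who does want to attempt the full fix: below is a minimal sketch of what the matching backward step could look like, assuming the forward pass is also changed to h = self.state.o * np.tanh(self.state.s). The standalone helper is only a restatement of the chain rule, not code from the repository:

import numpy as np

def cell_output_backward(o, s, top_diff_h, top_diff_s):
    # Illustrative helper: gradients through h = o * tanh(s).
    #   o          -- output gate activation (self.state.o)
    #   s          -- cell state (self.state.s)
    #   top_diff_h -- gradient of the loss w.r.t. h
    #   top_diff_s -- gradient carried into s from the next time step
    tanh_s = np.tanh(s)
    ds = o * top_diff_h * (1. - tanh_s ** 2) + top_diff_s  # d(o*tanh(s))/ds plus the carried gradient
    do = tanh_s * top_diff_h                               # d(o*tanh(s))/do
    return ds, do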


xylcbd commented on July 22, 2024

the second.


ScottMackay2 commented on July 22, 2024

Quote from the paper:
"It is customary that the internal state first be run through a tanh
activation function, as this gives the output of each cell the same dynamic
range as an ordinary tanh hidden unit. However, in other neural network
research, rectified linear units, which have a greater dynamic range, are
easier to train. Thus it seems plausible that the nonlinear function on the
internal state might be omitted."

But with the current example code, adding tanh seems to give a lower loss. Still, both results are quite accurate:
With tanh (100 iterations), loss: 6.31438767294e-07
Without tanh (100 iterations), loss: 2.61076356822e-06

(Note: do not confuse this tanh with the tanh at the input a.k.a. LstmState.g)

tl;dr: both with and without tanh() work.
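
For reference, here is a minimal sketch of one forward cell update showing where the two nonlinearities sit. The variable names g, i, f, o, s, h follow the LstmState convention, but the standalone function (and taking pre-activations as arguments) is purely illustrative:

import numpy as np

def sigmoid(x):
    return 1. / (1. + np.exp(-x))

def cell_forward(g_in, i_in, f_in, o_in, s_prev, use_output_tanh=True):
    # Illustrative single LSTM cell update from pre-activations.
    g = np.tanh(g_in)        # input candidate (LstmState.g) -- always squashed by tanh
    i = sigmoid(i_in)        # input gate
    f = sigmoid(f_in)        # forget gate
    o = sigmoid(o_in)        # output gate
    s = g * i + s_prev * f   # new cell state
    h = o * (np.tanh(s) if use_output_tanh else s)  # the tanh under discussion
    return s, h

# With the output tanh, h stays inside (-1, 1); without it, h grows with s.
print(cell_forward(2.0, 2.0, 2.0, 2.0, s_prev=2.0, use_output_tanh=True))   # s ~ 2.61, h ~ 0.87
print(cell_forward(2.0, 2.0, 2.0, 2.0, s_prev=2.0, use_output_tanh=False))  # s ~ 2.61, h ~ 2.30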


ZhangPengB commented on July 22, 2024

I think the back-propagation also has to account for this addition, probably by adding the derivative of tanh somewhere. But because I don't fully understand the back-propagation yet, I can't pinpoint exactly what to change. Funnily enough, the overall loss is already better even without the back-prop fix.

I also tried replacing these lines in the back-propagation (located at the beginning of the function top_diff_is):
ds = self.state.o * top_diff_h + top_diff_s
do = self.state.s * top_diff_h

I changed them to (adding np.tanh around both s values):
ds = self.state.o * top_diff_h + np.tanh(top_diff_s)
do = np.tanh(self.state.s) * top_diff_h

This resulted in an even better loss of 4.26917706433e-07, but I am skeptical about its correctness.

Anyway, I am only mentioning this for people who want to add the tanh for the accuracy improvement; I am not saying it should be added to the code. The code is simpler without the tanh, which makes it easier to understand for learning purposes.


Hello. I learned a lot from reading your comment, but I have a question here: if we add tanh, shouldn't the first line be
ds = self.state.o * top_diff_h * (1 - np.tanh(self.state.s) ** 2) + top_diff_s
I think it should. Welcome to discuss.
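
A quick finite-difference sketch (illustrative check code, not from the repository) supports that expression; it treats the relevant part of the loss as top_diff_h * h + top_diff_s * s with h = o * tanh(s) and compares the analytic gradient with a numerical one:

import numpy as np

# Arbitrary illustrative values for the gate, state, and incoming gradients.
o, s, top_diff_h, top_diff_s, eps = 0.8, 1.3, 0.5, 0.2, 1e-6

analytic = o * top_diff_h * (1 - np.tanh(s) ** 2) + top_diff_s

# Central finite difference of L(s) = top_diff_h * o * tanh(s) + top_diff_s * s
L = lambda s_: top_diff_h * (o * np.tanh(s_)) + top_diff_s * s_
numerical = (L(s + eps) - L(s - eps)) / (2 * eps)

print(abs(analytic - numerical) < 1e-8)  # True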

