Comments (13)

eugenioclrc commented on May 22, 2024

Another great lib for "inspiration"
https://github.com/PAIR-code/deeplearnjs


AlexisTM commented on May 22, 2024

Hey @snapo, while that is the way forward for inference or backpropagation through a network, I suspect it will be hard to gain performance on the NEAT algorithm: the benchmarks do not take kernel parsing into account, while a NEAT network's topology is updated all the time.

Also, you could lose performance for smaller arrays.

NOTE: I love gpu.js :D


AlexisTM commented on May 22, 2024

@snapo Check out the recent PR #50; you can see how much better it performs already!


eugenioclrc commented on May 22, 2024

This also looks very promising:
https://tenso.rs


snapo commented on May 22, 2024

Hey @AlexisTM, that is a good point about small data. I am currently working in a different way; my workflow looks like this:

I use my full dataset (1.2 TB) with Keras. I also have a small 120 MB dataset that I use with this JavaScript project, to find good fittings that I don't find with Keras, since I don't know how to make Keras do evolutionary learning.

After the 120 MB run is done, I look at the fittings and apply the same fittings to my Keras integration. It would just have been nice to do those dense layers in one framework instead of two :-)

(In neataptic)
Dense(Input=408)
Dense(816)
Dense(816)
Dropout(0.5)
Dense(816)
Dropout(0.2)
Dense(200)
Dropout(0.2)
Dense(1)
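
For comparison, here is a rough sketch of how that stack might be wired up with Neataptic's layer API. This is my own illustration, not code from the thread; note that in Neataptic, dropout is a single training option rather than a per-layer setting, so the three different dropout rates above cannot be reproduced exactly, and `trainingSet` is a placeholder.

```javascript
const { architect, Layer } = require('neataptic');

// Dense stack roughly matching the listing above (408 -> 816 -> 816 -> 816 -> 200 -> 1)
const input   = new Layer.Dense(408);
const hidden1 = new Layer.Dense(816);
const hidden2 = new Layer.Dense(816);
const hidden3 = new Layer.Dense(816);
const hidden4 = new Layer.Dense(200);
const output  = new Layer.Dense(1);

input.connect(hidden1);
hidden1.connect(hidden2);
hidden2.connect(hidden3);
hidden3.connect(hidden4);
hidden4.connect(output);

const network = architect.Construct([input, hidden1, hidden2, hidden3, hidden4, output]);

// Dropout is applied while training, not as a layer; one rate for all hidden nodes.
// trainingSet is a placeholder: [{ input: [...408 numbers], output: [number] }, ...]
network.train(trainingSet, { dropout: 0.5, iterations: 1000, error: 0.03 });
```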

For me it works great... a bit slow, but the results for improving my Keras implementation are huge.
Even if it never gets GPU support, I will still use it to find the best connections.

With the help of this JavaScript library I got from 50.2% to 67% with just the test data.

One way I was thinking about the GPU implementation is, for example:

Input: random 20M rows
population: 1'000'000
elitism: 10

In this way the data transferred to the GPU gets bigger and the GPU has to do more work. When the one million tests are dead, it comes back and new data is prepared with the 10% best weights I received. It's just an idea, or how I think about it.

It does not make the search faster, but it should make it more accurate over big input data and a very low learning rate. Maybe I misunderstand how it should work, as I have never worked with GPU programming.
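
To make that concrete, below is a rough, hypothetical GPU.js sketch of the scheme, shrunk to a single-output linear model so each thread needs only scalar state: one GPU thread scores one candidate weight vector against the whole dataset, and the host then keeps the best candidates to breed the next generation. All sizes and names are illustrative, and this is not Neataptic's API.

```javascript
const { GPU } = require('gpu.js'); // assuming the gpu.js v2 import style

const gpu = new GPU();

// One thread per population member: compute the MSE of that candidate's
// weight vector over the whole dataset. weights is a 2D array,
// weights[candidate][feature]; inputs is inputs[row][feature].
const evalPopulation = gpu.createKernel(function (weights, inputs, targets) {
  let error = 0;
  for (let row = 0; row < this.constants.rows; row++) {
    let sum = 0;
    for (let col = 0; col < this.constants.cols; col++) {
      sum += weights[this.thread.x][col] * inputs[row][col];
    }
    const diff = sum - targets[row];
    error += diff * diff;
  }
  return error / this.constants.rows;
}, {
  constants: { rows: 1000, cols: 8 }, // illustrative sizes, not 20M rows
  output: [1000]                      // population of 1000 candidates
});

// Host side: population, inputs and targets are plain nested JS arrays.
// Score everyone on the GPU, then keep the `elitism` best for the next round.
const errors = evalPopulation(population, inputs, targets);
```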


wagenaartje commented on May 22, 2024

I have been doing lots of tests with GPU.js and other GPU libraries, as my main goal is to optimize Neataptic.

First, I tried optimizing the activation of a single network by parallelizing. This is very easy if you have layered networks, because you can activate the network layer by layer, activating all neurons in a layer in parallel. This is not the case in Neataptic: you might construct networks through layers, but the design of networks has to be extremely flexible to support mutation. So neurons are represented in a single array, limiting the activation of neurons to one by one, unless we do some smart topology sorting and queuing. This is something I am experimenting with, but the computational cost of this might outweigh any boost we get by parallelizing.
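
As an illustration of that "smart topology sorting and queuing" (my own sketch, not Neataptic code): for a feed-forward graph you can group neurons into "waves" with Kahn's algorithm, so that every neuron in a wave depends only on earlier waves, and a whole wave could be activated in parallel. Recurrent connections, which NEAT also produces, would need the extra queuing mentioned above.

```javascript
// nodes: array of neuron ids; edges: [from, to] pairs (feed-forward only).
// Returns an array of waves; all neurons in one wave can fire in parallel.
function topologicalWaves(nodes, edges) {
  const indegree = new Map(nodes.map(n => [n, 0]));
  const out = new Map(nodes.map(n => [n, []]));
  for (const [from, to] of edges) {
    indegree.set(to, indegree.get(to) + 1);
    out.get(from).push(to);
  }
  let wave = nodes.filter(n => indegree.get(n) === 0); // the input neurons
  const waves = [];
  while (wave.length > 0) {
    waves.push(wave);
    const next = [];
    for (const n of wave) {
      for (const m of out.get(n)) {
        indegree.set(m, indegree.get(m) - 1);
        if (indegree.get(m) === 0) next.push(m);
      }
    }
    wave = next;
  }
  return waves;
}

// A diamond 0->1, 0->2, 1->3, 2->3 yields [[0], [1, 2], [3]].
```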

If I'm really smart, I'll parallelize the topology sorting and queuing over the entire population as well. That would be some serious algorithm (and I'm thinking of giving it a try as I'm speaking :)), but it still requires something I mention in the following paragraph.

Secondly, I tried activating the entire population in parallel. This is doable, but GPU.js doesn't allow it yet: I can't create any arrays inside the functions I create a kernel with. See this issue I just created.
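
For readers unfamiliar with that limitation: a kernel function returns one number per thread, and (per the issue above) allocating arrays inside the kernel was not supported at the time, so any per-thread scratch space had to be precomputed on the host. Roughly, as an illustration:

```javascript
const { GPU } = require('gpu.js');
const gpu = new GPU();

// Fine: each thread reads indexed inputs and returns a single number.
const ok = gpu.createKernel(function (a, b) {
  return a[this.thread.x] + b[this.thread.x];
}).setOutput([1024]);

// Not supported at the time (the limitation described above):
// allocating an array inside the kernel function.
const broken = gpu.createKernel(function (a) {
  const scratch = [0, 0, 0]; // kernel-local array: rejected by the compiler
  return scratch[0] + a[this.thread.x];
}).setOutput([1024]);
```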

And personally, I think GPU.js is overselling itself. Just like @AlexisTM mentioned, it is not computation-efficient on smaller arrays, and its benchmarks do not take kernel creation into account. But I'm keeping track of its development.

I could also start experimenting with WebMonkeys, but then I would have to learn some shader language 🆘

I am also busy with multithreading for Node, but at this moment it does not really seem to make things a lot faster. When I did micro-optimizations on Neataptic, I only looked at the speeds in Chrome. I'm such a fool.


I find it very interesting how you use Neataptic to gather weights for your network in Keras, @snapo. I am truly intrigued.

But stay tuned, I'm looking for ways to improve the speed of Neataptic.


snapo commented on May 22, 2024

@wagenaartje, wow, WebMonkeys sounds really interesting; I had never heard of it 👍 As I mentioned, everything currently works fine for me. There is no rush for GPU support or multithreading; it is just a "nice to have", not mandatory. If I knew more about GPU programming and parallelism I would help you, but currently I'm busy learning more about machine learning and all the math I missed in school 🥇 ...


snapo commented on May 22, 2024

@AlexisTM thanks a lot... I will try it this evening, or maybe Saturday, and will get back to you.


snapo commented on May 22, 2024

I was able to test it... the speed improvement is around 250% compared to the 1.3 version. I will try to provide a sample or two (with public data) and merge them into this repo if that's OK (probably this week).
What I noticed is how much the batch size affects speed: with a small batch size, say 3-10, the speed is nearly the same, but with bigger batch sizes the speed goes up a lot (exactly what I requested). The dataset I am using is private, so I can't share it. As mentioned before, I will try to implement two samples from CSVs with public data: one LSTM for stock prediction and one CSV with some dense layers.
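
For anyone reproducing the comparison: the batch size is one of Neataptic's documented train() options (default 1, i.e. fully online training). A minimal sketch with illustrative numbers; `trainingSet` is a placeholder.

```javascript
const { architect } = require('neataptic');

// Illustrative network: 10 inputs, 20 hidden, 1 output.
const network = new architect.Perceptron(10, 20, 1);

// trainingSet is a placeholder: [{ input: [...10 numbers], output: [number] }, ...]
network.train(trainingSet, {
  batchSize: 128,  // try small (3-10) vs. large values to see the speed difference
  rate: 0.03,
  iterations: 1000,
  error: 0.01
});
```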


eugenioclrc commented on May 22, 2024

👍 to this suggestion


wagenaartje commented on May 22, 2024

Definitely looks promising. I do hope that

TensorFire models run up to 100x faster than previous in-browser neural network libraries

will actually be true.


CyborgDroid commented on May 22, 2024

Was anyone able to implement GPU support?


AlexisTM commented on May 22, 2024

@CyborgDroid I guess this was never implemented in the end, mainly because Neataptic is meant to be flexible rather than built from dense layers. For dense layers there are other implementations, where you can just run a TensorFlow model on the client using the GPU.
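
For the dense-layer case, one such route is TensorFlow.js, which runs a converted Keras model in the browser on the GPU via its WebGL backend. A minimal, hypothetical sketch; the model URL is a placeholder, and the 408-feature input just mirrors the dense stack discussed earlier in this thread.

```javascript
import * as tf from '@tensorflow/tfjs'; // WebGL backend uses the client GPU

async function run() {
  // Load a Keras model previously converted with the tensorflowjs converter.
  const model = await tf.loadLayersModel('https://example.com/model/model.json');

  // One input row with 408 features (placeholder zeros).
  const input = tf.tensor2d([new Array(408).fill(0)]);

  const prediction = model.predict(input);
  prediction.print();
}

run();
```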

