
ruby-dnn's People

Contributors

kojix2 unagiootoro


Forkers

kojix2 saouddk

ruby-dnn's Issues

GPU Support

Hi @unagiootoro

I ran the XOR sample and found that Cumo was slower than Numo.

If you don't mind my asking, do you have a GPU + CUDA environment?

If you don't have a GPU, someone in the Ruby community (including me) would be willing to support you with a donation...
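
For reference, here is a minimal timing sketch of the kind of comparison I mean (the matrix size is arbitrary, and this assumes the cumo gem and a CUDA GPU are available; it is not the XOR sample itself):

    require "benchmark"
    require "numo/narray"
    require "cumo/narray"

    # Tiny workloads like XOR rarely benefit from a GPU, since
    # kernel-launch overhead dominates; large matrices are where
    # Cumo should pull ahead.
    n = 2048
    a = Numo::SFloat.new(n, n).rand
    b = Cumo::SFloat.new(n, n).rand

    Benchmark.bm(6) do |x|
      x.report("Numo:") { a.dot(a) }
      x.report("Cumo:") { b.dot(b) }
    end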

Improve the readme

Hello.

I recently introduced ruby-dnn on Reddit.
One of the Reddit comments was "improve the documentation".

I'm not an engineer or a programmer.
I don't really understand deep learning / neural networks.

But I think the concept of ruby-dnn is nice.

Please brush up the README; doing so will encourage more people to use it.

I think the following list is useful:
https://github.com/matiassingers/awesome-readme

Flux.jl
https://github.com/FluxML/Flux.jl

Reddit
https://www.reddit.com/r/ruby/comments/bwnely/rubydnn/

CR+LF (Windows) line breaks

Hi.

ruby-dnn currently uses Windows (CR+LF) line breaks. I think it should use LF line breaks instead.

When you open a file in Vim, it looks like this:

[Screenshot: ^M at the end of every line]

You see ^M at the end of every line; ^M represents the CR character.

This is not a bug. However, cross-platform open source projects usually use LF line breaks, so potential contributors have to configure their editors for CR+LF just for the ruby-dnn project.

Again, this is not a mistake; you can leave the line breaks as they are. But it would be a disadvantage in attracting users and contributors.
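
One common way to fix this (a sketch of the usual Git approach, not something the project currently ships) is to normalize line endings with a .gitattributes file at the repository root:

    # .gitattributes — store all text files with LF in the repository
    * text=auto eol=lf

Afterwards, running git add --renormalize . (Git 2.16+) and committing converts the existing files.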

Unclear part of the dcgan example

RuboCop reports that this line is unclear:

examples/dcgan/dcgan.rb:132:14: W: Lint/Void: Operator + used in void context.
    dis_loss + @dis.train_on_batch(images, y_fake)

def train_step(x_batch, y_batch)
  batch_size = x_batch.shape[0]
  noise = Numo::SFloat.new(batch_size, 20).rand(-1, 1)
  images = @gen.predict(noise)
  y_real = Numo::SFloat.ones(batch_size, 1)
  y_fake = Numo::SFloat.zeros(batch_size, 1)
  @dis.enable_training
  dis_loss = @dis.train_on_batch(x_batch, y_real)
  dis_loss + @dis.train_on_batch(images, y_fake)  # flagged line: the sum is discarded
  noise = Numo::SFloat.new(batch_size, 20).rand(-1, 1)
  label = Numo::SFloat.cast([1] * batch_size).reshape(batch_size, 1)
  dcgan_loss = train_on_batch(noise, label)
  { dis_loss: dis_loss, dcgan_loss: dcgan_loss }
end
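
Presumably the intent was to accumulate the fake-image loss into dis_loss, which would also silence the warning. My guess at the intended fix (not a confirmed change):

    dis_loss += @dis.train_on_batch(images, y_fake)  # keep both the real and fake losses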

Example of using Embedding

Hi,

Thanks for creating this package. I am very happy to see a native Ruby implementation of DNNs. I have been playing around with it for use in a machine learning class I am teaching.

I have a dataset with two different categorical features, and I would like to use one Embedding for each. Some other features in my dataset are numeric. In other packages there is a way to slice along an axis, apply the embedding, and then concatenate, but I can't figure out how to do this here. There is code for an Embedding layer but no examples.

Also, can you show how to embed features into multiple dimensions?

class MyModel < Model
  # ...
  def forward(x)
    e1 = @embed1.(x)  # should slice out the first categorical column
    e2 = @embed2.(x)  # should slice out the second categorical column
    x = Concatenate.(e1, e2, axis: 1)
    x = @dense.(x)
  end
end

My data looks like this:

48.0, 2.0595, 1.0, 2.0, 1.0, 0.0

where the last two features are to be embedded and the rest are numeric.
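
For the slicing part, here is a minimal Numo sketch of what I mean (the column indices follow the layout above; the @embed1/@embed2/Concatenate calls are my guess at the API, mirroring the pseudocode):

    # x: Numo::SFloat of shape [batch_size, 6]
    numeric = x[true, 0...4]   # the four numeric columns
    cat1    = x[true, 4]       # first categorical column
    cat2    = x[true, 5]       # second categorical column
    e1 = @embed1.(cat1)        # one Embedding per categorical feature
    e2 = @embed2.(cat2)
    out = Concatenate.(numeric, e1, e2, axis: 1)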

Thanks!

Ruby-dnn Project Logo

How about making an original character for ruby-dnn? A logo has great power to attract people. I like drawing pictures, and when I saw your Twitter, I felt that you also like drawing.

(After you create a character concept, you can also order a logo from a designer.)

References:
yoshoku/rumale#4

Feature Request: Add Mish Activation

I'm new to Ruby and have only used it for web scraping so far, but hopefully this can be considered. Nice work with the library, though.

Mish is a novel activation function proposed by Diganta Misra (arXiv:1908.08681). It has shown promising results so far and has been adopted in several packages.

All benchmarks, analysis, and links to official package implementations can be found in the author's repository: https://github.com/digantamisra98/Mish

It would be nice to have Mish as an option within the activation function group.
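
For reference, Mish is defined as mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x). Here is a minimal Numo sketch of the forward pass (the free-standing function is my own illustration, not ruby-dnn's layer API):

    require "numo/narray"

    # Mish forward pass: x * tanh(ln(1 + exp(x)))
    def mish(x)
      x * Numo::NMath.tanh(Numo::NMath.log(1 + Numo::NMath.exp(x)))
    end

    p mish(Numo::SFloat[-2, -1, 0, 1, 2])

A real ruby-dnn layer would also need the backward pass (the derivative of Mish), but the formula above is the core of it.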

Below is a comparison of Mish with other conventional activation functions in a SEResNet-50 on CIFAR-10:

[Figure: SEResNet-50 on CIFAR-10, Mish vs. other activation functions]
