
ml-projects's Introduction

Implementation of Small Projects using TensorFlow.js

Check this tutorial on TensorFlow.js: https://medium.com/tensorflow/a-gentle-introduction-to-tensorflow-js-dba2e5257702

pix2pix

Fast image-to-image translation. Check the demo: https://zaidalyafeai.github.io/pix2pix/cats.html

fast-style

Fast style transfer. Check the demo: https://zaidalyafeai.github.io/fast-style/

Real Time Face Segmentation

Real-time face segmentation. Check the demo: https://zaidalyafeai.github.io/face-segmentation/

Real Time style transfer

Real-time style transfer. Check the demo: https://zaidalyafeai.github.io/RST/

Real Time Face Reconstruction

Real-time face reconstruction. Check the demo: https://zaidalyafeai.github.io/fast-style/

Texter

Recognition of LaTeX symbols. Check the demo: https://zaidalyafeai.github.io/texter/


Sketcher

Recognition of sketch drawings. Check the demo: https://zaidalyafeai.github.io/sketcher/


Poser

Track an object using your eyes. Check the demo: https://zaidalyafeai.github.io/poser/


Racer

Control a racing car using your eye movement. Check the demo: https://zaidalyafeai.github.io/racer/


Sentiment Classification

Given a movie review, classify it as positive or negative. Check the demo: https://zaidalyafeai.github.io/sentiment-classification/


ml-projects's People

Contributors

choas, git-hamza, mrm8488, zaidalyafeai


ml-projects's Issues

Total number of Shard files

I was building my own project for the Quick, Draw! doodle challenge, but I'm stuck at one point. When I convert my model.h5 file to a TensorFlow.js-compatible format, I get only one shard file and a JSON file. I have seen many people converting their models and ending up with 4-5 shard files. Can you please explain why that is?

I'm getting 97% accuracy on the training data and 95.08% on the testing data, but when I load the model in the browser it performs very poorly. I have used only 3 categories so far.

I suspect the model performs poorly because of some problem in the file conversion. Your help would be much appreciated.
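
A minimal conversion sketch, assuming the tensorflowjs Python package is installed and the model is a plain Keras .h5 file (the paths below are placeholders, not from this repository). The converter splits the weights into shards of roughly 4 MB by default, so a small 3-class model naturally fits into a single shard while larger models produce several; the shard count by itself says nothing about whether the conversion went right.

import tensorflow as tf
import tensorflowjs as tfjs

# "model.h5" and "tfjs_model" are placeholder paths
model = tf.keras.models.load_model('model.h5')
tfjs.converters.save_keras_model(model, 'tfjs_model')
# The output folder contains model.json plus one or more group1-shardNofM.bin
# files; the number of shards depends only on the total weight size
# (about 4 MB per shard by default).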

Pb2json

I've trained the pix2pix model in Python and got a .pb / checkpoint file, but I don't know how to use it in JS. Should I transform the .pb or ckpt into JSON? Could you offer a .py or any other script to do it? It would be much appreciated!
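
Not a script from this repository, just a hedged sketch of one possible path: if the trained generator can be re-exported from Python as a TensorFlow SavedModel, the tensorflowjs converter can turn it into the model.json + binary-shard format that tf.js loads. The function below comes from the tensorflowjs Python package (its exact location varies between versions), and `generator` and the paths are placeholders.

import tensorflow as tf
from tensorflowjs.converters import convert_tf_saved_model

# generator: the trained tf.keras model (or tf.Module) rebuilt from the checkpoint
tf.saved_model.save(generator, 'saved_model')
# produces model.json plus weight shards that newer tf.js versions load with tf.loadGraphModel()
convert_tf_saved_model('saved_model', 'web_model')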

How long does the model loading take?

Hi,
Thanks for your great work!
When I open some of the 'index.html' files, such as sketcher, sentiment-classification, fast-style, texter and face-segmentation, the model loading never seems to finish. Could you please tell me how to run your HTML files, and how long I should wait before concluding that something has gone wrong?
Best,
Amose.
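
Not an answer from the repository, just a common cause worth ruling out: if index.html is opened straight from disk (file://), the browser may refuse to fetch model.json and the weight shards, and the loading indicator never stops. A minimal sketch of serving the repository root over HTTP instead (port 8000 is arbitrary):

# run from the repository root, then open http://localhost:8000/sketcher/ in the browser
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(('localhost', 8000), SimpleHTTPRequestHandler).serve_forever()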

Warning occurred while loading the model

Hi @zaidalyafeai

I was trying to run inference with your model using the official TensorFlow.js library, version 0.13.0.
But a warning occurred while loading some of the models, something like:
The shape of the input tensor ([null,128,128,32]) does not match the expectation of layer conv2d_2: [null,256,256,3]

Is there something wrong in my code?
Thanks
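
A small diagnostic sketch, not from the repository: a mismatch like this often means the tensor fed to the model (or passed between layers) has a different size than the original Keras model expects, for example because the input was not resized, or because model.json and the weight shards come from different versions of the model. Printing the expected input shape on the Python side is a cheap first check (the path is a placeholder):

import tensorflow as tf

# "model.h5" is a placeholder for the original Keras model used for the conversion
model = tf.keras.models.load_model('model.h5')
print(model.input_shape)   # e.g. (None, 256, 256, 3) -- the tensor built in the browser must match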

Can't reproduce the Quick Draw Example

Hi Zaid,

Thanks for your great work.

I tried your Quick Draw example in the folder Sketcher (in this repository); it's quite nice, and both of your pre-trained models work great.

I then followed your Google Colab Sketcher notebook and did exactly what it describes, but the model produced by the Colab gives very poor results. See the screenshots below.

[screenshots showing the poor predictions]

I want to ask whether the models in your Sketcher folder use the same architecture as the one in your Colab, and whether you used the same dataset to train them. Could you explain how I can train your model to get the same results as your pre-trained models?

Thank you for your time.

ValueError: Layer weight shape (4, 4, 3, 32) not compatible with provided weight shape (4, 4, 3, 64)

When trying to convert my model, I get the following error:

 File "convert_keras.py", line 133, in <module>
    layer.set_weights([W, b])
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\base_layer.py", line 1057, in set_weights
    'provided weight shape ' + str(w.shape))
ValueError: Layer weight shape (4, 4, 3, 32) not compatible with provided weight shape (4, 4, 3, 64)

Any tips on what might be going wrong here?
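
Not from the repository, just a hedged diagnostic: this error usually means the architecture being rebuilt before layer.set_weights() defines the convolution with 32 filters, while the saved weights were produced by a model whose corresponding layer had 64 filters, so the two definitions have to be brought back in line. A quick way to see where they diverge (build_model() and the path are placeholders, not names from convert_keras.py):

import tensorflow as tf

saved = tf.keras.models.load_model('model.h5')   # placeholder path to the trained model
rebuilt = build_model()                           # placeholder for the architecture being rebuilt

# print the weight shapes side by side to spot the layer whose filter counts differ
for a, b in zip(saved.layers, rebuilt.layers):
    print(a.name, [w.shape for w in a.get_weights()],
          '<->', b.name, [w.shape for w in b.get_weights()])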

Sketcher.ipynb model.fit error

I copied sketcher/Sketcher.ipynb to train the model
https://colab.research.google.com/github/zaidalyafeai/zaidalyafeai.github.io/blob/master/sketcher/Sketcher.ipynb
but got the error: module 'tensorflow._api.v2.train' has no attribute 'AdamOptimizer'
so I changed

adam = tf.train.AdamOptimizer()
to
adam = tf.optimizers.Adam()

https://github.com/AllenBootung/zaidalyafeai.github.io/blob/master/Sketcher.ipynb
After this, I still get:
ValueError: Shapes (256, 4) and (256, 100) are incompatible
How can I fix this? Thank you.

model.fit(x = x_train, y = y_train, validation_split=0.1, batch_size = 256, verbose=2, epochs=5)

Epoch 1/5
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-42-d96732c3590f> in <module>()
----> 1 model.fit(x = x_train, y = y_train, validation_split=0.1, batch_size = 256, verbose=2, epochs=5)

9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
  1098                 _r=1):
  1099               callbacks.on_train_batch_begin(step)
-> 1100               tmp_logs = self.train_function(iterator)
  1101               if data_handler.should_sync:
  1102                 context.async_wait()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    826     tracing_count = self.experimental_get_tracing_count()
    827     with trace.Trace(self._name) as tm:
--> 828       result = self._call(*args, **kwds)
    829       compiler = "xla" if self._experimental_compile else "nonXla"
    830       new_tracing_count = self.experimental_get_tracing_count()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    869       # This is the first call of __call__, so we have to initialize.
    870       initializers = []
--> 871       self._initialize(args, kwds, add_initializers_to=initializers)
    872     finally:
    873       # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    724     self._concrete_stateful_fn = (
    725         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 726             *args, **kwds))
    727 
    728     def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
  2967       args, kwargs = None, None
  2968     with self._lock:
-> 2969       graph_function, _ = self._maybe_define_function(args, kwargs)
  2970     return graph_function
  2971 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
  3359 
  3360           self._function_cache.missed.add(call_context_key)
-> 3361           graph_function = self._create_graph_function(args, kwargs)
  3362           self._function_cache.primary[cache_key] = graph_function
  3363 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
  3204             arg_names=arg_names,
  3205             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3206             capture_by_value=self._capture_by_value),
  3207         self._function_attributes,
  3208         function_spec=self.function_spec,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    988         _, original_func = tf_decorator.unwrap(python_func)
    989 
--> 990       func_outputs = python_func(*func_args, **func_kwargs)
    991 
    992       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    632             xla_context.Exit()
    633         else:
--> 634           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    635         return out
    636 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    975           except Exception as e:  # pylint:disable=broad-except
    976             if hasattr(e, "ag_error_metadata"):
--> 977               raise e.ag_error_metadata.to_exception(e)
    978             else:
    979               raise

ValueError: in user code:

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
        outputs = model.train_step(data)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:756 train_step
        y, y_pred, sample_weight, regularization_losses=self.losses)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:203 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:152 __call__
        losses = call_fn(y_true, y_pred)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:256 call  **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:1537 categorical_crossentropy
        return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/backend.py:4833 categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:1134 assert_is_compatible_with
        raise ValueError("Shapes %s and %s are incompatible" % (self, other))

    ValueError: Shapes (256, 4) and (256, 100) are incompatible
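
A hedged reading of the last line of the traceback above, not a confirmed fix: the one-hot labels are 4 columns wide while the model's final Dense layer has 100 units, so the number of classes used when encoding the labels and the number of units in the output layer have to match. A minimal sketch with placeholder names (num_classes, model, x_train and y_train are not necessarily the notebook's variables):

import tensorflow as tf

num_classes = 100   # however many Quick, Draw! categories are actually loaded
# the labels must be one-hot encoded with the same width as the output layer
y_train_oh = tf.keras.utils.to_categorical(y_train, num_classes)

# ... and the last layer of the model must be built with that same number:
# tf.keras.layers.Dense(num_classes, activation='softmax')

model.compile(optimizer=tf.optimizers.Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x=x_train, y=y_train_oh, validation_split=0.1,
          batch_size=256, verbose=2, epochs=5)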

Half the canvas greyed-out

On both Chrome and Firefox on my computer, for all of the sketch-to-image scripts, the bottom half of the canvas appears grey.

For the cat model, whatever I sketch, nothing appears on the white part of the canvas, even after I select a model. I'm not posting this as a separate issue because I think the two problems are related.


How to recognize images from a camera data stream

I preprocess imgData like this

const imgData = {data: new Uint8Array(frame.data), width: frame.width, height: frame.height}
return tf.tidy(() => {
    // read the frame as a single-channel (grayscale) tensor
    const tensor = tf.browser.fromPixels(imgData, 1)
    // downscale to the 28x28 input the model expects
    const resized = tf.image.resizeBilinear(tensor, [28, 28]).toFloat()
    // invert and normalize to [0, 1]
    const offset = tf.scalar(255.0)
    const normalized = tf.scalar(1.0).sub(resized.div(offset))
    // add the batch dimension
    const batched = normalized.expandDims(0)
    return batched
})

but it doesn't work correctly.
Did I make a mistake?

tensorflowjs_converter not working

Good day!
Thank you very much for your hard work.
I followed the instructions at https://github.com/zaidalyafeai/zaidalyafeai.github.io/tree/master/pix2pix and the script https://colab.research.google.com/github/zaidalyafeai/zaidalyafeai.github.io/blob/master/pix2pix/tf_pix2pix.ipynb
Everything was fine until I got to the point of converting the Keras model.
I also followed your recommendations, but I get the error: AttributeError: 'EnumTypeWrapper' object has no attribute 'DT_FLOAT'

Maybe you know what the reason is?
Thank you!
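
Not a confirmed fix, just a hedged note: this DT_FLOAT error is commonly caused by a version mismatch between the tensorflowjs converter and the installed tensorflow / protobuf packages, so a first step is to print the versions actually installed in the runtime before pinning them to a combination the notebook was written against.

# Hedged diagnostic: print the versions of the packages involved in the conversion
import tensorflow, tensorflowjs, google.protobuf
print(tensorflow.__version__, tensorflowjs.__version__, google.protobuf.__version__)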

More detail in the output

Could you tell me which parameter in the code controls the level of detail of the resulting image, presumably at the cost of performance?

This is about the pix2pix project.

How to get faster prediction time

We are working on license plate recognition. We used a model converted from a Keras .h5 file, but it takes a long time to load, and the prediction time is also high.
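
Not a recipe from the repository, just two commonly used levers, sketched with placeholders. First, the initial prediction in TensorFlow.js is usually much slower than later ones because shaders are compiled on first use, so a warm-up call on a dummy input right after loading is standard practice. Second, quantizing the weights during conversion shrinks the downloaded shards and speeds up loading; the keyword argument below is version-dependent (older tensorflowjs releases take quantization_dtype, newer ones a dtype map).

import numpy as np
import tensorflow as tf
import tensorflowjs as tfjs

# "model.h5" and "tfjs_model" are placeholder paths
model = tf.keras.models.load_model('model.h5')
# store the weights as 1-byte integers instead of 4-byte floats (~4x smaller download)
tfjs.converters.save_keras_model(model, 'tfjs_model',
                                 quantization_dtype=np.uint8)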

Integration with ml5.js

Hey! This is not an issue, but more of an invitation!

You have really cool demos and examples with tf.js.
If you are interested, we could work on integrating some of your examples and methods into ml5.js.

License for models

Thank you for all your work on these models!
We are wondering what the license of the pretrained models is. We are building a free online drawing tool that integrates many models in one place, and we would love to include some of these.

How to run predictions on saved images/drawings?

I tried to make a prediction on a .jpg/.png image of a random drawing (saved directly from the Quick, Draw! dataset).
However, the prediction came out quite wrong and I can't seem to get it working; my guess is that the problem is in the image processing before the image is fed to the model.

This is what I am using

from PIL import Image
import cv2
import numpy as np
import matplotlib.pyplot as plt
from random import randint
%matplotlib inline

# qd is assumed to be a QuickDrawData() instance from the quickdraw package
clock = qd.get_drawing("clock")
apple = clock

# render the vector drawing as grayscale and downscale to the 28x28 input size
img = apple.image.convert("L")
img = img.resize((28, 28), Image.BILINEAR)
img = np.array(img)
# invert so the strokes are white on a black background, like the .npy bitmaps
img = cv2.bitwise_not(img)

# add the channel dimension and scale to [0, 1]
img = img.reshape(28, 28, 1)
img = img.astype('float32')
img /= 255.0

print(img.shape)
plt.imshow(img.squeeze())
print(img)

The yellowish image is the original one in .npy format; the other is processed using the code above.

[screenshots: the original .npy image and the preprocessed image]

Issue regarding reproducing XOR project

Hello @zaidalyafeai,
I am learning from your blog and ran into an error.
I followed the steps as given, but I got stuck at the very last step, where the changes have to be pushed. So I downloaded the files from Colab and pushed the changes to the GitHub repository locally.
Here is the repo and this is where the result should be.
But I got an error there. It would be really helpful if you could help.

Orthogonal initializer is being called on a matrix with more elements: Slowness may result.

tensorflowjs version: 0.11.7
Keras version: 2.0.4

I am trying to run a Keras model converted for the browser. For the conversion I used the tensorflowjs converter, and the conversion went fine. However, at load time the message
Orthogonal initializer is being called on a matrix with more than 2000 (1000000) elements: Slowness may result.
pops up, then memory blows up, and finally the browser stops working.
The following is my Keras model architecture:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 25)                0         
_________________________________________________________________
OneHot (Embedding)           (None, 25, 128)           186368    
_________________________________________________________________
bidirectional_1 (Bidirection (None, 2000)              9032000   
_________________________________________________________________
repeat_vector_1 (RepeatVecto (None, 25, 2000)          0         
_________________________________________________________________
bidirectional_2 (Bidirection (None, 25, 2000)          24008000  
_________________________________________________________________
dropout_1 (Dropout)          (None, 25, 2000)          0         
_________________________________________________________________
AttentionDecoder (AttentionD (None, 25, 1456)          7125040   
=================================================================
Total params: 40,351,408
Trainable params: 40,351,408
Non-trainable params: 0

Is there a way to speed up the orthogonal initialization so that the model loads in less time and with less memory?
Any help will be appreciated.
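
A small diagnostic sketch, not a fix from this repository: the expensive initializations come from the recurrent kernels of the large Bidirectional layers, and it can help to see exactly which weights are configured with the Orthogonal initializer before deciding what to change. The snippet assumes `model` is the Keras model already loaded in Python (loading it may require custom_objects for the AttentionDecoder layer); it only inspects the config and changes nothing.

import json

def find_orthogonal(node, path=''):
    # recursively walk the model config and report every Orthogonal initializer
    if isinstance(node, dict):
        if node.get('class_name') == 'Orthogonal':
            print(path)
        for key, value in node.items():
            find_orthogonal(value, path + '/' + str(key))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            find_orthogonal(value, path + '/' + str(i))

find_orthogonal(json.loads(model.to_json()))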
