
tensorflow-101's Introduction

TensorFlow Tutorials using Jupyter Notebook

TensorFlow tutorials written in Python (of course) with Jupyter Notebook. These tutorials try to explain things as simply as possible, since they are intended for TensorFlow beginners. I hope they serve as a useful recipe book for your deep learning projects. Enjoy coding! :)

Contents

  1. Basics of TensorFlow / MNIST / Numpy / Image Processing / Generating Custom Dataset
  2. Machine Learning Basics with TensorFlow: Linear Regression / Logistic Regression with MNIST / Logistic Regression with Custom Dataset
  3. Multi-Layer Perceptron (MLP): Simple MNIST / Deeper MNIST / Xavier Init MNIST / Custom Dataset
  4. Convolutional Neural Network (CNN): Simple MNIST / Deeper MNIST / Simple Custom Dataset / Basic Custom Dataset
  5. Using Pre-trained Model (VGG): Simple Usage / CNN Fine-tuning on Custom Dataset
  6. Recurrent Neural Network (RNN): Simple MNIST / Char-RNN Train / Char-RNN Sample / Hangul-RNN Train / Hangul-RNN Sample
  7. Word Embedding (Word2Vec): Simple Version / Complex Version
  8. Auto-Encoder Model: Simple Auto-Encoder / Denoising Auto-Encoder / Convolutional Auto-Encoder (deconvolution)
  9. Class Activation Map (CAM): Global Average Pooling on MNIST
  10. TensorBoard Usage: Linear Regression / MLP / CNN
  11. Semantic segmentation
  12. Super resolution (in progress)
  13. Web crawler
  14. Gaussian process regression
  15. Neural Style
  16. Face detection with OpenCV

Requirements

  • TensorFlow
  • Numpy
  • SciPy
  • Pillow
  • BeautifulSoup
  • Pretrained VGG weights: place inside the 'data/' folder
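
A quick way to verify that the listed packages are importable (a hypothetical sanity-check snippet, not part of the repo):

# Hypothetical check that the required packages are installed and importable.
import tensorflow as tf
import numpy, scipy, PIL, bs4   # Pillow imports as PIL, BeautifulSoup as bs4

print('TensorFlow', tf.__version__)
print('NumPy', numpy.__version__)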

Note

Most of the code consists of simple refactorings of Aymeric Damien's tutorial or Nathan Lintz's tutorial. There may be missing credits; please let me know.

Collected and modified by Sungjoon


tensorflow-101's People

Contributors

bluedisk, jdsutton, sjchoi86


tensorflow-101's Issues

TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'.

I've followed the tutorial, but after the step # Construct the variables for the NCE loss, which is this code block:

with tf.device('/cpu:0'):
    # Loss function 
    num_sampled = 64        # Number of negative examples to sample. 
    loss = tf.reduce_mean(
        tf.nn.nce_loss(nce_weights, nce_biases, embed
                       , train_labels, num_sampled, vocabulary_size))
    # Optimizer
    optm = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
    # Similarity measure (important)
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings
                    , valid_dataset)
    siml = tf.matmul(valid_embeddings, normalized_embeddings
                    , transpose_b=True)
    
print ("Functions Ready")

I get the error TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'.
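
A likely cause (an assumption, not confirmed in this thread): the positional argument order of tf.nn.nce_loss changed across TensorFlow releases, with newer 1.x versions expecting labels before inputs, so embed and train_labels end up swapped. Passing the arguments by name avoids the mismatch; a minimal sketch using the names from the snippet above:

# Sketch of a fix: pass nce_loss arguments by keyword so the TF version's
# positional order no longer matters (TF 1.x keyword names).
loss = tf.reduce_mean(
    tf.nn.nce_loss(weights=nce_weights,
                   biases=nce_biases,
                   labels=train_labels,      # int labels, shape [batch_size, 1]
                   inputs=embed,             # float embeddings, shape [batch_size, embed_dim]
                   num_sampled=num_sampled,
                   num_classes=vocabulary_size))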

mlp_mnist_simple.ipynb => `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...)

About https://github.com/sjchoi86/Tensorflow-101/blob/master/notebooks/mlp_mnist_simple.ipynb

A recently updated TensorFlow (1.4) raises the following error.

ValueError: Only call softmax_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)

Simply adding named arguments solves the issue:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

FYI, http://www.edwith.org/deeplearningchoi/lecture/15552/ has the right code.

add CTC module

It is very helpful to see your tutorial code. Thank you very much. I want to add a CTC module so the tutorials become more powerful; I wrote my code following a blog post.
My code uses an LSTM to classify MNIST data, but ctc_loss doesn't work at all. Can you help me define the right way to call it? The code is based on your Tensorflow-101, and I'm sure you'll understand it once you see it.
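
For reference, a minimal sketch of how tf.nn.ctc_loss is typically called in TF 1.x (the placeholder names here are hypothetical, not from the repo): labels must be a SparseTensor of int32 label indices, the logits are time-major by default, and sequence_length gives the length of each sequence in the batch.

import tensorflow as tf

num_classes = 11                                                   # e.g. 10 digits + 1 blank label (assumed)
logits = tf.placeholder(tf.float32, [None, None, num_classes])     # [max_time, batch, classes], time-major
labels = tf.sparse_placeholder(tf.int32)                           # sparse target label indices
seq_len = tf.placeholder(tf.int32, [None])                         # length of each sequence in the batch

# ctc_loss expects time-major logits by default (time_major=True)
ctc = tf.nn.ctc_loss(labels=labels, inputs=logits, sequence_length=seq_len)
cost = tf.reduce_mean(ctc)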

Numbering on the files

Thanks for the great tutorials.
I think numbering the files in tutorial order would improve their readability.
For example, 1-1_basic_tensorflow.ipynb instead of basic_tensorflow.ipynb.
Thank you!

How to deal with huge data?

I'm currently trying to generate a dataset based on your tutorial, but I was wondering how to deal with huge data (10 GB of images). My laptop can't handle that much data, because the tutorial tells us to store all of the data in an array variable first. Is there any way to handle this? Thanks.
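
A common workaround (a minimal sketch under an assumed flat directory of image files, not code from the repo) is to keep only the list of image paths in memory and load each mini-batch from disk as it is needed, instead of building one giant numpy array up front:

import os
import numpy as np
from PIL import Image

def iterate_minibatches(image_dir, batch_size=64, size=(64, 64)):
    """Yield batches of images loaded lazily from disk (hypothetical helper)."""
    paths = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
    for start in range(0, len(paths), batch_size):
        batch_paths = paths[start:start + batch_size]
        imgs = [np.asarray(Image.open(p).convert('L').resize(size), dtype=np.float32) / 255.0
                for p in batch_paths]
        yield np.stack(imgs).reshape(len(imgs), -1)   # flatten each image for feed_dict

# Each yielded batch can then be fed to sess.run(..., feed_dict={x: batch}).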

i have the 100 percent accuracy!

I have built my own dataset, and after the first training step I get 100 percent accuracy on both the train and test data. That seems impossible to me. I searched for a mistake in my code but couldn't find it. Do you have any idea where my mistake might be?

clarify the version of pre-requisite packages

Hi, @sjchoi86 ,

Thank you very much for releasing such a useful repo. Since quite some time has passed since you first released it, some conflicts may occur when running the code with a current TF release (0.12.1 or 1.0).

For example, when I run semseg_basic.ipynb, the last cell raises a TF error.

In order to run this repo, could you clarify the versions of the prerequisite packages?

TensorFlow version: 0.xx?
Python version: 2.7 / 3.4 / 3.5?
CUDA version: 7.0 / 7.5 / 8.0?
cuDNN version?
Anaconda: used or not?

THX!
