binroot / tensorflow-book

Accompanying source code for Machine Learning with TensorFlow. Refer to the book for step-by-step explanations.

Home Page: http://www.tensorflowbook.com

License: MIT License

Languages: Jupyter Notebook 96.38%, Python 3.62%
Topics: tensorflow, machine-learning, regression, convolutional-neural-networks, logistic-regression, book, reinforcement-learning, autoencoder, linear-regression, classification

tensorflow-book's Introduction

This is the official code repository for Machine Learning with TensorFlow.

Get started with machine learning using TensorFlow, Google's latest and greatest machine learning library.

Summary

Chapter 2 - TensorFlow Basics

  • Concept 1: Defining tensors
  • Concept 2: Evaluating ops
  • Concept 3: Interactive session
  • Concept 4: Session logging
  • Concept 5: Variables
  • Concept 6: Saving variables
  • Concept 7: Loading variables
  • Concept 8: TensorBoard

Chapter 3 - Regression

  • Concept 1: Linear regression
  • Concept 2: Polynomial regression
  • Concept 3: Regularization

Chapter 4 - Classification

  • Concept 1: Linear regression for classification
  • Concept 2: Logistic regression
  • Concept 3: 2D Logistic regression
  • Concept 4: Softmax classification

Chapter 5 - Clustering

  • Concept 1: Clustering
  • Concept 2: Segmentation
  • Concept 3: Self-organizing map

Chapter 6 - Hidden Markov models

  • Concept 1: Forward algorithm
  • Concept 2: Viterbi decode

Chapter 7 - Autoencoders

  • Concept 1: Autoencoder
  • Concept 2: Applying an autoencoder to images
  • Concept 3: Denoising autoencoder

Chapter 8 - Reinforcement learning

  • Concept 1: Reinforcement learning

Chapter 9 - Convolutional Neural Networks

  • Concept 1: Using CIFAR-10 dataset
  • Concept 2: Convolutions
  • Concept 3: Convolutional neural network

Chapter 10 - Recurrent Neural Network

  • Concept 1: Loading timeseries data
  • Concept 2: Recurrent neural networks
  • Concept 3: Applying RNN to real-world data for timeseries prediction

Chapter 11 - Seq2Seq Model

  • Concept 1: Multi-cell RNN
  • Concept 2: Embedding lookup
  • Concept 3: Seq2seq model

Chapter 12 - Ranking

  • Concept 1: RankNet
  • Concept 2: Image embedding
  • Concept 3: Image ranking

tensorflow-book's People

Contributors

alanyee, binroot, donggeliu, energyfirefox, fofyou, kracekumar, kunalghosh, mremond, nisbus, ritiek, rongpenl, superizer


tensorflow-book's Issues

A mistake in Concept01 in Chapter04

I am reading TensorFlow-Book/ch04_classification/Concept01_linear_regression_classification.ipynb, and I found a mistake.

In your description

Let's say we have numbers that we want to classify. They'll just be 1-dimensional values. Numbers close to 2 will be given the label [0], and numbers close to 5 will be given the label [1], as designed here:

but in your code,

x_label0 = np.random.normal(5, 1, 10)
x_label1 = np.random.normal(2, 1, 10)
print(x_label0)
print(x_label1)
xs = np.append(x_label0, x_label1)
labels = [0.] * len(x_label0) + [1.] * len(x_label1)

we can see that the code gives numbers close to 2 the label [1] and numbers close to 5 the label [0].

Maybe you need to fix this mistake.
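
One way to reconcile the code with the description, as a minimal hedged sketch (assuming the prose, not the code, reflects the intent), is to swap the means:

import numpy as np

# Numbers near 2 get label [0]; numbers near 5 get label [1],
# matching the notebook's description.
x_label0 = np.random.normal(2, 1, 10)   # centered at 2 -> label 0
x_label1 = np.random.normal(5, 1, 10)   # centered at 5 -> label 1
xs = np.append(x_label0, x_label1)
labels = [0.] * len(x_label0) + [1.] * len(x_label1)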

can't open Concept03_denoising.ipynb

Hi, thanks for the nice book!

TensorFlow-Book/ch07_autoencoder/Concept03_denoising.ipynb can't load; maybe something went wrong when it was uploaded.

cheers,

Issue found on chapter 5 (clustering)

When I executed the audio_clustering.py script, I got the following error:

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value matching_filenames
	 [[Node: matching_filenames/read = Identity[T=DT_STRING, _class=["loc:@matching_filenames"], _device="/job:localhost/replica:0/task:0/cpu:0"]]]

To solve this, I had to make the following changes:

  1. I had to initialize the local variables using tf.local_variables_initializer()
  2. Then I had to install/downgrade to numpy 1.11.0 using sudo pip install -U numpy==1.11.0

Maybe you could add a note about this to the code as well.
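
For reference, a minimal sketch of what the initialization fix looks like in TF 1.x (the graph construction around it is assumed):

import tensorflow as tf

with tf.Session() as sess:
    # string_input_producer creates local variables (such as
    # matching_filenames), so both initializers must run before
    # the queue runners start.
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)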

Ch02 concept 6: Saving variables

saver.save(sess, "spikes.ckpt") doesn't work

Newer versions of TensorFlow require running tf.global_variables_initializer():

sess = tf.InteractiveSession()

raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]
spikes = tf.Variable([False] * len(raw_data), name='spikes')

# Create the initializer after all variables are defined so it covers them.
init = tf.global_variables_initializer()
sess.run(init)

saver = tf.train.Saver()

Clustering with self-organizing map

Hi,

I bought the MEAP book and it's great.
We are trying to cluster some values with the algorithm. We would like to get the label of each input's cluster. So if there are 10 clusters and an input vector v, we would like to know which cluster v belongs to. Can you please let me know what changes are required?
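
One common way to do this, as a hedged sketch (not the book's API; node_weights is assumed to be the trained SOM weight matrix of shape (num_nodes, dim)), is to assign each input to its best matching unit:

import numpy as np

def assign_cluster(v, node_weights):
    # Distance from the input vector to every node's weight vector;
    # the closest node (the best matching unit) is the cluster label.
    dists = np.linalg.norm(node_weights - v, axis=1)
    return int(np.argmin(dists))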

Error in Concept06_saving_variables

When I run the code as it is, at the moment of saving (block 5), I get the following error:

FailedPreconditionError: Attempting to use uninitialized value Variable
	 [[Node: save_2/SaveV2 = SaveV2[dtypes=[DT_BOOL, DT_BOOL], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_2/Const_0, save_2/SaveV2/tensor_names, save_2/SaveV2/shape_and_slices, Variable, spikes)]]

Caused by op u'save_2/SaveV2', defined at:

I found that it can be solved by modifying block 2 to:

saver = tf.train.Saver([spikes])

or

saver = tf.train.Saver({'spikes': spikes})

Invalid argument error in Ch02 TensorBoard

Hi,

I am trying to reproduce your exponential moving average example but I get the error:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
      4 sess.run(init)
      5 for i in range(len(raw_data)):
----> 6     summary_str, curr_value_float = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
      7     sess.run(tf.assign(prev_avg, curr_value_float))
      8     print(raw_data[i], curr_value_float)

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float
	 [[node Placeholder (defined at <ipython-input>:2) = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

CHP 2 Tensorboard attribute error

Hello,

I've been searching all over for a solution to this and can't come up with anything. In Listing 2.16 there is an issue. When I copy from the example I get this error:


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
     17 with tf.Session() as sess:
     18     sess.run(init)
---> 19     sess.add_graph(sess.graph)
     20     for i in range(len(raw_data)):
     21         summary_str, curr_avg = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})

AttributeError: 'Session' object has no attribute 'add_graph'
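
For what it's worth, in TF 1.x the graph is attached via the summary writer, not the session, so the likely intended call is something like (a sketch, not necessarily the book's exact listing):

writer = tf.summary.FileWriter("./logs")
writer.add_graph(sess.graph)  # equivalently: tf.summary.FileWriter("./logs", sess.graph)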

LICENSE missing

Please add a license to this repository. Otherwise, this code is restricted by the full extent of copyright.

Request: genuine consecutive scheme batch generation for RNN training

The concept of a “genuine consecutive scheme” can be seen here (5.4. Batch).

My scenario is as follows:

I have some files with different sequence lengths.

First, do bucketing to generate file-batches with parameter batch_size.

Then, split each file-batch with parameter seq_len to generate training sample-batches.

Last, use each sample-batch for one step of training.

Following is my test code:

# -*- coding:utf8 -*-

import os
import time
import random
import tensorflow as tf
from tensorflow.contrib.training import bucket_by_sequence_length, batch_sequences_with_states


context_features = {
    "length": tf.FixedLenFeature([], dtype=tf.int64)
}

sequence_features = {
            "inputs": tf.FixedLenSequenceFeature([], dtype=tf.int64),
}

def GenerateFakeData():
    FILE_NUM = 100
    DATA_PATH = "test_dataset"
    file_path_list, file_len_list = [], []
    for idx in range(FILE_NUM):
        filename = "{fileno}-of-{idx}".format(idx=idx+1, fileno=FILE_NUM)
        token_length = random.randint(50, 100)
        ex = tf.train.SequenceExample()
        ex.context.feature["length"].int64_list.value.append(token_length)
        ###########################################
        ex_tokens = ex.feature_lists.feature_list["inputs"]
        for tok in range(token_length):
            ex_tokens.feature.add().int64_list.value.append(tok)
        with tf.python_io.TFRecordWriter(os.path.join(DATA_PATH, filename) + ".tfrecord") as filew:
            filew.write(ex.SerializeToString())
        file_len_list.append(token_length)
        file_path_list.append(os.path.join(DATA_PATH, filename) + ".tfrecord")
    with open("filelist.txt", "w") as filew:
        for file_name, file_len in zip(file_path_list, file_len_list):
            filew.write("{fn}\t{fl}\n".format(fn=os.path.join(file_name), fl=file_len))

def LoadFileList(filepath):
    with open(filepath, "r") as filer:
        wfilelist, wfilelengthlist = tuple(zip(*[tuple(line.strip().split("\t")) for line in filer if line.strip() != ""]))
        return list(wfilelist), [int(item) for item in wfilelengthlist]

        
def InputProducer():
    batch_size = 2
    seq_len = 75
    state_size = 1024
    bucket_boundaries = [60, 70, 80, 90]
    #####################################
    filelist, filelengthlist = LoadFileList("filelist.txt")
    #####################################
    tf_file_queue = tf.train.string_input_producer(
            string_tensor = filelist, 
            num_epochs = 1, 
            shuffle = False, 
            seed = None, 
            capacity = 32, 
            shared_name = None,
            name = "tf_file_queue",
            cancel_op=None
    )
    ######################################
    tf_reader = tf.TFRecordReader()
    tf_key, tf_serialized = tf_reader.read(tf_file_queue)
    tf_context, tf_sequence = tf.parse_single_sequence_example(
            serialized = tf_serialized,
            context_features = context_features,
            sequence_features = sequence_features
    )
    ######################################
    tf_bucket_sequence_length, tf_bucket_outputs = bucket_by_sequence_length(
        input_length = tf.cast(tf_context["length"], dtype=tf.int32), 
        tensors = tf_sequence, 
        batch_size = batch_size, 
        bucket_boundaries = bucket_boundaries, 
        num_threads=1, 
        capacity=32, 
        shapes=None, 
        dynamic_pad=True,
        allow_smaller_final_batch=False, 
        keep_input=True, 
        shared_name=None, 
        name="bucket_files"
    )
    #######################################
    tf_bbucket_outputs = {}
    for fkey in tf_bucket_outputs:
        tf_bbucket_outputs[fkey]=tf_bucket_outputs[fkey][0]
    #######################################
    # Solution 1:
    tf_fb_key=time.strftime('%Y-%m-%d-%H-%M-%S',time.localtime(time.time())) + str(random.randint(1,100000000))
    initial_state_values = tf.zeros((state_size,), dtype=tf.float32)
    initial_states = {"lstm_state": initial_state_values}
    tf_batch=batch_sequences_with_states(
        input_key = tf_fb_key, 
        input_sequences = tf_bbucket_outputs, 
        input_context = {}, 
        input_length = tf.reduce_max(tf_bucket_sequence_length), 
        initial_states=initial_states, 
        num_unroll=seq_len, 
        batch_size=batch_size, 
        num_threads=3, 
        capacity=1000, 
        allow_small_batch=False, 
        pad=True, 
        name=None)
    #######################################
    # Solution 2:
    '''
    tf_index_queue=tf.train.range_input_producer(
        limit=tf.reduce_max(tf_bucket_sequence_length),
        num_epochs=1, 
        shuffle=False, 
        seed=None, 
        capacity=32, 
        shared_name=None, 
        name=None
    )
    tf_index=tf_index_queue.dequeue()
    tf_batch=tf_bbucket_outputs["inputs"][tf_index]
    '''
    #######################################
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess, coord)
        try:
            while True:
                #####################################
                # Test Bucketing
                #bucket_sequence_length, bucket_outputs = sess.run([tf_bucket_sequence_length, tf_bucket_outputs])
                #print(bucket_sequence_length)
                #print(bucket_outputs)
                #print("#################")
                #####################################
                # Test Solution 1:
                batch = sess.run(tf_batch)
                print(batch)
                #####################################
                # Test Solution 2:
                #bucket_sequence_length, bucket_outputs, index = sess.run([tf_bucket_sequence_length, tf_bucket_outputs, tf_index])
                #print(bucket_sequence_length)
                #print(bucket_outputs)
                #print(index)
                #print("#################")
        except tf.errors.OutOfRangeError:
            pass
        except tf.errors.InvalidArgumentError:
            pass
        finally:
            coord.request_stop()
        coord.join(threads)
    
if __name__ == "__main__":
    #GenerateFakeData()
    InputProducer()
    pass

With Solution 1, I raised the error as below:

Traceback (most recent call last):
  File "/home/yangming/workspace/tfstudy-3.5.3-tf-1.1.0/BatchSchemas/make_test_dataset.py", line 157, in <module>
    InputProducer()
  File "/home/yangming/workspace/tfstudy-3.5.3-tf-1.1.0/BatchSchemas/make_test_dataset.py", line 107, in InputProducer
    name=None)
  File "/home/yangming/.pyenv/versions/tfstudy-3.5.3/lib/python3.5/site-packages/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py", line 1522, in batch_sequences_with_states
    allow_small_batch=allow_small_batch)
  File "/home/yangming/.pyenv/versions/tfstudy-3.5.3/lib/python3.5/site-packages/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py", line 849, in __init__
    initial_states)
  File "/home/yangming/.pyenv/versions/tfstudy-3.5.3/lib/python3.5/site-packages/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py", line 332, in _prepare_sequence_inputs
    "sequence", inputs.sequences, ignore_first_dimension=True)
  File "/home/yangming/.pyenv/versions/tfstudy-3.5.3/lib/python3.5/site-packages/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py", line 326, in _assert_fully_defined
    ignore_first_dimension else "", v.get_shape()))
ValueError: Shape for sequence inputs is not fully defined (ignoring first dimension): (?, ?)

Looking at the documentation of batch_sequences_with_states, I found that

  1. it seems to support only a single sequence, not multiple sequences.
  2. it doesn't support sequence inputs whose shape is not fully defined, which means bucket_by_sequence_length cannot be followed by batch_sequences_with_states.

What's more, I have tried Solution 2, but it failed because of a thread synchronization problem between tf.train.string_input_producer and tf.train.range_input_producer.

So, how can I realize this? A possible workaround is sketched below.

Hope for your help.
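
One possible workaround for the ValueError above, as a hedged and untested sketch (MAX_LEN is an assumed upper bound on sequence length; whether this still satisfies the genuine consecutive scheme is unverified): pad each bucketed sequence to a fixed length so every dimension after the first is statically known before calling batch_sequences_with_states:

MAX_LEN = 100  # assumed upper bound on sequence length

padded_outputs = {}
for fkey, seq in tf_bbucket_outputs.items():
    cur_len = tf.shape(seq)[0]
    seq = tf.pad(seq, [[0, MAX_LEN - cur_len]])  # pad the time dimension
    seq.set_shape([MAX_LEN])                     # shape is now fully defined
    padded_outputs[fkey] = seq
# then pass padded_outputs as input_sequences instead of tf_bbucket_outputs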

Quick comment on Chapter 10

In Concept01_timeseries_data.ipynb in Chapter 10, I believe you can simplify and speed up the following code:

def split_data(data, percent_train=0.80):
    num_rows = len(data)
    train_data, test_data = [], []
    for idx, row in enumerate(data):
        if idx < num_rows * percent_train:
            train_data.append(row)
        else:
            test_data.append(row)
    return train_data, test_data

as follows:

def split_data(data, percent_train=0.80):
    num_rows = int(len(data) * percent_train)  # int(): slice indices must be integers
    return data[:num_rows], data[num_rows:]
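
With the int() cast added above (slice indices must be integers in Python 3), a quick sanity check:

data = list(range(100))
train, test = split_data(data)
print(len(train), len(test))  # 80 20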

SOM() method in TensorFlow-Book/ch05_clustering/Concept03_som.ipynb

I am trying to implement self-organizing maps in python, ideally in tensorflow. Thanks for your efforts here!

Using SOM() with anything other than dim=3 doesn't seem to work. I even adjusted the inputs to be an array of shape (10, 2), but maybe I'm off here?

Any help you can offer is greatly appreciated. If you need to see code, please let me know. My hope is to eventually be able to apply SOM to an arbitrarily large feature space on a large number of observations/rows. Thanks!

ch02_basics/moving_avg.py TypeError: Can not convert a float32 into a Tensor.

Hi,

I am trying to reproduce your exponential moving average example but I get the error "TypeError: Can not convert a float32 into a Tensor. "

I modified the lines

        summary_str, curr_value = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
        sess.run(tf.assign(prev_avg, curr_value))
        print(raw_data[i], curr_value)

for

        summary_str, curr_value_float = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
        sess.run(tf.assign(prev_avg, curr_value_float))
        print(raw_data[i], curr_value_float)

to make things work, but I was wondering: is this the best solution, or can we do better?

Thanks
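
For context, the most likely cause of the original TypeError is that the first snippet rebinds the name curr_value: after the first sess.run, curr_value is a NumPy float instead of the placeholder, so the next iteration's feed_dict is keyed on a float. Renaming the fetched result, as in the fix above, is the standard pattern; a minimal sketch:

for i in range(len(raw_data)):
    # Fetch into a differently named variable so the placeholder
    # curr_value remains usable as the feed_dict key.
    summary_str, curr_avg = sess.run([merged, update_avg],
                                     feed_dict={curr_value: raw_data[i]})
    sess.run(tf.assign(prev_avg, curr_avg))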

CH02 Concept 8 Warnings

You may want to update code to fix warnings.

Fix:

avg_hist = tf.summary.scalar("running_average", update_avg)
value_hist = tf.summary.scalar("incoming_values", curr_value)

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs")

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(len(raw_data)):
        summary_str, curr_avg = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
        sess.run(tf.assign(prev_avg, curr_avg))
        print(raw_data[i], curr_avg)
        writer.add_summary(summary_str, i)

WARNING:tensorflow:From :1 in .: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
WARNING:tensorflow:From :2 in .: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
WARNING:tensorflow:From :4 in .: merge_all_summaries (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.merge_all.
WARNING:tensorflow:From C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\logging_ops.py:264 in merge_all_summaries.: merge_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.merge.
WARNING:tensorflow:From :5 in .: SummaryWriter.init (from tensorflow.python.training.summary_io) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.FileWriter. The interface and behavior is the same; this is just a rename.

Ch5/segmentation

Does the sound segmentation work well?
I've listened to TalkingMachinesPodcast.wav, looked at the wave spectrum in Camtasia Studio, and compared it with the output of
segmentation.py:
('0.0m 0.0s', 0)
('0.0m 2.5s', 1)
('0.0m 5.0s', 0)
('0.0m 7.5s', 1)
('0.0m 10.0s', 1)
('0.0m 12.5s', 1)
('0.0m 15.0s', 1)
('0.0m 17.5s', 0)
('0.0m 20.0s', 1)
('0.0m 22.5s', 1)
('0.0m 25.0s', 0)
('0.0m 27.5s', 0)

And I don't think it works correctly, as it doesn't place the segments exactly where they should be.
Moreover, if I put k=3 I see:
('0.0m 0.0s', 0)
('0.0m 5.0s', 1)
('0.0m 10.0s', 2)
('0.0m 15.0s', 2)
('0.0m 20.0s', 1)
('0.0m 25.0s', 2)
('0.0m 30.0s', 1)
('0.0m 35.0s', 2)
('0.0m 40.0s', 1)
('0.0m 45.0s', 2)
('0.0m 50.0s', 2)
('0.0m 55.0s', 2)
Why does the timeline in the code run away? And why are there so many more clusters?

ch02_basics Concept06_saving_variables

When running the following code:

save_path = saver.save(sess, "spikes.ckpt")
print("spikes data saved in file: %s" % save_path)

I receive an error.

I am running on Windows 7 in conda virtual environment and Jupyter.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-36-444ce612b9fd> in <module>()
----> 1 save_path = saver.save(sess, "spikes.ckpt")
      2 print("spikes data saved in file: %s" % save_path)

C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\saver.py in save(self, sess, save_path, global_step, latest_filename, meta_graph_suffix, write_meta_graph, write_state)
   1312     if not gfile.IsDirectory(os.path.dirname(save_path)):
   1313       raise ValueError(
-> 1314           "Parent directory of {} doesn't exist, can't save.".format(save_path))
   1315 
   1316     save_path = os.path.dirname(save_path)

ValueError: Parent directory of spikes.ckpt doesn't exist, can't save.
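
The error arises because os.path.dirname("spikes.ckpt") is an empty string, which the saver rejects as a nonexistent parent directory. A common workaround is to make the parent directory explicit:

save_path = saver.save(sess, "./spikes.ckpt")  # "./" gives an existing parent directory
print("spikes data saved in file: %s" % save_path)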

No module named 'data_loader'


ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>()
      3 import tensorflow as tf
      4 from tensorflow.contrib import rnn
----> 5 import data_loader
      6 import matplotlib.pyplot as plt

ModuleNotFoundError: No module named 'data_loader'
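
data_loader.py lives next to the chapter's notebook in the repository, so this usually means the notebook was launched from a different working directory. One workaround (a sketch; the path is an assumption and depends on where the repository was cloned):

import sys
sys.path.append("/path/to/TensorFlow-Book/ch10_rnn")  # directory containing data_loader.py
import data_loader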

Error during import of Tensorflow module post installation

I happened to stumble upon your repository and the book you are working on. I think going through the learning materials you have put together would help me a great deal in understanding TensorFlow. However, as much as I have been wanting to get hands-on with TensorFlow, I am running into an issue when I import the tensorflow module in Python.

So, if you could let me know whether this is the place where I could get some insights, I shall post my issue so that I can get some pointers for troubleshooting it. If not, could you please point me to the best place, or the best people, to help me with this?

P.S. Although I know what GitHub is basically used for, I am still somewhat of a beginner with GitHub and its working model. This would give me a good opportunity to learn things the right way. So until then, please do pardon my way of communication.

Eagerly awaiting a positive reply from your end.

Thanks & Regards,
Namratha

ch02_basics/types.py causing Jupyter Notebook kernel to crash ["Fix" included]

Hi,

Just a heads up - types.py is causing Jupyter Notebook's kernel to crash when attempting to load any of the ipynb notebooks:

(Snipped stacktrace)
...
File "~/opt/anaconda3/lib/python3.5/functools.py", line 22, in
from types import MappingProxyType
File "~/DEEP_LEARNING/research/TensorFlow-Book/ch02_basics/types.py", line 3, in
...

Fix: renaming types.py to types_examples.py (or any name that doesn't shadow a standard-library module) fixes the issue, since Python was importing the local types.py in place of the standard library's types module.

Hope that helps!

Bregman lib error

The Chromagram function used in chapter 5 for k-means clustering returns an error when fed an audio file.
TypeError: 'float' object cannot be interpreted as an index

typo in v09

v09 chapter about HMMs refers to "gait" and "gate."

I think it should just be "gait".

Identifying people based on their gait is a pretty cool idea, but first we need a model to
recognize the gate. Consider a HMM where the sequence of hidden states for a gate are

s/gate/gait/g

image clean

Hi, I want to know why you compare stdsT with 1.0 / np.sqrt(img_size) and then take the maximum as the standard deviation.
adj_stds = np.maximum(stdsT, 1.0 / np.sqrt(img_size))
normalized = (img_data - meansT) / adj_stds

Thanks!
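
For what it's worth, this matches the usual per-image standardization convention (TensorFlow's tf.image.per_image_standardization does the same): the standard deviation is clamped from below by 1.0 / sqrt(num_pixels) so that a nearly uniform image cannot cause division by (almost) zero. A hedged sketch of the idea, assuming img_data holds one flattened image per row:

import numpy as np

def standardize(img_data):
    # Clamp the per-image std so a near-constant image cannot blow up
    # the division; 1/sqrt(img_size) is the conventional lower bound.
    img_size = img_data.shape[1]
    means = np.mean(img_data, axis=1, keepdims=True)
    stds = np.std(img_data, axis=1, keepdims=True)
    adj_stds = np.maximum(stds, 1.0 / np.sqrt(img_size))
    return (img_data - means) / adj_stds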

Ch 8 predicting?

I've managed to run the code from chapter 8 successfully and the update_q seems to be creating Q values for states.

I now wanted to run the simulation 100 times and then predict on the same (or other) prices using the learned policy.

I tried adding the following method to the QDecisionPolicy

    def predict(self, state):                
        action_q_vals = self.sess.run(self.q, feed_dict={self.x: state})        
        action_idx = np.argmax(action_q_vals)
        action = self.actions[action_idx]
        print('Action {}, Q {}, STATE :{}'.format(action, action_q_vals, state))
        return action

This always prints out Action 'HOLD', Q [[0. 0. 0.]] for any given state,
even though I've printed the same values inside update_q and seen that the state I'm passing into predict is updated to non-zero values.

How can I query the policy, or is there some other mechanism that I should be using to predict using the learnt policy?

Thanks
