Comments (3)
It depends on what kind of data you want to use (sequences, images, features...). You need to format your dataset as a numpy array.
For such network https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3%20-%20Neural%20Networks/multilayer_perceptron.py, you can use:
X (your data): 2-D numpy array with shape [number_of_samples, features]
Y (your data labels): 2-D numpy array of one-hot rows (for class "1" out of a total of 5 classes: [0, 1, 0, 0, 0]) with shape [number_of_samples, n_classes]
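As a concrete sketch of those shapes, here is how hypothetical data could be put into that format with numpy (the values and the 5-class count are just placeholders):

```python
import numpy as np

# Hypothetical raw data: 4 samples with 3 features each
X = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9],
              [1.0, 1.1, 1.2]], dtype=np.float32)

# Integer class index for each sample (5 classes total)
labels = np.array([1, 0, 4, 2])

# One-hot encode: class "1" becomes [0, 1, 0, 0, 0]
Y = np.eye(5, dtype=np.float32)[labels]

print(X.shape)  # (4, 3) -> [number_of_samples, features]
print(Y.shape)  # (4, 5) -> [number_of_samples, n_classes]
print(Y[0])     # [0. 1. 0. 0. 0.]
```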
But it would require some changes:
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
You can change this line to instead return a batch of your own data (with shapes as above).
If your data set is not too large, you may not need batching at all, and can feed all the data directly:
xs, ys = YOUR_DATA, YOUR_LABELS
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        # Fit training using all data
        sess.run(optimizer, feed_dict={x: xs, y: ys})
        # Compute loss (use a separate name so the `cost` op is not overwritten)
        c = sess.run(cost, feed_dict={x: xs, y: ys})
        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c)
    print "Optimization Finished!"
from tensorflow-examples.
Thanks.
It is really very helpful to me.
Regards
Nahid
On Tue, Mar 22, 2016 at 10:56 PM, Aymeric Damien [email protected]
wrote:
Hi,
I wrote my program as follows.
................................................................................................................
# Import data
#import input_data
#mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
from numpy import genfromtxt
mnist_data = genfromtxt('data_1.csv', delimiter=',')
mnist_label = genfromtxt('Level_1.csv', delimiter=',')

# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1

# Network Parameters
n_hidden_1 = 64 # 1st layer num features
n_hidden_2 = 64 # 2nd layer num features
n_input = 9 # data input (9 features per sample)
n_classes = 2 # total classes

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

# Create model
def multilayer_perceptron(_X, _weights, _biases):
    layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1'])) # Hidden layer with RELU activation
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2'])) # Hidden layer with RELU activation
    return tf.matmul(layer_2, _weights['out']) + _biases['out']

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

# Initializing the variables
xs, ys = mnist_data, mnist_label
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        # Fit training using all data
        sess.run(optimizer, feed_dict={x: xs, y: ys})
        # Compute loss (use a separate name so the `cost` op is not overwritten)
        c = sess.run(cost, feed_dict={x: xs, y: ys})
        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c)
    print "Optimization Finished!"
......................................................................................................................................
But I got the following error:

File "", line 55
    with tf.Session() as sess:
    ^
IndentationError: unexpected indent

Here I attach my data file and labels file. Could you please help me solve it?
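One other thing worth checking here, assuming Level_1.csv contains a single column of integer class indices: the placeholder `y` expects shape [None, n_classes], so labels loaded that way would need one-hot encoding before being fed. A minimal sketch (the file name and the single-column layout are assumptions):

```python
import numpy as np
# from numpy import genfromtxt
# raw = genfromtxt('Level_1.csv', delimiter=',')  # assumed: one integer label per row

raw = np.array([0, 1, 1, 0])  # stand-in for the CSV contents
mnist_label = np.eye(2, dtype=np.float32)[raw.astype(int)]
print(mnist_label.shape)  # (4, 2) -> matches y = tf.placeholder("float", [None, 2])
```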
Thanks and regards,
Nahid
On Wed, Mar 23, 2016 at 5:09 PM, Aymeric Damien [email protected]
wrote: