
simplehtr's People

Contributors

alexeevrevan, chazzz, githubharald, n1snt, reed-jones, tosemml


simplehtr's Issues

variable length: resizing vs. padding

Hi,
A common approach to standardize cropped word images is to resize them to some predefined size (height & width). That's fine if the words (= images) have more or less the same aspect ratio, but if the length varies we end up squeezing and stretching them. Probably that's why @githubharald added this kind of augmentation. But I wonder, would it be better to pad the image with "empty" background instead, to keep the original letter proportions? What do you think, folks?
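For reference, a minimal sketch of the padding idea, assuming grayscale word images and an illustrative 128x32 target size (the function and names are not from the repo):

	import cv2
	import numpy as np

	def pad_to_size(img, target_w=128, target_h=32):
		"""Scale the word image into the target box without distortion, then fill the rest with white background."""
		h, w = img.shape
		scale = min(target_w / w, target_h / h)           # keep the original aspect ratio
		new_w, new_h = max(1, int(w * scale)), max(1, int(h * scale))
		resized = cv2.resize(img, (new_w, new_h))
		canvas = np.full((target_h, target_w), 255, dtype=img.dtype)  # "empty" background
		canvas[:new_h, :new_w] = resized                  # place the word at the top-left
		return canvas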

Train on Cloud TPU

Could you please provide some guidance on how to adapt your code in order to train the model on a TPU (e.g. using Google Colab)? Thanks!

Consider switching from RMSPropOptimizer to AdamOptimizer

I've been consistently getting 68-69% word accuracy using AdamOptimizer. I like that Adam improves accuracy fairly consistently, whereas the jitter present in RMSProp makes the program more likely to terminate before reaching 68% or higher. I measured a ~25% per-epoch time penalty when using Adam, and it generally takes more epochs to reach a higher accuracy (a good problem to have).

I also experimented with various batch sizes with no meaningful improvement, though Adam with a default learning rate tends to do better with larger batch sizes.

Results:
AdamOptimizer (Tuned) Batch size 50
rate = 0.001 if self.batchesTrained < 10000 else 0.0001 # decay learning rate
end result: ('Epoch:', 68)
Character error rate: 13.104371%. Word accuracy: 69.008696%.
Character error rate: 13.082070%. Word accuracy: 69.026087%. (best)
end result: ('Epoch:', 46)
Character error rate: 13.577769%. Word accuracy: 68.295652%.
Character error rate: 13.600071%. Word accuracy: 68.452174%. (best)
end result: ('Epoch:', 55)
Character error rate: 13.198626%. Word accuracy: 68.782609%.
Character error rate: 12.984522%. Word accuracy: 69.165217%. (best)
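For anyone wanting to reproduce the swap, a minimal TF1-style sketch (the dummy loss just stands in for the CTC loss built in Model.py; only the optimizer class and the fed, decayed learning rate matter here):

	import tensorflow as tf  # TF 1.x API, as used by this repo

	w = tf.Variable(1.0)
	loss = tf.square(w)  # dummy loss standing in for the CTC loss from Model.py

	learningRate = tf.placeholder(tf.float32, shape=[])
	# the swap under discussion: AdamOptimizer instead of RMSPropOptimizer
	optimizer = tf.train.AdamOptimizer(learningRate).minimize(loss)

	def currentRate(batchesTrained):
		# decay rule quoted in the results above
		return 0.001 if batchesTrained < 10000 else 0.0001

	with tf.Session() as sess:
		sess.run(tf.global_variables_initializer())
		for step in range(3):
			sess.run(optimizer, feed_dict={learningRate: currentRate(step)})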

InvalidArgumentError: Labels length is zero in batch in ctc_loss function

I am getting this error while training the model for line-by-line handwritten text recognition, after training for some batches.

2018-12-10 15:15:10.154857: W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at ctc_loss_op.cc:166 : Invalid argument: Labels length is zero in batch 37
Traceback (most recent call last):
File "main.py", line 143, in
main()
File "main.py", line 130, in main
train(model, loader)
File "main.py", line 34, in train
loss = model.trainBatch(batch)
File "/home/dell/FAQ/SimpleHTR/src/Model.py", line 215, in trainBatch
(_, lossVal) = self.sess.run([self.optimizer, self.loss], { self.inputImgs : batch.imgs, self.gtTexts : sparse , self.seqLen : [Model.maxTextLen] * Model.batchSize, self.learningRate : rate} )
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Labels length is zero in batch 37
[[Node: CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=true, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](transpose, _arg_Placeholder_1_0_1, _arg_Placeholder_2_0_2, _arg_Placeholder_4_0_4)]]

Caused by op u'CTCLoss', defined at:
File "main.py", line 143, in
main()
File "main.py", line 129, in main
model = Model(loader.charList, decoderType)
File "/home/dell/FAQ/SimpleHTR/src/Model.py", line 34, in init
(self.loss, self.decoder) = self.setupCTC(rnnOut3d)
File "/home/dell/FAQ/SimpleHTR/src/Model.py", line 104, in setupCTC
loss = tf.nn.ctc_loss(labels=self.gtTexts, inputs=ctcIn3dTBC, ignore_longer_outputs_than_inputs=True,sequence_length=self.seqLen, ctc_merge_repeated=True)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/ops/ctc_ops.py", line 158, in ctc_loss
ignore_longer_outputs_than_inputs=ignore_longer_outputs_than_inputs)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_ctc_ops.py", line 285, in ctc_loss
name=name)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/home/dell/FAQ/virenv_scrap/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
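A guard that usually avoids this (a sketch only; the attribute names follow DataLoader.py's Sample class but may differ in your fork) is to drop samples whose ground-truth text becomes empty, e.g. after removing characters the model doesn't know, before batches are built:

	def filterEmptyLabels(samples, charList):
		"""Drop samples whose ground truth is empty - CTC loss rejects zero-length label sequences."""
		kept = []
		for sample in samples:
			cleaned = ''.join(c for c in sample.gtText if c in charList)
			if cleaned:
				sample.gtText = cleaned
				kept.append(sample)
		return kept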

Assertion Error

Traceback (most recent call last):
File "main.py", line 139, in
main()
File "main.py", line 115, in main
loader = DataLoader(FilePaths.fnTrain, Model.batchSize, Model.imgSize, Model.maxTextLen)
File "/home/syedjafer/Documents/FinalYearProj/SimpleHTR/src/DataLoader.py", line 43, in init
assert (len(lineSplit) >= 9) , "Working"
AssertionError: Working
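The assert fires because a line in words.txt has fewer than 9 space-separated fields (the IAM format). A more forgiving parser, as a sketch (the path and field layout are the IAM defaults and may need adjusting), would skip such lines instead of aborting:

	samples = []
	with open('../data/words.txt') as f:
		for line in f:
			if not line.strip() or line.startswith('#'):  # skip empty and comment lines
				continue
			lineSplit = line.strip().split(' ')
			if len(lineSplit) < 9:                        # malformed line: report and skip
				print('skipping malformed line:', line.strip())
				continue
			fileName, gtText = lineSplit[0], ' '.join(lineSplit[8:])
			samples.append((fileName, gtText))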

Does the training code make use of the images?

Thank you so much for this awesome tutorial. I have one more clarification: does the training code in the repository (main.py) make use of the image-text pairs for training, or just words.txt? I know it might seem a silly doubt - I'm a complete beginner, so I just wanted to clear this up.

Thank you

Change number of Hidden layers and Neuron in Layers

I am using this code to show the behavior of a NN in my assignment. I am new to Python and can't work out where the number of neurons in each layer and the number of hidden layers are specified, or how these can be changed.

  1. Versions
  • Latest TensorFlow version
  • Latest Python version
  • Latest operating system
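The sizes are hard-coded in Model.py rather than exposed as options. Roughly, the knobs look like the sketch below (the values shown are from one version of the repo and may differ in your copy):

	# in Model.setupCNN: one entry per conv layer
	kernelVals = [5, 5, 3, 3, 3]              # kernel size of each conv layer
	featureVals = [1, 32, 64, 128, 128, 256]  # feature maps into/out of each layer ("neurons")
	strideVals = poolVals = [(2, 2), (2, 2), (1, 2), (1, 2), (1, 2)]
	numLayers = len(strideVals)               # number of conv layers

	# in Model.setupRNN: size and count of the LSTM layers
	numHidden = 256                           # hidden units per LSTM direction
	# cells = [tf.contrib.rnn.LSTMCell(num_units=numHidden, state_is_tuple=True) for _ in range(2)]  # 2 stacked layers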

Using model checkpoint for inference.

I have trained a model on my dataset and it performs decently on the validation set; I get an accuracy of 72% using --wordbeamsearch. Now I want to load the model for inference on a device, which can be done by converting the model to the .pb format. To convert the saved checkpoint to a frozen model I need the name of the output node, and I am confused about which output node name to use. Is there a node which directly gives the recognized text (with word beam search decoding) for a given input image, or does some other output node name have to be used?

Thank You :)
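Not an answer specific to this repo, but the general TF1 freezing recipe looks like the sketch below. The output node name is the part you have to find yourself (e.g. by listing the node names of sess.graph after building Model); 'CTCGreedyDecoder' here is purely a hypothetical placeholder, not the repo's actual node. As far as I can tell, the decoders output label indices rather than strings, so mapping indices back to characters via charList still happens in Python outside the graph.

	import tensorflow as tf  # TF 1.x

	OUTPUT_NODE = 'CTCGreedyDecoder'  # hypothetical placeholder - replace with your real output node name

	saver = tf.train.import_meta_graph('../model/snapshot-1.meta')  # checkpoint name illustrative
	with tf.Session() as sess:
		saver.restore(sess, tf.train.latest_checkpoint('../model/'))
		frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), [OUTPUT_NODE])
		with tf.gfile.GFile('model.pb', 'wb') as f:
			f.write(frozen.SerializeToString())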

ModuleNotFoundError: No module named 'editdistance.bycython'

I am having an issue when running a text recognition program.

In [6]: runfile('D:/MachineLearning/Github/SimpleHTR-master/src/main.py', wdir='D:/MachineLearning/Github/SimpleHTR-master/src')
Traceback (most recent call last):

File "", line 1, in
runfile('D:/MachineLearning/Github/SimpleHTR-master/src/main.py', wdir='D:/MachineLearning/Github/SimpleHTR-master/src')

File "C:\Users\nikun\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
execfile(filename, namespace)

File "C:\Users\nikun\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "D:/MachineLearning/Github/SimpleHTR-master/src/main.py", line 7, in
import editdistance

File "C:\Users\nikun\Anaconda3\lib\site-packages\editdistance_init_.py", line 1, in
from .bycython import eval

ModuleNotFoundError: No module named 'editdistance.bycython'

This is the error I am facing. I have looked for editdistance.bycython but couldn't find any module with that name. I pip-installed the cython module, but it was of no use.

Please help.
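A common cause (just a guess for this machine) is that the compiled Cython extension inside editdistance doesn't match the installed Python, so a clean reinstall often fixes it:

	pip uninstall editdistance
	pip install --no-cache-dir --force-reinstall editdistance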

batch normalization

Hi Harald,

Thank you for this starter on OCR NNs.
Is there any reason why you didn't include a batch normalization layer between conv2d and relu?

Regards,
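For what it's worth, a minimal TF1 sketch of what that insertion looks like (shapes and names are illustrative; a later issue in this thread does the same thing inside setupCNN):

	import tensorflow as tf  # TF 1.x

	images = tf.placeholder(tf.float32, [None, 32, 128, 1])
	is_train = tf.placeholder(tf.bool)  # fed True while training, False at inference

	kernel = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
	conv = tf.nn.conv2d(images, kernel, padding='SAME', strides=(1, 1, 1, 1))
	conv_norm = tf.layers.batch_normalization(conv, training=is_train)  # BN between conv and relu
	relu = tf.nn.relu(conv_norm)

	# batch norm keeps moving averages: its update ops must run alongside the train op
	update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)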

Instructions for unzipping model.zip

This is minor, but it definitely threw some of us for a loop: when unzipping model/model.zip, it's not obvious that its contents need to be extracted directly into model/ -- on some computers it ended up extracting to model/model/, and that doesn't work. Moving the extracted files into the model/ directory made things work.
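In other words, on a Unix-like shell (assuming the default directory layout):

	cd SimpleHTR/model
	unzip model.zip        # contents must end up directly in model/, not model/model/
	ls                     # expect files like checkpoint and the snapshot-* files here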

WordBeamSearch Decoder

Please have a look at the FAQ section in the README - maybe your question is already answered there.
Only issues concerning the repository's code will be answered.
The following questions will not be answered:

  • How to convert dataset X into IAM format?
  • How to modify the model to recognize text-lines/more characters/...?
  • General/theoretical questions regarding (handwritten) text recognition.

If you create a new issue, please provide the following information:

  1. Versions
  • TensorFlow version
  • Python version
  • Operating system
  2. Issue
  • Which result/error did you get?
  • If you think the result is wrong - what result did you expect instead?
  • How to reproduce the issue?
  • Provide all necessary data

Doubling number of conv layers improves accuracy

I'm not sure the title is tremendously surprising to anyone, but I cleared 76% word accuracy with a deeper network. More interestingly, using a deeper network and terminating around epoch 25 yields a 74-75% word accuracy model, which is better and faster than training a smaller network to the bitter end.

screenshot from 2019-01-11 01-57-09

Relevant code:

		for i in range(numLayers):
			# first conv of the pair: featureVals[i] -> featureVals[i + 1] feature maps
			kernel = tf.Variable(tf.truncated_normal([kernelVals[i], kernelVals[i], featureVals[i], featureVals[i + 1]], stddev=0.1))
			conv = tf.nn.conv2d(pool, kernel, padding='SAME',  strides=(1,1,1,1))
			conv_norm = tf.layers.batch_normalization(conv, training=self.is_train)
			relu = tf.nn.relu(conv_norm)
			# second conv of the pair: keeps featureVals[i + 1] feature maps, doubling the depth of the CNN
			kernel2 = tf.Variable(tf.truncated_normal([kernelVals[i], kernelVals[i], featureVals[i+1], featureVals[i + 1]], stddev=0.1))
			conv2 = tf.nn.conv2d(relu, kernel2, padding='SAME',  strides=(1,1,1,1))
			conv_norm2 = tf.layers.batch_normalization(conv2, training=self.is_train)
			relu2 = tf.nn.relu(conv_norm2)
			# pool only once per pair, as in the original setupCNN
			pool = tf.nn.max_pool(relu2, (1, poolVals[i][0], poolVals[i][1], 1), (1, strideVals[i][0], strideVals[i][1], 1), 'VALID')

OCR on sentences

Hey there

Thank you for uploading your model; it is really useful for me to understand the concept of the CNN+LSTM architecture. I wanted to know if it is also possible to apply this model to IAM sentences instead of characters. Have you tried it yourself? And if so, how was the accuracy on the test set?

Dataloader Module error

Hi,

I'm new to Python and facing an issue while importing the DataLoader module in a Jupyter notebook.

`---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in ()
3 import cv2
4 import editdistance
----> 5 from DataLoader import DataLoader, Batch
global DataLoader = undefined
global Batch = undefined
6 from Model import Model, DecoderType
7 from SamplePreprocessor import preprocess

ModuleNotFoundError: No module named 'DataLoader'`
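DataLoader.py lives in SimpleHTR/src, so a notebook started from another directory can't import it. A quick workaround (the path below is illustrative; point it at your checkout):

	import sys
	sys.path.append('/path/to/SimpleHTR/src')  # hypothetical path - adjust to your src folder

	from DataLoader import DataLoader, Batch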

Calculation of numTrainSamplesPerEpoch & batchSize Value

  • TensorFlow version 1.12
  • Python version 3.5
  • Operating system - Windows 10

We have numTrainSamplesPerEpoch set to 25000 in DataLoader.py and batchSize set to 50 in Model.py. In my case, the input image count varies every time I train the model, so what is the formula / best way to calculate numTrainSamplesPerEpoch and batchSize from the input image count to get the best possible accuracy?
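There is no official formula in the repo as far as I know, but the way the two values interact is simple; a sketch with illustrative numbers:

	numImages = 2823                       # replace with your actual training-image count
	batchSize = 50
	numTrainSamplesPerEpoch = numImages    # one full pass over the data then counts as an epoch
	batchesPerEpoch = numTrainSamplesPerEpoch // batchSize
	print(batchesPerEpoch)                 # 56 batches per "epoch" with these numbers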

Model Retrained!

Hi!
I have trained a model from scratch. I initially got a word accuracy of 42% in the validation step. I then deleted the checkpoints and retrained the model, but this time I got a validation word accuracy of 52%. Why is this difference occurring? Is it because of the way the test and train data are split? Did the first run happen to get harder examples to train on than the second?
I had around 1350 data points in total.

I mean, how can I assess the performance of my model when such a huge difference shows up after every retraining from scratch?

Will this difference become less significant when I have a much bigger dataset?

Thanks a lot in advance!

RNN

Why do we need to use a recurrent NN? Can we just use a convolutional NN?

Should the training text file contain ground truth as well?

The words.txt file which is used for training - should it also contain the ground truth? Also, while training, does main.py make use of the images that were generated by the script for creating a dataset similar to IAM?

I am getting this error

Ground truth -> Recognized
[ERR:8] "Electric" -> ""
[ERR:8] "Electric" -> ""
[ERR:3] "Any" -> ""
[ERR:9] "Electrons" -> ""
[ERR:9] "Cyclotron" -> ""
[ERR:7] "Induced" -> ""
[ERR:4] "Mass" -> ""
[ERR:4] "Flux" -> ""
[ERR:6] "Region" -> ""
[ERR:1] "p" -> ""
Character error rate: 100.000000%. Word accuracy: 0.000000%.

Adding new characters

Hi,

I'm trying to add digit characters to the characters the model can recognize using images from this dataset:
https://www.nist.gov/srd/nist-special-database-19

But when the model finishes training it's not able to recognize any digits. I also tried generating images of digits with OpenCV and adding them to the training, which did improve the performance.
What would I need to change to achieve this?
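One thing worth checking (a sketch only; the constructor arguments mirror how main.py calls DataLoader, and attribute names may differ in your fork): the character list the model can emit is built from the training labels, so digits have to appear in the ground-truth texts that DataLoader reads.

	from DataLoader import DataLoader

	loader = DataLoader('../data/', 50, (128, 32), 32)
	trainChars = set(''.join(s.gtText for s in loader.trainSamples))
	print('digits in training labels:', sorted(c for c in trainChars if c.isdigit()))
	print('digits in charList:', [c for c in loader.charList if c.isdigit()])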

cv2.imread on corrupted image file terminates program with std::out_of_range error

Relevant Versions: Ubuntu 18.10, Python 2.7.15, python-opencv 3.2.0. Experienced when running python main.py --train.

Error message:

terminate called after throwing an instance of 'std::out_of_range'
what(): basic_string::substr: __pos (which is 140) > this->size() (which is 0)
Aborted (core dumped)

Further debugging revealed that terminate is called directly after trying to read a corrupted image file such as r06-022-03-05.png.

Adding a try-except clause does not resolve the issue; the process still terminates.

Fix: in src/DataLoader.py, imageio was imported and getNext was modified as follows:

	imgs = []
	for i in batchRange:
		try:
			img = imageio.imread(self.samples[i].filePath)
		except ValueError:
			img = np.zeros([self.imgSize[1], self.imgSize[0]])
		imgs.append(preprocess(img, self.imgSize, self.dataAugmentation))
	# imgs = [preprocess(cv2.imread(self.samples[i].filePath, cv2.IMREAD_GRAYSCALE), self.imgSize, self.dataAugmentation) for i in batchRange]

I assume imageio also works with Python 3; however, it's not clear whether adding a dependency on imageio is desirable across all versions of SimpleHTR.

ValueError: None values not supported.

  1. Versions
  • TensorFlow version : 1.0.0
  • Python version :3.5.2
  • Operating system : windows 8.1, 64 bit
  2. Issue
  • Which result/error did you get?

ValueError: None values not supported.
On executing the main.py file, I'm getting the following error:

C:\Users\My Pc\Downloads\SimpleHTR-master\SimpleHTR-master\src>python main.py
Validation character error rate of saved model: 13.956289%
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "BestSplits" device_type: "CPU"') fo
r unknown op: BestSplits
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "CountExtremelyRandomStats" device_t
ype: "CPU"') for unknown op: CountExtremelyRandomStats
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "FinishedNodes" device_type: "CPU"')
for unknown op: FinishedNodes
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "GrowTree" device_type: "CPU"') for
unknown op: GrowTree
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "ReinterpretStringToFloat" device_ty
pe: "CPU"') for unknown op: ReinterpretStringToFloat
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "SampleInputs" device_type: "CPU"')
for unknown op: SampleInputs
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "ScatterAddNdim" device_type: "CPU"'
) for unknown op: ScatterAddNdim
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "TopNInsert" device_type: "CPU"') fo
r unknown op: TopNInsert
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "TopNRemove" device_type: "CPU"') fo
r unknown op: TopNRemove
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "TreePredictions" device_type: "CPU"
') for unknown op: TreePredictions
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core
\framework\op_kernel.cc:943] OpKernel ('op: "UpdateFertileSlots" device_type: "C
PU"') for unknown op: UpdateFertileSlots
Traceback (most recent call last):
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\op_def_libra
ry.py", line 491, in apply_op
preferred_dtype=default_dtype)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\ops.py", lin
e 716, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\constant_op.
py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\constant_op.
py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=
verify_shape))
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\tensor_util.
py", line 360, in make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 139, in
main()
File "main.py", line 134, in main
model = Model(open(FilePaths.fnCharList).read(), decoderType, mustRestore=Tr
ue)
File "C:\Users\My Pc\Downloads\SimpleHTR-master\SimpleHTR-master\src\Model.py"
, line 31, in __init__
rnnOut3d = self.setupRNN(cnnOut4d)
File "C:\Users\My Pc\Downloads\SimpleHTR-master\SimpleHTR-master\src\Model.py"
, line 79, in setupRNN
((fw, bw), _) = tf.nn.bidirectional_dynamic_rnn(cell_fw=stacked, cell_bw=sta
cked, inputs=rnnIn3d, dtype=rnnIn3d.dtype)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\ops\rnn.py", line 363,
in bidirectional_dynamic_rnn
seq_dim=time_dim, batch_dim=batch_dim)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\ops\array_ops.py", lin
e 2346, in reverse_sequence
name=name)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\ops\gen_array_ops.py",
line 2776, in reverse_sequence
batch_dim=batch_dim, name=name)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\op_def_libra
ry.py", line 504, in apply_op
values, as_ref=input_arg.is_ref).dtype.name
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\ops.py", lin
e 716, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\constant_op.
py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\constant_op.
py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=
verify_shape))
File "C:\Python-3.5\lib\site-packages\tensorflow\python\framework\tensor_util.
py", line 360, in make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.

  • If you think the result is wrong - what result did you expect instead?
  • How to reproduce the issue?
  • Provide all necessary data

Weird error

OS- ubuntu 18
Conda python 3.6

Things were working perfectly; I only used to get the AVX/AVX2 warning. I then tried to remove that warning by following an online blog and downloading a TF build for my CPU instruction set. Now nothing works. It works fine on the Windows machine, where I was able to get rid of the AVX/AVX2 warning. I am attaching the output that I get when I run the file.

/home/chetan/anaconda3/envs/tensor-env/bin/python3 /home/chetan/PycharmProjects/SimpleHTR-run/src/main.py
Validation character error rate of saved model: 10.624916%
Traceback (most recent call last):
File "/home/chetan/PycharmProjects/SimpleHTR-run/src/main.py", line 143, in
main()
File "/home/chetan/PycharmProjects/SimpleHTR-run/src/main.py", line 138, in main
model = Model(open(FilePaths.fnCharList).read(), decoderType, mustRestore=True)
File "/home/chetan/PycharmProjects/SimpleHTR-run/src/Model.py", line 38, in init
self.setupRNN()
File "/home/chetan/PycharmProjects/SimpleHTR-run/src/Model.py", line 80, in setupRNN
cells = [tf.contrib.rnn.LSTMCell(num_units=numHidden, state_is_tuple=True) for _ in range(2)] # 2 layers
File "/home/chetan/PycharmProjects/SimpleHTR-run/src/Model.py", line 80, in
cells = [tf.contrib.rnn.LSTMCell(num_units=numHidden, state_is_tuple=True) for _ in range(2)] # 2 layers
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/python/util/lazy_loader.py", line 53, in getattr
module = self._load()
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/python/util/lazy_loader.py", line 42, in _load
module = importlib.import_module(self.name)
File "/home/chetan/anaconda3/envs/tensor-env/lib/python3.6/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 665, in _load_unlocked
File "", line 678, in exec_module
File "", line 219, in _call_with_frames_removed
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/contrib/init.py", line 48, in
from tensorflow.contrib import distribute
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/init.py", line 34, in
from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in
from tensorflow.contrib.tpu.python.ops import tpu_ops
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/contrib/tpu/init.py", line 69, in
from tensorflow.contrib.tpu.python.ops.tpu_ops import *
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/contrib/tpu/python/ops/tpu_ops.py", line 39, in
resource_loader.get_path_to_datafile("_tpu_ops.so"))
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
ret = load_library.load_op_library(path)
File "/home/chetan/.local/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name:
An op that loads optimization parameters into HBM for embedding. Must be
preceded by a ConfigureTPUEmbeddingHost op that sets up the correct
embedding table configuration. For example, this op is used to install
parameters that are loaded from a checkpoint before a training loop is
executed.

parameters: A tensor containing the initial embedding table parameters to use in embedding
lookups using the Adagrad optimization algorithm.
accumulators: A tensor containing the initial embedding table accumulators to use in embedding
lookups using the Adagrad optimization algorithm.
table_name: Name of this table; must match a name in the
TPUEmbeddingConfiguration proto (overrides table_id).
num_shards: Number of shards into which the embedding tables are divided.
shard_id: Identifier of shard for this operation.
table_id: Index of this table in the EmbeddingLayerConfiguration proto
(deprecated).
(Did you use CamelCase?); in OpDef: name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted. ..." is_stateful: true
[the remainder of the OpDef dump, which repeats the same description string for every input_arg, attr, summary and description field, is omitted]

Process finished with exit code 1

Same nonsense output for any input when inference

  • I modified your model with more CNN layers and MDLSTM instead of the basic LSTM; training and validation worked perfectly for me.
  • But when I try to run inference on a single input image, I get the same output for every input; when I feed different images into the inference batch, it works fine.
  • Filling the whole batch with the same image, or with a white or black image, all result in the same output for any input.

Have you ever encountered this weird problem? What might I have missed here? Thanks

handwritten cursive letter segmentation

Handwritten cursive letter segmentation (within a given word): what model would you advise? Any .cpp code example, or a model name and its implementation, would help.

No saved model found

Python: 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0]
Tensorflow: 1.8.0
2018-06-25 17:43:33.775081: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

Traceback (most recent call last):
File "main.py", line 97, in
infer(fnInfer)
File "main.py", line 86, in infer
model = Model(open(fnCharList).read(), mustRestore=True)
File "/home/jatinpal/goall/SimpleHTR/src/Model.py", line 33, in init
(self.sess, self.saver) = self.setupTF()
File "/home/jatinpal/goall/SimpleHTR/src/Model.py", line 106, in setupTF
raise Exception('No saved model found in: ' + modelDir)
Exception: No saved model found in: ../model/

Can anyone suggest why I am not able to locate the saved model?
My laptop details
ubuntu 16.04,8GB RAM,
python 3.6.3(anaconda)
tensorflow 1.8.0
cv2 3.4.1

Unable to find TFWordBeamSearch.so

I get the error below when trying to use --wordbeamsearch. I have gone through the steps to add it and have TFWordBeamSearch.so in the src folder.

  • TensorFlow version 1.12
  • Python version 3.5
  • Operating system Ubuntu 16.04

Traceback (most recent call last):
File "main.py", line 144, in
main()
File "main.py", line 139, in main
model = Model(open(FilePaths.fnCharList).read(), decoderType, mustRestore=True)
File "/home/matt/Documents/capstone/ocr/SimpleHTR/src/Model.py", line 39, in init
self.setupCTC()
File "/home/matt/Documents/capstone/ocr/SimpleHTR/src/Model.py", line 119, in setupCTC
word_beam_search_module = tf.load_op_library('TFWordBeamSearch.so')
File "/home/matt/.local/lib/python3.5/site-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: TFWordBeamSearch.so: cannot open shared object file: No such file or directory
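One thing to check (an assumption about the cause, not a confirmed fix): tf.load_op_library('TFWordBeamSearch.so') only finds the file when main.py is started from the directory the .so sits in, and the .so must be compiled against the same TensorFlow version that is installed. Building an absolute path removes the working-directory dependence:

	import os
	import tensorflow as tf

	# resolve the .so next to Model.py instead of relying on the current working directory
	soPath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'TFWordBeamSearch.so')
	word_beam_search_module = tf.load_op_library(soPath)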

hand written text

Hi,

Thanks for the good article.

During training at the word level, the CNN learns features of each word. Then how does the decoding happen character by character within each word? I am not clear about what happens at the LSTM and CTC layers.

If I give a word which is not in the IAM dataset (words.tgz), does it work?

Some handwritten documents have boxes and we fill in one character per box - how does this algorithm work in that case?

Thanks

Training base model appears to yield higher error rate than the checked-in model

Hello Harald,

I am trying to train the model with some new IAM data, but before doing that I tried training with just the base dataset the README points to. The error rate went from ~10% for the model checked into GitHub here to ~43% (according to accuracy.txt).

I ran into similar problems on:

  • Tensorflow 1.10, Python 3.6.5, Ubuntu 16.04.5 LTS
  • Tensorflow 1.10, Python 3.7.2, MacOS 10.13

I got this high error rate just by following the training instructions in the README. I hope to hear back from you soon on how you produced a more accurate model.

Thanks for your great work!

Minimum Dataset Count

  • TensorFlow version 1.12
  • Python version 3.5
  • Operating system - Windows 10

I am using 2,823 images to train a model with numTrainSamplesPerEpoch set to 25000 and batchSize set to 100. I am not getting an accuracy of more than 14%. I have tried playing around with those values (numTrainSamplesPerEpoch and batchSize) but don't see much improvement.

Is my dataset (2,823 images) good enough for training a model? What would be the minimum dataset size to train a model?

My sample project is attached. Please take a look and let me know how I can improve accuracy for a model.

SimpleHTRTest.zip

Works only for single word

Hi,
Does this model only work for a single handwritten word?
When the image contains multiple words (a sentence), the model gives poor or no results. But when the image has only one word, like "Cat", the model works perfectly. Is my observation correct, or am I missing something here?
Thanks.

Error in model training

Hi,

I have trained the model from scratch for 100 epochs. Now when I retrain the model, initializing the weights from the model trained earlier (for 100 epochs), it seems to start training from scratch again, because the loss is very high and the accuracy very low during retraining. There is also a weird message displayed: "2019-03-02 12:39:12.753314: tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr"

I would be very grateful if you could help me through it.

Below are the logs. The first part shows the loss and accuracy on the train set in the 100th epoch. Then I restart training, initializing the weights from the previous model, but the loss is very high and that weird message is displayed. :(

Thank You!
Epoch: 100
Train NN
Batch: 1 / 18 Loss: 0.32776487
Batch: 2 / 18 Loss: 0.60198474
Batch: 3 / 18 Loss: 0.48517135
Batch: 4 / 18 Loss: 0.27418187
Batch: 5 / 18 Loss: 0.55649453
Batch: 6 / 18 Loss: 0.26786774
Batch: 7 / 18 Loss: 0.34285638
Batch: 8 / 18 Loss: 0.20939198
Batch: 9 / 18 Loss: 0.27928185
Batch: 10 / 18 Loss: 0.6662815
Batch: 11 / 18 Loss: 0.5485757
Batch: 12 / 18 Loss: 0.5808813
Batch: 13 / 18 Loss: 0.9294588
Batch: 14 / 18 Loss: 0.86137664
Batch: 15 / 18 Loss: 0.50504154
Batch: 16 / 18 Loss: 0.68977255
Batch: 17 / 18 Loss: 0.9456356
Batch: 18 / 18 Loss: 0.41855794
Character train error rate: 1.444444%. Word train accuracy: 92.111111%.
Validate NN
Batch: 1 / 8
Batch: 2 / 8
Batch: 3 / 8
Batch: 4 / 8
Batch: 5 / 8
Batch: 6 / 8
Batch: 7 / 8
Batch: 8 / 8
Character dev error rate: 13.708333%. Word dev accuracy: 51.750000%.
Character error rate not improved
(SimpleHTR) shipsy@shipsy-pc:~/Ritika/Text_Recognition/SimpleHTR/src$ python main1.py --train
Python: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
Tensorflow: 1.12.0
2019-03-02 12:39:07.470070: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-03-02 12:39:07.474119: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Init with stored values from ../model/snapshot-34
Epoch: 1
Train NN
Batch: 1 / 18 Loss: 0.27486247
Batch: 2 / 18 Loss: 12.9338875
Batch: 3 / 18 Loss: 59.33571
2019-03-02 12:39:12.753314: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:12.753377: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 4 / 18 Loss: 59.97041
Batch: 5 / 18 Loss: 37.964706
Batch: 6 / 18 Loss: 58.27094
Batch: 7 / 18 Loss: 35.533077
Batch: 8 / 18 Loss: 33.15117
Batch: 9 / 18 Loss: 25.096054
2019-03-02 12:39:18.523761: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:18.523794: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 10 / 18 Loss: 14.908901
2019-03-02 12:39:19.445465: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:19.445845: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 11 / 18 Loss: 14.322981
2019-03-02 12:39:20.312253: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:20.312287: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 12 / 18 Loss: 13.987829
2019-03-02 12:39:21.205180: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:21.205211: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 13 / 18 Loss: 14.24977
2019-03-02 12:39:21.989036: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:21.989381: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 14 / 18 Loss: 14.085554
2019-03-02 12:39:22.900757: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:22.901134: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 15 / 18 Loss: 13.849102
2019-03-02 12:39:23.766655: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:23.766704: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 16 / 18 Loss: 13.1134615
2019-03-02 12:39:24.642604: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:24.642637: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 17 / 18 Loss: 13.719116
2019-03-02 12:39:25.501310: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:25.501352: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 18 / 18 Loss: 14.152748
2019-03-02 12:39:26.263092: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:26.263124: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Character train error rate: 93.259259%. Word train accuracy: 0.111111%.
Validate NN
Batch: 1 / 8
2019-03-02 12:39:26.595233: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:26.595940: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 2 / 8
2019-03-02 12:39:27.060717: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:27.060748: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 3 / 8
2019-03-02 12:39:27.378411: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:27.378444: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 4 / 8
2019-03-02 12:39:27.926617: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:27.926659: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 5 / 8
2019-03-02 12:39:28.494950: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:28.494982: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 6 / 8
2019-03-02 12:39:28.825010: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:28.825041: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 7 / 8
2019-03-02 12:39:29.049452: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:29.049822: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Batch: 8 / 8
2019-03-02 12:39:29.292437: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
2019-03-02 12:39:29.292787: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr
Character dev error rate: 100.000000%. Word dev accuracy: 0.000000%.
Character error rate improved, save model

The logs are not complete. The message "2019-03-02 12:39:29.292787: E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr" stops showing after training more epochs. It is also displayed every time, whether I train from scratch or initialize weights from a previous model.

Error in preprocessTextImage.py

File "C:/Users/Dexter/Desktop/MR/src/rrt.py", line 10, in
imgContrast = (img - pxmin) / (pxmax - pxmin) * 255

TypeError: unsupported operand type(s) for -: 'NoneType' and 'NoneType'
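Hedged note: this TypeError almost always means cv2.imread returned None (wrong path or unreadable file), so the min/max values derived from the image are None as well. A minimal guard, assuming the script reads a single image whose path sits in a variable like fn_img (a made-up name):

import cv2
import numpy as np

fn_img = 'data/test.png'  # hypothetical path - replace with the image you preprocess
img = cv2.imread(fn_img, cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError('could not read image: ' + fn_img)

# increase contrast only after the image was actually loaded
pxmin, pxmax = np.min(img), np.max(img)
imgContrast = (img - pxmin) / (pxmax - pxmin) * 255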

I keep getting this error -

File "main.py", line 115, in main
loader = DataLoader(FilePaths.fnTrain, Model.batchSize, Model.imgSize, Model.maxTextLen)
File "/Users/adarsh/Desktop/SimpleHTR/src/DataLoader.py", line 43, in init
assert len(lineSplit) >= 9
AssertionError

My training file looks like this.

words.txt
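Hedged note: the assertion fires when a line in words.txt has fewer than nine space-separated columns, i.e. it does not follow the IAM layout the DataLoader expects; a trailing blank line or a custom label format are common causes. A small checker (assuming words.txt lives at ../data/words.txt as in the README - adjust the path to your setup):

# flag lines that do not have the nine or more columns the loader expects
with open('../data/words.txt') as f:
    for i, line in enumerate(f, start=1):
        if line.startswith('#'):
            continue  # comment lines are skipped by the loader
        if len(line.strip().split(' ')) < 9:
            print('suspicious line %d: %r' % (i, line))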

editdistance

Hi. When I try to run the demo, it says:
ModuleNotFoundError: No module named 'editdistance'
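Hedged note: editdistance is a third-party package that main.py uses to compute the character error rate; installing it into the same environment you run the script from (for example with pip install editdistance) should resolve this.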

Failed to load the native TensorFlow runtime while running main.py

Please have a look at the FAQ section in the README - maybe your question is already answered there.
Only issues concerning the repositories code will be answered.
The following questions will not be answered:

  • How to convert dataset X into IAM format?
  • How to modify the model to recognize text-lines/more characters/...?
  • General/theoretical questions regarding (handwritten) text recognition.

If you create a new issue, please provide the following information:

  1. Versions
  • TensorFlow version
  • Python version
  • Operating system
  2. Issue
  • Which result/error did you get?
  • If you think the result is wrong - what result did you expect instead?
  • How to reproduce the issue?
  • Provide all necessary data

matrix mismatch when using own data

When I try to train a model with my handwriting, it works. But when I try to run recognition on an image, I always get:

InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [1,1,512,50] rhs shape= [1,1,512,47]
         [[Node: save/Assign_17 = Assign[T=DT_FLOAT, _class=["loc:@Variable_5"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_5/RMSProp_1, save/RestoreV2:17)]]

I do not understand why the tensors do not match :(
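Hedged explanation: the last dimension of that kernel is the number of characters in the model's character list plus one (the CTC blank), so 50 vs. 47 usually means the charList.txt next to the checkpoint (or the list built from your current data) has a different number of characters than the one the checkpoint was trained with. Retraining into an empty model directory, or keeping checkpoint and charList.txt from the same run, fixes it. A quick sanity check under the default ../model/ layout (TensorFlow 1.x):

import tensorflow as tf  # TensorFlow 1.x

# classes implied by the character list (characters + 1 for the CTC blank)
with open('../model/charList.txt') as f:
    print('charList implies', len(f.read()) + 1, 'output classes')

# classes stored in the checkpoint: last dimension of the 1x1x512xC projection kernel
reader = tf.train.NewCheckpointReader(tf.train.latest_checkpoint('../model/'))
for name, shape in reader.get_variable_to_shape_map().items():
    if len(shape) == 4 and shape[:3] == [1, 1, 512]:
        print(name, 'has shape', shape)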

preparing own dataset based on IAM dataset

Hi,

I have read this article, but I'm still somewhat lost on how to prepare the data. An example would be nice, or please guide me on how to prepare my dataset. For instance, I have labeled part of my dataset and the XML files are available; I would like to know how to make the Python code work with it, or how to edit the getNext() function. Thank you so much.
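Not the author's code, but a minimal sketch of the idea: the loader ultimately only needs (ground-truth text, image path) pairs, so one way to adapt it is to parse your own labels into such pairs and batch them the same way the existing DataLoader does. The labels.txt name and its tab-separated format below are assumptions for illustration:

import os

def load_own_samples(data_dir):
    # yields (ground-truth text, absolute image path) pairs from a
    # hypothetical 'filename<TAB>transcription' labels file
    with open(os.path.join(data_dir, 'labels.txt'), encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            file_name, gt_text = line.split('\t', 1)
            yield gt_text, os.path.join(data_dir, 'img', file_name)

samples = list(load_own_samples('../data'))  # feed these into your batching code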

Training model on IAM Dataset

Hi,

Following your guide for training the model on the IAM dataset, when I run "python main.py --train" I receive the following error:
Traceback (most recent call last):
File "main.py", line 97, in
infer(fnInfer)
File "main.py", line 86, in infer
model = Model(open(fnCharList).read(), mustRestore=True)
File "E:\Users\Juzer\Dropbox\Master Thesis - Automatic Handling of Sheets\Implementation\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 33, in init
(self.sess, self.saver) = self.setupTF()
File "E:\Users\Juzer\Dropbox\Master Thesis - Automatic Handling of Sheets\Implementation\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 106, in setupTF
raise Exception('No saved model found in: ' + modelDir)
Exception: No saved model found in: ../model/

Can you help me?

Thanks in advance.
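Hedged note: this exception is raised when the model is created with mustRestore=True but ../model/ contains no checkpoint, which usually means no training run has finished a single "save model" step yet (or the model files were placed in the wrong directory). You can check what TensorFlow sees with:

import tensorflow as tf  # TensorFlow 1.x

# prints the newest checkpoint prefix in ../model/, or None if nothing was saved yet
print(tf.train.latest_checkpoint('../model/'))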

ZeroDivisionError: division by zero

I'm trying to run python main.py --train with new image data, but I get this error:

Init with new values
Epoch: 1
Train NN
Validate NN
Traceback (most recent call last):
File "main.py", line 142, in
main()
File "main.py", line 129, in main
train(model, loader)
File "main.py", line 39, in train
charErrorRate = validate(model, loader)
File "main.py", line 84, in validate
charErrorRate = numCharErr / numCharTotal
ZeroDivisionError: division by zero

How can I solve it?
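Hedged explanation: numCharTotal stays at zero because the validation loop saw no samples, which typically happens when the dataset is so small that the train/validation split leaves the validation set empty. Adding more data (or changing the split) is the real fix; a defensive guard with a clearer error, reusing the variable names from the traceback, could look like this:

# fail with a clear message instead of dividing by zero
if numCharTotal == 0:
    raise RuntimeError('validation set is empty - add more samples or adjust the train/validation split')
charErrorRate = numCharErr / numCharTotal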

Warning: tensorflow/core/util/ctc/ctc_loss_calculator.cc:144] No valid path found.

p02-109-01-00.png has the label of "----------------------------------------------", which even when truncated to max length of inputs does not result in a findable valid path due to needing additional room for blanks. See https://stackoverflow.com/questions/45130184/ctc-loss-error-no-valid-path-found/45266262#45266262 for further details.

While this doesn't terminate the program, it does result in an infinite gradient for the batch which is possibly detrimental to learning results.
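A hedged sketch (not the repo's code) for filtering such samples up front: CTC needs one time step per character plus one blank between each pair of adjacent identical characters, so any label that cannot fit into the model's sequence length (32 time steps in the default model) should be dropped or truncated:

def ctc_required_steps(text):
    # one step per character plus a mandatory blank between adjacent repeats
    repeats = sum(1 for a, b in zip(text, text[1:]) if a == b)
    return len(text) + repeats

max_steps = 32  # sequence length of the default model
for gt in ['little', '----------------------------------------------']:
    if ctc_required_steps(gt) > max_steps:
        print('label cannot be encoded within %d steps: %r' % (max_steps, gt))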

Font style for creating dataset like IAM

I wanted to know: in the script you provided to generate a dataset in the IAM Handwriting Database format, the generated images have the font style defined by this line in the code -
cv2.putText(img,word,(2,20), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0), 1, cv2.LINE_AA)
return (word, img)
but this font is not realistic, since it's not an image of actual handwriting, is it?

Also, while training on a new dataset, will the code make use of the image/words.txt pairs or just words.txt?
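For reference, a self-contained version of that rendering step (a sketch only - as noted above, the output is a printed font rather than handwriting, so it is mainly useful for testing the pipeline, not for training a handwriting model):

import cv2
import numpy as np

def render_word(word, size=(128, 32)):
    # white canvas with black printed text - same idea as the quoted putText call
    img = np.ones((size[1], size[0]), dtype=np.uint8) * 255
    cv2.putText(img, word, (2, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0,), 1, cv2.LINE_AA)
    return word, img

word, img = render_word('example')
cv2.imwrite('example.png', img)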

No such file or directory

I'm getting two errors when I try to run main.py. Below I have shared the error that I got:

Traceback (most recent call last):
File "main.py", line 139, in
main()
File "main.py", line 133, in main
print(open(FilePaths.fnAccuracy).read())
IOError: [Errno 2] No such file or directory: 'Documents/GitHub/SimpleHTR/model/accuracy.txt'

PS: I have given the path correctly, but somehow it still shows an error message, and I don't know why it shows a traceback for main.py.
Please help me resolve it. I'm completely new to this and still learning.
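Hedged note: accuracy.txt is only created once a training run has saved a model, and the path in FilePaths.fnAccuracy is resolved relative to the directory you launch the script from, so the usual causes are (a) no training has been run yet or (b) main.py was started from an unexpected working directory. A quick check before opening the file (the default path below is the repo's; adjust it to your FilePaths.fnAccuracy):

import os

fn_accuracy = '../model/accuracy.txt'  # default path used by main.py
print('working directory:', os.getcwd())
print('resolved path:', os.path.abspath(fn_accuracy))
print('exists:', os.path.exists(fn_accuracy))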
