Comments (5)
from fold.
Hrm, sorry you're getting a segfault! The code does seem fine; I can't tell what's going wrong from what you've provided. Could you share the code that actually generated the segfault?
Also, you could try running some of our example code (and some TF examples) to see whether the problem is with your particular model, or with TF or Fold not running well in general on your machine.
from fold.
P.S. One more thing to check is that the segfault is actually being generated while the code is running, by e.g. adding a print statement at the very end of your code. I ask because during development we encountered some issues, due to the way TF does dynamic library loading, that could cause a segfault when unlinking the library (which happens when the Python interpreter exits). FWIW we never encountered any segfaults while code was being run, although ipython would occasionally segfault during tab completion (this was not a Fold problem per se; TF did the same thing in some cases for unclear reasons).
from fold.
Thank you very much for your prompt replies and help! Following your advice, I found that the problem can be solved by using a virtualenv (as suggested by the installation document -- sorry I omitted this at the beginning). So the problem seems to have been a conflict between some Python modules and TF or TF Fold.
However, another problem came up. While the code runs now, the batched cross-entropy loss turned out to be the same for every example across all batches during training. I was wondering if there is anything wrong with the training code (a continuation of the code for defining and compiling the blocks in the original post):
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
tf.summary.FileWriter('./tf_graph', graph=sess.graph)
batch_size = 30
train_set = compiler.build_loom_inputs(Input_train_tf)
train_feed_dict = {}
dev_feed_dict = compiler.build_feed_dict(Input_dev_tf)
for epoch, shuffled in enumerate(td.epochs(train_set, epochs), 1):
    train_loss = 0.0
    for batch in td.group_by_batches(shuffled, batch_size):
        train_feed_dict[compiler.loom_input_tensor] = batch
        _, batch_loss = sess.run([train_step, cross_entropy], train_feed_dict)
        print batch_loss
        train_loss += np.sum(batch_loss)
    dev_loss = np.average(sess.run(cross_entropy, dev_feed_dict))
    print dev_loss
Otherwise, would it be possible for you to indicate how to diagnose this problem? Thanks a lot for your attention and time!
from fold.
Train code looks OK. What I would recommend here is breaking the code that defines your model into pieces and putting each piece in its own function. Then if you have e.g. a foo_block() function, you can write unit tests against it and/or interactively debug it with
foo_block().eval(foo_input)
and see that each piece does what you expect.
from fold.