Comments (13)
from fold.
Marcello, I'm aware of that; however, as I said, when I construct a new compiler it doesn't work, and fails with:
TypeError: block <td.Pipe> is already a child of <td.Pipe>
So, I tried that approach. Here's the relevant (full) code:

CL = td.ScopedLayer(convLayer)

def mk_encod():
    return (td.Map(td.Tensor((24, 10)) >> CL >>
                   td.Function(lambda x: tf.reshape(x, [-1, 64 * (10 - 2)]))) >>
            td.Fold(td.Concat() >> td.Function(td.FC(256)),
                    td.FromTensor(tf.zeros(256))))

model = mk_encod() >> td.Function(td.FC(1))
compiler = td.Compiler.create((model, td.Scalar()))
y, y_ = compiler.output_tensors
loss = tf.nn.l2_loss(y - y_)
train = tf.train.AdamOptimizer().minimize(loss)
sess.run(tf.global_variables_initializer())
for i in range(1000):
    sess.run(train, compiler.build_feed_dict([get_example() for _ in range(batch_size)]))

encod = mk_encod()
newComp = td.Compiler.create((encod, td.Scalar()))  # copied the structure of the prev. compiler
fvBlock, _ = newComp.output_tensors
FV = sess.run(fvBlock, newComp.build_feed_dict([get_real(i) for i in range(N)]))
However, that gets me:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value FC_256_2/weights
[[Node: FC_256_2/weights/read = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](FC_256_2/weights)]]
So I guess I'm messing up the variable scopes somehow.
Isn't this approach (train a whole net, then use the output of an intermediate layer) fairly common in NLP? (Isn't word2vec like this?) What would be the canonical way to do that?
Is my understanding correct that there is (currently) no way to obtain/inspect the output_tensors of a non-root block without compiling a separate instance of the tf graph (for each such block), and then doing separate corresponding executions of sess.run(...), etc.? Since sess.run([tensor1, tensor2,...]) already allows arbitrary/non-root tensors in the tf graph to be specified, it seems like this may be an issue with the td compiler only being able to expose the root-level tensors in the resulting graph. Anyway, having a simple way to inspect non-root tensors would be very helpful for debugging purposes. Thanks for the great resource!
Ok, I think I see how one could stitch a Metric into the pipeline post hoc to inspect values in a non-root block. A simple example: a graph that sums two numbers and doubles the result, where we want to inspect the values at the intermediate summing stage:
import tensorflow as tf
import tensorflow_fold as td

sess = tf.InteractiveSession()

inputs = td.Record((td.Scalar(), td.Scalar()))
sum_block = inputs >> td.Function(tf.add)

m = td.Composition()
with m.scope():
    td.Metric('sum').reads(m.input)  # expose the intermediate value
    m.output.reads(m.input)          # pass it through unchanged

double_block = sum_block >> m >> td.Function(lambda x: 2.0 * x)

compiler = td.Compiler.create(double_block)
fd = compiler.build_feed_dict([(1, 2), (3, 4), (5, 6)])
outputs, metrics = sess.run([compiler.output_tensors, compiler.metric_tensors], feed_dict=fd)
I imagine there's a cleaner way to write this, but this at least accomplishes what I want (avoiding multiple compilers). Thanks for the tip!
@rudinger Thanks for the stitch! Do you know whether the retrieved outputs can be used for further ops in the TensorFlow graph? Will the gradients flow back correctly if this is done?
@satwikkottur Sorry, I don't have an answer; I've moved to a different project. Good luck!
I tried this by dropping down to the loom API. But it looks like I can only specify the outputs (which are Loom id ints), and they get stacked up as a single output. Is there any other way to get more structured outputs from loom, rather than rearranging them after the loom pass?
For example, I want outputs [1, 2, 5] together and [3, 4, 6] together (note that the integers here are loom ids).
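Until loom offers structured outputs directly, one workaround is to rearrange the stacked result yourself after the loom pass. A minimal sketch with made-up data, assuming the stacked result is an (N, D) array whose rows are indexed by output position:

```python
import numpy as np

# Hypothetical (N, D) stack of loom outputs; row i corresponds to the
# i-th registered output (the mapping from loom ids to row positions
# depends on registration order).
stacked = np.arange(7 * 3, dtype=np.float32).reshape(7, 3)

# Desired grouping of output positions (the ids from the question).
groups = {'a': [1, 2, 5], 'b': [3, 4, 6]}

# Gather each group into its own structured array.
structured = {name: stacked[idx] for name, idx in groups.items()}
```

The same gather could be done in-graph with tf.gather if the grouping needs to feed further ops.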
If you drop down to the loom layer, you can expose any part of the computation as an output.