keras example of seq2seq, auto title
https://kexue.fm/archives/5861
QQ discussion group: 67729435; for the WeChat group, add the bot's WeChat ID spaces_ac_cn
I see you wrote that no word segmentation is used, so the model works directly on individual characters, right? I don't have the data at hand, so I'm asking to confirm. Thanks!
File "seq2seq_train.py", line 370, in
callbacks=[evaluator])
File "/data/zhu/.conda/envs/py27/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/data/zhu/.conda/envs/py27/lib/python2.7/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/data/zhu/.conda/envs/py27/lib/python2.7/site-packages/keras/engine/training_generator.py", line 181, in fit_generator
generator_output = next(output_generator)
File "/data/zhu/.conda/envs/py27/lib/python2.7/site-packages/keras/utils/data_utils.py", line 709, in get
six.reraise(*sys.exc_info())
File "/data/zhu/.conda/envs/py27/lib/python2.7/site-packages/keras/utils/data_utils.py", line 685, in get
inputs = self.queue.get(block=True).get()
File "/data/zhu/.conda/envs/py27/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
TypeError: 'encoding' is an invalid keyword argument for this function
苏神, could you take a look at this error? (Python 2.7, Keras 2.2.4, TensorFlow 1.8)
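The TypeError above is a Python 2 quirk: the built-in open() there does not accept an encoding keyword (only Python 3's does). A minimal sketch of the usual fix, using io.open, which supports encoding on both Python 2 and 3 (the file name here is hypothetical):

```python
import io

# Python 2's built-in open() has no `encoding` parameter, so
# open(path, encoding='utf-8') raises:
#   TypeError: 'encoding' is an invalid keyword argument for this function
# io.open() accepts `encoding` on both Python 2 and Python 3.
with io.open('corpus.txt', 'w', encoding='utf-8') as f:  # hypothetical file
    f.write(u'自动生成标题')  # unicode text, as in the training corpus

with io.open('corpus.txt', 'r', encoding='utf-8') as f:
    text = f.read()
```

codecs.open works the same way; alternatively, drop the encoding argument and decode each line manually with line.decode('utf-8').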
Running to the end raises IndexError: index 1 is out of bounds for axis 0 with size 1. 苏神, could you suggest a fix?
Hello, a beginner question: does Keras have a way to load training data in batches (so memory doesn't overflow)? For example, with 1,000,000 samples, what is the best way to handle this in Keras?
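For exactly this case Keras accepts a Python generator (via fit_generator in Keras 2.x, or fit directly in newer tf.keras), so only one batch needs to sit in memory at a time. A minimal sketch under the assumption that the corpus is a text file with one integer-id sequence per line (data_generator, train.txt, and the padding scheme are all hypothetical):

```python
import numpy as np

def data_generator(file_path, batch_size=32, maxlen=50):
    """Stream (x, y) batches from disk so the full dataset is never loaded."""
    while True:  # Keras expects the generator to loop forever
        batch = []
        with open(file_path) as f:
            for line in f:
                ids = [int(t) for t in line.split()][:maxlen]
                ids += [0] * (maxlen - len(ids))  # pad to a fixed length
                batch.append(ids)
                if len(batch) == batch_size:
                    x = np.array(batch)
                    yield x, x  # placeholder target; use the titles in practice
                    batch = []

# usage (assuming a compiled `model`):
# model.fit_generator(data_generator('train.txt', 64),
#                     steps_per_epoch=1000000 // 64, epochs=10)
```

keras.utils.Sequence is the other standard option; it additionally supports safe multiprocessing because each batch is indexed rather than streamed.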
tf=2.6.0
keras=2.6.0
When executing xy = Attention(8, 16)([y, x, x, x_mask]), an error is raised; stepping into the function shows it fails at a = K.permute_dimensions(a, (0, 3, 2, 1)):
ValueError: in user code:
<ipython-input-106-d9f1e622f23a>:50 call *
a = K.permute_dimensions(a, (0, 3, 2, 1))
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:206 wrapper **
return target(*args, **kwargs)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/keras/backend.py:3133 permute_dimensions
return tf.compat.v1.transpose(x, perm=pattern)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:206 wrapper
return target(*args, **kwargs)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:2309 transpose
return transpose_fn(a, perm, name=name)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py:11659 transpose
"Transpose", x=x, perm=perm, name=name)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:3569 _create_op_internal
op_def=op_def)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:2042 __init__
control_input_ops, op_def)
/root/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1883 _create_c_op
raise ValueError(str(e))
ValueError: Dimension must be 5 but is 4 for '{{node attention_11/transpose_4}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32](attention_11/truediv, attention_11/transpose_4/perm)' with input shapes: [?,8,?,8,?], [4].
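The ValueError itself is informative: the tensor reaching permute_dimensions has rank 5 (shape [?,8,?,8,?]) while the pattern (0, 3, 2, 1) lists only 4 axes, and transpose requires exactly one perm entry per dimension. This usually means the Attention layer's intermediate tensor picked up an extra axis under the newer TF/Keras version (e.g. from how the mask is broadcast), so the original code's shape assumptions no longer hold. The rank rule can be checked with plain numpy, which behaves the same way (the shapes below are arbitrary):

```python
import numpy as np

# 4-D tensor, 4-entry perm: fine
a4 = np.zeros((2, 8, 3, 8))
b = np.transpose(a4, (0, 3, 2, 1))  # shape becomes (2, 8, 3, 8)

# 5-D tensor, 4-entry perm: error, just like the Keras traceback above
a5 = np.zeros((2, 8, 3, 8, 4))
try:
    np.transpose(a5, (0, 3, 2, 1))
except ValueError:
    pass  # numpy: "axes don't match array"; TF: "Dimension must be 5 but is 4"
```

So the fix is not in permute_dimensions itself but upstream: print K.int_shape(a) before the transpose and trace where the fifth axis comes from, or pin the Keras/TF versions the original code was written against.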
Which Keras and TensorFlow versions does this code use?
Could you share the training data and the trained model?
An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
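This error means some op between the loss and the trainable weights has no registered gradient. K.argmax and K.round are piecewise-constant: nudging the input almost never changes the output, so the true gradient is zero or undefined, and the framework registers none at all. A small numeric illustration, plus one common differentiable surrogate (soft_argmax is a hypothetical helper, not from the post's code):

```python
import numpy as np

x = np.array([0.2, 0.7, 0.1])
eps = 1e-6
# finite-difference "gradient" of argmax w.r.t. x[1]: always 0,
# because a tiny perturbation never changes which index is largest
g = (np.argmax(x + np.array([0, eps, 0])) - np.argmax(x)) / eps

def soft_argmax(x, temperature=0.1):
    """Differentiable surrogate: softmax-weighted expected index.
    As temperature -> 0 it approaches hard argmax but keeps gradients."""
    z = (x - x.max()) / temperature       # subtract max for stability
    p = np.exp(z) / np.exp(z).sum()
    return (p * np.arange(len(x))).sum()
```

The practical fix is therefore to keep hard ops like argmax out of the loss path (use them only at inference time) or to replace them with a smooth surrogate like the one above.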
1. The variable mv seems to correspond to a shared feature fed to every decoder step, but the article doesn't seem to mention this feature.
2. The word-frequency-based prior probabilities don't seem to sum to 1, and when the frequency prior is later averaged with the model's probabilities, no normalization is applied either.
Those are my two questions; I hope you can answer them. Thanks 🙏
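On point 2: if the frequency-based prior is not normalized, the average with the model distribution is not a distribution either. Whether the original code intends this I cannot say without checking it, but the standard mixture looks like the sketch below (mix_with_prior and alpha are hypothetical names):

```python
import numpy as np

def mix_with_prior(model_probs, freq_counts, alpha=0.5):
    """Mix a model distribution with a word-frequency prior.
    Both terms are normalized so the result is a proper distribution."""
    prior = freq_counts / freq_counts.sum()       # normalize the raw counts
    mixed = alpha * model_probs + (1 - alpha) * prior
    return mixed / mixed.sum()                    # renormalize to be safe

model_probs = np.array([0.7, 0.2, 0.1])
freq_counts = np.array([10.0, 30.0, 60.0])
p = mix_with_prior(model_probs, freq_counts)      # [0.4, 0.25, 0.35]
```

If both inputs already sum to 1 and the mixture weights sum to 1, the final renormalization is a no-op; it only matters when, as the commenter suspects, one of the terms is unnormalized.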
x_in = Input(shape=(None,), dtype='int32', name="seq2seq_in_layer_1")
y_in = Input(shape=(None,), dtype='int32', name="seq2seq_in_layer_2")
x, y = x_in, y_in
...
model = Model([x_in, y_in], xy)
model.add_loss(cross_entropy)
model.compile(Adam(1e-3))
This raises the following error:
InvalidArgumentError Traceback (most recent call last)
in
51 model = Model([x_in, y_in], xy)
52 # model.summary()
---> 53 model.add_loss(cross_entropy)
54 model.compile(Adam(1e-3))
55
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_v1.py in add_loss(self, losses, inputs)
1052 for symbolic_loss in symbolic_losses:
1053 if getattr(self, '_is_graph_network', False):
-> 1054 self._graph_network_add_loss(symbolic_loss)
1055 else:
1056 # Possible a loss was added in a Layer's build
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/functional.py in _graph_network_add_loss(self, symbolic_loss)
841
842 def _graph_network_add_loss(self, symbolic_loss):
--> 843 new_nodes, new_layers = _map_subgraph_network(self.inputs, [symbolic_loss])
844 # Losses must be keyed on inputs no matter what in order to be supported in
845 # DistributionStrategy.
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/functional.py in _map_subgraph_network(inputs, outputs)
1071 """
1072 if not tf.compat.v1.executing_eagerly_outside_functions():
-> 1073 base_layer_utils.create_keras_history(outputs)
1074 # Keep only nodes and layers in the topology between inputs and outputs.
1075 _, nodes_by_depth, layers, _ = _map_graph_network(inputs, outputs)
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in create_keras_history(tensors)
173 the raw Tensorflow operations.
174 """
--> 175 _, created_layers = _create_keras_history_helper(tensors, set(), [])
176 return created_layers
177
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
--> 254 layer_inputs, processed_ops, created_layers)
255 name = op.name
256 node_def = op.node_def.SerializeToString()
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
--> 254 layer_inputs, processed_ops, created_layers)
255 name = op.name
256 node_def = op.node_def.SerializeToString()
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
--> 254 layer_inputs, processed_ops, created_layers)
255 name = op.name
256 node_def = op.node_def.SerializeToString()
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
--> 254 layer_inputs, processed_ops, created_layers)
255 name = op.name
256 node_def = op.node_def.SerializeToString()
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
--> 254 layer_inputs, processed_ops, created_layers)
255 name = op.name
256 node_def = op.node_def.SerializeToString()
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
--> 254 layer_inputs, processed_ops, created_layers)
255 name = op.name
256 node_def = op.node_def.SerializeToString()
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
249 else:
250 with tf.init_scope():
--> 251 constants[i] = backend.function([], op_input)([])
252 layer_inputs = unnest_if_single_tensor(layer_inputs)
253 processed_ops, created_layers = _create_keras_history_helper(
~/anaconda3/envs/py364/lib/python3.6/site-packages/keras/backend.py in call(self, inputs)
4030
4031 fetched = self._callable_fn(*array_vals,
-> 4032 run_metadata=self.run_metadata)
4033 self._call_fetch_callbacks(fetched[-len(self._fetches):])
4034 output_structure = tf.nest.pack_sequence_as(
~/anaconda3/envs/py364/lib/python3.6/site-packages/tensorflow/python/client/session.py in call(self, *args, **kwargs)
1478 ret = tf_session.TF_SessionRunCallable(self._session._session,
1479 self._handle, args,
-> 1480 run_metadata_ptr)
1481 if run_metadata:
1482 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
InvalidArgumentError: You must feed a value for placeholder tensor 'seq2seq_in_layer_2_9' with dtype int32 and shape [?,?]
[[{{node seq2seq_in_layer_2_9}}]]
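The placeholder name seq2seq_in_layer_2_9 (note the trailing _9) hints that the Input layers were created several times, likely by re-running a notebook cell, so cross_entropy ended up referring to a stale placeholder that is not among the model's inputs. Rebuilding the graph in one pass, with the loss computed from the same x_in/y_in tensors that go into Model(...), usually resolves it. A minimal self-contained sketch (the toy embedding/dense model here is invented for illustration, not the post's seq2seq):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

keras.backend.clear_session()  # drop stale placeholders from earlier runs

x_in = keras.Input(shape=(None,), dtype='int32', name='seq2seq_in_layer_1')
y_in = keras.Input(shape=(None,), dtype='int32', name='seq2seq_in_layer_2')

# hypothetical toy model: embed x, pool, predict the first token of y
h = layers.GlobalAveragePooling1D()(layers.Embedding(100, 16)(x_in))
logits = layers.Dense(100)(h)

def ce_fn(args):
    y_true, lg = args
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=y_true[:, 0], logits=lg))

# the loss is built from the SAME symbolic tensors passed to Model(...)
cross_entropy = layers.Lambda(ce_fn)([y_in, logits])

model = keras.Model([x_in, y_in], logits)
model.add_loss(cross_entropy)
model.compile(optimizer=keras.optimizers.Adam(1e-3))
```

Wrapping the loss computation in a Lambda keeps the whole subgraph inside Keras layers, which sidesteps the create_keras_history machinery visible in the traceback; restarting the kernel (or calling clear_session) before rebuilding removes the orphaned placeholder.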
Hi, I'd like to ask: does the single script seq2seq.py handle everything? Do we need to write the test/inference script ourselves? I haven't read the code yet.