
seq2seq_chatbot_qa's Introduction

Some up-front "rambling"

This repo was created quite early, back before TensorFlow reached version 1.0, so the model used the tf.contrib.seq2seq library of the time, which is now tf.contrib.legacy_seq2seq. I think everyone understands what "legacy" means.

The purpose of this repo is to learn and implement seq2seq. It is not a complete piece of software, so beyond learning and reference it has all kinds of problems.

Some of the questions people ask me are actually common Python coding problems, and some people want to debug the whole model on Windows, which I honestly have never tried. Given that this repo is not meant to be complete software, I never planned to polish it; sorry about that.

In my view, a bot, specifically one that interacts with people, is part of human-machine collaboration (interaction). Natural-language bots are a subset of that, and chatbots are in turn a subset of those.

Take the personal assistants everyone knows (Siri, Cortana, Echo): they include dialogue systems and various QA systems, and of course chatbots. As for chatbot implementations themselves, as of today the ones that can actually be deployed are mostly retrieval-based.

Still, the seq2seq model, as a deep-learning frontier, keeps pushing forward in NLG, QA, chatbots, and related areas.

There are many seq2seq implementations. In hindsight I feel this repo doesn't quite deserve its stars, but because of the legacy_seq2seq issue there is nothing left to "update": an update would mean a rewrite.

Partly for my own learning, I rewrote it as another repo (shameless, I know):

https://github.com/qhduan/just_another_seq2seq

That repo mainly:

  • Adds various usage examples (translation, NER, adversarial chatbot)
  • Has extensive Chinese comments and READMEs (if comment lines were worth money, the comments there might be worth more than the code)
  • Includes various simple test cases; the released code passes pylint checks

Simply put, if you just want to crib a course project, that repo is more useful...

As I see it, research on language-interaction bots currently has three main threads. One group focuses on dialogue systems; Microsoft seems to publish the most papers there, e.g. Xiujun Li, End-to-End Task-Completion Neural Dialogue Systems, 2017.

Another group works mainly on QA, or QA-related integration; Amazon-related work is prominent here, and many Alexa Prize papers give that impression, e.g. Huiting Liu, RubyStar: A Non-Task-Oriented Mixture Model Dialog System, 2017.

A third group works on chatbots proper, like this repo and the repo of mine mentioned above, e.g. Jiwei Li, Adversarial Learning for Neural Dialogue Generation, 2017.

(Huh? Why do they all seem to be ethnically Chinese?)

All three papers above are from the second half of 2017, and it is only February 2018 now; each of these directions is genuinely cutting-edge.

Because even a single QA sub-problem is already fairly cutting-edge, the technology for integrating all of this into one large system mostly sits in the hands of the major players, by which I mainly mean the major players abroad. Domestically I can't tell which big player has it; most are probably just low-key about it, and of course it is also because this line of research isn't very practically useful yet.

(Speaking only of what is publicly visible domestically, and I may well be wrong: 图灵 (Turing Robot) is a retrieval model with a fair amount of data; 一个ai (from 世纪佳缘) and Alibaba's ruyi are simplified dialogue-system models; they are "**" versions of api.ai.)

If you want a rough overview of bots and the related technology, I recommend the third edition of Speech and Language Processing that Stanford is currently drafting (address here); read chapters 28-30.

A Chitchat Bot Implemented with TensorFlow

There are actually several implementations on GitHub, but the best-known one is implemented in Torch. The DeepQA project is implemented quite well, but it targets English.

This one is a sequence-to-sequence generative model implemented in TensorFlow. The code references TensorFlow's official translate project:

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/rnn/translate

as well as DeepQA:

https://github.com/Conchylicultor/DeepQA

The corpus file is db/dgk_shooter_min.conv, from https://github.com/rustch3n/dgk_lost_conv

Reference papers:

Sequence to Sequence Learning with Neural Networks

A Neural Conversational Model

Dependencies

python3: yes, this code is probably not compatible with Python 2

numpy: numerical computing

sklearn: scientific computing

tqdm: progress bars

tensorflow: deep learning

That's about all it depends on. If you only want to test it, installing the CPU build of TensorFlow is enough, and that is quick.

Training really needs CUDA, though; otherwise it will be extremely, extremely slow.

How to use this package

This package is largely adapted from the official translate demo mentioned above, which is an English-to-French translation model.

The steps below look complicated... they are actually simple.

Step 1

Input: first download dgk_shooter_min.conv.zip from here

Output: unzip it to get the dgk_shooter_min.conv file

Step 2

Run the decode_conv.py script in the project directory

Input: python3 decode_conv.py

Output: a SQLite3 database file is generated at db/conversation.db
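If you want to verify what was generated, you can peek at the database with Python's built-in sqlite3 module (a quick sketch; it makes no assumptions about the schema beyond the file being SQLite):

import sqlite3

# List the tables in the generated database instead of assuming their names.
conn = sqlite3.connect('db/conversation.db')
tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)
conn.close()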

Step 3

Run the data_utils.py script in the project directory

Input: python3 data_utils.py

Output: a bucket_dbs directory is generated, containing several SQLite3 databases; the data is split by length into different buckets

For example, if a question (ask) has length at most 5 and the answer has length under 15, the pair goes into bucket_5_15_db
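A minimal sketch of the bucketing idea (the bucket list here is an assumption based on the directory names; the actual logic lives in data_utils.py):

# Hypothetical illustration of length-based bucketing, not the exact repo code.
buckets = [(5, 15), (10, 20), (15, 25), (20, 30)]

def choose_bucket(ask, answer):
    """Return the name of the first bucket the (ask, answer) pair fits into."""
    for ask_size, answer_size in buckets:
        if len(ask) <= ask_size and len(answer) < answer_size:
            return 'bucket_%d_%d_db' % (ask_size, answer_size)
    return None  # too long for every bucket; the pair would be dropped

print(choose_bucket('你好', '你好'))  # -> bucket_5_15_db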

Step 4: Training

The parameters below are only for testing; they don't train for long and will not produce a good model.

size: number of LSTM units per layer

num_layers: number of LSTM layers

num_epoch: number of training epochs

num_per_epoch: number of samples trained per epoch

See train.py for the exact meaning of each parameter.

Input:

./train_model.sh

The script above is equivalent to running:

python3 s2s.py \
--size 1024 \
--num_layers 2 \
--num_epoch 5 \
--batch_size 64 \
--num_per_epoch 500000 \
--model_dir ./model/model1

Output: model files are written to the model/model1 directory; the parameters above produce a model of roughly 700 MB.

When training on a GPU, especially a card with 4 GB of memory or less, an OOM (Out Of Memory) error is very likely; in that case the only option is to reduce size, num_layers, and batch_size.
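For example, a smaller configuration that is less likely to OOM on a 4 GB card might look like this (the values are illustrative, not tuned):

python3 s2s.py \
--size 512 \
--num_layers 2 \
--num_epoch 5 \
--batch_size 32 \
--num_per_epoch 500000 \
--model_dir ./model/model1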

Step 5: Testing

The test parameters should match the training parameters above, with --test true appended at the end to enter test mode.

Input:

./train_model.sh test

The script command above is equivalent to running:

python3 s2s.py \
--size 1024 \
--num_layers 2 \
--num_epoch 5 \
--batch_size 64 \
--num_per_epoch 500000 \
--model_dir ./model/model1 \
--test true

Output: type a question at the command line and the bot will answer! The model above won't answer very well, though... and it may never get very good however you train it, so don't expect too much.

Project files

db/chinese.txt: the 2,500 Chinese characters primary-school students are required to master

db/gb2312_level1.txt: the level-1 character set of the GB2312 encoding

db/gb2312_level2.txt: the level-2 character set of the GB2312 encoding

The character files above are mainly used to generate the dictionary. I know the usual approach is to scan the database, compute character (or word) frequencies, and generate a dictionary automatically, but I just didn't want to do it that way... it somehow felt impure. A sketch of the fixed-list approach follows the file list below.

db/dictionary.json: the dictionary
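Building a dictionary from fixed character lists rather than corpus frequencies could look roughly like this (a sketch, assuming each file is simply a pool of characters; the repo's actual generation code may differ):

import json

# Merge the fixed character lists into one sorted dictionary (hypothetical sketch).
chars = set()
for path in ('db/chinese.txt', 'db/gb2312_level1.txt', 'db/gb2312_level2.txt'):
    with open(path, encoding='utf-8') as fp:
        chars.update(ch for ch in fp.read() if not ch.isspace())

with open('db/dictionary.json', 'w', encoding='utf-8') as fp:
    json.dump(sorted(chars), fp, ensure_ascii=False)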

Test results

Results can vary a lot with different parameters and datasets; the following is for reference only.

The results below (question → answer) come from a model trained with the parameters in train_model.sh:

你好 → 你好

你好呀 → 你好

你是谁 → 我是说,我们都是朋友

你从哪里来 → 我不知道

你到哪里去 → 你不是说你不是我的

你喜欢我吗? → 我喜欢你

你吃了吗? → 我还没吃饭呢

你喜欢喝酒吗? → 我不知道

你讨厌我吗? → 我不想让你失去我的家人

你喜欢电影吗? → 我喜欢

陪我聊天吧 → 好啊

千山万水总是情 → 你不是说你不是我的错

你说话没有逻辑啊 → 没有

一枝红杏出墙来 → 你知道的

Other

Many papers run BLEU evaluation; BLEU was designed to evaluate translation models and doesn't mean much for dialogue.

Still, if you want it, you can pass the bleu argument to run an evaluation, e.g.

./train_model.sh bleu 1000

See the test_bleu function in s2s.py for details.
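For reference, a sentence-level BLEU score can be computed with NLTK like this (not the repo's test_bleu; character tokenization and the smoothing choice are assumptions):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = list('我还没吃饭呢')   # ground-truth answer, split into characters
hypothesis = list('我没吃饭')      # model output
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print('BLEU: %.3f' % score)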

Finally, this has essentially nothing to do with today's bot platforms or the technology they use. If you are interested in bot platforms, have a look here.

More questions are welcome; feel free to reach out.

seq2seq_chatbot_qa's People

Contributors

miopas, qhduan, yangl


seq2seq_chatbot_qa's Issues

Error when running s2s

InvalidArgumentError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to get matching files on ./model/model1/model: Not found: ./model/model1; No such file or directory

Error

File "s2s.py", line 13, in
import data_utils
File "/workspace/Seq2Seq_Chatbot_QA/data_utils.py", line 78, in
dim, dictionary, index_word, word_index = load_dictionary()
File "/workspace/Seq2Seq_Chatbot_QA/data_utils.py", line 55, in load_dictionary
dictionary = [EOS, UNK, PAD, GO] + json.load(fp)
File "/usr/lib/python3.5/json/init.py", line 265, in load
return loads(fp.read(),encoding='bytes',
File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 565: ordinal not in range(128)
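The traceback suggests the dictionary JSON is being opened with the system's default (ASCII) codec. A likely fix, assuming the open call sits near line 55 of data_utils.py, is to pass an explicit encoding:

# Hypothetical patch: force UTF-8 when loading the dictionary.
with open('db/dictionary.json', 'r', encoding='utf-8') as fp:
    dictionary = [EOS, UNK, PAD, GO] + json.load(fp)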

A question about word vectors

I have a question: how are the word vectors produced here? EmbeddingWrapper takes no input and doesn't yield a vector for each character, and encoder_inputs in rnn.rnn is two-dimensional, with batch_size as the columns and the bucket as the rows. So how are the word vectors represented?

Is this a version problem?

Seq2Seq_Chatbot_QA-master$ ./train_model.sh
Traceback (most recent call last):
File "s2s.py", line 11, in <module>
import tensorflow as tf
File "/usr/local/lib/python3.4/dist-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/__init__.py", line 94, in <module>
from tensorflow.python.platform import test
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/platform/test.py", line 62, in <module>
from tensorflow.python.framework import test_util
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/test_util.py", line 41, in <module>
from tensorflow.python.platform import googletest
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/platform/googletest.py", line 32, in <module>
from tensorflow.python.platform import benchmark  # pylint: disable=unused-import
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/platform/benchmark.py", line 112, in <module>
class Benchmark(six.with_metaclass(_BenchmarkRegistrar, object)):
File "/usr/lib/python3/dist-packages/six.py", line 617, in with_metaclass
return meta("NewBase", bases, {})
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/platform/benchmark.py", line 107, in __new__
if not newclass.is_abstract():
AttributeError: type object 'NewBase' has no attribute 'is_abstract'

Error in step 4 (training)

Running ./train_model.sh raises the error below; what could the cause be?

Traceback (most recent call last):
File "s2s.py", line 324, in <module>
tf.app.run()
File "/var/lib/hadoop-hdfs/.venvs/py3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "s2s.py", line 319, in main
train()
File "s2s.py", line 129, in train
model = create_model(sess, False)
File "s2s.py", line 110, in create_model
dtype
File "/var/lib/hadoop-hdfs/zhuangxuekun/research/Seq2Seq_Chatbot_QA/s2s_model.py", line 31, in __init__
cell = tf.nn.rnn_cell.BasicLSTMCell(size)
AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell'
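This error matches TensorFlow 1.0's API reorganization, where the RNN cell classes moved out of tf.nn. A likely one-line fix in s2s_model.py (an assumption, valid for TF 1.x) is:

# tf.nn.rnn_cell was removed in TensorFlow 1.0; the cells live in tf.contrib.rnn.
cell = tf.contrib.rnn.BasicLSTMCell(size)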

A question about problems caused by dictionary size

Hi, when using s2s I noticed that the larger the final dictionary, the more memory the final output projection takes; with a dictionary not even at 10,000 entries, 12 GB of GPU memory is exhausted. Do you know from experience what causes this? I adapted the Chinese version from the DeepQA source, and the final projection layer is the provided linear transform.
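A common remedy for a large output vocabulary is sampled softmax, which this repo also uses (see the sampled_loss call in the traceback of the shape-error issue further down). A minimal TF 1.x sketch, where w_t, b, labels, hidden_outputs, and vocab_size are placeholders for the model's own tensors:

# Sampled softmax avoids materializing the full [batch, vocab_size] logits.
loss = tf.nn.sampled_softmax_loss(
    weights=w_t,                         # [vocab_size, hidden_size]
    biases=b,                            # [vocab_size]
    labels=tf.reshape(labels, [-1, 1]),  # rank-2 labels expected in TF 1.x
    inputs=hidden_outputs,               # [batch_size, hidden_size]
    num_sampled=512,
    num_classes=vocab_size)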

A question about corpus usage

Why does training convert single characters to indices as network input, instead of using indices of words produced by a segmentation tool such as jieba?

Error in step 4 (training)

Error output:
/opt/chatbot/Seq2Seq_Chatbot_QA> ./train_model.sh
dim: 6865
准备数据
bucket 0 中有数据 506206 条
bucket 1 中有数据 1091400 条
bucket 2 中有数据 726867 条
bucket 3 中有数据 217104 条
共有数据 2541577 条
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Traceback (most recent call last):
File "s2s.py", line 324, in <module>
tf.app.run()
File "/usr/lib/python3.4/site-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "s2s.py", line 319, in main
train()
File "s2s.py", line 129, in train
model = create_model(sess, False)
File "s2s.py", line 110, in create_model
dtype
File "/opt/chatbot/Seq2Seq_Chatbot_QA/s2s_model.py", line 31, in __init__
cell = tf.nn.rnn_cell.BasicLSTMCell(size)
AttributeError: 'module' object has no attribute 'rnn_cell'

Thanks.

The TensorFlow version is 1.0.0.

No ./model/model1/model folder?


Error message: NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./model/model1/model
The first four steps completed and training finished.
I checked the model1 folder; it contains many individual files.
There never was a ./model/model1/model folder...
Where did I go wrong? Thanks!

import s2s.data_util as data_util

$ python3 train.py
train.py can't import data_util

I don't have the data_util lib; how do I install it? I searched online and couldn't find an answer. Do you happen to know?

Problems with the training results

Hi, I changed num_epoch to 40 and trained for 12 hours, but the results are still poor:
我 > 你叫啥?
AI > 我们在哪儿?
我 > 你吃饭了没?
AI > 我们不知道
我 > 你睡觉了吗?
AI > 我们在哪儿?
我 > 陪我聊天吧
AI > 我们不知道

Why does it produce these results? Thanks.

Wrong input shape reported during test training

TensorFlow 1.0 has been released and the API changed; I updated every place in the code affected by the API changes, but a test training run still reports a wrong matmul matrix shape in the softmax loss function. I don't know where the problem is; I'm asking first and will read through the source myself when I have time~

log:

dim:  6865
准备数据
bucket 0 中有数据 164276 条
bucket 1 中有数据 127570 条
bucket 2 中有数据 32081 条
bucket 3 中有数据 10660 条
共有数据 334587 条
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
开启投影:512
Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 670, in _call_cpp_shape_fn_impl
    status)
  File "/usr/lib64/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 2 but is rank 1 for 'model_with_buckets/sequence_loss/sequence_loss_by_example/sampled_softmax_loss/MatMul_1' (op: 'MatMul') with input shapes: [?], [?,1024].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "s2s.py", line 324, in <module>
    tf.app.run()
  File "/usr/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "s2s.py", line 319, in main
    train()
  File "s2s.py", line 129, in train
    model = create_model(sess, False)
  File "s2s.py", line 110, in create_model
    dtype
  File "/home/kurt/Seq2Seq_Chatbot_QA/s2s_model.py", line 143, in __init__
    softmax_loss_function=softmax_loss_function
  File "/usr/lib/python3.5/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 1195, in model_with_buckets
    softmax_loss_function=softmax_loss_function))
  File "/usr/lib/python3.5/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 1110, in sequence_loss
    softmax_loss_function=softmax_loss_function))
  File "/usr/lib/python3.5/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 1067, in sequence_loss_by_example
    crossent = softmax_loss_function(target, logit)
  File "/home/kurt/Seq2Seq_Chatbot_QA/s2s_model.py", line 67, in sampled_loss
    num_classes=self.target_vocab_size
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/nn_impl.py", line 1191, in sampled_softmax_loss
    name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/nn_impl.py", line 995, in _compute_sampled_logits
    inputs, sampled_w, transpose_b=True) + sampled_b
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py", line 1855, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1454, in _mat_mul
    transpose_b=transpose_b, name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2397, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1757, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1707, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Shape must be rank 2 but is rank 1 for 'model_with_buckets/sequence_loss/sequence_loss_by_example/sampled_softmax_loss/MatMul_1' (op: 'MatMul') with input shapes: [?], [?,1024].

Separately, does anyone know how to make the CPU build of TensorFlow support these SIMD instruction sets? The warnings on every run are annoying...
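For the shape error above, a fix often cited for exactly this TF 1.0 change (an assumption here, not verified against this repo) is that tf.nn.sampled_softmax_loss now expects rank-2 labels, so reshape them inside sampled_loss in s2s_model.py; w_t, b, num_samples, and target_vocab_size stand in for the surrounding model variables:

def sampled_loss(labels, inputs):
    # TF 1.0's sampled_softmax_loss wants labels of shape [batch_size, 1],
    # while legacy_seq2seq passes a rank-1 target vector, hence the MatMul error.
    labels = tf.reshape(labels, [-1, 1])
    return tf.nn.sampled_softmax_loss(
        weights=w_t, biases=b, labels=labels, inputs=inputs,
        num_sampled=num_samples, num_classes=target_vocab_size)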

Training one epoch takes extremely long

I adjusted the code for the TF version compatibility issues and the model now runs, but it appears to take extremely long. I don't know whether this time scale is normal or whether my code changes are wrong.
Is this time normal?

On a Mac pro with the following configuration:

Processor: 2.7 GHz Intel Core i5
Memory: 8 GB 1867 MHz DDR3

Time:

Epoch 1:
[--------------------]  1.0%  4992/500000  loss=4.379  16m28s/27h29m57s

Garbled output when testing a model trained in Git Bash on Windows

Since the author provides .sh files, I trained in Git Bash so I could run the training code on Windows. The text output by the resulting model is all garbled. I looked up solutions for garbled Chinese in Git Bash and none helped; I found Git Bash is not actually what causes the garbling. I'd like to ask the author how to fix this.

A small bug in s2s

ckpt = tf.train.get_checkpoint_state(FLAGS.model_dir)
print("ckpt path : ", ckpt.model_checkpoint_path)
if ckpt != None:
    print("load old model : ", ckpt.model_checkpoint_path)
    model.saver.restore(sess, ckpt.model_checkpoint_path)

The print on the second line raises an error when ckpt is None:

AttributeError: 'NoneType' object has no attribute 'model_checkpoint_path'
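A minimal fix (sketch) is to move the print inside the None check:

ckpt = tf.train.get_checkpoint_state(FLAGS.model_dir)
if ckpt is not None:
    print("load old model : ", ckpt.model_checkpoint_path)
    model.saver.restore(sess, ckpt.model_checkpoint_path)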

My tests don't reach OP's results...

With the parameters set as OP's and 1,000,000 samples iterated, the results aren't great: asking 你好 does not answer 你好, and the final loss is 0.769. Looking at the training corpus, the dialogue segmentation isn't very strict... What parameters did you use to reach your results?

AttributeError

hi,
The instructions are clear, but I hit an AttributeError during training. I wonder if you ran into the same problem.

The error log is as follows:
write_version=tf.train.SaverDef.V2
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1077, in __init__
self.build()
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1106, in build
restore_sequentially=self._restore_sequentially)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 702, in build
save_tensor = self._AddSaveOps(filename_tensor, saveables)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 309, in _AddSaveOps
save = self.save_op(filename_tensor, saveables)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 252, in save_op
return io_ops.save_v2(filename_tensor, tensor_names, tensor_slices,
AttributeError: module 'tensorflow.python.ops.io_ops' has no attribute 'save_v2'
