musyoku / chainer-qrnn
Chainer implementation of QRNN (Quasi-Recurrent Neural Networks)
Suppose I have two files: one contains a question on each line, and the other contains the corresponding answer on each line. In both files the words are separated by spaces.
Now I know what 'cupy' is, but on a Mac without a GPU, how can I install it, or run the training without cupy?
python train.py --source-train ../../qa_new_data/_train.q1000 --target-train ../../qa_new_data/_train.a1000 --batchsize 64 --gpu-device -1
len(source_dataset_train) is 999
len(target_dataset_train) is 999
data #
train 999
vocab 1447 (source)
vocab 2099 (target)
buckets #data (train)
(5, 10) 141
(10, 15) 165
(20, 25) 318
(40, 50) 179
(100, 110) 93
(200, 210) 102
Epoch 1
Traceback (most recent call last):
File "train.py", line 212, in <module>
main(args)
File "train.py", line 127, in main
if model.xp is cuda.cupy:
AttributeError: 'module' object has no attribute 'cupy'
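A common way to avoid this crash on CPU-only machines (a sketch of the general pattern, not the repository's actual code; `get_array_module` is a hypothetical helper name) is to fall back to NumPy whenever cupy is not installed or `--gpu-device -1` is passed:

```python
import importlib.util


def get_array_module(gpu_device):
    """Return cupy when a GPU is requested and cupy is importable,
    otherwise fall back to numpy (CPU-only machines, e.g. a Mac)."""
    if gpu_device >= 0 and importlib.util.find_spec("cupy") is not None:
        import cupy
        return cupy
    import numpy
    return numpy


# --gpu-device -1 means "run on CPU", so this returns numpy
xp = get_array_module(-1)
print(xp.__name__)
```

All array operations can then go through `xp` instead of referring to `cuda.cupy` directly, so the same code runs with or without a GPU.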
source: 报 读 潮 汕 职 业 技 术 学 院 的 收 费 怎 么 样 ?
target: 汕 头 市 美 宝 化 妆 品 有 限 公 司 京 明 温 泉 度 假 村 有 限 公 司 深 圳 市 夏 尔 科 技 有 限 公 司 广 东 艺 通 装 饰 工 程 有 限 公 司 广 东 碧 桂 园 物 业 管 理 有 限 公 司 广 东 圣 都 模 具 股 份 有 限 公 司 广 州 珠 江 黄 埔 大 桥 . . .
predict: 的
source: h t c 的 手 机 , 安 卓 系 统 哪 个 版 本 的 好 ? 有 什 么 不 同 吗 . 版 . . .
target: 版 本 重 在 稳 定 吧 。 其 实 各 个 版 本 变 动 不 是 很 大 。 我 现 在 是 2 . 3 . 5 . 感 觉 就 挺 还 , 其 实 如 果 不 是 很 必 要 不 用 刷 系 统 。 如 果 你 想 刷 手 机 可 以 的 话 可 以 找 个 安 卓 4 . 0 刷 入 。 前 提 是 手 机 要 带 队 的 起 来 。 h t c 的 s e n c e 也 可 以 升 级 的 。
predict: 亲 的 亲
Traceback (most recent call last):
File "cupy/cuda/memory.pyx", line 362, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:7328)
File "cupy/cuda/memory.pyx", line 263, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:6020)
File "cupy/cuda/memory.pyx", line 264, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5941)
File "cupy/cuda/memory.pyx", line 35, in cupy.cuda.memory.Memory.__init__ (cupy/cuda/memory.cpp:1775)
File "cupy/cuda/runtime.pyx", line 207, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:3429)
File "cupy/cuda/runtime.pyx", line 130, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:2241)
cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "cupy/cuda/memory.pyx", line 368, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:7503)
File "cupy/cuda/memory.pyx", line 263, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:6020)
File "cupy/cuda/memory.pyx", line 264, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5941)
File "cupy/cuda/memory.pyx", line 35, in cupy.cuda.memory.Memory.__init__ (cupy/cuda/memory.cpp:1775)
File "cupy/cuda/runtime.pyx", line 207, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:3429)
File "cupy/cuda/runtime.pyx", line 130, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:2241)
cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 212, in <module>
main(args)
File "train.py", line 174, in main
print("done in {} min, lr = {}, total {} min".format(int(elapsed_time), get_current_learning_rate(optimizer), int(total_time)))
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/chainer/configuration.py", line 125, in using_config
yield
File "train.py", line 170, in main
dump_random_source_target_translation(model, source_buckets_train, target_buckets_train, vocab_inv_source, vocab_inv_target, num_translate=5, beam_width=1)
File "/home/mldl/ub16_prj/chainer-qrnn/seq2seq/translate.py", line 360, in dump_random_source_target_translation
translation_batch = translate_greedy(model, source_batch, target_batch.shape[1] * 2, len(vocab_inv_target), beam_width)
File "/home/mldl/ub16_prj/chainer-qrnn/seq2seq/translate.py", line 70, in translate_greedy
u = model.decode_one_step(x, encoder_last_hidden_states)
File "/home/mldl/ub16_prj/chainer-qrnn/seq2seq/model.py", line 238, in decode_one_step
out_data = self._forward_decoder_layer_one_step(layer_index, sum(in_data) if self.densely_connected else in_data[-1], encoder_last_hidden_states[layer_index])
File "/home/mldl/ub16_prj/chainer-qrnn/seq2seq/model.py", line 217, in _forward_decoder_layer_one_step
out_data = decoder.forward_one_step(in_data, encoder_last_hidden_states)
File "../qrnn.py", line 179, in forward_one_step
WX = self.W(X)[..., -pad-1, None]
File "/usr/local/lib/python3.5/dist-packages/chainer/links/connection/convolution_nd.py", line 84, in __call__
x, self.W, self.b, self.stride, self.pad, cover_all=self.cover_all)
File "/usr/local/lib/python3.5/dist-packages/chainer/functions/connection/convolution_nd.py", line 438, in convolution_nd
return func(x, W, b)
File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 200, in __call__
outputs = self.forward(in_data)
File "/usr/local/lib/python3.5/dist-packages/chainer/functions/connection/convolution_nd.py", line 177, in forward
return self._forward_xp(x, W, b, cuda.cupy)
File "/usr/local/lib/python3.5/dist-packages/chainer/functions/connection/convolution_nd.py", line 81, in _forward_xp
y = xp.tensordot(self.col, W, (axes, axes)).astype(x.dtype, copy=False)
File "/usr/local/lib/python3.5/dist-packages/cupy/linalg/product.py", line 193, in tensordot
return core.tensordot_core(a, b, None, n, m, k, ret_shape)
File "cupy/core/core.pyx", line 3206, in cupy.core.core.tensordot_core (cupy/core/core.cpp:80257)
File "cupy/core/core.pyx", line 3242, in cupy.core.core.tensordot_core (cupy/core/core.cpp:79283)
File "cupy/core/core.pyx", line 82, in cupy.core.core.ndarray.__init__ (cupy/core/core.cpp:6389)
File "cupy/cuda/memory.pyx", line 283, in cupy.cuda.memory.alloc (cupy/cuda/memory.cpp:6078)
File "cupy/cuda/memory.pyx", line 436, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:9314)
File "cupy/cuda/memory.pyx", line 452, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:9220)
File "cupy/cuda/memory.pyx", line 347, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:7907)
File "cupy/cuda/memory.pyx", line 373, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:7682)
File "cupy/cuda/memory.pyx", line 263, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:6020)
File "cupy/cuda/memory.pyx", line 264, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5941)
File "cupy/cuda/memory.pyx", line 35, in cupy.cuda.memory.Memory.__init__ (cupy/cuda/memory.cpp:1775)
File "cupy/cuda/runtime.pyx", line 207, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:3429)
File "cupy/cuda/runtime.pyx", line 130, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:2241)
cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory
mldl@mldlUB1604:~/ub16_prj/chainer-qrnn/seq2seq$
What's the data format after preprocessing?
Like this:
发而皆中节,谓之和。
中也者,天下之大本也。
和也者,天下之达道也。
致中和,天地位焉,万物育焉。
or like this:
发 而 皆 中 节 , 谓 之 和 。
中 也 者 , 天 下 之 大 本 也 。
和 也 者 , 天 下 之 达 道 也 。
致 中 和 , 天 地 位 焉 , 万 物 育 焉 。
Your code uses conv1D, and your comment says:
# remove right paddings
# e.g.
# kernel_size = 3
# pad = 2
# input sequence with paddings:
# [0, 0, x1, x2, x3, 0, 0]
# |< t1 >|
# |< t2 >|
# |< t3 >|
My question is: as far as I know, a convolution takes a weighted sum of the neighboring elements around a center element and assigns the result to that center position. That is, for [0, 0, x1] the weighted sum would be placed at the position of the second 0, and for [0, x1, x2] it would be placed at the position of x1.
But the paper says the filters must not allow any timestep to access information from future timesteps. How is that achieved? In the case [0, x1, x2], the output at x1's position would access information from x2.
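For what it's worth, the standard trick behind "masked" or causal convolutions (this toy implementation is my own illustration, not the repository's code) is to pad only on the left with k − 1 zeros and align the output so that the result at step t is a weighted sum of x[t−k+1 .. t] only, i.e. the kernel's right edge sits on the current step rather than its center:

```python
import numpy as np


def causal_conv1d(x, w):
    """Causal 1-D convolution: the output at step t depends only on
    x[0..t].  Left-pad with (k - 1) zeros, then for each t take the
    weighted sum of the k values ending at t."""
    k = len(w)
    padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([padded[t:t + k] @ w for t in range(len(x))])


x = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 0.0, 1.0])   # kernel whose last tap copies the current step
print(causal_conv1d(x, w))      # -> [1. 2. 3.], no future leakage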
Hello.
I am trying to reproduce the QRNN language-model experiment from Quasi-Recurrent Neural Networks [1611.01576] that you implemented, but the perplexity does not converge and I am stuck. If possible, could you tell me the hyperparameters you used for training (number of training iterations, etc.)?
Since you already define `__call__` in QRNN, why do you also define `forward_one_step`? I can't find where the code calls it.
The code is at https://github.com/musyoku/chainer-qrnn/blob/master/qrnn.py#L114 and https://github.com/musyoku/chainer-qrnn/blob/master/qrnn.py#L116
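A plausible reading (my interpretation, not confirmed by the author): `__call__` processes a whole padded sequence in parallel at training time, while `forward_one_step` is the stateful per-token path used during autoregressive decoding; the out-of-memory traceback earlier on this page shows it being called from `decode_one_step` in seq2seq/model.py. A toy illustration of this common two-path pattern (names and the recurrence are illustrative only):

```python
class ToyRecurrent:
    """Minimal model with a full-sequence path and a one-step path."""

    def __init__(self):
        self.state = 0.0

    def __call__(self, sequence):
        # Training path: consume the entire sequence at once.
        outputs = []
        for x in sequence:
            self.state = 0.5 * self.state + x
            outputs.append(self.state)
        return outputs

    def forward_one_step(self, x):
        # Decoding path: one token in, one output out, state carried over.
        self.state = 0.5 * self.state + x
        return self.state


full = ToyRecurrent()([1.0, 2.0])
m = ToyRecurrent()
stepwise = [m.forward_one_step(1.0), m.forward_one_step(2.0)]
print(full == stepwise)  # the two paths produce the same outputs
```

During generation the next input token is not known in advance, so the model must be driven one step at a time, which is why a separate incremental method exists even though `__call__` already covers training.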
self.ct = (1 - ft) * zt * xt
self.ct = ft * self.ct + it * zt * xt
But I checked formulas (4) and (5) in the original paper, and the right-most `xt` is not involved in the calculation of `ct`.
How should I understand that? Why do you multiply by `xt`?
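For reference, if I recall the paper (Bradbury et al., 2016) correctly, the fo-pooling recurrence the question refers to reads:

```latex
% QRNN fo-pooling:
c_t = f_t \odot c_{t-1} + (1 - f_t) \odot z_t
h_t = o_t \odot c_t
```

There is indeed no extra $x_t$ factor in the paper's recurrence; the additional `xt` term in the quoted code is exactly what the question is asking about.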