
graph-wavenet's People

Contributors

gcorso, nnzhan, sshleifer


graph-wavenet's Issues

Generate_training_data

Hi,
I notice that in generate_training_data.py, the script first uses a sliding window to reshape the data into (num_samples - seq_len * 2 + 1, seq_len, num_nodes, num_features) and then splits it with num_train = round(num_samples * 0.7). This causes a data leakage problem, because windows near the split boundary contain timesteps from both the training and test portions. It would be better to split the dataset first and then apply the sliding window, as sketched below.
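For illustration, a minimal sketch of the split-then-window order being suggested (the shapes, the 70/10/20 split, and the helper name make_windows are assumptions for this example, not the repository's code):

import numpy as np

def make_windows(data, seq_len=12, horizon=12):
    # Slice a (T, num_nodes, num_features) array into (x, y) sample pairs.
    xs, ys = [], []
    for t in range(len(data) - seq_len - horizon + 1):
        xs.append(data[t:t + seq_len])
        ys.append(data[t + seq_len:t + seq_len + horizon])
    return np.stack(xs), np.stack(ys)

data = np.random.rand(1000, 207, 2)              # toy stand-in for METR-LA
num_train = round(len(data) * 0.7)
num_val = round(len(data) * 0.1)

# Split the raw series first, then window each split, so no test timestep
# can leak into a training sample.
train_x, train_y = make_windows(data[:num_train])
val_x, val_y = make_windows(data[num_train:num_train + num_val])
test_x, test_y = make_windows(data[num_train + num_val:])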

Replicating Paper Results

I ran the Forward Backward Adaptive Command:

python train.py --device cuda:0 --gcn_bool --adjtype doubletransition --addaptadj  --randomadj  --epoch 100 $ep --expid $expid

and got what I think are slightly worse results than Tables 2 and 3 of the paper.

Table 3/METR-LA:

MAE, RMSE, MAPE = 3.04, 6.09, 8.23%
My results: 3.0737, 6.1674, 8.30%

Does that sound like a normal amount of variation, a wrong command, or a bug?


Training finished
The valid loss on best model is 2.7565
Evaluate best model on test data for horizon 1, Test MAE: 2.2372, Test MAPE: 0.0533, Test RMSE: 3.8697
Evaluate best model on test data for horizon 2, Test MAE: 2.5196, Test MAPE: 0.0626, Test RMSE: 4.6753
Evaluate best model on test data for horizon 3, Test MAE: 2.7171, Test MAPE: 0.0695, Test RMSE: 5.2287
Evaluate best model on test data for horizon 4, Test MAE: 2.8760, Test MAPE: 0.0754, Test RMSE: 5.6681
Evaluate best model on test data for horizon 5, Test MAE: 3.0037, Test MAPE: 0.0803, Test RMSE: 6.0149
Evaluate best model on test data for horizon 6, Test MAE: 3.1157, Test MAPE: 0.0844, Test RMSE: 6.3154
Evaluate best model on test data for horizon 7, Test MAE: 3.2154, Test MAPE: 0.0882, Test RMSE: 6.5706
Evaluate best model on test data for horizon 8, Test MAE: 3.3002, Test MAPE: 0.0913, Test RMSE: 6.7903
Evaluate best model on test data for horizon 9, Test MAE: 3.3777, Test MAPE: 0.0941, Test RMSE: 6.9856
Evaluate best model on test data for horizon 10, Test MAE: 3.4449, Test MAPE: 0.0965, Test RMSE: 7.1507
Evaluate best model on test data for horizon 11, Test MAE: 3.5081, Test MAPE: 0.0989, Test RMSE: 7.2993
Evaluate best model on test data for horizon 12, Test MAE: 3.5691, Test MAPE: 0.1011, Test RMSE: 7.4404

On average over 12 horizons, Test MAE: 3.0737, Test MAPE: 0.0830, Test RMSE: 6.1674
Total time spent: 4299.2252

Why is only the first output dimension selected?

The code metrics = engine.train(trainx, trainy[:, 0, :, :]) at line 84 of train.py seems to predict only one of the D feature dimensions, but the paper says the output dimension is D.
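For reference, a minimal sketch of what that slicing does (the tensor layout and the meaning of the two channels are assumptions based on the usual METR-LA setup, not something stated in the issue):

import torch

# Assumed layout: trainy is (batch, num_features, num_nodes, horizon), with
# channel 0 the traffic speed and channel 1 the time-of-day encoding.
trainy = torch.randn(64, 2, 207, 12)

# Taking index 0 on the feature axis keeps only the first channel as the
# supervised target, so the model is trained to predict speed alone.
target = trainy[:, 0, :, :]            # shape: (64, 207, 12)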

For the 15 min, 30 min and 60 min predictions in Table 2, is it enough to keep the output length equal to 12?

Hi, I would like to ask: for Table 2, is it correct to set output length = 12, output all 12 steps at once, and then fill the values at steps 3, 6 and 12 into Table 2? Or does output length = 12 correspond only to the 60 min horizon, so that we need to set the output length to 3 and 6 separately to get the 15 min and 30 min values in the table? (I was afraid I could not explain this clearly in English, so I wrote it in Chinese.) I hope someone can clear up my confusion, thank you very much!
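For reference, a minimal sketch of reading the 15/30/60 min horizons out of a single 12-step prediction (the tensor layout is an assumption for this example):

import torch

# yhat: model output, assumed shape (batch, num_nodes, 12), where each step
# is 5 minutes ahead on METR-LA.
yhat = torch.randn(64, 207, 12)

pred_15min = yhat[..., 2]     # 3rd step  (index 2)
pred_30min = yhat[..., 5]     # 6th step  (index 5)
pred_60min = yhat[..., 11]    # 12th step (index 11)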

adaptive adj problem

Graph WaveNet is a very good paper for spatio-temporal prediction. Recently I ran some experiments on the adaptive adjacency matrix, but I found that the adaptive matrix learned by the GWN model does not look like the one in the paper.

The adaptive adjacency matrix shown in the paper looks like this:
[image]

but the result I get looks like this:
[image]

I load the model from a .pth file with good performance.
[image]

I cannot figure out why this happens. Could any experts explain this? Thank you very much!

my loss getting bigger and bigger when I train

Hi, I have a question. When I train, my loss keeps getting bigger and bigger, and I don't know what's causing it.

Namespace(addaptadj=True, adjdata='data/sensor_graph/adj_mx.pkl', adjtype='doubletransition', aptonly=False, batch_size=64, data='data/METR-LA', device='cuda:0', dropout=0.3, epochs=100, expid=1, gcn_bool=True, in_dim=2, learning_rate=0.001, nhid=32, num_nodes=207, print_every=50, randomadj=True, save='./garage/metr', seq_length=12, weight_decay=0.0001)
start training...
Iter: 000, Train Loss: 11.5344, Train MAPE: 0.3075, Train RMSE: 13.9952
Iter: 050, Train Loss: 377.5500, Train MAPE: 7.0916, Train RMSE: 630.7411
Iter: 100, Train Loss: 352.2685, Train MAPE: 6.7022, Train RMSE: 619.5583
Iter: 150, Train Loss: 376.2829, Train MAPE: 7.7274, Train RMSE: 651.6628
Iter: 200, Train Loss: 415.7277, Train MAPE: 8.6364, Train RMSE: 692.0403
Iter: 250, Train Loss: 427.4500, Train MAPE: 8.3187, Train RMSE: 706.2341
Iter: 300, Train Loss: 403.1333, Train MAPE: 7.4553, Train RMSE: 686.9592
Iter: 350, Train Loss: 421.0964, Train MAPE: 8.3744, Train RMSE: 704.3835
Epoch: 001, Inference Time: 4.2194 secs
Epoch: 001, Train Loss: 388.0823, Train MAPE: 7.5629, Train RMSE: 660.2726, Valid Loss: 409.1663, Valid MAPE: 8.0573, Valid RMSE: 683.0002, Training Time: 1063.4692/epoch
Iter: 000, Train Loss: 383.0825, Train MAPE: 7.3515, Train RMSE: 672.6279
Iter: 050, Train Loss: 436.3956, Train MAPE: 7.9620, Train RMSE: 721.0291
Iter: 100, Train Loss: 423.3630, Train MAPE: 7.6969, Train RMSE: 715.8647
Iter: 150, Train Loss: 414.0448, Train MAPE: 9.1630, Train RMSE: 713.6893
Iter: 200, Train Loss: 396.0155, Train MAPE: 7.4647, Train RMSE: 699.6281
Iter: 250, Train Loss: 433.5888, Train MAPE: 8.1940, Train RMSE: 735.1317
Iter: 300, Train Loss: 445.8810, Train MAPE: 8.4676, Train RMSE: 747.1703
Iter: 350, Train Loss: 430.3223, Train MAPE: 8.1599, Train RMSE: 734.9741
Epoch: 002, Inference Time: 3.6677 secs
Epoch: 002, Train Loss: 423.8275, Train MAPE: 8.3188, Train RMSE: 721.4823, Valid Loss: 434.8144, Valid MAPE: 8.5525, Valid RMSE: 726.0127, Training Time: 122.6467/epoch
Iter: 000, Train Loss: 429.0443, Train MAPE: 7.9226, Train RMSE: 733.3237
Iter: 050, Train Loss: 417.4486, Train MAPE: 8.1126, Train RMSE: 723.7980
Iter: 100, Train Loss: 428.7359, Train MAPE: 8.0147, Train RMSE: 734.3259
Iter: 150, Train Loss: 443.9696, Train MAPE: 8.9476, Train RMSE: 747.8458
Iter: 200, Train Loss: 431.4466, Train MAPE: 8.3825, Train RMSE: 737.4367
Iter: 250, Train Loss: 418.8755, Train MAPE: 7.8954, Train RMSE: 726.3869
Iter: 300, Train Loss: 438.4908, Train MAPE: 8.5767, Train RMSE: 744.3538
Iter: 350, Train Loss: 421.3578, Train MAPE: 8.0956, Train RMSE: 728.9158
Epoch: 003, Inference Time: 4.0353 secs
Epoch: 003, Train Loss: 433.9845, Train MAPE: 8.4935, Train RMSE: 739.3757, Valid Loss: 437.7613, Valid MAPE: 8.6092, Valid RMSE: 730.9322, Training Time: 120.3957/epoch

question about padding

Thanks for your wonderful code! I am confused about why you use nn.functional.pad() on the training and eval input but not on the test input. Can you explain the reason?
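For context, a minimal sketch of the kind of padding call being discussed (the tensor layout and the exact pad widths here are assumptions for this example):

import torch
import torch.nn as nn

# Assumed input layout: (batch, features, nodes, time steps).
x = torch.randn(64, 2, 207, 12)

# nn.functional.pad pads the last dimension first, so (1, 0, 0, 0) prepends
# one zero time step and turns the length-12 window into length 13, which
# matches the model's receptive field. Whether the same call is needed on
# the test input is exactly what this issue is asking.
x_padded = nn.functional.pad(x, (1, 0, 0, 0))
print(x_padded.shape)        # torch.Size([64, 2, 207, 13])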

Printing of results

Hi,

Currently I'm working on the Graph-WaveNet code with my own dataset (predicting the number of patients occupying beds in a hospital).

There is one thing I don't fully understand from the code.

In these lines of code (test.py lines 100-104):

y12 = realy[:,99,11].cpu().detach().numpy()
yhat12 = scaler.inverse_transform(yhat[:,99,11]).cpu().detach().numpy()

y3 = realy[:,99,2].cpu().detach().numpy()
yhat3 = scaler.inverse_transform(yhat[:,99,2]).cpu().detach().numpy()

Can anyone explain what this piece of code is doing, and what the difference is between y12 and y3?

I really hope someone can help me! :)

Thanks in advance!
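For reference, a minimal sketch of what the two slices above appear to select (the shapes and the 5-minute step are assumptions from the METR-LA setup; with a custom dataset the step length may differ):

import numpy as np

# Assumed shapes: realy and yhat are (num_samples, num_nodes, 12 horizons).
realy = np.random.rand(500, 207, 12)
yhat = np.random.rand(500, 207, 12)

# Node index 99, horizon index 11 -> the 12-step-ahead (60 min) series for
# that single sensor; horizon index 2 -> the 3-step-ahead (15 min) series.
y12 = realy[:, 99, 11]
y3 = realy[:, 99, 2]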

A question about the data input

Thanks for the paper and the code!

I would like to ask: when predicting 60 minutes of traffic flow, the data from t1-t12 is fed in first and the traffic flow at t13-t24 is predicted (i.e. one hour ahead); is the second step then to take t25-t36 as input and predict t37-t48, and so on? Is my understanding correct?
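For reference, a minimal sketch of how the evaluation windows are usually built in this kind of setup (an assumption mirroring DCRNN-style data generation, not a quote of this repository's code): the window typically slides by one step rather than by twelve, so consecutive samples overlap.

import numpy as np

T, seq_len, horizon = 100, 12, 12
series = np.arange(T)

samples = [(series[t:t + seq_len], series[t + seq_len:t + seq_len + horizon])
           for t in range(T - seq_len - horizon + 1)]

# The second sample's input starts one step later, not at t25: windows
# advance one step at a time.
print(samples[0][0][:3], samples[0][1][:3])   # x: [0 1 2]  y: [12 13 14]
print(samples[1][0][:3], samples[1][1][:3])   # x: [1 2 3]  y: [13 14 15]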

AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

Traceback (most recent call last):
File "test.py", line 111, in
main()
File "test.py", line 50, in main
model.load_state_dict(torch.load(args.checkpoint))
File "/public/home/hpc0919170203/anaconda3/envs/GWN2/lib/python3.7/site-packages/torch/serialization.py", line 386, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/public/home/hpc0919170203/anaconda3/envs/GWN2/lib/python3.7/site-packages/torch/serialization.py", line 548, in _load
_check_seekable(f)
File "/public/home/hpc0919170203/anaconda3/envs/GWN2/lib/python3.7/site-packages/torch/serialization.py", line 194, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/public/home/hpc0919170203/anaconda3/envs/GWN2/lib/python3.7/site-packages/torch/serialization.py", line 187, in raise_err_msg
raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
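A minimal sketch of what appears to be going wrong here (an assumption: test.py calls torch.load(args.checkpoint) and the --checkpoint argument was left at its default of None, so torch.load receives None instead of a file path):

import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--checkpoint', type=str, default=None)
args = parser.parse_args([])                    # simulate running without --checkpoint

# torch.load(None) tries to treat None as a file object and fails inside
# _check_seekable with this AttributeError, so the likely fix is simply to
# pass the path of a saved model, e.g. python test.py --checkpoint <model.pth>
if args.checkpoint is None:
    raise SystemExit('Pass --checkpoint <path to a saved .pth file>')
state_dict = torch.load(args.checkpoint)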

I need some help! Expected 2D (unbatched) or 3D (batched) input to conv1d

Traceback (most recent call last):
File "C:\Users\Administrator\Desktop\Graph-WaveNet-master\train.py", line 177, in
main()
File "C:\Users\Administrator\Desktop\Graph-WaveNet-master\train.py", line 87, in main
metrics = engine.train(trainx, trainy[:,0,:,:])
File "C:\Users\Administrator\Desktop\Graph-WaveNet-master\engine.py", line 17, in train
output = self.model(input)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Administrator\Desktop\Graph-WaveNet-master\model.py", line 175, in forward
gate = self.gate_convs[i](residual)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\conv.py", line 307, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\conv.py", line 303, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,

RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [64, 32, 207, 13]

Process finished with exit code 1
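A minimal sketch of the commonly suggested workaround (an assumption about this setup, not an official fix): model.py declares the dilated filter/gate convolutions as nn.Conv1d with a 2D kernel and feeds them 4D tensors of shape (batch, channels, nodes, time). Older PyTorch tolerated this; recent versions raise the RuntimeError above. Declaring the same layers as nn.Conv2d keeps the computation identical and accepts the 4D input.

import torch
import torch.nn as nn

x = torch.randn(64, 32, 207, 13)                 # the shape from the error

# The same layer as in model.py, but declared as Conv2d instead of Conv1d.
filter_conv = nn.Conv2d(in_channels=32, out_channels=32,
                        kernel_size=(1, 2), dilation=(1, 1))
print(filter_conv(x).shape)                      # torch.Size([64, 32, 207, 12])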

Question

How do I solve this problem?
AttributeError: 'gwnet' object has no attribute 'nodevec1'
Thanks a lot!

About inverse_transform

Hi, I have run your code on some datasets that I constructed, and I found a problem in computing the MAPE metric.

You normalize the ground-truth data to a small scale and apply the inverse normalization when evaluating the model. That can turn a value like 0 into a small value like 1e-5. The MAPE computation then takes that small value into account and produces a very large MAPE, because the mask only filters out values exactly equal to zero rather than these near-zero values.
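A minimal sketch of one possible remedy (an assumption, not the repository's metric code): mask out near-zero ground-truth values with a small threshold instead of comparing exactly to zero, so inverse-normalized values like 1e-5 do not blow up the MAPE.

import torch

def masked_mape(preds, labels, eps=1e-3):
    # Keep only targets whose magnitude exceeds the threshold.
    mask = (labels.abs() > eps).float()
    mask = mask / mask.mean()                      # renormalize the mask
    mape = torch.abs(preds - labels) / labels.abs().clamp(min=eps)
    mape = torch.nan_to_num(mape * mask)
    return mape.mean()

# The near-zero target is ignored instead of producing a huge percentage error.
print(masked_mape(torch.tensor([1.0, 2.0]), torch.tensor([1e-5, 2.0])))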

Question

AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

How can I solve this problem? Please help!

RuntimeError: size of dimension does not match previous size, operand 1, dim 0.

I don't know how to solve this problem. Thanks, everyone!
Traceback (most recent call last):
File "train.py", line 177, in
main()
File "train.py", line 88, in main
metrics = engine.train(trainx, trainy[:,0,:,:])
File "/home/mist/Graph-WaveNet-master/engine.py", line 17, in train
output = self.model(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mist/Graph-WaveNet-master/model.py", line 192, in forward
x = self.gconv[i](x, new_supports)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mist/Graph-WaveNet-master/model.py", line 36, in forward
x1 = self.nconv(x,a)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mist/Graph-WaveNet-master/model.py", line 13, in forward
x = torch.einsum('ncvl,vw->ncwl',(x,A))
File "/usr/local/lib/python3.6/dist-packages/torch/functional.py", line 342, in einsum
return einsum(equation, *_operands)
File "/usr/local/lib/python3.6/dist-packages/torch/functional.py", line 344, in einsum
return _VF.einsum(equation, operands) # type: ignore
RuntimeError: size of dimension does not match previous size, operand 1, dim 0
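A minimal sketch of the likely cause (an assumption, not a confirmed diagnosis): the einsum 'ncvl,vw->ncwl' contracts the node axis v of the input with the first axis of the adjacency matrix, so the two sizes must match. Using a dataset whose node count differs from --num_nodes or from the adjacency pickle reproduces this kind of size-mismatch error.

import torch

x = torch.randn(64, 32, 207, 12)      # (batch, channels, nodes, time)
A_good = torch.randn(207, 207)
A_bad = torch.randn(170, 170)         # wrong node count

print(torch.einsum('ncvl,vw->ncwl', x, A_good).shape)   # works: (64, 32, 207, 12)
torch.einsum('ncvl,vw->ncwl', x, A_bad)                 # raises a size-mismatch RuntimeError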
