timeseries_seq2seq's Issues

Is this a typo? Otherwise, can you please explain?

In the two notebooks Conv_Full and Conv_Full_Exog, in the code block where you define your model (right after "3. Building the Model - Architecture"), you have the following lines:

z = Conv1D(16, 1, padding='same', activation='relu')(z) 
x = Add()([x, z])    
skips.append(z)

Should the middle line be,

z = Conv1D(16, 1, padding='same', activation='relu')(z) 
z = Add()([x, z])    
skips.append(z)

That is, the result of Add() is sent back to z, and that is what gets appended to skips.
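For context, the original WaveNet residual block keeps two separate paths: the residual sum feeds the next layer, while the branch output z is what gets collected for the skip connections. Under that reading the notebook's version may be intentional. A minimal numpy sketch of that pattern (the conv function is a hypothetical stand-in, not the notebook's layer):

```python
import numpy as np

def conv1x1(z):
    # Hypothetical stand-in for Conv1D(16, 1, activation='relu')
    return np.maximum(z, 0.0)

x = np.ones(8)          # residual stream
skips = []
for _ in range(3):      # three dilation blocks
    z = conv1x1(x)      # branch output
    x = x + z           # residual: the Add() result feeds the NEXT block via x
    skips.append(z)     # skip: the branch output z is what gets collected

out = sum(skips)        # skip outputs are summed downstream
print(out.shape)        # (8,)
```

With this structure, sending the Add() result back into z would merge the residual and skip paths into one, which is a different (and not obviously better) architecture.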

such a simple yet effective method

Thank you for such an interesting and effective method.
I'm toying with a single time series with multivariate input and multistep output. Can you point me to the right way to construct a model for my case?

Is it always better for the convnet to see the entire time series at once? How about splitting one time series into multiple training inputs?
Also, should I use a 2D convnet instead of 1D for multivariate time series?

I'm sorry if what I'm saying makes no or little sense. I'm still learning.

Thank you so much for making your work public.
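On the Conv1D vs. Conv2D point: for multivariate series the usual approach is to keep Conv1D and put the variables on the channel axis, so the kernel slides over time while mixing all features at each step. A numpy sketch of what a Conv1D with 8 filters and kernel size 2 computes (all sizes here are made up for illustration):

```python
import numpy as np

# Hypothetical multivariate series: one sample, 100 time steps, 3 variables.
# For Conv1D, the variables go on the channel axis: (batch, time, channels).
x = np.random.rand(1, 100, 3)

# A width-2 kernel that mixes all 3 channels into 8 filters,
# i.e. the weights Conv1D(filters=8, kernel_size=2) would learn.
w = np.random.rand(2, 3, 8)

# Valid 1D convolution over the time axis.
out = np.stack(
    [np.tensordot(x[:, t:t + 2, :], w, axes=([1, 2], [0, 1]))
     for t in range(100 - 1)],
    axis=1,
)
print(out.shape)  # (1, 99, 8): time shrinks by kernel_size - 1, channels -> filters
```

A 2D convolution would instead slide a window across the feature axis as well, which only makes sense when neighboring features have a spatial relationship (as pixels do); independent variables usually don't.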

Requirements.txt file with package versions?

Hi,
First, thank you for this incredible work. Do you have a requirements.txt file for the environment, i.e. the package versions and the Python version, so that I can reproduce the results?

Appreciate your help in advance.
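In the absence of a checked-in requirements.txt, one generic way to capture an environment after running the notebooks successfully (these are standard pip commands, not something from this repo):

```shell
# Record the exact package versions of the environment that ran the notebooks.
python3 -m pip freeze > requirements.txt

# Record the interpreter version alongside it.
python3 --version
```

Someone else can then recreate the environment with `python3 -m pip install -r requirements.txt` on a matching Python version.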

output variable

Does anyone know what the output variable is for this Wavenet model?

Dimension Error

I am working on clinical time series forecasting with different architectures, including WaveNet. I came across your WaveNet blog post, which was very informative, and reproduced the work.
However, I get an error when using a codebase with the WaveNet model. I have tried several times but can't find a solution. Please advise. Thank you.

ValueError: Error when checking target: expected lambda_1 to have 3 dimensions, but got array with shape (64, 1)
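One common cause of this particular error: the model's final Lambda layer emits a 3-D tensor of shape (batch, pred_steps, 1), so the target array passed to fit must carry a time axis too. A sketch of the fix, assuming the (64, 1) array holds one target value per sample (pred_steps = 1 here is an assumption, not the notebook's value):

```python
import numpy as np

pred_steps = 1                    # hypothetical; use your actual horizon
y = np.zeros((64, 1))             # the 2-D shape that triggers the error
decoder_target_data = y.reshape(-1, pred_steps, 1)
print(decoder_target_data.shape)  # (64, 1, 1) -- now 3-D, matching the model output
```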

Attention with Seq To Seq

Hi Joseph,
Thanks for such precise notebooks. I am wondering whether you are planning to use attention as part of the LSTM to see if it does better.

running into dimension error

Hi,

I am new to Keras, and I'm trying to learn the model by adapting it to predict a dataset of 82 time series with 1234 time steps each.

my series_array shape is (82, 1234)
assembled exog_array with shape (82, 1234, 59)
set pred_steps = 5 (instead of 60)

didn't change anything else in the model architecture.

Then, when I try to fit the model, I get the following error:
InvalidArgumentError: Incompatible shapes: [82,60,1] vs. [82,4,1]
[[{{node training/Adam/gradients/loss_1/lambda_1_loss/sub_grad/BroadcastGradientArgs}}]]

Where does it go wrong and how can I fix this?
Any help is appreciated!

Thanks!

===============================================================
The full error message is below:

Epoch 1/15

InvalidArgumentError Traceback (most recent call last)
in
10 history = model.fit(encoder_input_data, decoder_target_data,
11 batch_size=batch_size,
---> 12 epochs=epochs)

D:\Anaconda3_5.3.1\envs\MLBt\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1037 initial_epoch=initial_epoch,
1038 steps_per_epoch=steps_per_epoch,
-> 1039 validation_steps=validation_steps)
1040
1041 def evaluate(self, x=None, y=None,

D:\Anaconda3_5.3.1\envs\MLBt\lib\site-packages\keras\engine\training_arrays.py in fit_loop(model, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
197 ins_batch[i] = ins_batch[i].toarray()
198
--> 199 outs = f(ins_batch)
200 outs = to_list(outs)
201 for l, o in zip(out_labels, outs):

D:\Anaconda3_5.3.1\envs\MLBt\lib\site-packages\keras\backend\tensorflow_backend.py in call(self, inputs)
2713 return self._legacy_call(inputs)
2714
-> 2715 return self._call(inputs)
2716 else:
2717 if py_any(is_tensor(x) for x in inputs):

D:\Anaconda3_5.3.1\envs\MLBt\lib\site-packages\keras\backend\tensorflow_backend.py in _call(self, inputs)
2673 fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
2674 else:
-> 2675 fetched = self._callable_fn(*array_vals)
2676 return fetched[:len(self.outputs)]
2677

D:\Anaconda3_5.3.1\envs\MLBt\lib\site-packages\tensorflow_core\python\client\session.py in call(self, *args, **kwargs)
1470 ret = tf_session.TF_SessionRunCallable(self._session._session,
1471 self._handle, args,
-> 1472 run_metadata_ptr)
1473 if run_metadata:
1474 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

InvalidArgumentError: Incompatible shapes: [82,60,1] vs. [82,4,1]
[[{{node training/Adam/gradients/loss_1/lambda_1_loss/sub_grad/BroadcastGradientArgs}}]]
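The [82,60,1] in the message is a hint: 60 was the original pred_steps, so at least one derived array (most likely the decoder target) was still built from the old value. Every array sliced with pred_steps has to be rebuilt after changing it. A quick numpy sanity check to run before model.fit, with shapes mirroring the setup above (the slicing is illustrative, not the notebook's exact code):

```python
import numpy as np

pred_steps = 5
series_array = np.random.rand(82, 1234)

# Target: the last pred_steps values of each series, with a channel axis added.
decoder_target_data = series_array[:, -pred_steps:, np.newaxis]

# If this fails, some derived array is still using the old pred_steps = 60.
assert decoder_target_data.shape == (82, pred_steps, 1)
print(decoder_target_data.shape)
```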

last_step_exog for i = pred_step in full_exog

I am trying to rebuild your notebook code in R.

Doing so, I was wondering about this code in the prediction function, inside the loop for i in range(pred_steps):

last_step_exog = input_tensor[:,[(-pred_steps+1)+i],1:]

When i = pred_steps - 1, last_step_exog =

input_tensor[:, [(-pred_steps + 1) + (pred_steps - 1)], 1:]

= input_tensor[:, [0], 1:]

So over the loop that is the last (pred_steps - 1) dates of the input, plus the first date???

Could you comment on this? I do not get it.
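The index arithmetic itself checks out on a toy example: the bracketed index runs -pred_steps+1, ..., -1, 0, and a plain 0 does address the first time step. Whether that is intended depends on how input_tensor is extended inside the prediction loop, which only the author can confirm:

```python
pred_steps = 3  # toy value, just to show the index sequence

indices = [(-pred_steps + 1) + i for i in range(pred_steps)]
print(indices)  # [-2, -1, 0]: end-relative indices, then the first step
```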

Validation error vs. training error

The code is very useful and explained very effectively. I have one question: why is the validation error lower than the training error? That seems counter-intuitive. I would very much appreciate your insight on this.

How to implement with columnar time series?

Hi - awesome notebooks! I have a dataset with the date in a column rather than across the rows as in your example. So - something like this:

Date, val1, val2
2020-01-01, 12, 14
2020-01-02, 14, 17
2020-01-03, 13, 19

and so on..

Is there an easy way to adjust for this in the data transformation step?

Thanks!
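Assuming the data sits in a pandas DataFrame shaped like the sample above, getting to the layout the notebooks expect (one row per series, dates across the columns) is a set_index plus transpose. The column names below come from the sample; the rest is a sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2020-01-01", "2020-01-02", "2020-01-03"],
    "val1": [12, 14, 13],
    "val2": [14, 17, 19],
})

# One series per row, one date per column.
wide = df.set_index("Date").T
print(wide.shape)  # (2, 3): 2 series (val1, val2) x 3 dates
```

From there the array form is just `wide.values`, which matches the (n_series, n_timesteps) shape used in the data transformation step.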

Implementation questions

Hi Joseph,

Thanks for posting this notebook on a WaveNet implementation in the context of pure time-series forecasting. It has been very helpful for understanding the model. However, I have the following questions about your implementation in the notebook:

  1. Why is there no activation function applied to the Conv1D layers? It looks like all the non-linearity of the model comes from the last fully connected layer.
  2. Should the dilation_rates be (2, 4, 8, ...) according to the diagram?
  3. Since only the last 14 output values are used in the loss calculation, can we truncate the input sequence (encoding_interval) to the width of the receptive field (128) without affecting training?

Thanks!
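On question 3, the figure of 128 can be checked directly: with kernel size 2, each dilated causal layer widens the receptive field by its dilation rate, so rates 1 through 64 give 1 + 127 = 128. The rate list below is inferred from the question, not read from the notebook:

```python
dilation_rates = [2 ** i for i in range(7)]  # [1, 2, 4, 8, 16, 32, 64] -- assumed
kernel_size = 2

# Each layer adds (kernel_size - 1) * dilation steps of history.
receptive_field = 1 + sum((kernel_size - 1) * d for d in dilation_rates)
print(receptive_field)  # 128
```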
