
lstm-neural-network-for-time-series-prediction's People

Contributors

jaungiers


lstm-neural-network-for-time-series-prediction's Issues

TypeError: only size-1 arrays can be converted to Python scalars

Hi,

I installed all the requirements (pip install -r requirements.txt), using python3 (tried 3.5 and 3.6).
This is Debian buster, but I have also tried several Docker images.

Using TensorFlow backend.
[Model] Model Compiled
Time taken: 0:00:00.955068
Traceback (most recent call last):
  File "run.py", line 84, in <module>
    main()
  File "run.py", line 46, in main
    normalise = configs['data']['normalise']
  File "/tmp/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 44, in get_train_data
    x, y = self._next_window(i, seq_len, normalise)
  File "/tmp/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 68, in _next_window
    window = self.normalise_windows(window, single_window=True)[0] if normalise else window
  File "/tmp/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 78, in normalise_windows
    normalised_window = [((float(p) / float(window[0])) - 1) for p in window]
  File "/tmp/FinanceModels/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 78, in <listcomp>
    normalised_window = [((float(p) / float(window[0])) - 1) for p in window]
TypeError: only size-1 arrays can be converted to Python scalars

Anyone out there get the standard example to work?

NT.
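This error appears when each window row is an array rather than a scalar, i.e. the data has more than one column, so float(p) fails. Later versions of data_processor.py normalise column by column (the same code quoted in the ZeroDivisionError issue below); a minimal sketch of that per-column variant, assuming windows are 2-D numpy arrays of shape (seq_len, n_cols):

    import numpy as np

    def normalise_windows(window_data, single_window=False):
        # Normalise each column of each window against that column's first value.
        window_data = [window_data] if single_window else window_data
        normalised_data = []
        for window in window_data:
            normalised_window = []
            for col_i in range(window.shape[1]):
                normalised_col = [((float(p) / float(window[0, col_i])) - 1) for p in window[:, col_i]]
                normalised_window.append(normalised_col)
            # transpose so rows are time steps again
            normalised_data.append(np.array(normalised_window).T)
        return np.array(normalised_data)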

Predictions always up

Hello, thank you for this code.
The predictions always trend upwards. I have not made any changes to the code; I am running Python 3.6 and TensorFlow 1.2.1.
[screenshot: prediction]

TypeError: Expected int32, got <tensorflow.python.ops.variables.Variable object at 0x7f3d6a773a10> of type 'Variable' instead.

When I run the code, I run into the following problem:
Traceback (most recent call last):
  File "run.py", line 36, in <module>
    model = lstm.build_model([1, 50, 100, 1])
  File "/home/cuiyi/Downloads/SPARNN-release/data/LSTM/LSTM-Neural-Network-for-Time-Series-Prediction/lstm.py", line 51, in build_model
    return_sequences=True))
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 276, in add
    layer.create_input_layer(batch_input_shape, input_dtype)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 370, in create_input_layer
    self(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 487, in __call__
    self.build(input_shapes[0])
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/recurrent.py", line 710, in build
    self.W = K.concatenate([self.W_i, self.W_f, self.W_c, self.W_o])
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 716, in concatenate
    return tf.concat(axis, [to_dense(x) for x in tensors])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1047, in concat
    dtype=dtypes.int32).get_shape(
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
    as_ref=False)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got <tensorflow.python.ops.variables.Variable object at 0x7f3d6a773a10> of type 'Variable' instead.

ValueError when training with more than 1 epoch

Using the latest code base, a ValueError is thrown after training the first epoch.

  • Python 3.5.2
  • TensorFlow 1.10.1
  • Numpy 1.14.0
  • Keras 2.2.2
  • Matplotlib 2.1.2
[Model] Model Compiled
Time taken: 0:00:00.809316
[Model] Training Started
[Model] 2 epochs, 32 batch size, 124 batches per epoch
Epoch 1/2
124/124 [==============================] - 45s 363ms/step - loss: 0.0022
Epoch 2/2
  1/124 [..............................] - ETA: 36s - loss: 6.4717e-04
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-27-e3c645699881> in <module>()
     76 
     77 if __name__=='__main__':
---> 78     main()

<ipython-input-27-e3c645699881> in main()
     58         batch_size = configs['training']['batch_size'],
     59         steps_per_epoch = steps_per_epoch,
---> 60         save_dir = configs['model']['save_dir']
     61     )
     62 

/notebooks/storage/core/model.py in train_generator(self, data_gen, epochs, batch_size, steps_per_epoch, save_dir)
     81                         epochs=epochs,
     82                         callbacks=callbacks,
---> 83                         workers=1
     84 		)
     85 

/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python3.5/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1413             use_multiprocessing=use_multiprocessing,
   1414             shuffle=shuffle,
-> 1415             initial_epoch=initial_epoch)
   1416 
   1417     @interfaces.legacy_generator_methods_support

/usr/local/lib/python3.5/dist-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
    211                 outs = model.train_on_batch(x, y,
    212                                             sample_weight=sample_weight,
--> 213                                             class_weight=class_weight)
    214 
    215                 outs = to_list(outs)

/usr/local/lib/python3.5/dist-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1207             x, y,
   1208             sample_weight=sample_weight,
-> 1209             class_weight=class_weight)
   1210         if self._uses_dynamic_learning_phase():
   1211             ins = x + y + sample_weights + [1.]

/usr/local/lib/python3.5/dist-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    747             feed_input_shapes,
    748             check_batch_axis=False,  # Don't enforce the batch size.
--> 749             exception_prefix='input')
    750 
    751         if y is not None:

/usr/local/lib/python3.5/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    125                         ': expected ' + names[i] + ' to have ' +
    126                         str(len(shape)) + ' dimensions, but got array '
--> 127                         'with shape ' + str(data_shape))
    128                 if not check_batch_axis:
    129                     data_shape = data_shape[1:]

ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (8, 1)

Did some debugging using the example config for stock market prediction. For each step in epoch 1, generate_train_batch yields two arrays x_batch and y_batch of shape (32, 49, 1) and (32, 1) respectively.

At the start of epoch 2, model.fit_generator requests a batch of train data from the generator which is now exhausted. At this point generate_train_batch yields arrays of shape (17,) and (17, 1) which causes model.fit_generator to throw an error.

According to the Sequential model API documentation, model.fit_generator expects the generator to loop over its data indefinitely; an epoch finishes when steps_per_epoch batches have been seen by the model. However, the generate_train_batch implementation stops yielding data after one epoch (as described above).
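The diagnosis above suggests the fix: make the generator wrap around instead of running dry. A minimal sketch of one possible repair (the signature and the _next_window helper follow data_processor.py, but treat the details as assumptions):

    import numpy as np

    def generate_train_batch(self, seq_len, batch_size, normalise):
        # Method on the DataLoader class. fit_generator expects an endless
        # generator, so wrap around to the start of the data instead of
        # running off the end and yielding a short, ragged batch.
        i = 0
        while True:
            x_batch, y_batch = [], []
            for _ in range(batch_size):
                if i >= (self.len_train - seq_len):
                    i = 0  # restart from the beginning of the training data
                x, y = self._next_window(i, seq_len, normalise)
                x_batch.append(x)
                y_batch.append(y)
                i += 1
            yield np.array(x_batch), np.array(y_batch)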

TypeError when running

I tried installing the dependencies with both pip and pip3, and running the code with both python and python3, but I keep getting this:

Using TensorFlow backend.
> Loading data...
> Data Loaded. Compiling...
Traceback (most recent call last):
  File "run.py", line 36, in <module>
    model = lstm.build_model([1, 50, 100, 1])
  File "/root/LSTM/lstm.py", line 53, in build_model
    return_sequences=True))
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/recurrent.py", line 931, in __init__
    super(LSTM, self).__init__(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/recurrent.py", line 181, in __init__
    super(Recurrent, self).__init__(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 275, in __init__
    raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'input_dim')
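The 'input_dim' keyword was dropped from recurrent layers in Keras 2. The layer sizes from build_model([1, 50, 100, 1]) translate to the Keras 2 API roughly as follows (the Dropout placement is an assumption based on the original lstm.py):

    from keras.models import Sequential
    from keras.layers import LSTM, Dense, Dropout

    model = Sequential()
    # Keras 2: pass input_shape=(timesteps, features) instead of input_dim
    model.add(LSTM(50, input_shape=(None, 1), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(100, return_sequences=False))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer='rmsprop')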

Unused imports

The import math statements in data_processor.py and model.py, and the from keras.layers import Activation in model.py, are not used anywhere in the code.

Illegal instruction

Any idea what could have gone wrong?
All module versions match; however, I am using Python 3.6 on a CentOS system rather than Python 3.5.

[haxus@bud LSTM-Neural-Network-for-Time-Series-Prediction]$ python3.6 run.py
Using TensorFlow backend.
Illegal instruction

In addition, I see the following:
Installing collected packages: numpy, tensorflow
Found existing installation: numpy 1.15.0
Uninstalling numpy-1.15.0:
Successfully uninstalled numpy-1.15.0
Successfully installed numpy-1.14.5 tensorflow-1.10.0

But the README says numpy 1.15 should be installed.

I may have found the problem. As I do not have a GPU in my VM, I installed the CPU build of tensorflow rather than tensorflow-gpu. Should this work on GPU-less machines anyway?

Normalise Function?

Thanks for the review!
I don't understand this program's data processing. What is the purpose of the normalise function? Is it specific to stock data? Could it be done with other methods, and what effect would that have?
Thank you very much to anyone who takes the time to reply!
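For what it's worth, the normalise function expresses each point in a window as a relative change against the window's first value, n_i = p_i / p_0 - 1, which keeps inputs in a small range regardless of the absolute price level; it is not specific to stock data. A minimal sketch of the transform and its inverse:

    def normalise_window(window):
        # n_i = p_i / p_0 - 1: each point as a relative change from the window start
        p0 = float(window[0])
        return [float(p) / p0 - 1 for p in window]

    def denormalise_window(normalised, p0):
        # inverse transform: p_i = p0 * (n_i + 1)
        return [p0 * (n + 1) for n in normalised]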

Cumulating prediction and normalization errors

Hi,

Looking at the playback portion of your code, you are accumulating both the natural NN error and a normalization error.

Before being appended to the i+1 test window, the prediction result from window i should be de-normalized using window i's base value and then re-normalized using window i+1's base value.

This error compounds as the prediction moves further into the future from the last known real value.
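In code, the re-basing described above might look like this sketch (the names are illustrative, not from the repo):

    def rebase_prediction(pred_i, base_i, base_i_plus_1):
        # De-normalise the prediction against window i's base price,
        # then re-normalise it against window i+1's base price.
        price = base_i * (pred_i + 1)      # back to an absolute price
        return price / base_i_plus_1 - 1   # relative to the next window's base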

Issues reproducing sine wave example

I've tried to reproduce the sine wave example using the main-function code below.

Apparently something went wrong; see the resulting plot.

Do you know what I need to fix in order to reproduce your example?

Thanks!

#Main Run Thread
if __name__ == '__main__':

    os.system('clear')
    global_start_time = time.time()
    epochs  = 1
    seq_len = 50

    print '> Loading data... '

    X_train, y_train, X_test, y_test = lstm.load_data('sinwave.csv', seq_len, True)

    print '> Data Loaded. Compiling...'

    model = lstm.build_model([1, 50, 100, 1])

    model.fit(
        X_train,
        y_train,
        batch_size=512,
        nb_epoch=epochs,
        validation_split=0.05)

    predicted = lstm.predict_point_by_point(model, X_test)

    print 'Training duration (s) : ', time.time() - global_start_time
    plot_results(predicted, y_train)

[screenshot: prediction_sine_wave_01]
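A likely culprit in the snippet above: predicted is computed from X_test but plotted against y_train, so the two curves cannot line up. A hedged fix:

    # predicted comes from X_test, so compare it against y_test, not y_train
    predicted = lstm.predict_point_by_point(model, X_test)
    plot_results(predicted, y_test)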

Question about normalise_windows()

@jaungiers
Hello, I have a question about normalise_windows().
The dataset is divided into sequences of window size, and the original data is normalised within its window, like this:
normalised_window = [((float(p) / float(window[0])) - 1) for p in window]
But sequence A (window A) doesn't share the same base as sequence B (window B); each is normalised against its own sequence's first value. I don't know whether this affects the prediction result.
I have tried global normalisation, but I don't get a noticeably different result.
Due to the randomness of neural-network training, I get a different result every time, even when I run the original program unchanged.

Model parameter more than number of samples

Hi @jaungiers , thanks for the code.

This is a general question about training an LSTM model. When I check the model we have for sp500, there are 70,901 parameters in total while we only have 3,709 rows of training samples. How is overfitting not happening in this case? Or do we compare the number of parameters with the number of training samples differently for RNNs? Thanks.
[image]
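For reference, 70,901 is exactly the parameter count of the [1, 50, 100, 1] architecture, using the standard LSTM formula 4 x (units x (input_dim + units) + units):

    def lstm_params(units, input_dim):
        # 4 gates, each with a kernel (input_dim x units), a recurrent
        # kernel (units x units) and a bias (units)
        return 4 * (units * (input_dim + units) + units)

    print(lstm_params(50, 1))    # 10400 -- first LSTM layer
    print(lstm_params(100, 50))  # 60400 -- second LSTM layer
    print(100 * 1 + 1)           # 101   -- Dense(1) output layer
    # 10400 + 60400 + 101 = 70901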

Possible to select FP16, FP32 or FP64 to analyse training duration and precision?

Hello @jaungiers ,

I am really impressed by your work. A few weeks ago I downloaded your LSTM model and corrected it for TensorFlow 1.0 / Python 3.5 (range vs. xrange and so forth)... and now you have done the same :-)

I was wondering if you have looked into using different data types? Since GPU computing can train models quite fast, especially with data types that require less memory (e.g. FP16), it would be a very nice addition to be able to experiment with different data types. This would not only determine the potential increase in computation speed but also the effect on model accuracy (for this, the random number generators should be seeded so that the weight and bias matrices are identical for each run). Do you think data-type selection is something that could easily be added to this project?

Please let me know if you have any questions for me, I'd be happy to assist with further thoughts and explanations. I recently started using tensorflow after struggling with Matlab quite a bit. (https://se.mathworks.com/matlabcentral/answers/285851-suggestions-for-improvement-s-on-narnet-multistep-ahead-predictions-on-the-solar-dataset)

Best Regards
Staffan
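Keras does expose a global float type, so a rough version of this experiment is already possible. A sketch (whether FP16 trains stably is a separate question):

    import numpy as np
    import random
    from keras import backend as K

    np.random.seed(42)       # seed so weight/bias init is identical per run
    random.seed(42)
    K.set_floatx('float16')  # or 'float32' / 'float64'
    # Layers created after this call build their weights in the chosen dtype,
    # so training time and accuracy can be compared across precisions.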

Bad prediction!

I tested this code on my data, but the result was very bad. I don't understand what is wrong. Can you help me and test the code with my data?

Type error int32

When I run it, lstm.py raises an error at line 53 (return_sequences=True): expected int32, got list containing Tensors of type '_Message' instead. (This looks like the same Keras/TensorFlow version mismatch as the "Expected int32" issue above.)

One more Point

Hello, I want to get one more point out of the plot. True data and prediction end at the same time; I need a prediction one point into the future.

Like this:
[screenshot: anmerkung 2018-12-30 152657]

How must I modify the code?
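A minimal sketch of one way to get a single point past the end of the data: feed the model the last window of normalised values and keep the prediction (the variable names here are illustrative, not the repo's):

    import numpy as np

    # last_window: the final (seq_len - 1, n_features) slice of normalised data
    last_window = data[-(seq_len - 1):]
    next_point = model.predict(last_window[np.newaxis, :, :])[0, 0]
    # de-normalise against the window's base price to get an absolute value
    next_price = base_price * (next_point + 1)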

Wrong numpy version

While installing on Ubuntu Linux I've got:

tensorflow-gpu 1.10.0 has requirement numpy<=1.14.5,>=1.13.3,
but you'll have numpy 1.15.0 which is incompatible.

I am guessing that numpy should be downgraded to 1.14.5 in requirements.txt.
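That guess matches the constraint pip prints; pinning the versions in requirements.txt accordingly should resolve it, e.g.:

    numpy>=1.13.3,<=1.14.5
    tensorflow-gpu==1.10.0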

IndexError; too many indices for array when trying to run

I'm trying to run this code and getting the error messages below:

IndexError                                Traceback (most recent call last)
in ()
4 print '> Loading data... '
5
----> 6 X_train, y_train, X_test, y_test = lstm.load_data('sinwave.csv', seq_len, True)
7
8 print '> Data Loaded. Compiling...'

in lstm.load_data(filename, seq_len, normalise_window)
18
19 row = round(0.9 * result.shape[0])
---> 20 train = result[:row, :]
21 np.random.shuffle(train)
22 x_train = train[:, :-1]

IndexError: too many indices for array
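One common cause of this in the original lstm.load_data (a hedged guess, not confirmed here): a trailing blank line in sinwave.csv yields one short window, so np.array(result) becomes a 1-D object array that cannot be sliced as result[:row, :]. Filtering empty lines when loading avoids it:

    # drop empty lines so every window has exactly seq_len values and
    # np.array(result) stays a regular 2-D float array
    data = [line for line in open('sinwave.csv').read().split('\n') if line]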

epochs set to 2, training raises an error

generate_train_batch already wraps its body in "while True:", but training still raises an error:
ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (8, 1)

Predicting values outside the range?

Hi, (kinda new to LSTM models)

Quick question: instead of predicting the values at every 50th data point on the test set, how would I apply the model to predict future values? For example, predicting the next data point, or the next 10, 20, or X points?

How can I apply the model.predict function for that case?
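predict_sequences_multiple in model.py already works this way for the plotted segments. A rough standalone sketch of predicting N future points by feeding each prediction back into the window (names illustrative, single input feature assumed):

    import numpy as np

    def predict_n_steps(model, last_window, n_steps):
        # last_window: normalised array of shape (seq_len - 1, 1)
        window = last_window.copy()
        predictions = []
        for _ in range(n_steps):
            p = model.predict(window[np.newaxis, :, :])[0, 0]
            predictions.append(p)
            window = np.roll(window, -1, axis=0)  # drop the oldest step
            window[-1, 0] = p                     # append the new prediction
        return predictions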

LSTM prediction when time intervals are not fixed

Hello,

I have a dataset of rain times over a year. The dataset looks something like this:

Time       Raining      Wind-speed         Temp
10            23           50               22
14            10           34               20
15             8           23               18

.......

The magnitudes are just examples. As you can see, the time intervals are not fixed. I want to predict the next rain time given a previous time, but all the LSTM examples I have found use fixed, increasing time steps (for example 10, 11, 12, 13, 14, ...), such as stock-market or sine-wave data.

In my problem the time steps are not fixed: time is always increasing, but the step between incidents varies. What should I do in this kind of problem, given that I want to predict the next rain time?
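One common workaround (an assumption on my part, not something this repo implements) is to make the irregular spacing itself an input feature, so each sample is a (delta_t, rain, wind, temp) tuple:

    import numpy as np

    times = np.array([10, 14, 15])  # observation times from the example above
    rain = np.array([23, 10, 8])
    # time since the previous observation becomes an explicit feature
    delta_t = np.diff(times, prepend=times[0])   # [0, 4, 1]
    features = np.column_stack([delta_t, rain])  # one row per observation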

Rolling column 1 predicted data into other columns for next step

I could be wrong, but predict_sequences_multiple in model.py seems to take the first sequence_length window of the test set as input to the prediction. Is this simulating the first point in time after that window, using data prior to it? If so, we can only start predicting from a point one sequence length into the test data.

I also see that the predicted 1-D value (normalised price) is then used to populate the rolling window in every column. If we were using price, volume and date columns (assuming a seasonally dependent stock), shouldn't it predict in three dimensions and roll forward the predicted values of each?

Addition of Exogenous Variable for LSTM Time Prediction.

Nice Repo!!

I wanted to ask whether factors other than the predicted variable (exogenous variables) can be added to the model you created for the sp500.csv file, for example book value, Twitter sentiment score, etc.
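The config-driven loader reads its input columns from config.json, so an exogenous series can be added as an extra column. A sketch of the data section (field names follow this repo's config; the Sentiment column and values are illustrative, and the input_dim of the first layer in the model section would need to match the new column count):

    {
        "data": {
            "filename": "sp500_with_sentiment.csv",
            "columns": ["Close", "Volume", "Sentiment"],
            "sequence_length": 50,
            "train_test_split": 0.85,
            "normalise": true
        }
    }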

ValueError: could not convert string to float:

Traceback (most recent call last):
  File "/Users/gsk/Desktop/LSTM-Neural-Network-for-Time-Series-Prediction/run.py", line 32, in <module>
    X_train, y_train, X_test, y_test = lstm.load_data('ok.csv', seq_len, True)
  File "/Users/gsk/Desktop/LSTM-Neural-Network-for-Time-Series-Prediction/lstm.py", line 23, in load_data
    result = normalise_windows(result)
  File "/Users/gsk/Desktop/LSTM-Neural-Network-for-Time-Series-Prediction/lstm.py", line 43, in normalise_windows
    normalised_window = [((float(p) / float(window[0])) - 1) for p in window]
ValueError: could not convert string to float:
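The empty value after the colon means float() was handed an empty string, which typically points to a header row, an empty cell, or a trailing blank line in ok.csv. A hedged pre-check with pandas (the column position is an assumption):

    import pandas as pd

    df = pd.read_csv('ok.csv')
    # coerce the price column to numeric; anything unparseable becomes NaN
    values = pd.to_numeric(df.iloc[:, 0], errors='coerce')
    print(df[values.isna()])        # inspect the offending rows, if any
    data = values.dropna().values   # clean float array to feed load_data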

Not able to compile

Hi, I am new to time-series prediction. As a beginner, I tried to run your code for the sine-wave example, but I get the following error message:
"ValueError: Error when checking target: expected dense_2 to have 3 dimensions, but got array with shape (3950, 1)"
When the sequence length is 50, we have a 49x1 input matrix but the output is a single value. Is my understanding wrong? Please help.

About normalization and de-normalization

I inspected the code, and as far as I can tell each window, which contains both the input and the target, is normalized together (i.e. the window has size seq_len, [0:seq_len-1] is the input, and the last element is the output). In the case where I don't know the output, i.e. I only have input data of size seq_len-1, how should I normalize the input for prediction?

de-normalise

Hello, thanks for your work; it is very useful to me. But I want to know how I can get the de-normalised result, so that I can compare it with other methods. Thank you very much!

In- & Out-memory results differ quite a lot

Hello Jakob,
Thank you for this inspiring, well written piece of code !

I noticed the results differ between in-memory training and out-of-memory generative training. I did not modify the code in any file other than run.py: I'm just commenting out the "out-of-memory generative training" block and un-commenting the "in-memory training" block, and the results are the following:

Results without any modification (very similar to what you posted in your article, just another run without fixed seeds):
[screenshot: out-of-mem generative training]

Results using in-memory training:
[screenshot: in-memory training]

Looking at the code in model.py, I can't understand why in-memory training produces these results. Is this normal? Could generative training really produce much better results, as we see here?

I also modified your code to check performance for binary up/down price prediction, and results are also better using generative training. I'm wondering what is happening here.

Thank you again for sharing your code, it really is the best software & article I could find about LSTM for stock prediction.

ZeroDivisionError: float division by zero

Using TensorFlow backend.
[Model] Model Compiled
Time taken: 0:00:00.847986
Traceback (most recent call last):
  File "run.py", line 91, in <module>
    main()
  File "run.py", line 50, in main
    normalise=configs['data']['normalise']
  File "/home/ubuntu/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 44, in get_train_data
    x, y = self._next_window(i, seq_len, normalise)
  File "/home/ubuntu/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 69, in _next_window
    window = self.normalise_windows(window, single_window=True)[0] if normalise else window
  File "/home/ubuntu/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 81, in normalise_windows
    normalised_col = [((float(p) / float(window[0, col_i])) - 1) for p in window[:, col_i]]
  File "/home/ubuntu/LSTM-Neural-Network-for-Time-Series-Prediction/core/data_processor.py", line 81, in <listcomp>
    normalised_col = [((float(p) / float(window[0, col_i])) - 1) for p in window[:, col_i]]
ZeroDivisionError: float division by zero

Can you fix this, please?
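normalise_windows divides each column by its window's first value (window[0, col_i]), so any window whose column starts at 0 divides by zero. A hedged workaround (not from the repo) is to fall back to a difference-based normalisation for such columns:

    def safe_normalise_col(col):
        # Usual normalisation: p / p0 - 1. When the window's first value is 0,
        # fall back to plain differences so we never divide by zero.
        p0 = float(col[0])
        if p0 == 0:
            return [float(p) - p0 for p in col]
        return [float(p) / p0 - 1 for p in col]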

Predictions are truncated to -1 and 1

Hi, the output of the model is always between -1 and 1, which makes fitting real-world data impossible (in the case where I do not want to normalise). May I know how to fix this?

Span inconsistency of time series!

Your project is excellent!
However, I examined the supplied 'sp500.csv' data and found that the spacing of 'Date' is inconsistent, and I didn't find anything in the code that handles this.

How to run that code?

Hi, I'm new to all.

After installation of the dependencies, what is the procedure to run this code?

Thank you.

PS: I even have a Jupyter Notebook installed.

Update: Oh, got it... python run.py

Why is the result of each training LSTM different?

I have already set the random seed and set shuffle=False in fit, but I still get completely different results when I run the model on the same data and config. Can you help me resolve this?
Thanks.
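For the TF1/Keras stack this repo targets, run-to-run variation comes from several RNGs, not just numpy's; a sketch of pinning them all (GPU kernels may still be nondeterministic):

    import os
    os.environ['PYTHONHASHSEED'] = '0'  # must be set before hashing happens

    import random
    import numpy as np
    import tensorflow as tf

    random.seed(42)
    np.random.seed(42)
    tf.set_random_seed(42)  # TF 1.x graph-level seed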

validity of testing method

Hello,
In the case of the sine wave, we are sampling the wave at equal "time" or x intervals. However, since it's periodic, wouldn't that mean the test data is essentially a copy of the training data? Is that a valid method of testing?

Thanks !

@jaungiers When I try to change the dataset, I get an error that there are too many indices.

File "C:\study\Finamics\Project\LSTM_TSP\run.py", line 90, in
main()
File "C:\study\Finamics\Project\LSTM_TSP\run.py", line 78, in main
normalise=configs['data']['normalise']
File "C:\study\Finamics\Project\LSTM_TSP\core\data_processor.py", line 33, in get_test_data
x = data_windows[:, :-1]
IndexError: too many indices for array
I have used Apple data for 1 year which has seven indices high,low,open,closed,date and adj close. kindly reply asap my boss is breaking my nerve.

Discussion about shuffling the data

@jaungiers
Hello, I have two questions about shuffling the data.
(1) In the load_data() function, can
np.random.shuffle(train)
be replaced by Keras's fit shuffle argument, like this?

    model.fit(
        X_train,
        y_train,
        batch_size=512,
        nb_epoch=epochs,
        verbose=1,
        validation_split=0.05,
        shuffle=True)

(2) Stock-price data is a strict time series. If we shuffle the dataset, we destroy the sequence information; I'm not sure whether this affects the result.
