
stock-rnn's People

Contributors

lilianweng


stock-rnn's Issues

The "Time-reversed" Problem of Some Crawled Data

When I downloaded the historical S&P 500 (^GSPC) data from Yahoo Finance, I found that it was time-reversed, i.e. the latest entries were at the top of the DataFrame. The same holds for nearly all of the files in the provided data archive (stock-data-lilianweng.tar.gz) except SP500.csv and _SP500.csv.
Now here is the point: we never sort the data by time, which is a basic requirement of the LSTM model! In data_model.py, lines 25 to 35:

        # Read csv file
        raw_df = pd.read_csv(os.path.join("data", "%s.csv" % stock_sym))

        # Merge into one sequence
        if close_price_only:
            self.raw_seq = raw_df['Close'].tolist()
        else:
            self.raw_seq = [price for tup in raw_df[['Open', 'Close']].values for price in tup]

        self.raw_seq = np.array(self.raw_seq)
        self.train_X, self.train_y, self.test_X, self.test_y = self._prepare_data(self.raw_seq)

We simply extract the close prices from the DataFrame without checking the dates. As a result, we use the earliest 10% of the data for testing instead of the latest 10%, which is unreasonable.

Maybe we should sort the data by time before extracting the closing prices, or make sure the data is read in the right order / a consistent format.
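A minimal sketch of that fix, assuming each CSV has a Date column (the column names are an assumption about the crawled files):

    import os

    import numpy as np
    import pandas as pd

    # Sort ascending by time before extracting prices, so that the last 10%
    # of the sequence really is the most recent data.
    raw_df = pd.read_csv(os.path.join("data", "%s.csv" % stock_sym))
    raw_df["Date"] = pd.to_datetime(raw_df["Date"])
    raw_df = raw_df.sort_values("Date").reset_index(drop=True)
    raw_seq = np.array(raw_df["Close"].tolist())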

Error Running data_fetcher.py

When I run the code I get the following error. I got a lot of module errors and changed urllib2 to urllib.request and BeautifulSoup to bs4. Can you please check this issue and let me know?

I added the future library to resolve the issue below, but it failed.
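For reference, the usual Python 3 equivalents of the two imports mentioned above look like this (a sketch only; symbol_url is the variable data_fetcher.py passes to urlopen):

    # urllib2 -> urllib.request, BeautifulSoup 3 -> bs4 under Python 3.
    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    f = urlopen(symbol_url)  # replaces urllib2.urlopen(symbol_url)
    soup = BeautifulSoup(f.read(), "html.parser")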


UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

Variables: name (type shape) [size]

dynamic_rnn/lstm_cell/kernel:0 (float32_ref 129x512) [66048, bytes: 264192]
dynamic_rnn/lstm_cell/bias:0 (float32_ref 512) [512, bytes: 2048]
w:0 (float32_ref 128x1) [128, bytes: 512]
b:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 66689
Total bytes of variables: 266756
len(merged_test_X) = 0
len(merged_test_y) = 0
len(merged_test_labels) = 0
{'SP500': array([], dtype=float64)}
Start training for stocks: ['SP500']
Traceback (most recent call last):
File "main.py", line 111, in
tf.app.run()
File "C:\Users\abc\PycharmProjects\Stocks-RNN\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "main.py", line 104, in main
rnn_model.train(stock_data_list, FLAGS)
File "C:\Users\abc\PycharmProjects\Stocks-RNN\model_rnn.py", line 212, in train
for epoch in xrange(config.max_epoch):
NameError: name 'xrange' is not defined
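The final error is a Python 2/3 issue: xrange was removed in Python 3, and range is a drop-in replacement in this loop. A minimal fix sketch for model_rnn.py:

    # Python 3: range is lazy like Python 2's xrange, so this is equivalent.
    for epoch in range(config.max_epoch):
        ...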

Error with code

File "C:\Users\PRATISHRUTI\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (0,) for Tensor 'inputs:0', which has shape '(?, 30, 1)'
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program

c:\users\pratishruti\anaconda3\lib\site-packages\tensorflow\python\client\session.py(1100)_run()
-> % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
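The (0,) feed shape means an empty batch reached session.run, which typically happens when the stock CSV failed to download or parse (compare the len(merged_test_X) = 0 lines in the previous report). A hypothetical guard, using the attribute names from data_model.py:

    # Fail early with a readable message instead of a shape error deep
    # inside session.run.
    if len(self.test_X) == 0:
        raise ValueError("empty dataset for %s; check data/%s.csv" % (stock_sym, stock_sym))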

The prediction accuracy is not good enough to tell the trend in the real market

Hi,
Thank you for sharing; this is a very good tutorial for learning how to predict stock prices with an LSTM. I tested the SP500 data with lstm_size=128 and max_epoch=500, but the result is not so good.
As the log below shows, the number of days the network predicts correctly just oscillates around the base value of 103, and the profit also hovers around 0.
So how can I optimize this network to make it more accurate and profitable? Could you please give me some hints or a tutorial? Thank you very much!
log_20181218.txt

(base) D:\Codes\lstm_stock_prediction\stock-rnn>python main.py --stock_symbol=SP500 --train --input_size=1 --lstm_size=128 --max_epoch=500
Start training for stocks: ['SP500']
Step:1 [Epoch:0] [Learning rate: 0.001000] train_loss:0.010059 test_loss:0.005225
profit = -0.025619479475635365, correct days = 103
Step:101 [Epoch:0] [Learning rate: 0.001000] train_loss:0.001643 test_loss:0.001153
profit = -0.025619479475635365, correct days = 103
Step:201 [Epoch:0] [Learning rate: 0.001000] train_loss:0.000441 test_loss:0.000495
profit = -0.025619479475635365, correct days = 103
Step:301 [Epoch:1] [Learning rate: 0.001000] train_loss:0.000119 test_loss:0.000147
profit = 0.03595531261737828, correct days = 109
Step:401 [Epoch:1] [Learning rate: 0.001000] train_loss:0.000701 test_loss:0.000571
profit = -0.025619479475635365, correct days = 103
Step:501 [Epoch:1] [Learning rate: 0.001000] train_loss:0.000854 test_loss:0.000868
profit = -0.025619479475635365, correct days = 103
Step:601 [Epoch:2] [Learning rate: 0.001000] train_loss:0.000214 test_loss:0.000535
profit = -0.0031643654923697584, correct days = 96
Step:701 [Epoch:2] [Learning rate: 0.001000] train_loss:0.000310 test_loss:0.000181
profit = -0.07838784660630804, correct days = 99
Step:801 [Epoch:3] [Learning rate: 0.001000] train_loss:0.000716 test_loss:0.000284
profit = 0.013234707421988623, correct days = 98
Step:901 [Epoch:3] [Learning rate: 0.001000] train_loss:0.000177 test_loss:0.000135
profit = -0.03711239240076847, correct days = 105
Step:1001 [Epoch:3] [Learning rate: 0.001000] train_loss:0.000171 test_loss:0.000161
profit = -0.15784789364253704, correct days = 94
Step:1101 [Epoch:4] [Learning rate: 0.001000] train_loss:0.000148 test_loss:0.000140
profit = -0.09045673542451504, correct days = 103
Step:1201 [Epoch:4] [Learning rate: 0.001000] train_loss:0.000112 test_loss:0.000126
profit = 0.011780450020617894, correct days = 96
Step:1301 [Epoch:5] [Learning rate: 0.000990] train_loss:0.000123 test_loss:0.000091
profit = 0.2644821257042925, correct days = 108
Step:1401 [Epoch:5] [Learning rate: 0.000990] train_loss:0.000087 test_loss:0.000110
profit = 0.03148794396024146, correct days = 109
Step:1501 [Epoch:5] [Learning rate: 0.000990] train_loss:0.000095 test_loss:0.000151
profit = -0.018273540195676286, correct days = 91
Step:1601 [Epoch:6] [Learning rate: 0.000980] train_loss:0.000209 test_loss:0.000134
profit = -0.10868626366788636, correct days = 95
Step:1701 [Epoch:6] [Learning rate: 0.000980] train_loss:0.000080 test_loss:0.000109
profit = 0.020400411456001355, correct days = 99
Step:1801 [Epoch:6] [Learning rate: 0.000980] train_loss:0.000095 test_loss:0.000112
profit = -0.014259134507715987, correct days = 102
Step:1901 [Epoch:7] [Learning rate: 0.000970] train_loss:0.000123 test_loss:0.000107
profit = -0.00934783026174768, correct days = 104
Step:2001 [Epoch:7] [Learning rate: 0.000970] train_loss:0.000070 test_loss:0.000113
profit = 0.1037896495909153, correct days = 102
Step:2101 [Epoch:8] [Learning rate: 0.000961] train_loss:0.000118 test_loss:0.000111
profit = -0.08947914245565769, correct days = 97
Step:2201 [Epoch:8] [Learning rate: 0.000961] train_loss:0.000136 test_loss:0.000178
profit = -0.02450670956447809, correct days = 102
Step:2301 [Epoch:8] [Learning rate: 0.000961] train_loss:0.000100 test_loss:0.000102
profit = -0.21570318289249812, correct days = 98
Step:2401 [Epoch:9] [Learning rate: 0.000951] train_loss:0.000088 test_loss:0.000083
profit = 0.09128718752173082, correct days = 107
Step:2501 [Epoch:9] [Learning rate: 0.000951] train_loss:0.000074 test_loss:0.000093
profit = -0.08180089271755475, correct days = 88
Step:2601 [Epoch:10] [Learning rate: 0.000941] train_loss:0.000094 test_loss:0.000108
profit = 0.011144804352940407, correct days = 100
Step:2701 [Epoch:10] [Learning rate: 0.000941] train_loss:0.000228 test_loss:0.000209
profit = 0.021127617819259314, correct days = 98
Step:2801 [Epoch:10] [Learning rate: 0.000941] train_loss:0.000115 test_loss:0.000087
profit = -0.032739640272618664, correct days = 96
Step:2901 [Epoch:11] [Learning rate: 0.000932] train_loss:0.000293 test_loss:0.000187
profit = 0.007579035723463634, correct days = 98
Step:3001 [Epoch:11] [Learning rate: 0.000932] train_loss:0.000024 test_loss:0.000093
profit = -0.0618005583512683, correct days = 100
Step:3101 [Epoch:12] [Learning rate: 0.000923] train_loss:0.000091 test_loss:0.000100
profit = -0.05968355101690015, correct days = 102
Step:3201 [Epoch:12] [Learning rate: 0.000923] train_loss:0.000051 test_loss:0.000099
profit = -0.09661448827067709, correct days = 104
Step:3301 [Epoch:12] [Learning rate: 0.000923] train_loss:0.000093 test_loss:0.000084
profit = 0.0021639726211727384, correct days = 94
Step:3401 [Epoch:13] [Learning rate: 0.000914] train_loss:0.000090 test_loss:0.000090
profit = 0.06872824583359372, correct days = 101
Step:3501 [Epoch:13] [Learning rate: 0.000914] train_loss:0.000041 test_loss:0.000098
profit = 0.023954294962177713, correct days = 103
Step:3601 [Epoch:13] [Learning rate: 0.000914] train_loss:0.000076 test_loss:0.000096
profit = -0.04558137123357253, correct days = 97
Step:3701 [Epoch:14] [Learning rate: 0.000904] train_loss:0.000054 test_loss:0.000085
profit = 0.15187505724517514, correct days = 108
Step:3801 [Epoch:14] [Learning rate: 0.000904] train_loss:0.000096 test_loss:0.000129
profit = 0.07888562137218325, correct days = 101
Step:3901 [Epoch:15] [Learning rate: 0.000895] train_loss:0.000121 test_loss:0.000094
profit = 0.0340183348742743, correct days = 96
Step:4001 [Epoch:15] [Learning rate: 0.000895] train_loss:0.000532 test_loss:0.000122
profit = 0.031501323257416725, correct days = 101
Step:4101 [Epoch:15] [Learning rate: 0.000895] train_loss:0.000069 test_loss:0.000086
profit = -0.21811953823266494, correct days = 90
Step:4201 [Epoch:16] [Learning rate: 0.000886] train_loss:0.000150 test_loss:0.000087
profit = 0.19773472237464762, correct days = 111
Step:4301 [Epoch:16] [Learning rate: 0.000886] train_loss:0.000075 test_loss:0.000092
profit = -0.045499799223870485, correct days = 100
Step:4401 [Epoch:17] [Learning rate: 0.000878] train_loss:0.000101 test_loss:0.000096
profit = 0.0660783915371651, correct days = 91
Step:4501 [Epoch:17] [Learning rate: 0.000878] train_loss:0.000078 test_loss:0.000110
profit = -0.028332832025963373, correct days = 98
Step:4601 [Epoch:17] [Learning rate: 0.000878] train_loss:0.000160 test_loss:0.000092
profit = -0.031151056620531414, correct days = 98
Step:4701 [Epoch:18] [Learning rate: 0.000869] train_loss:0.000140 test_loss:0.000083
profit = 0.17060743776145848, correct days = 113
Step:4801 [Epoch:18] [Learning rate: 0.000869] train_loss:0.000104 test_loss:0.000081
profit = 0.02184867743697627, correct days = 99
Step:4901 [Epoch:18] [Learning rate: 0.000869] train_loss:0.000080 test_loss:0.000093
profit = -0.02471952081072526, correct days = 95
Step:5001 [Epoch:19] [Learning rate: 0.000860] train_loss:0.000087 test_loss:0.000096
profit = 0.2184404621144641, correct days = 107
Step:5101 [Epoch:19] [Learning rate: 0.000860] train_loss:0.000172 test_loss:0.000083
profit = -0.011192151034966291, correct days = 98
Step:5201 [Epoch:20] [Learning rate: 0.000851] train_loss:0.000226 test_loss:0.000089
profit = 0.14785778266773175, correct days = 107
Step:5301 [Epoch:20] [Learning rate: 0.000851] train_loss:0.000060 test_loss:0.000084
profit = 0.1429639411365925, correct days = 103
Step:5401 [Epoch:20] [Learning rate: 0.000851] train_loss:0.000066 test_loss:0.000092
profit = 0.08617641740816884, correct days = 103
Step:5501 [Epoch:21] [Learning rate: 0.000843] train_loss:0.000162 test_loss:0.000096
profit = 0.15679533090121722, correct days = 103
Step:5601 [Epoch:21] [Learning rate: 0.000843] train_loss:0.000127 test_loss:0.000091
profit = -0.06225312490171231, correct days = 93
Step:5701 [Epoch:22] [Learning rate: 0.000835] train_loss:0.000086 test_loss:0.000094
profit = 0.006754305152290363, correct days = 103
Step:5801 [Epoch:22] [Learning rate: 0.000835] train_loss:0.000057 test_loss:0.000080
profit = 0.07339482376505135, correct days = 98
Step:5901 [Epoch:22] [Learning rate: 0.000835] train_loss:0.000087 test_loss:0.000091
profit = -0.0453350830909246, correct days = 98
Step:6001 [Epoch:23] [Learning rate: 0.000826] train_loss:0.000055 test_loss:0.000094
profit = -0.06847208303629626, correct days = 105
Step:6101 [Epoch:23] [Learning rate: 0.000826] train_loss:0.000188 test_loss:0.000078
profit = 0.11029498773202973, correct days = 108
Step:6201 [Epoch:24] [Learning rate: 0.000818] train_loss:0.000147 test_loss:0.000083
profit = -0.028336466555966555, correct days = 91
Step:6301 [Epoch:24] [Learning rate: 0.000818] train_loss:0.000090 test_loss:0.000084
profit = -0.1484526580651776, correct days = 86
Step:6401 [Epoch:24] [Learning rate: 0.000818] train_loss:0.000113 test_loss:0.000086
profit = 0.012674613668498425, correct days = 99
Step:6501 [Epoch:25] [Learning rate: 0.000810] train_loss:0.000153 test_loss:0.000088
profit = 0.04810375951538559, correct days = 109
Step:6601 [Epoch:25] [Learning rate: 0.000810] train_loss:0.000066 test_loss:0.000083
profit = -0.1404029246915609, correct days = 93
Step:6701 [Epoch:25] [Learning rate: 0.000810] train_loss:0.000041 test_loss:0.000088
profit = -0.05891110625639906, correct days = 91
Step:6801 [Epoch:26] [Learning rate: 0.000802] train_loss:0.000057 test_loss:0.000088
profit = 0.1322223423826442, correct days = 104
Step:6901 [Epoch:26] [Learning rate: 0.000802] train_loss:0.000063 test_loss:0.000085
profit = -0.06354392570483236, correct days = 88
Step:7001 [Epoch:27] [Learning rate: 0.000794] train_loss:0.000021 test_loss:0.000084
profit = -0.05990871436288636, correct days = 92
Step:7101 [Epoch:27] [Learning rate: 0.000794] train_loss:0.000073 test_loss:0.000083
profit = -0.06882257311584372, correct days = 95
Step:7201 [Epoch:27] [Learning rate: 0.000794] train_loss:0.000085 test_loss:0.000084
profit = 0.009471227578168873, correct days = 101
Step:7301 [Epoch:28] [Learning rate: 0.000786] train_loss:0.000332 test_loss:0.000088
profit = 0.013246611564542321, correct days = 98
Step:7401 [Epoch:28] [Learning rate: 0.000786] train_loss:0.000026 test_loss:0.000080
profit = -0.004331457631420066, correct days = 90
Step:7501 [Epoch:29] [Learning rate: 0.000778] train_loss:0.000102 test_loss:0.000082
profit = 0.08324904433963864, correct days = 105
Step:7601 [Epoch:29] [Learning rate: 0.000778] train_loss:0.000149 test_loss:0.000089
profit = -0.06631912575570487, correct days = 96
Step:7701 [Epoch:29] [Learning rate: 0.000778] train_loss:0.000032 test_loss:0.000078
profit = 0.11948236955133507, correct days = 108
Step:7801 [Epoch:30] [Learning rate: 0.000770] train_loss:0.000074 test_loss:0.000078
profit = 0.139726090498706, correct days = 106
Step:7901 [Epoch:30] [Learning rate: 0.000770] train_loss:0.000034 test_loss:0.000076
profit = -0.016972300396177586, correct days = 99
Step:8001 [Epoch:31] [Learning rate: 0.000762] train_loss:0.001605 test_loss:0.000131
profit = -0.09545949421338473, correct days = 90
Step:8101 [Epoch:31] [Learning rate: 0.000762] train_loss:0.000035 test_loss:0.000081
profit = 0.08556577392037978, correct days = 103
Step:8201 [Epoch:31] [Learning rate: 0.000762] train_loss:0.000091 test_loss:0.000091
profit = -0.19325060724099796, correct days = 94
Step:8301 [Epoch:32] [Learning rate: 0.000755] train_loss:0.000044 test_loss:0.000087
profit = -0.10109258556147382, correct days = 100
Step:8401 [Epoch:32] [Learning rate: 0.000755] train_loss:0.000031 test_loss:0.000079
profit = -0.11426217750847367, correct days = 103
Step:8501 [Epoch:32] [Learning rate: 0.000755] train_loss:0.000041 test_loss:0.000083
profit = 0.15642649504928452, correct days = 109
Step:8601 [Epoch:33] [Learning rate: 0.000747] train_loss:0.000079 test_loss:0.000079
profit = -0.13959935335863138, correct days = 97
Step:8701 [Epoch:33] [Learning rate: 0.000747] train_loss:0.000135 test_loss:0.000082
profit = -0.13748259876316216, correct days = 95
Step:8801 [Epoch:34] [Learning rate: 0.000740] train_loss:0.000061 test_loss:0.000081
profit = -0.1009435581331456, correct days = 100
Step:8901 [Epoch:34] [Learning rate: 0.000740] train_loss:0.000121 test_loss:0.000080
profit = -0.15250229851266173, correct days = 100
Step:9001 [Epoch:34] [Learning rate: 0.000740] train_loss:0.000032 test_loss:0.000084
profit = -0.0038252172792784256, correct days = 106
Step:9101 [Epoch:35] [Learning rate: 0.000732] train_loss:0.000088 test_loss:0.000082
profit = -0.026277332670350595, correct days = 94
Step:9201 [Epoch:35] [Learning rate: 0.000732] train_loss:0.000059 test_loss:0.000085
profit = -0.23764513492667605, correct days = 91
Step:9301 [Epoch:36] [Learning rate: 0.000725] train_loss:0.000049 test_loss:0.000085
profit = -0.13331856363652206, correct days = 101
Step:9401 [Epoch:36] [Learning rate: 0.000725] train_loss:0.000121 test_loss:0.000084
profit = -0.08096660712024006, correct days = 104
Step:9501 [Epoch:36] [Learning rate: 0.000725] train_loss:0.000050 test_loss:0.000079
profit = -0.035681628188596215, correct days = 97
Step:9601 [Epoch:37] [Learning rate: 0.000718] train_loss:0.000047 test_loss:0.000074
profit = 0.25255923647534206, correct days = 111
Step:9701 [Epoch:37] [Learning rate: 0.000718] train_loss:0.000052 test_loss:0.000082
profit = 0.1405303968308479, correct days = 105
Step:9801 [Epoch:37] [Learning rate: 0.000718] train_loss:0.000047 test_loss:0.000076
profit = -0.10761144730282124, correct days = 105
Step:9901 [Epoch:38] [Learning rate: 0.000711] train_loss:0.000035 test_loss:0.000078
profit = -0.23780492199793624, correct days = 93
Step:10001 [Epoch:38] [Learning rate: 0.000711] train_loss:0.000218 test_loss:0.000087
profit = 0.0986011845157021, correct days = 100
Step:10101 [Epoch:39] [Learning rate: 0.000703] train_loss:0.000072 test_loss:0.000075
profit = 0.3832697111375293, correct days = 113
Step:10201 [Epoch:39] [Learning rate: 0.000703] train_loss:0.000168 test_loss:0.000080
profit = 0.02633109726977001, correct days = 103
Step:10301 [Epoch:39] [Learning rate: 0.000703] train_loss:0.000037 test_loss:0.000082
profit = -0.07761773262888172, correct days = 102
Step:10401 [Epoch:40] [Learning rate: 0.000696] train_loss:0.000065 test_loss:0.000075
profit = 0.06679532257823129, correct days = 105
Step:10501 [Epoch:40] [Learning rate: 0.000696] train_loss:0.000049 test_loss:0.000082
profit = 0.017065803047925487, correct days = 110
Step:10601 [Epoch:41] [Learning rate: 0.000689] train_loss:0.000040 test_loss:0.000074
profit = 0.004761161756203225, correct days = 97
Step:10701 [Epoch:41] [Learning rate: 0.000689] train_loss:0.000043 test_loss:0.000078
profit = 0.14491704099142855, correct days = 100
Step:10801 [Epoch:41] [Learning rate: 0.000689] train_loss:0.000886 test_loss:0.000095
profit = -0.02807158063995374, correct days = 96
Step:10901 [Epoch:42] [Learning rate: 0.000683] train_loss:0.000043 test_loss:0.000078
profit = 0.2315954200925946, correct days = 111
Step:11001 [Epoch:42] [Learning rate: 0.000683] train_loss:0.000119 test_loss:0.000083
profit = 0.03790362437540595, correct days = 97
Step:11101 [Epoch:43] [Learning rate: 0.000676] train_loss:0.000064 test_loss:0.000089
profit = -0.005365731314907918, correct days = 103
Step:11201 [Epoch:43] [Learning rate: 0.000676] train_loss:0.000112 test_loss:0.000088
profit = 0.1674707399976857, correct days = 112
Step:11301 [Epoch:43] [Learning rate: 0.000676] train_loss:0.000103 test_loss:0.000081
profit = -0.11721664955763622, correct days = 93
Step:11401 [Epoch:44] [Learning rate: 0.000669] train_loss:0.000014 test_loss:0.000097
profit = 0.057232079693976146, correct days = 107
Step:11501 [Epoch:44] [Learning rate: 0.000669] train_loss:0.000081 test_loss:0.000079
profit = -0.052250048124326764, correct days = 93
Step:11601 [Epoch:44] [Learning rate: 0.000669] train_loss:0.000064 test_loss:0.000077
profit = 0.1571850086461729, correct days = 110
Step:11701 [Epoch:45] [Learning rate: 0.000662] train_loss:0.000121 test_loss:0.000078
profit = -0.18832862493339786, correct days = 92
Step:11801 [Epoch:45] [Learning rate: 0.000662] train_loss:0.000043 test_loss:0.000076
profit = 0.042838998582060195, correct days = 102
Step:11901 [Epoch:46] [Learning rate: 0.000656] train_loss:0.000250 test_loss:0.000078
profit = 0.043138550283145505, correct days = 101
Step:12001 [Epoch:46] [Learning rate: 0.000656] train_loss:0.000115 test_loss:0.000083
profit = -0.09371476046460014, correct days = 98
Step:12101 [Epoch:46] [Learning rate: 0.000656] train_loss:0.000033 test_loss:0.000080
profit = -0.05255161267961761, correct days = 94
Step:12201 [Epoch:47] [Learning rate: 0.000649] train_loss:0.000227 test_loss:0.000078
profit = -0.1225627615973589, correct days = 96
Step:12301 [Epoch:47] [Learning rate: 0.000649] train_loss:0.000050 test_loss:0.000080
profit = 0.15474970939366506, correct days = 117
Step:12401 [Epoch:48] [Learning rate: 0.000643] train_loss:0.000034 test_loss:0.000078
profit = 0.24680065663228767, correct days = 109
Step:12501 [Epoch:48] [Learning rate: 0.000643] train_loss:0.000111 test_loss:0.000080
profit = 0.09462146415524841, correct days = 105
Step:12601 [Epoch:48] [Learning rate: 0.000643] train_loss:0.000213 test_loss:0.000080
profit = 0.018383620867890582, correct days = 94
Step:12701 [Epoch:49] [Learning rate: 0.000636] train_loss:0.000107 test_loss:0.000079
profit = 0.06755930317461545, correct days = 102
Step:12801 [Epoch:49] [Learning rate: 0.000636] train_loss:0.000192 test_loss:0.000083
profit = -0.10714939290982162, correct days = 98
Step:12901 [Epoch:50] [Learning rate: 0.000630] train_loss:0.000079 test_loss:0.000081
profit = 0.1120229320455085, correct days = 104
Step:13001 [Epoch:50] [Learning rate: 0.000630] train_loss:0.000037 test_loss:0.000077
profit = 0.22241459898910543, correct days = 106
Step:13101 [Epoch:50] [Learning rate: 0.000630] train_loss:0.000149 test_loss:0.000078
profit = -0.03342298517580489, correct days = 102
...
Step:127901 [Epoch:495] [Learning rate: 0.000007] train_loss:0.000223 test_loss:0.000072
profit = -0.006937739971851875, correct days = 99
Step:128001 [Epoch:496] [Learning rate: 0.000007] train_loss:0.000021 test_loss:0.000072
profit = 0.07255974747543392, correct days = 105
Step:128101 [Epoch:496] [Learning rate: 0.000007] train_loss:0.000048 test_loss:0.000072
profit = -0.03654345662007563, correct days = 100
Step:128201 [Epoch:496] [Learning rate: 0.000007] train_loss:0.000022 test_loss:0.000072
profit = 0.0370857372949388, correct days = 106
Step:128301 [Epoch:497] [Learning rate: 0.000007] train_loss:0.000044 test_loss:0.000071
profit = 0.08336513334204343, correct days = 105
Step:128401 [Epoch:497] [Learning rate: 0.000007] train_loss:0.000052 test_loss:0.000072
profit = -0.001579383542026247, correct days = 101
Step:128501 [Epoch:498] [Learning rate: 0.000007] train_loss:0.000073 test_loss:0.000071
profit = 0.010168036018454285, correct days = 101
Step:128601 [Epoch:498] [Learning rate: 0.000007] train_loss:0.000269 test_loss:0.000072
profit = -0.032829104619034655, correct days = 103
Step:128701 [Epoch:498] [Learning rate: 0.000007] train_loss:0.000168 test_loss:0.000072
profit = 0.00882121564952476, correct days = 104
Step:128801 [Epoch:499] [Learning rate: 0.000007] train_loss:0.000034 test_loss:0.000071
profit = -0.00046510322141923854, correct days = 99
Step:128901 [Epoch:499] [Learning rate: 0.000007] train_loss:0.000060 test_loss:0.000071
profit = 0.03810759671788244, correct days = 98

TypeError: 'range' object does not support item assignment

Hi,

I had to convert the code to run on Python 3; mostly I just changed all the print statements.

For line 202 in model_rnn.py I changed xrange to range, but I could not get it to work. I tried looking up a fix, but it's beyond my understanding, so I thought I'd open an issue here.

Any suggestions would be helpful. Thanks.

(C:\GFApps\Anaconda3) C:\Users\tparmar\Documents\Python\predict stock market price using rnn>python main.py --stock_count=100 --train --input_size=1 --lstm_size=128 --max_epoch=50 --embed_size=8
{'batch_size': 64,
 'embed_size': 8,
 'init_epoch': 5,
 'init_learning_rate': 0.001,
 'input_size': 1,
 'keep_prob': 0.8,
 'learning_rate_decay': 0.99,
 'lstm_size': 128,
 'max_epoch': 50,
 'num_layers': 1,
 'num_steps': 30,
 'sample_size': 4,
 'stock_count': 100,
 'stock_symbol': None,
 'train': True}
2017-12-16 23:32:55.276680: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
C:\GFApps\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py:96: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
---------
Variables: name (type shape) [size]
---------
embed_matrix:0 (float32_ref 100x8) [800, bytes: 3200]
dynamic_rnn/lstm_cell/kernel:0 (float32_ref 129x512) [66048, bytes: 264192]
dynamic_rnn/lstm_cell/bias:0 (float32_ref 512) [512, bytes: 2048]
w:0 (float32_ref 128x1) [128, bytes: 512]
b:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 67489
Total bytes of variables: 269956
{True: 497, False: 8}
main.py:58: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)
  info = info.sort('market_cap', ascending=False).reset_index(drop=True)
Head of S&P 500 info:
   symbol                  name                  sector   price  \
0   AAPL            Apple Inc.  Information Technology  139.52
1  GOOGL  Alphabet Inc Class A  Information Technology  851.15
2   GOOG  Alphabet Inc Class C  Information Technology  831.91
3   MSFT       Microsoft Corp.  Information Technology   64.40
4   AMZN        Amazon.com Inc  Consumer Discretionary  846.02

   dividend_yield  price/earnings  earnings/share  book_value  52_week_low  \
0            1.63           16.75            8.33       25.19        89.47
1             NaN           30.53           27.88      201.12       672.66
2             NaN           29.84           27.88      201.12       663.28
3            2.43           30.31            2.12        8.90        48.03
4             NaN          172.66            4.90       40.43       538.58

   52_week_high  market_cap  ebitda  price/sales  price/book  \
0        140.28      732.00   69.75         3.35        5.53
1        867.00      588.50   29.86         6.49        4.21
2        841.95      575.20   29.86         6.34        4.12
3         65.91      497.65   27.74         5.80        7.22
4        860.86      403.70   11.67         2.97       20.94

                                         sec_filings file_exists
0  http://www.sec.gov/cgi-bin/browse-edgar?action...        True
1  http://www.sec.gov/cgi-bin/browse-edgar?action...        True
2  http://www.sec.gov/cgi-bin/browse-edgar?action...        True
3  http://www.sec.gov/cgi-bin/browse-edgar?action...        True
4  http://www.sec.gov/cgi-bin/browse-edgar?action...        True
len(merged_test_X) = 17838
len(merged_test_y) = 17838
len(merged_test_labels) = 17838
{'AAPL': array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,  10,  11,  12,

        13,  14,  15,  16,  17,  18,  19,  20,  21,  22,  23,  24,  25,
        26,  27,  28,  29,  30,  31,  32,  33,  34,  35,  36,  37,  38,
        39,  40,  41,  42,  43,  44,  45,  46,  47,  48,  49,  50,  51,
        52,  53,  54,  55,  56,  57,  58,  59,  60,  61,  62,  63,  64,
        65,  66,  67,  68,  69,  70,  71,  72,  73,  74,  75,  76,  77,
        78,  79,  80,  81,  82,  83,  84,  85,  86,  87,  88,  89,  90,
        91,  92,  93,  94,  95,  96,  97,  98,  99, 100, 101, 102, 103,
       104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,
       117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,
       130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,
       143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,
       156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,
       169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,
       182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,
       195, 196, 197, 198]), 'GOOGL': array([199, 200, 201, 202, 203, 204, 205,
       206, 207, 208, 209, 210, 211,
       212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224,
       225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237,
       238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250,
       251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263,
       264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276,
       277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
       290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302,
       303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315,
       316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328,
       329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341,
       342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354,
       355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365]), 'GOOG': array([366,
       367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378,
       379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391,
       392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404,
       405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417,
       418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430,
       431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443,
       444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456,
       457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469,
       470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482,
       483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495,
       496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508,
       509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521,
       522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532]), 'MSFT': array([533,
       534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545,
       546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558,
       559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571,
       572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584,
       585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597,
       598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610,
       611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623,
       624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636,
       637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649,
       650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662,
       663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675,
       676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688,
       689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701,
       702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714,
       715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727,
       728, 729, 730, 731])}
Start training for stocks: ['AAPL', 'GOOGL', 'GOOG', 'MSFT', 'AMZN', 'FB', 'XOM',
'JNJ', 'JPM', 'WFC', 'BAC', 'GE', 'T', 'PG', 'WMT', 'CVX', 'V', 'PFE', 'VZ',
'MRK', 'KO', 'CMCSA', 'HD', 'DIS', 'ORCL', 'PM', 'CSCO', 'IBM', 'INTC', 'C',
'UNH', 'PEP', 'MO', 'AMGN', 'MA', 'MMM', 'MDT', 'BA', 'SLB', 'KHC', 'MCD', 'GS',
'ABBV', 'HON', 'CELG', 'BMY', 'NKE', 'USB', 'WBA', 'UPS', 'UTX', 'GILD', 'UNP',
'AVGO', 'RAI', 'LLY', 'CHTR', 'MS', 'CVS', 'PCLN', 'QCOM', 'SBUX', 'AGN', 'TXN',
'ABT', 'ACN', 'DOW', 'TWX', 'COST', 'AXP', 'LOW', 'DD', 'MDLZ', 'CL', 'CB',
'BLK', 'BIIB', 'AIG', 'PNC', 'TMO', 'NEE', 'NFLX', 'DHR', 'ADBE', 'COP', 'NVDA',
'CRM', 'MET', 'GD', 'EOG', 'DUK', 'FOXA', 'CAT', 'GM', 'FOX', 'SCHW', 'SPG',
'PYPL', 'TJX', 'FDX']
Traceback (most recent call last):
  File "main.py", line 112, in <module>
    tf.app.run()
  File "C:\GFApps\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py"
, line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "main.py", line 105, in main
    rnn_model.train(stock_data_list, FLAGS)
  File "C:\Users\tparmar\Documents\Python\predict stock market price using rnn\m
odel_rnn.py", line 209, in train
    for batch_X, batch_y in d_.generate_one_epoch(config.batch_size):
  File "C:\Users\tparmar\Documents\Python\predict stock market price using rnn\d
ata_model.py", line 65, in generate_one_epoch
    random.shuffle(batch_indices)
  File "C:\GFApps\Anaconda3\lib\random.py", line 274, in shuffle
    x[i], x[j] = x[j], x[i]
TypeError: 'range' object does not support item assignment

(C:\GFApps\Anaconda3) C:\Users\tparmar\Documents\Python\predict stock market price using rnn>
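In Python 3, range returns a lazy immutable sequence, and random.shuffle mutates its argument in place, hence the error. A fix sketch for generate_one_epoch in data_model.py (num_batches is an assumed local variable):

    # Materialize the range as a list so random.shuffle can reorder it.
    batch_indices = list(range(num_batches))
    random.shuffle(batch_indices)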

Problem With data_fetcher.py

Hi, I am receiving an "AttributeError: 'DataFrame' object has no attribute 'sort'". I tried changing it to sort_values and sort_index, but I cannot seem to get data_fetcher.py to work. Any help would be appreciated. Thank you.
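For what it's worth, the sort_values replacement needs the column passed via by= (a sketch based on the call shown in other reports here):

    # DataFrame.sort was removed in pandas 0.20; sort_values is the successor.
    df_sp500.sort_values(by='Market Cap', ascending=False, inplace=True)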

NameError: name 'xrange' is not defined

File "/content/drive/My Drive/stock-rnn-master/model_rnn.py", line 212, in train
for epoch in xrange(config.max_epoch):
NameError: name 'xrange' is not defined

Fabulous work

A great blog post and code write-up.

Thank you

A sequel would be most welcome!

AttributeError: 'DataFrame' object has no attribute 'sort'

Running python main.py gives this error:
Traceback (most recent call last):
  File "main.py", line 111, in <module>
    tf.app.run()
  File "/home/xxx/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "main.py", line 100, in main
    target_symbol=FLAGS.stock_symbol,
  File "main.py", line 58, in load_sp500
    info = info.sort('market_cap', ascending=False).reset_index(drop=True)
  File "/home/weilo/.local/lib/python2.7/site-packages/pandas/core/generic.py", line 5067, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'sort'

It seems that sort has been replaced by sort_values, but I'm not sure how to change the code to make it work.
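A fix sketch for main.py line 58, matching what the FutureWarning suggests:

    # sort_values is the drop-in replacement for the removed DataFrame.sort.
    info = info.sort_values('market_cap', ascending=False).reset_index(drop=True)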

name 'xrange' is not defined error

When running
python main.py --stock_symbol=SP500 --train --input_size=1 --lstm_size=128 --max_epoch=50
, I keep getting the error NameError: name 'xrange' is not defined.
How can I fix this?

Data is not downloading (fix)

Please change the URL in data_fetcher.py from

https://www.google.com/finance/historical?....
to
https://finance.google.com/finance/historical?....

Thanks
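In code, only the host in the URL template changes (a sketch; SYMBOL_URL is a hypothetical name, and the actual variable in data_fetcher.py may differ):

    # old host: www.google.com; new host: finance.google.com
    SYMBOL_URL = "https://finance.google.com/finance/historical?output=csv&q=%s"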

remove dropout for the test case?

Dear Lilian,
It seems that you apply keep_prob = 0.8 even for the test case...
Shouldn't we keep all neurons active once training is done?
Thanks!
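The standard TF1 pattern is to route keep_prob through a placeholder that defaults to 1.0, so dropout is active only when the training loop feeds 0.8. A sketch, with names that are assumptions rather than the repo's exact code:

    import tensorflow as tf

    # Dropout fires only when a feed overrides the default of 1.0.
    keep_prob = tf.placeholder_with_default(1.0, shape=[], name="keep_prob")
    cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)

    # Training step: sess.run(train_op, feed_dict={keep_prob: 0.8, ...})
    # Test step:     sess.run(pred, feed_dict={...})  # keep_prob defaults to 1.0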

Link to Image in Blog #2 is Broken

In the referenced blog post number 2:
https://lilianweng.github.io/lil-log/2017/07/22/predict-stock-prices-using-RNN-part-2.html#price-prediction
the PNG image with the following URL is not displayed:

/lil-log/assets/images/rnn_embedding_result.png
(Xpath: /html/body/main/div/article/div[1]/p[34]/img )

Text before broken link:
With a small input_size, the model does not need to worry about the long-term growth curve. Once we increase input_size, the prediction would be much harder.

Error "python data_fetcher.py"

/media/scw4750/个人文件/donghui/stock-rnn-master/data_fetcher.py:38: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)
  df_sp500.sort('Market Cap', ascending=False, inplace=True)
Loaded 505 stock symbols
Fetching AAPL ...
https://finance.google.com/finance/historical?output=csv&q=AAPL&startdate=Jan+1%2C+1980&enddate=Jan+10%2C+2019
Traceback (most recent call last):
  File "/media/scw4750/个人文件/donghui/stock-rnn-master/data_fetcher.py", line 111, in <module>
    main()
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/media/scw4750/个人文件/donghui/stock-rnn-master/data_fetcher.py", line 103, in main
    succeeded = fetch_prices(sym, out_name)
  File "/media/scw4750/个人文件/donghui/stock-rnn-master/data_fetcher.py", line 65, in fetch_prices
    f = urllib2.urlopen(symbol_url)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/urllib2.py", line 1241, in https_open
    context=self._context)
  File "/home/scw4750/anaconda2-caffe/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>

Process finished with exit code 1

InvalidArgumentError: Nan in summary histogram for: pred

Hi,
Dear Lilian, thanks for your code and tutorial. I'm running your code in a Python 3 environment with the following setup:

tensorflow==1.8.0
pandas==0.24.1
scikit-learn==0.17.1
scipy==1.1.0
numpy==1.15.2
requests==2.9.1

This command works:
/usr/bin/python3.5 main.py --stock_symbol=SP500 --train --input_size=1 --lstm_size=128 --max_epoch=50

But when I run the following command, it fails with the output below:
/usr/bin/python3.5 main.py --stock_count=100 --train --input_size=1 --lstm_size=128 --max_epoch=50 --embed_size=8

Start training for stocks: ['AAPL', 'GOOGL', 'GOOG', 'MSFT', 'AMZN', 'FB', 'JPM', 'JNJ', 'XOM', 'BAC', 'WMT', 'WFC', 'V', 'T', 'HD', 'CVX', 'UNH', 'INTC', 'PFE', 'VZ', 'PG', 'BA', 'ORCL', 'CSCO', 'C', 'KO', 'MA', 'CMCSA', 'ABBV', 'DWDP', 'PEP', 'DIS', 'PM', 'MRK', 'IBM', 'MMM', 'NVDA', 'GE', 'MCD', 'AMGN', 'MO', 'NFLX', 'HON', 'MDT', 'GILD', 'NKE', 'UTX', 'BMY', 'ABT', 'UNP', 'TXN', 'ACN', 'LMT', 'MS', 'GS', 'SLB', 'UPS', 'QCOM', 'ADBE', 'AVGO', 'CAT', 'USB', 'PYPL', 'KHC', 'CHTR', 'BLK', 'LLY', 'TMO', 'LOW', 'COST', 'AXP', 'CRM', 'SBUX', 'CVS', 'CELG', 'PNC', 'WBA', 'SCHW', 'NEE', 'BIIB', 'CB', 'FDX', 'DHR', 'FOX', 'MDLZ', 'COP', 'GD', 'CL', 'GM', 'ANTM', 'EOG', 'AMT', 'RTN', 'NOC', 'SYK', 'AGN', 'BK', 'ITW', 'CME', 'AIG']
Step:1 [Epoch:0] [Learning rate: 0.001000] train_loss:0.523090 test_loss:0.378728

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Nan in summary histogram for: pred
	 [[Node: pred = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](pred/tag, add/_65)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 114, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "main.py", line 107, in main
    rnn_model.train(stock_data_list, FLAGS)
  File "/home/myuser/PycharmProjects/project1/stock-rnn-master/model_rnn.py", line 229, in train
    [self.loss, self.optim, self.merged_sum], train_data_feed)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Nan in summary histogram for: pred
	 [[Node: pred = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](pred/tag, add/_65)]]

Caused by op 'pred', defined at:
  File "main.py", line 114, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "main.py", line 94, in main
    embed_size=FLAGS.embed_size,
  File "/home/myuser/PycharmProjects/project1/stock-rnn-master/model_rnn.py", line 53, in __init__
    self.build_graph()
  File "/home/myuser/PycharmProjects/project1/stock-rnn-master/model_rnn.py", line 119, in build_graph
    self.pred_summ = tf.summary.histogram("pred", self.pred)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/summary/summary.py", line 203, in histogram
    tag=tag, values=values, name=scope)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 283, in histogram_summary
    "HistogramSummary", tag=tag, values=values, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Nan in summary histogram for: pred
	 [[Node: pred = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](pred/tag, add/_65)]]

Could you please help me with this issue? Is it related to my dataset?
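One thing worth checking (an assumption about the cause, not a confirmed diagnosis): NaNs or non-positive prices in a downloaded CSV propagate through the ratio-based normalization and end up in the pred tensor, which is exactly what the histogram summary rejects. A quick sanity check over the data directory:

    import glob

    import pandas as pd

    # Scan every CSV for NaNs and non-positive closes, either of which can
    # turn the normalized sequence (and hence the predictions) into NaN.
    for path in glob.glob("data/*.csv"):
        df = pd.read_csv(path)
        bad = df["Close"].isna().sum() + (df["Close"] <= 0).sum()
        if bad:
            print(path, "has", bad, "suspect Close values")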

Unable to install pandas==0.16.2

I have configured a virtual environment with Python 2.7 and activated it.

Following your suggested environment, I installed Python 2.7 alongside the existing Python 3.10.7 on Ubuntu 22.10.
BeautifulSoup==3.2.1 installed fine.
numpy==1.13.1 installed fine.

But when I try to install pandas==0.16.2, I get a very long red error message that first asked me to install Cython. So I ran pip install Cython, then tried pip install pandas==0.16.2 again, but got the same huge red error message in the terminal; the last few lines are:

ERROR: Command errored out with exit status 1: /home/girish/py27/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_5KVfS/pandas/setup.py'"'"'; file='"'"'/tmp/pip-install-_5KVfS/pandas/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-LfcnSx/install-record.txt --single-version-externally-managed --compile --install-headers /home/girish/py27/include/site/python2.7/pandas Check the logs for full command output.

Kindly help me: how do I proceed further to test your code?

Session run is gray in TensorBoard graph, and device is unknown

Hi,
The py3 branch works fine on my PC; I use Ubuntu 18.04, Python 3.5, and TensorFlow 1.10 with a single Nvidia 1070 video card.
I see my GPU usage is around 30% while most of the video card memory is occupied during training, so I'd like to see if there's room to improve performance, and went to TensorBoard.

But the device shows as unknown when I check it in TensorBoard -> Graph, and I also cannot see compute time.
Could you please let me know any tips to fix this?
Thanks a lot.
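For TensorBoard to show device placement and compute time, the run that writes summaries has to record RunMetadata (standard TF1 profiling; the variable names below are assumptions about the training loop):

    import tensorflow as tf

    # Trace one training step and attach the metadata to the summary writer;
    # the graph view can then color nodes by device and by compute time.
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    _, summary = sess.run([model.optim, model.merged_sum], feed_dict=train_data_feed,
                          options=run_options, run_metadata=run_metadata)
    writer.add_run_metadata(run_metadata, "step%d" % global_step)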


Prediction

Hi,
First, thanks for your code and tutorial, it is really interesting! I learned a lot.

Second, I have a question, but I feel pretty stupid asking it since I get the impression I'm missing the obvious :/
I trained my AI with the full stock_count (100), and now I would like to test how it predicts the price of GOOG.

So I did:

# python main.py --stock_symbol=GOOG --input_size=1 --lstm_size=128 --embed_size=8
{'batch_size': 64,
 'embed_size': 8,
 'init_epoch': 5,
 'init_learning_rate': 0.001,
 'input_size': 1,
 'keep_prob': 0.8,
 'learning_rate_decay': 0.99,
 'lstm_size': 128,
 'max_epoch': 50,
 'num_layers': 1,
 'num_steps': 30,
 'sample_size': 4,
 'stock_count': 100,
 'stock_symbol': 'GOOG',
 'train': False}
2018-02-14 14:47:55.716691: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-14 14:47:55.716715: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
inputs.shape: (?, 30, 1)
inputs_with_embed.shape: (?, 30, 9)
/home/mike/anaconda3/envs/IA/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:95: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
---------
Variables: name (type shape) [size]
---------
embed_matrix:0 (float32_ref 100x8) [800, bytes: 3200]
dynamic_rnn/lstm_cell/kernel:0 (float32_ref 137x512) [70144, bytes: 280576]
dynamic_rnn/lstm_cell/bias:0 (float32_ref 512) [512, bytes: 2048]
w:0 (float32_ref 128x1) [128, bytes: 512]
b:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 71585
Total bytes of variables: 286340
 [*] Reading checkpoints...
 [*] Success to read stock_rnn_lstm128_step30_input1_embed8.model-218450

So it's working, but where can I get the prediction? It writes no images and prints nothing.

Thanks again for your help. I have some ideas for improving the project, but I'm stuck at this point :(

Best

Embedding configured incorrectly

self.inputs_with_embed holds the embeddings concatenated with self.inputs. However, the embeddings are never used, because the RNN is still fed self.inputs:

val, state_ = tf.nn.dynamic_rnn(cell, self.inputs, dtype=tf.float32, scope="dynamic_rnn")
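If that reading is right, the one-line fix sketch is to feed the concatenated tensor to the RNN instead:

    # Use the embedding-augmented inputs so the stock identity actually
    # influences the recurrence.
    val, state_ = tf.nn.dynamic_rnn(cell, self.inputs_with_embed,
                                    dtype=tf.float32, scope="dynamic_rnn")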

Unable to download .csv

Sorry! I'm stuck on the first step: downloading the dataset.
As you can see in the screenshot, I went to Yahoo Finance and applied the maximum date range, but I can't find where to download the dataset.
[Screenshot of the Yahoo Finance page, 2023-02-03]

Someone said I could download the .csv under "Apply", but I didn't find it.
Please help me: how should I download the relevant dataset?
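If the download button stays elusive, one workaround (assuming a third-party package is acceptable) is to fetch the same data programmatically with yfinance:

    import yfinance as yf  # third-party package, not part of this repo

    # Full ^GSPC history, oldest rows first, saved where data_model.py expects it.
    df = yf.download("^GSPC", period="max")
    df.to_csv("data/SP500.csv")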

New complementary tool

My name is Luis. I'm a big-data machine-learning developer, I'm a fan of your work, and I usually check your updates.

I was afraid that my savings would be eaten by inflation, so I created a powerful tool based on past technical patterns (volatility, moving averages, statistics, trends, candlesticks, support and resistance, stock index indicators):
all the ones you know (RSI, MACD, STOCH, Bollinger Bands, SMA, DEMARK, Japanese candlesticks, Ichimoku, Fibonacci, Williams %R, balance of power, Murrey math, etc.) and more than 200 others.

The tool creates prediction models of correct trading points (buy signals and sell signals, so every stock is traded at the right time and in the right direction).
For this I used big-data tools like pandas, and stock-market libraries like tablib, TAcharts, and pandas_ta for data collection and calculation,
along with powerful machine-learning libraries such as sklearn RandomForest, sklearn GradientBoosting, XGBoost, Google TensorFlow, and Google TensorFlow LSTM.

With models trained on a selection of the best technical indicators, the tool is able to predict trading points (where to buy, where to sell) and send real-time alerts to Telegram or mail. The points are calculated from the correct trading points of the last two years (including the change to a bear market after the rate hike).

I think it could be useful to you. I would like to give it to you to improve on; if you are interested in improving it and collaborating, I am willing as well, and if not, I will file it away in the drawer.
