As you can see, after every model save the loss suddenly jumps back up. That can't be normal, can it?
model saved to save/model.ckpt
14111/26880 (epoch 41), train_loss = 5.211, time/batch = 0.103
14447/26880 (epoch 42), train_loss = 5.204, time/batch = 0.115
14783/26880 (epoch 43), train_loss = 5.198, time/batch = 0.104
15000/26880 (epoch 44), train_loss = 4.577, time/batch = 0.067
model saved to save/model.ckpt
15119/26880 (epoch 44), train_loss = 5.192, time/batch = 0.105
15455/26880 (epoch 45), train_loss = 5.187, time/batch = 0.105
15791/26880 (epoch 46), train_loss = 5.182, time/batch = 0.107
16000/26880 (epoch 47), train_loss = 4.723, time/batch = 0.071
model saved to save/model.ckpt
16127/26880 (epoch 47), train_loss = 5.176, time/batch = 0.107
16463/26880 (epoch 48), train_loss = 5.172, time/batch = 0.109
16799/26880 (epoch 49), train_loss = 5.167, time/batch = 0.108
17000/26880 (epoch 50), train_loss = 4.570, time/batch = 0.071
model saved to save/model.ckpt
17135/26880 (epoch 50), train_loss = 5.163, time/batch = 0.108
17471/26880 (epoch 51), train_loss = 5.158, time/batch = 0.108
17807/26880 (epoch 52), train_loss = 5.154, time/batch = 0.108
18000/26880 (epoch 53), train_loss = 4.672, time/batch = 0.065
model saved to save/model.ckpt
18143/26880 (epoch 53), train_loss = 5.150, time/batch = 0.106
18479/26880 (epoch 54), train_loss = 5.146, time/batch = 0.108
18815/26880 (epoch 55), train_loss = 5.142, time/batch = 0.107
19000/26880 (epoch 56), train_loss = 4.250, time/batch = 0.052
model saved to save/model.ckpt
19151/26880 (epoch 56), train_loss = 5.139, time/batch = 0.107