Update training options

Project: Lotr

Generated ID: 3c5d90b0-0b43-11e9-9a2c-69b78d60ebdb
Name: Lotr
Created at: Sat Dec 29 2018 08:25:01 GMT+0000 (Coordinated Universal Time)
Updated at: Sat Dec 29 2018 10:10:50 GMT+0000 (Coordinated Universal Time)
Has training data: 1
Training in progress: 0
Contains trained model: 1
Corpus size: 50068
Using 74 unique characters of this corpus:
[' ', 'e', 'a', 'i', 'o', 'n', 'r', 't', 'l', 's', 'c', 'd', 'u', 'p', 'm', 'v', 'g', ',', 'b', '.', 'f', 'h', 'z', '’', '\n', '«', '»', 'B', 'q', 'ò', 'S', 'G', 'C', 'A', 'I', 'E', 'L', ';', 'à', 'N', '!', 'è', 'H', 'T', 'ì', 'M', ':', 'F', 'P', 'ù', 'D', 'V', 'é', 'O', '*', '?', 'R', ')', '(', 'Q', 'k', 'U', '-', '…', '“', '\t', '1', '3', '”', '2', 'y', 'K', 'Z', '\ufeff']
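For context, a minimal sketch in Python of how a character vocabulary like the one above can be built and used to encode a corpus for a char-level RNN. The file name and function are hypothetical, not the project's actual code, and the descending-frequency ordering is an assumption based on the list above, which starts with the most common characters (' ', 'e', 'a', ...):

from collections import Counter

def build_vocab(text):
    # Order characters by descending frequency, matching the apparent
    # ordering of the list above (assumption, not confirmed by the source).
    chars = [c for c, _ in Counter(text).most_common()]
    char_to_id = {c: i for i, c in enumerate(chars)}
    return chars, char_to_id

with open('lotr.txt', encoding='utf-8') as f:  # hypothetical corpus file
    corpus = f.read()

chars, char_to_id = build_vocab(corpus)
print('corpus size', len(corpus))
print('using %d unique characters of this corpus:' % len(chars))
print(chars)

encoded = [char_to_id[c] for c in corpus]  # integer IDs fed to the model

Encoding the corpus this way reproduces the two summary lines printed above.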
Error (informational only: TensorFlow logs this message at level I, and it does not affect the run; it merely notes that a TensorFlow build compiled with AVX2/FMA support could run faster on this CPU):
2018-12-29 08:32:40.669784: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
step: 100/10000...  loss: 3.0079...  0.5830 sec/batch
step: 200/10000...  loss: 2.5551...  0.5815 sec/batch
step: 300/10000...  loss: 2.4239...  0.5859 sec/batch
step: 400/10000...  loss: 2.3249...  0.5856 sec/batch
step: 500/10000...  loss: 2.2446...  0.5846 sec/batch
step: 600/10000...  loss: 2.1535...  0.5892 sec/batch
step: 700/10000...  loss: 2.1029...  0.5823 sec/batch
step: 800/10000...  loss: 2.0853...  0.5870 sec/batch
step: 900/10000...  loss: 2.0472...  0.5849 sec/batch
step: 1000/10000...  loss: 1.9953...  0.5864 sec/batch
step: 1100/10000...  loss: 1.9871...  0.5936 sec/batch
step: 1200/10000...  loss: 1.9212...  0.5827 sec/batch
step: 1300/10000...  loss: 1.8558...  0.5879 sec/batch
step: 1400/10000...  loss: 1.8417...  0.5831 sec/batch
step: 1500/10000...  loss: 1.8285...  0.5799 sec/batch
step: 1600/10000...  loss: 1.8368...  0.5836 sec/batch
step: 1700/10000...  loss: 1.7767...  0.5845 sec/batch
step: 1800/10000...  loss: 1.8546...  0.5871 sec/batch
step: 1900/10000...  loss: 1.7110...  0.6367 sec/batch
step: 2000/10000...  loss: 1.6603...  0.5893 sec/batch
step: 2100/10000...  loss: 1.6941...  0.5806 sec/batch
step: 2200/10000...  loss: 1.6515...  0.5817 sec/batch
step: 2300/10000...  loss: 1.6839...  0.5848 sec/batch
step: 2400/10000...  loss: 1.6275...  0.5815 sec/batch
step: 2500/10000...  loss: 1.7299...  0.5836 sec/batch
step: 2600/10000...  loss: 1.5737...  0.5851 sec/batch
step: 2700/10000...  loss: 1.5521...  0.5881 sec/batch
step: 2800/10000...  loss: 1.5645...  0.5953 sec/batch
step: 2900/10000...  loss: 1.5575...  0.5986 sec/batch
step: 3000/10000...  loss: 1.5866...  0.5902 sec/batch
step: 3100/10000...  loss: 1.5304...  0.5946 sec/batch
step: 3200/10000...  loss: 1.6890...  0.5930 sec/batch
step: 3300/10000...  loss: 1.4828...  0.6022 sec/batch
step: 3400/10000...  loss: 1.4628...  0.5810 sec/batch
step: 3500/10000...  loss: 1.4841...  0.5850 sec/batch
step: 3600/10000...  loss: 1.4526...  0.6035 sec/batch
step: 3700/10000...  loss: 1.4929...  0.5828 sec/batch
step: 3800/10000...  loss: 1.4564...  0.5828 sec/batch
step: 3900/10000...  loss: 1.6156...  0.5901 sec/batch
step: 4000/10000...  loss: 1.4299...  0.5837 sec/batch
step: 4100/10000...  loss: 1.3909...  0.5845 sec/batch
step: 4200/10000...  loss: 1.4175...  0.5801 sec/batch
step: 4300/10000...  loss: 1.4012...  0.5826 sec/batch
step: 4400/10000...  loss: 1.4420...  0.5840 sec/batch
step: 4500/10000...  loss: 1.4019...  0.5815 sec/batch
step: 4600/10000...  loss: 1.5961...  0.5993 sec/batch
step: 4700/10000...  loss: 1.3648...  0.5821 sec/batch
step: 4800/10000...  loss: 1.3575...  0.5824 sec/batch
step: 4900/10000...  loss: 1.3705...  0.5869 sec/batch
step: 5000/10000...  loss: 1.3514...  0.5847 sec/batch
step: 5100/10000...  loss: 1.3851...  0.5882 sec/batch
step: 5200/10000...  loss: 1.3693...  0.5881 sec/batch
step: 5300/10000...  loss: 1.5576...  0.5837 sec/batch
step: 5400/10000...  loss: 1.3401...  0.5832 sec/batch
step: 5500/10000...  loss: 1.3124...  0.5854 sec/batch
step: 5600/10000...  loss: 1.3335...  0.5977 sec/batch
step: 5700/10000...  loss: 1.3047...  0.5866 sec/batch
step: 5800/10000...  loss: 1.3487...  0.5855 sec/batch
step: 5900/10000...  loss: 1.3131...  0.5878 sec/batch
step: 6000/10000...  loss: 1.5459...  0.5959 sec/batch
step: 6100/10000...  loss: 1.2932...  0.5847 sec/batch
step: 6200/10000...  loss: 1.3012...  0.5880 sec/batch
step: 6300/10000...  loss: 1.3105...  0.5848 sec/batch
step: 6400/10000...  loss: 1.2806...  0.5888 sec/batch
step: 6500/10000...  loss: 1.3062...  0.5809 sec/batch
step: 6600/10000...  loss: 1.2844...  0.6026 sec/batch
step: 6700/10000...  loss: 1.5346...  0.5986 sec/batch
step: 6800/10000...  loss: 1.2511...  0.5837 sec/batch
step: 6900/10000...  loss: 1.2415...  0.5878 sec/batch
step: 7000/10000...  loss: 1.2824...  0.5936 sec/batch
step: 7100/10000...  loss: 1.2537...  0.5848 sec/batch
step: 7200/10000...  loss: 1.2814...  0.5830 sec/batch
step: 7300/10000...  loss: 1.2607...  0.5919 sec/batch
step: 7400/10000...  loss: 1.5070...  0.5926 sec/batch
step: 7500/10000...  loss: 1.2172...  0.5842 sec/batch
step: 7600/10000...  loss: 1.2171...  0.5837 sec/batch
step: 7700/10000...  loss: 1.2609...  0.5874 sec/batch
step: 7800/10000...  loss: 1.2271...  0.5891 sec/batch
step: 7900/10000...  loss: 1.2623...  0.5894 sec/batch
step: 8000/10000...  loss: 1.2160...  0.5888 sec/batch
step: 8100/10000...  loss: 1.4781...  0.5883 sec/batch
step: 8200/10000...  loss: 1.2083...  0.5985 sec/batch
step: 8300/10000...  loss: 1.2000...  0.5865 sec/batch
step: 8400/10000...  loss: 1.2365...  0.5909 sec/batch
step: 8500/10000...  loss: 1.1937...  0.5823 sec/batch
step: 8600/10000...  loss: 1.2518...  0.5908 sec/batch
step: 8700/10000...  loss: 1.2078...  0.5880 sec/batch
step: 8800/10000...  loss: 1.4573...  0.5897 sec/batch
step: 8900/10000...  loss: 1.2079...  0.6131 sec/batch
step: 9000/10000...  loss: 1.1830...  0.5883 sec/batch
step: 9100/10000...  loss: 1.1879...  0.5844 sec/batch
step: 9200/10000...  loss: 1.1654...  0.5826 sec/batch
step: 9300/10000...  loss: 1.2170...  0.5869 sec/batch
step: 9400/10000...  loss: 1.1999...  0.5844 sec/batch
step: 9500/10000...  loss: 1.4304...  0.5874 sec/batch
step: 9600/10000...  loss: 1.1861...  0.5823 sec/batch
step: 9700/10000...  loss: 1.1695...  0.5877 sec/batch
step: 9800/10000...  loss: 1.1899...  0.5841 sec/batch
step: 9900/10000...  loss: 1.1554...  0.5874 sec/batch
step: 10000/10000...  loss: 1.2034...  0.5848 sec/batch
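Over the full run the loss falls from about 3.01 at step 100 to roughly 1.16 to 1.20 by step 10000, at a steady ~0.58 sec/batch on CPU. Note the spikes that recur every 700 steps (2500, 3200, 3900, ..., 9500); the log alone does not reveal their cause, though periodic structure in how batches cycle through the corpus is a plausible source. A small sketch for extracting and plotting the curve, assuming the output above was saved to a hypothetical training.log:

import re
import matplotlib.pyplot as plt

# Matches lines like: step: 100/10000...  loss: 3.0079...  0.5830 sec/batch
LINE = re.compile(r'step:\s*(\d+)/\d+\.+\s+loss:\s*(\d+\.\d+)\.+\s+\d+\.\d+\s+sec/batch')

steps, losses = [], []
with open('training.log', encoding='utf-8') as f:  # hypothetical file name
    for line in f:
        m = LINE.search(line)
        if m:
            steps.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(steps, losses)
plt.xlabel('step')
plt.ylabel('training loss')
plt.savefig('loss_curve.png')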
Training options:
{
  "num_seqs": 128,
  "max_steps": 10000,
  "use_embedding": false
}
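These option names (num_seqs, max_steps, use_embedding) and the step/loss/sec-per-batch log format match common char-RNN trainers such as hzy46/Char-RNN-TensorFlow. Assuming that convention, num_seqs is the number of parallel sequences per batch, and "use_embedding": false means the model is fed one-hot character vectors rather than a learned embedding. A minimal sketch of how these options could shape the input pipeline; NUM_STEPS is assumed (it is not among the options shown), and the TF 1.x API matches the 2018-era log above:

import numpy as np
import tensorflow as tf

NUM_SEQS = 128    # from the options above: parallel sequences per batch
NUM_STEPS = 50    # assumed window length; not shown in the options
VOCAB_SIZE = 74   # unique characters reported for this corpus

def batch_generator(encoded, num_seqs, num_steps):
    # Split the integer-encoded corpus into num_seqs parallel streams
    # and yield (input, target) windows of num_steps characters.
    arr = np.array(encoded)
    batch_chars = num_seqs * num_steps
    n_batches = len(arr) // batch_chars
    arr = arr[: n_batches * batch_chars].reshape((num_seqs, -1))
    while True:
        for i in range(0, arr.shape[1], num_steps):
            x = arr[:, i:i + num_steps]
            y = np.roll(x, -1, axis=1)  # target = next character (wraps at the window edge)
            yield x, y

# With "use_embedding": false, inputs are one-hot encoded instead of
# passed through an embedding lookup:
inputs = tf.placeholder(tf.int32, [NUM_SEQS, NUM_STEPS])
one_hot_inputs = tf.one_hot(inputs, VOCAB_SIZE)  # shape (128, 50, 74)

With a 50068-character corpus and these assumed batch shapes, one pass over the data takes only a few steps (50068 // 6400 = 7), so "max_steps": 10000 corresponds to many passes over the text.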