Kernel: Python 3 (Anaconda 5)
In [1]:
In [2]:
In [3]:
In [4]:
In [5]:
Iteration 1, loss = 1.77063951
Iteration 2, loss = 4.72908520
Iteration 3, loss = 8.82222942
Iteration 4, loss = 4.06593009
Iteration 5, loss = 3.73400523
Iteration 6, loss = 1.83942540
Iteration 7, loss = 4.51964198
Iteration 8, loss = 8.41381569
Iteration 9, loss = 2.61646206
Iteration 10, loss = 7.36803433
Iteration 11, loss = 4.00347971
Iteration 12, loss = 6.36657009
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Training set score: 0.311321
Test set score: 0.361111
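The input cells did not survive this export, so the code that produced the trace above is unknown. The shape of the output (per-epoch loss lines, an early stop after 10 epochs without a `tol` improvement, then train/test scores) matches scikit-learn's `MLPClassifier` with `verbose=True`. A minimal sketch that reproduces a log of this shape; the dataset, layer sizes, and learning rate are guesses, not the notebook's actual values (the oscillating loss suggests a large SGD learning rate, and the initial loss near ln 6 ≈ 1.79 hints at roughly six near-balanced classes):

```python
# Hypothetical reconstruction -- the notebook's input cells were stripped,
# so the data, architecture, and hyperparameters below are assumptions,
# not the original values.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in dataset; the log's initial loss near ln(6) ~= 1.79
# is what a near-uniform initial prediction over ~6 classes would give.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A large SGD learning rate makes the loss bounce around as in the log;
# training stops once the loss fails to improve by more than tol for
# n_iter_no_change (default 10) consecutive epochs, printing the same
# "Training loss did not improve more than tol=..." message.
mlp = MLPClassifier(hidden_layer_sizes=(50,), solver="sgd",
                    learning_rate_init=0.5, tol=1e-4,
                    verbose=True, random_state=0)
mlp.fit(X_train, y_train)

print("Training set score: %f" % mlp.score(X_train, y_train))
print("Test set score: %f" % mlp.score(X_test, y_test))
```

With a learning rate this large the optimizer never settles, so the train and test scores stay poor, consistent with the 0.31/0.36 reported above; lowering `learning_rate_init` (or using the default `adam` solver) would give a monotonically decreasing loss instead.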
In [6]:
Iteration 1, loss = 1.77827784
Iteration 2, loss = 4.49834634
Iteration 3, loss = 7.88765912
Iteration 4, loss = 4.87924201
Iteration 5, loss = 3.22955851
Iteration 6, loss = 1.62954340
Iteration 7, loss = 4.07154797
Iteration 8, loss = 5.98427946
Iteration 9, loss = 3.50495470
Iteration 10, loss = 7.54515409
Iteration 11, loss = 3.08189821
Iteration 12, loss = 4.99134003
Iteration 13, loss = 6.64960888
Iteration 14, loss = 5.12671293
Iteration 15, loss = 3.80739780
Iteration 16, loss = 4.33422323
Iteration 17, loss = 8.42199232
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.77811957
Iteration 2, loss = 4.50774811
Iteration 3, loss = 8.08168985
Iteration 4, loss = 4.74776162
Iteration 5, loss = 3.21485353
Iteration 6, loss = 1.17258761
Iteration 7, loss = 2.25260075
Iteration 8, loss = 3.42530253
Iteration 9, loss = 4.38089696
Iteration 10, loss = 6.32984260
Iteration 11, loss = 6.99113039
Iteration 12, loss = 1.11574699
Iteration 13, loss = 1.90349104
Iteration 14, loss = 4.03414711
Iteration 15, loss = 6.16191850
Iteration 16, loss = 3.28913416
Iteration 17, loss = 3.38997080
Iteration 18, loss = 7.00729300
Iteration 19, loss = 4.27808681
Iteration 20, loss = 6.45707760
Iteration 21, loss = 5.37582966
Iteration 22, loss = 4.95559667
Iteration 23, loss = 6.50938800
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.77927287
Iteration 2, loss = 4.51453990
Iteration 3, loss = 8.02414092
Iteration 4, loss = 4.88914851
Iteration 5, loss = 3.08129918
Iteration 6, loss = 1.25561476
Iteration 7, loss = 2.99227798
Iteration 8, loss = 2.00530729
Iteration 9, loss = 4.08568609
Iteration 10, loss = 1.35636646
Iteration 11, loss = 4.89027590
Iteration 12, loss = 6.48944408
Iteration 13, loss = 3.28391871
Iteration 14, loss = 6.15419505
Iteration 15, loss = 3.08845345
Iteration 16, loss = 4.95875278
Iteration 17, loss = 7.77114112
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.77538287
Iteration 2, loss = 4.54116333
Iteration 3, loss = 7.96360307
Iteration 4, loss = 4.78472492
Iteration 5, loss = 3.42026560
Iteration 6, loss = 3.01269253
Iteration 7, loss = 6.82060334
Iteration 8, loss = 7.74930549
Iteration 9, loss = 4.09840623
Iteration 10, loss = 6.78508555
Iteration 11, loss = 4.10933443
Iteration 12, loss = 5.39328539
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.77517944
Iteration 2, loss = 4.50908779
Iteration 3, loss = 7.72731205
Iteration 4, loss = 5.11599799
Iteration 5, loss = 3.12931300
Iteration 6, loss = 2.38465222
Iteration 7, loss = 5.79664182
Iteration 8, loss = 8.41571955
Iteration 9, loss = 2.08377343
Iteration 10, loss = 6.10264768
Iteration 11, loss = 4.48537691
Iteration 12, loss = 6.80714484
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
In [7]:
[0.32432432 0.33333333 0.33333333 0.34285714 0.32352941]
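Five back-to-back training logs in `In [6]` followed by a five-element score array in `In [7]` are consistent with 5-fold cross-validation, which fits one model per fold. A sketch under the same assumed setup as above (all names and parameters are assumptions):

```python
# Hypothetical reconstruction: one MLP fit per fold, then the array of
# per-fold accuracies, matching the In [6]/In [7] outputs. The dataset
# and hyperparameters are assumptions, not the notebook's actual values.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=6, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(50,), solver="sgd",
                    learning_rate_init=0.5, tol=1e-4, random_state=0)

# cv=5 -> five separate fits; with verbose=True each fit would emit its own
# iteration log, which is why five stopping messages appear back to back.
scores = cross_val_score(mlp, X, y, cv=5)
print(scores)
```

The printed per-fold scores cluster tightly around 0.32-0.34, in line with the weak train/test scores earlier: the unstable optimization generalizes equally poorly on every fold.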
In [8]:
In [9]:
In [10]: