Week #29
27 May 2019

From the results of the ResNet18 training, I find that the network is not capable of learning all the features, so training with ResNet101 is performed on several datasets. Parameter size: 27,542,543.
| Group Id | Iteration | Loss  | Batch size |
|----------|-----------|-------|------------|
| 0        | 200       | 0.02  | 4          |
| 1        | 200       | 0.08  | 4          |
| 2        | 200       | 0.13  | 4          |
| 3        | 470       | 0.135 | 4          |
| 4        | 450       | 0.122 | 4          |
| 5        | 450       | 0.119 | 4          |
For QuickNat, by contrast, the results are as follows (parameter size: 3,515,346):

| Group Id | Iteration | Loss   | Batch size |
|----------|-----------|--------|------------|
| 0        | 999       | 0.0005 | 4          |
| 3        | 999       | 0.018  | 4          |
| 4        | 999       | 0.012  | 4          |
| 5        | 999       | 0.011  | 4          |
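As a quick sanity check, parameter sizes like the two quoted above can be computed with a short PyTorch helper. A minimal sketch (`count_parameters` is an illustrative name of my own; the stock torchvision model in the usage line is not the modified network trained here, so its count will differ):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Sum the element counts of all trainable parameter tensors.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Usage example with the unmodified torchvision ResNet101 backbone;
# the modified networks in these experiments will report different counts.
import torchvision.models as models
print(count_parameters(models.resnet101()))
```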
After discussing with Prof, I need to find out more about the theory of why ResNet does not perform well here, rather than focusing only on the loss values.
So for week 30, the main goal is to strengthen my grasp of the basic theory of neural networks and to be clear about every parameter in the network configuration; keeping track of the accuracy of each experiment is another important task.
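As a starting point for that tracking, a minimal sketch of a validation pass in PyTorch. The names `model` and `loader` are placeholders, and pixel-wise accuracy is only an assumed metric for illustration; for segmentation a per-class Dice score would likely be more informative:

```python
import torch

@torch.no_grad()
def pixel_accuracy(model, loader, device="cuda"):
    # Fraction of correctly classified pixels over a validation loader.
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)  # predicted class index per pixel
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

Logging this value together with the group id, iteration, and loss after each evaluation would make the runs directly comparable across networks.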