The results seem decent enough.
Use a configuration as follows (a rough PyTorch sketch of the equivalent network appears after the listing):
Input 32 32 3 0
Conv 32 3 1 1 0.1
Pool 2 2
Conv 64 3 1 1 0.1
Pool 2 2
Full 72 0.2
Full 72 0.2
Output 10
LearningRate 0.0004
RegularizationFactor 0.003
MaxNorm 1e4
BatchSize 150
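
For illustration only, here is a rough PyTorch equivalent of the network the configuration above describes. The reading of the Conv fields as filter count, kernel size, stride, padding, and dropout rate is my assumption about the config format, and the class name, ReLU activations, and SGD optimizer are likewise assumptions rather than details of the original library.

import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Input 32 32 3, two Conv+Pool stages as in the listing.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),   # Conv 32 3 1 1
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.MaxPool2d(2, 2),                                      # Pool 2 2
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),   # Conv 64 3 1 1
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.MaxPool2d(2, 2),                                      # Pool 2 2
        )
        # Two fully connected layers of 72 units, then a 10-way output.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 72),   # Full 72, 32x32 input halved twice -> 8x8
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(72, 72),           # Full 72
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(72, num_classes),  # Output 10
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallConvNet()
# LearningRate 0.0004 and RegularizationFactor 0.003 mapped to lr and weight decay.
optimizer = torch.optim.SGD(model.parameters(), lr=0.0004, weight_decay=0.003)

The max-norm constraint and batch size of 150 from the listing would be applied in the training loop rather than in the model definition.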
I can obtain a correct-classification ratio of about 70% after days of training. If my library supported GPU acceleration, it would take hours instead of days. But I will stop enhancing the library at this point.