Overall accuracy: 0.7950
Baseline accuracy: 0.8484
Best validation accuracy: 0.8640
Input, hidden, and output layers
Type: "sequential"
| Layer (type) | Output Shape | Param # |
| --- | --- | --- |
| flatten (Flatten) | (None, 784) | 0 |
| dense (Dense) | (None, 256) | 200,960 |
| dense_1 (Dense) | (None, 128) | 32,896 |
| dropout (Dropout) | (None, 128) | 0 |
| dense_2 (Dense) | (None, 10) | 1,290 |
Total params: 235,146
Trainable params: 235,146
Non-trainable params: 0
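The summary above can be reconstructed as the following Keras model. This is a minimal sketch: the activations and the dropout rate are not recoverable from the summary, so the ReLU/softmax choices and the 0.2 rate shown here are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Reconstruction of the summarized architecture for 28x28 grayscale inputs.
# Activations and the dropout rate are placeholders, not taken from the summary.
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    layers.Flatten(),                        # (None, 784), 0 params
    layers.Dense(256, activation="relu"),    # 784*256 + 256 = 200,960 params
    layers.Dense(128, activation="relu"),    # 256*128 + 128 = 32,896 params
    layers.Dropout(0.2),                     # 0 params
    layers.Dense(10, activation="softmax"),  # 128*10 + 10 = 1,290 params
])
model.summary()  # Total params: 235,146
```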
Optimizer: Adam (a stochastic gradient descent variant)
Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments.
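To make "adaptive estimation of first-order and second-order moments" concrete, here is a minimal NumPy sketch of a single Adam update; the default hyperparameters follow the usual Keras values, and `adam_step` is an illustrative helper, not a library function.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    """One Adam update: exponential moving averages of the gradient
    (first moment) and squared gradient (second moment), bias-corrected."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```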
Loss: sparse categorical crossentropy, appropriate here because the Fashion-MNIST labels are integer class indices (0-9) rather than one-hot vectors.
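Putting the optimizer and loss together, a minimal training sketch assuming the `model` defined above; the epoch count and validation split are illustrative, not taken from the report.

```python
from tensorflow import keras

# Fashion-MNIST ships with integer labels, which is exactly what
# sparse categorical crossentropy expects -- no one-hot encoding needed.
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=10, validation_split=0.1)  # illustrative
model.evaluate(x_test, y_test)
```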
Xiao, H. et al. Fashion-MNIST Dataset. GitHub; 2017.
Sampaio, C. Deep Learning. AOVS Sistemas de Informática S.A; 2019.
Chollet, F. et al. Keras. GitHub; 2015.