@@ -199,7 +199,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 16,
+"execution_count": 6,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/",
@@ -251,7 +251,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 17,
+"execution_count": 8,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/",
@@ -538,7 +538,6 @@
 "- Trying different combinations of data augmentation layers.\n",
 "- Increasing the number of filters, trying different sizes, different strides, and number of convolutional layers.\n",
 "- Adding more fully-connected layers with a different number of units.\n",
-"- Adding batch normalization to all conv layers. Batch normalization before the max pooling layer resulted in a drop in performance so it was moved to after the max pooling layer.\n",
 "- Removing batch normalization layers after the fully-connected layers and replacing them with dropout layers. Before this change, the model's performance was a bit erratic.\n",
 "- Reducing the learning rate helped the model to converge.\n",
 "- Adding an early stopping callback that would stop the training if the validation loss didn't decrease for three consecutive epochs. Using the `restore_best_weights` argument ensured that the model would use weights from the epoch with the lowest validation loss only."
@@ -819,4 +818,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}
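
The early stopping callback described in the notebook's tuning notes can be sketched roughly as follows (a minimal illustration, assuming TensorFlow/Keras; the notebook's actual model and training call are not shown in this diff):

```python
import tensorflow as tf

# Sketch of the callback the notes describe: stop training if the
# validation loss fails to improve for three consecutive epochs, and
# restore the weights from the epoch with the lowest validation loss.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
    restore_best_weights=True,
)

# Hypothetical usage — `model`, `train_ds`, and `val_ds` stand in for
# whatever the notebook actually defines:
# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           callbacks=[early_stop])
```

Without `restore_best_weights=True`, the model keeps the weights from the final (stopped) epoch, which may be up to `patience` epochs past the best validation loss.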