
Notes on "Deep Learning with Python" --- 5.2-4: Cats vs. Dogs Classification (Data Augmentation)

I. Summary

In one sentence:

Compared with the earlier baseline model, data augmentation only adds a few augmentation parameters to the ImageDataGenerator used for train_datagen.
As the results show, data augmentation works very well: validation accuracy rises from about 71% to roughly 83%.
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,        # randomly rotate images by up to 40 degrees
    width_shift_range=0.2,    # randomly shift horizontally by up to 20% of the width
    height_shift_range=0.2,   # randomly shift vertically by up to 20% of the height
    shear_range=0.2,          # randomly apply shearing transformations
    zoom_range=0.2,           # randomly zoom inside images
    horizontal_flip=True)     # randomly flip half of the images horizontally
# Note: validation data must not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

1. WARNING:tensorflow: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches?

Make sure that batch_size (set in the image-augmentation generator) * steps_per_epoch (set in fit) is less than or equal to the number of training samples.
train_generator = train_datagen.flow_from_directory(
    train_dir,               # target directory
    target_size=(150, 150),  # resize all images to 150×150
    batch_size=20,
    class_mode='binary')     # binary labels, because binary_crossentropy loss is used

history = model.fit(       
    train_generator,
    steps_per_epoch=100,
    epochs=150,
    validation_data=validation_generator,
    validation_steps=50)

# case 1
# If train_generator above uses batch_size=32 and steps_per_epoch=100 here, an error is raised:
"""
tensorflow:Your input ran out of data; interrupting training.
Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 50 batches).
You may need to use the repeat() function when building your dataset.
"""
# because the training set has 2000 samples (1000 cats, 1000 dogs), which is less than 100*32
# case 2
# If train_generator uses batch_size=20 and steps_per_epoch=100, there is no error,
# because 20*100 exactly matches the 2000 training samples
# case 3
# If train_generator uses batch_size=32 and steps_per_epoch=int(1000/32),
# there is no error, but a warning still appears, because the division is not exact.
# No error is raised because int(1000/32)*32 < 2000
# case 4
# If train_generator uses batch_size=40 and steps_per_epoch=100, the error appears again,
# because 40*100 > 2000
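A general way to avoid all four cases at once is to derive steps_per_epoch from the generator itself instead of hard-coding it (a minimal sketch, assuming train_generator and validation_generator have been created as above):

# floor division guarantees steps_per_epoch * batch_size <= number of samples
steps_per_epoch = train_generator.samples // train_generator.batch_size
# e.g. 2000 // 20 = 100, or 2000 // 32 = 62
history = model.fit(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=150,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size)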

2. How to preprocess an image and display augmented versions of it?

img = image.load_img(img_path, target_size=(150, 150)) # read the image and resize it
# module of image-preprocessing utilities
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# choose one image to augment
img_path = fnames[3]
# read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# convert it to a Numpy array of shape (150, 150, 3)
x = image.img_to_array(img)
# reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)

# generate batches of randomly transformed images.
# The loop is infinite, so you need to break it at some point.
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()

II. 5.2-4: Cats vs. Dogs Classification (Data Augmentation)

Video location in the corresponding course:

import os, shutil
# path of the directory where the original dataset was uncompressed
original_dataset_dir = 'E:\\78_recorded_lesson\\001_course_github\\AI_dataSet\\dogs-vs-cats\\kaggle_original_data\\train'
# directory where the smaller dataset is stored
base_dir = 'E:\\78_recorded_lesson\\001_course_github\\AI_dataSet\\dogs-vs-cats\\cats_and_dogs_small'
# directories for the training, validation, and test splits
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
# directory with training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# directory with training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# directory with validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# directory with validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
# directory with test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
# directory with test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
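A quick, optional sanity check (a minimal sketch; it assumes the smaller dataset was already copied into these directories, as done in the earlier note of this series) confirms that each split contains the expected number of images:

print('training cat images:', len(os.listdir(train_cats_dir)))          # expected 1000
print('training dog images:', len(os.listdir(train_dogs_dir)))          # expected 1000
print('validation cat images:', len(os.listdir(validation_cats_dir)))   # expected 500
print('validation dog images:', len(os.listdir(validation_dogs_dir)))   # expected 500
print('test cat images:', len(os.listdir(test_cats_dir)))               # expected 500
print('test dog images:', len(os.listdir(test_dogs_dir)))               # expected 500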
In [2]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
In [3]:
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator 
In [4]:
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')  # strategy for filling in pixels created by rotations and shifts
In [5]:
# module of image-preprocessing utilities
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# choose one image to augment
img_path = fnames[3]
# read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# convert it to a Numpy array of shape (150, 150, 3)
x = image.img_to_array(img)
# reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)

# generate batches of randomly transformed images.
# The loop is infinite, so you need to break it at some point.
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
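The four augmented views can also be drawn on a single figure, which makes them easier to compare side by side (a small variant of the loop above, reusing the same datagen and x; zip stops once the four axes are consumed, which also terminates the infinite generator):

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, batch in zip(axes, datagen.flow(x, batch_size=1)):
    ax.imshow(image.array_to_img(batch[0]))
    ax.axis('off')  # hide the tick marks
plt.show()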

2. Defining a new convnet that includes dropout

In [6]:
model = tf.keras.models.Sequential() 
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu',                         
                        input_shape=(150, 150, 3))) 
model.add(tf.keras.layers.MaxPooling2D((2, 2))) 
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D((2, 2))) 
model.add(tf.keras.layers.Conv2D(128, (3, 3), activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D((2, 2))) 
model.add(tf.keras.layers.Conv2D(128, (3, 3), activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D((2, 2))) 
model.add(tf.keras.layers.Flatten()) 
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(512, activation='relu')) 
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.summary() 
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 148, 148, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 74, 74, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 72, 72, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 36, 36, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 34, 34, 128)       73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 17, 17, 128)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 15, 15, 128)       147584    
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 6272)              0         
_________________________________________________________________
dropout (Dropout)            (None, 6272)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               3211776   
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 513       
=================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0
_________________________________________________________________
In [7]:
from tensorflow.keras import optimizers
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),  # 'lr' is a deprecated alias for learning_rate
              metrics=['acc'])
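Since 150 epochs take a while, it can be worth keeping the best weights automatically. This is an optional addition, not part of the book's code (the checkpoint filename here is an arbitrary choice):

callbacks = [
    # stop early if val_loss has not improved for 20 epochs, keeping the best weights
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True),
    # also save the best-so-far model to disk
    tf.keras.callbacks.ModelCheckpoint('cats_and_dogs_small_4_best.h5',
                                       monitor='val_loss', save_best_only=True),
]
# pass callbacks=callbacks to the model.fit(...) call below to enable them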

3. Training the convnet using data-augmentation generators

Compared with the earlier baseline model, data augmentation only adds a few augmentation parameters to the ImageDataGenerator used for train_datagen.

In [8]:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
# Note: validation data must not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)


# batch_size cannot be 32 here, or the following error is raised:
'''
WARNING:tensorflow:Your input ran out of data;
interrupting training. Make sure that your dataset or generator can generate at least
steps_per_epoch * epochs batches (in this case, 5000 batches).
You may need to use the repeat() function when building your dataset.
'''
# because steps_per_epoch * batch_size (100 * 32 = 3200) would exceed the 2000 training samples



train_generator = train_datagen.flow_from_directory(
    train_dir,               # target directory
    target_size=(150, 150),  # resize all images to 150×150
    batch_size=20,
    class_mode='binary')     # binary labels, because binary_crossentropy loss is used

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
 
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
In [9]:
for data_batch, labels_batch in train_generator: 
    print('data batch shape:', data_batch.shape)
    # print(data_batch)
    print('labels batch shape:', labels_batch.shape)
    # print(labels_batch)
    break 
for data_batch, labels_batch in validation_generator: 
    print('data batch shape:', data_batch.shape)
    # print(data_batch)
    print('labels batch shape:', labels_batch.shape)
    # print(labels_batch)
    break 
data batch shape: (20, 150, 150, 3)
labels batch shape: (20,)
data batch shape: (20, 150, 150, 3)
labels batch shape: (20,)
In [ ]:
# demonstration that generators from flow_from_directory loop over the data indefinitely;
# this loop only ends because of the explicit break
i=0
for data_batch, labels_batch in train_generator:
    if(i%100==0):
        print(i)
    # print('data batch shape:', data_batch.shape)
    # print(data_batch)
    # print('labels batch shape:', labels_batch.shape)
    # print(labels_batch)
    i+=1
    if(i>20000): 
        break 
In [ ]:
i=0
for data_batch, labels_batch in validation_generator:
    if(i%100==0):
        print(i)
    # print('data batch shape:', data_batch.shape)
    # print(data_batch)
    # print('labels batch shape:', labels_batch.shape)
    # print(labels_batch)
    i+=1
    if(i>20000): 
        break 
In [10]:
# case 1
# If train_generator above uses batch_size=32 and steps_per_epoch=100 here, an error is raised:
"""
tensorflow:Your input ran out of data; interrupting training.
Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 50 batches).
You may need to use the repeat() function when building your dataset.
"""
# because the training set has 2000 samples (1000 cats, 1000 dogs), which is less than 100*32
# case 2
# If train_generator uses batch_size=20 and steps_per_epoch=100, there is no error,
# because 20*100 exactly matches the 2000 training samples
# case 3
# If train_generator uses batch_size=32 and steps_per_epoch=int(1000/32),
# there is no error, but a warning still appears, because the division is not exact.
# No error is raised because int(1000/32)*32 < 2000
# case 4
# If train_generator uses batch_size=40 and steps_per_epoch=100, the error appears again,
# because 40*100 > 2000
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=150,
    validation_data=validation_generator,
    validation_steps=50)
Epoch 1/150
100/100 [==============================] - 19s 186ms/step - loss: 0.6905 - acc: 0.5235 - val_loss: 0.6752 - val_acc: 0.5980
Epoch 2/150
100/100 [==============================] - 19s 189ms/step - loss: 0.6778 - acc: 0.5520 - val_loss: 0.6616 - val_acc: 0.5710
Epoch 3/150
100/100 [==============================] - 20s 203ms/step - loss: 0.6649 - acc: 0.5615 - val_loss: 0.6491 - val_acc: 0.5960
Epoch 4/150
100/100 [==============================] - 20s 196ms/step - loss: 0.6517 - acc: 0.6010 - val_loss: 0.6232 - val_acc: 0.6460
Epoch 5/150
100/100 [==============================] - 20s 198ms/step - loss: 0.6334 - acc: 0.6355 - val_loss: 0.6385 - val_acc: 0.6200
Epoch 6/150
100/100 [==============================] - 20s 200ms/step - loss: 0.6140 - acc: 0.6555 - val_loss: 0.5910 - val_acc: 0.6880
Epoch 7/150
100/100 [==============================] - 20s 196ms/step - loss: 0.6068 - acc: 0.6670 - val_loss: 0.5998 - val_acc: 0.6680
Epoch 8/150
100/100 [==============================] - 20s 204ms/step - loss: 0.6019 - acc: 0.6615 - val_loss: 0.6125 - val_acc: 0.6530
Epoch 9/150
100/100 [==============================] - 20s 204ms/step - loss: 0.6021 - acc: 0.6695 - val_loss: 0.5917 - val_acc: 0.6660
Epoch 10/150
100/100 [==============================] - 21s 207ms/step - loss: 0.5789 - acc: 0.7000 - val_loss: 0.6300 - val_acc: 0.6690
Epoch 11/150
100/100 [==============================] - 20s 198ms/step - loss: 0.5720 - acc: 0.6965 - val_loss: 0.5712 - val_acc: 0.6970
Epoch 12/150
100/100 [==============================] - 19s 189ms/step - loss: 0.5605 - acc: 0.7130 - val_loss: 0.5471 - val_acc: 0.7050
Epoch 13/150
100/100 [==============================] - 19s 189ms/step - loss: 0.5734 - acc: 0.7055 - val_loss: 0.5916 - val_acc: 0.6690
Epoch 14/150
100/100 [==============================] - 19s 191ms/step - loss: 0.5704 - acc: 0.7020 - val_loss: 0.5430 - val_acc: 0.7150
Epoch 15/150
100/100 [==============================] - 19s 189ms/step - loss: 0.5578 - acc: 0.7125 - val_loss: 0.5732 - val_acc: 0.6990
Epoch 16/150
100/100 [==============================] - 19s 189ms/step - loss: 0.5506 - acc: 0.7135 - val_loss: 0.6132 - val_acc: 0.6770
Epoch 17/150
100/100 [==============================] - 19s 189ms/step - loss: 0.5527 - acc: 0.7100 - val_loss: 0.5404 - val_acc: 0.7190
Epoch 18/150
100/100 [==============================] - 21s 206ms/step - loss: 0.5594 - acc: 0.7105 - val_loss: 0.5281 - val_acc: 0.7210
Epoch 19/150
100/100 [==============================] - 20s 196ms/step - loss: 0.5428 - acc: 0.7280 - val_loss: 0.5287 - val_acc: 0.7360
Epoch 20/150
100/100 [==============================] - 19s 188ms/step - loss: 0.5431 - acc: 0.7175 - val_loss: 0.5292 - val_acc: 0.7310
Epoch 21/150
100/100 [==============================] - 20s 196ms/step - loss: 0.5350 - acc: 0.7370 - val_loss: 0.5084 - val_acc: 0.7500
Epoch 22/150
100/100 [==============================] - 19s 189ms/step - loss: 0.5367 - acc: 0.7285 - val_loss: 0.5722 - val_acc: 0.7160
Epoch 23/150
100/100 [==============================] - 19s 191ms/step - loss: 0.5336 - acc: 0.7230 - val_loss: 0.5398 - val_acc: 0.7320
Epoch 24/150
100/100 [==============================] - 19s 191ms/step - loss: 0.5266 - acc: 0.7285 - val_loss: 0.5285 - val_acc: 0.7380
Epoch 25/150
100/100 [==============================] - 20s 198ms/step - loss: 0.5250 - acc: 0.7290 - val_loss: 0.4980 - val_acc: 0.7580
Epoch 26/150
100/100 [==============================] - 19s 194ms/step - loss: 0.5295 - acc: 0.7360 - val_loss: 0.5568 - val_acc: 0.7150
Epoch 27/150
100/100 [==============================] - 19s 191ms/step - loss: 0.5186 - acc: 0.7515 - val_loss: 0.5093 - val_acc: 0.7470
Epoch 28/150
100/100 [==============================] - 19s 193ms/step - loss: 0.5299 - acc: 0.7335 - val_loss: 0.5077 - val_acc: 0.7480
Epoch 29/150
100/100 [==============================] - 19s 192ms/step - loss: 0.5090 - acc: 0.7435 - val_loss: 0.5024 - val_acc: 0.7470
Epoch 30/150
100/100 [==============================] - 19s 190ms/step - loss: 0.5021 - acc: 0.7500 - val_loss: 0.4932 - val_acc: 0.7580
Epoch 31/150
100/100 [==============================] - 19s 192ms/step - loss: 0.5128 - acc: 0.7470 - val_loss: 0.4953 - val_acc: 0.7600
Epoch 32/150
100/100 [==============================] - 19s 190ms/step - loss: 0.5153 - acc: 0.7340 - val_loss: 0.5070 - val_acc: 0.7510
Epoch 33/150
100/100 [==============================] - 21s 207ms/step - loss: 0.5113 - acc: 0.7455 - val_loss: 0.5616 - val_acc: 0.7090
Epoch 34/150
100/100 [==============================] - 21s 207ms/step - loss: 0.4944 - acc: 0.7515 - val_loss: 0.5100 - val_acc: 0.7420
Epoch 35/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4973 - acc: 0.7640 - val_loss: 0.5074 - val_acc: 0.7510
Epoch 36/150
100/100 [==============================] - 20s 199ms/step - loss: 0.5023 - acc: 0.7485 - val_loss: 0.5462 - val_acc: 0.7270
Epoch 37/150
100/100 [==============================] - 19s 195ms/step - loss: 0.5001 - acc: 0.7490 - val_loss: 0.4870 - val_acc: 0.7660
Epoch 38/150
100/100 [==============================] - 21s 206ms/step - loss: 0.5026 - acc: 0.7525 - val_loss: 0.5162 - val_acc: 0.7370
Epoch 39/150
100/100 [==============================] - 21s 214ms/step - loss: 0.4956 - acc: 0.7490 - val_loss: 0.5036 - val_acc: 0.7370
Epoch 40/150
100/100 [==============================] - 20s 196ms/step - loss: 0.4858 - acc: 0.7635 - val_loss: 0.5302 - val_acc: 0.7490
Epoch 41/150
100/100 [==============================] - 20s 200ms/step - loss: 0.4929 - acc: 0.7565 - val_loss: 0.4967 - val_acc: 0.7520
Epoch 42/150
100/100 [==============================] - 21s 209ms/step - loss: 0.4836 - acc: 0.7690 - val_loss: 0.4800 - val_acc: 0.7690
Epoch 43/150
100/100 [==============================] - 20s 203ms/step - loss: 0.4694 - acc: 0.7825 - val_loss: 0.5002 - val_acc: 0.7560
Epoch 44/150
100/100 [==============================] - 21s 213ms/step - loss: 0.4876 - acc: 0.7560 - val_loss: 0.4942 - val_acc: 0.7620
Epoch 45/150
100/100 [==============================] - 21s 213ms/step - loss: 0.4832 - acc: 0.7630 - val_loss: 0.5475 - val_acc: 0.7500
Epoch 46/150
100/100 [==============================] - 20s 203ms/step - loss: 0.4730 - acc: 0.7660 - val_loss: 0.5816 - val_acc: 0.7160
Epoch 47/150
100/100 [==============================] - 20s 202ms/step - loss: 0.4762 - acc: 0.7815 - val_loss: 0.4843 - val_acc: 0.7700
Epoch 48/150
100/100 [==============================] - 20s 199ms/step - loss: 0.4578 - acc: 0.7765 - val_loss: 0.5921 - val_acc: 0.7200
Epoch 49/150
100/100 [==============================] - 19s 192ms/step - loss: 0.4634 - acc: 0.7740 - val_loss: 0.5188 - val_acc: 0.7410
Epoch 50/150
100/100 [==============================] - 19s 193ms/step - loss: 0.4550 - acc: 0.7890 - val_loss: 0.5562 - val_acc: 0.7530
Epoch 51/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4769 - acc: 0.7780 - val_loss: 0.4832 - val_acc: 0.7640
Epoch 52/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4564 - acc: 0.7780 - val_loss: 0.4603 - val_acc: 0.7770
Epoch 53/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4539 - acc: 0.7815 - val_loss: 0.5134 - val_acc: 0.7750
Epoch 54/150
100/100 [==============================] - 19s 192ms/step - loss: 0.4648 - acc: 0.7760 - val_loss: 0.4833 - val_acc: 0.7600
Epoch 55/150
100/100 [==============================] - 19s 192ms/step - loss: 0.4620 - acc: 0.7780 - val_loss: 0.5712 - val_acc: 0.7340
Epoch 56/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4640 - acc: 0.7880 - val_loss: 0.4919 - val_acc: 0.7630
Epoch 57/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4428 - acc: 0.7935 - val_loss: 0.5800 - val_acc: 0.7380
Epoch 58/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4410 - acc: 0.7885 - val_loss: 0.5539 - val_acc: 0.7430
Epoch 59/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4624 - acc: 0.7780 - val_loss: 0.4764 - val_acc: 0.7870
Epoch 60/150
100/100 [==============================] - 19s 192ms/step - loss: 0.4518 - acc: 0.7845 - val_loss: 0.5179 - val_acc: 0.7540
Epoch 61/150
100/100 [==============================] - 20s 196ms/step - loss: 0.4491 - acc: 0.7835 - val_loss: 0.4666 - val_acc: 0.7840
Epoch 62/150
100/100 [==============================] - 20s 198ms/step - loss: 0.4403 - acc: 0.7910 - val_loss: 0.4837 - val_acc: 0.7720
Epoch 63/150
100/100 [==============================] - 19s 192ms/step - loss: 0.4550 - acc: 0.7900 - val_loss: 0.4852 - val_acc: 0.7780
Epoch 64/150
100/100 [==============================] - 19s 188ms/step - loss: 0.4436 - acc: 0.7820 - val_loss: 0.4886 - val_acc: 0.7790
Epoch 65/150
100/100 [==============================] - 19s 188ms/step - loss: 0.4309 - acc: 0.7970 - val_loss: 0.4897 - val_acc: 0.7870
Epoch 66/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4360 - acc: 0.8010 - val_loss: 0.4877 - val_acc: 0.7720
Epoch 67/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4340 - acc: 0.7955 - val_loss: 0.4614 - val_acc: 0.7890
Epoch 68/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4432 - acc: 0.7995 - val_loss: 0.4591 - val_acc: 0.7870
Epoch 69/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4168 - acc: 0.8000 - val_loss: 0.4526 - val_acc: 0.7910
Epoch 70/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4411 - acc: 0.8025 - val_loss: 0.4652 - val_acc: 0.7910
Epoch 71/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4232 - acc: 0.8025 - val_loss: 0.5617 - val_acc: 0.7510
Epoch 72/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4272 - acc: 0.8040 - val_loss: 0.4994 - val_acc: 0.7770
Epoch 73/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4197 - acc: 0.8085 - val_loss: 0.4954 - val_acc: 0.7870
Epoch 74/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4251 - acc: 0.8020 - val_loss: 0.5375 - val_acc: 0.7760
Epoch 75/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4305 - acc: 0.7925 - val_loss: 0.5180 - val_acc: 0.7690
Epoch 76/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4273 - acc: 0.8050 - val_loss: 0.5246 - val_acc: 0.7690
Epoch 77/150
100/100 [==============================] - 19s 193ms/step - loss: 0.4106 - acc: 0.8140 - val_loss: 0.4889 - val_acc: 0.7890
Epoch 78/150
100/100 [==============================] - 20s 197ms/step - loss: 0.4297 - acc: 0.8060 - val_loss: 0.4900 - val_acc: 0.7850
Epoch 79/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4248 - acc: 0.8060 - val_loss: 0.5104 - val_acc: 0.7890
Epoch 80/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4294 - acc: 0.8070 - val_loss: 0.4485 - val_acc: 0.8050
Epoch 81/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4042 - acc: 0.8240 - val_loss: 0.4727 - val_acc: 0.7890
Epoch 82/150
100/100 [==============================] - 20s 203ms/step - loss: 0.4189 - acc: 0.8020 - val_loss: 0.4912 - val_acc: 0.7680
Epoch 83/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4211 - acc: 0.8125 - val_loss: 0.4655 - val_acc: 0.7930
Epoch 84/150
100/100 [==============================] - 19s 190ms/step - loss: 0.4087 - acc: 0.8115 - val_loss: 0.4350 - val_acc: 0.8050
Epoch 85/150
100/100 [==============================] - 19s 191ms/step - loss: 0.4118 - acc: 0.8175 - val_loss: 0.6072 - val_acc: 0.7400
Epoch 86/150
100/100 [==============================] - 19s 189ms/step - loss: 0.3945 - acc: 0.8235 - val_loss: 0.5251 - val_acc: 0.7680
Epoch 87/150
100/100 [==============================] - 19s 190ms/step - loss: 0.3938 - acc: 0.8255 - val_loss: 0.4774 - val_acc: 0.7720
Epoch 88/150
100/100 [==============================] - 19s 189ms/step - loss: 0.4195 - acc: 0.8095 - val_loss: 0.4514 - val_acc: 0.8020
Epoch 89/150
100/100 [==============================] - 19s 193ms/step - loss: 0.3938 - acc: 0.8200 - val_loss: 0.4262 - val_acc: 0.8070
Epoch 90/150
100/100 [==============================] - 19s 192ms/step - loss: 0.3905 - acc: 0.8295 - val_loss: 0.4760 - val_acc: 0.7850
Epoch 91/150
100/100 [==============================] - 19s 194ms/step - loss: 0.3963 - acc: 0.8280 - val_loss: 0.4795 - val_acc: 0.7810
Epoch 92/150
100/100 [==============================] - 19s 193ms/step - loss: 0.3921 - acc: 0.8220 - val_loss: 0.4969 - val_acc: 0.7860
Epoch 93/150
100/100 [==============================] - 20s 202ms/step - loss: 0.3933 - acc: 0.8220 - val_loss: 0.4445 - val_acc: 0.8050
Epoch 94/150
100/100 [==============================] - 20s 203ms/step - loss: 0.3935 - acc: 0.8230 - val_loss: 0.4397 - val_acc: 0.8030
Epoch 95/150
100/100 [==============================] - 20s 199ms/step - loss: 0.3922 - acc: 0.8175 - val_loss: 0.5155 - val_acc: 0.7740
Epoch 96/150
100/100 [==============================] - 20s 202ms/step - loss: 0.3877 - acc: 0.8260 - val_loss: 0.4616 - val_acc: 0.7990
Epoch 97/150
100/100 [==============================] - 20s 200ms/step - loss: 0.3858 - acc: 0.8185 - val_loss: 0.4942 - val_acc: 0.8110
Epoch 98/150
100/100 [==============================] - 19s 190ms/step - loss: 0.3820 - acc: 0.8285 - val_loss: 0.4545 - val_acc: 0.8080
Epoch 99/150
100/100 [==============================] - 19s 187ms/step - loss: 0.3677 - acc: 0.8350 - val_loss: 0.5160 - val_acc: 0.7960
Epoch 100/150
100/100 [==============================] - 19s 188ms/step - loss: 0.4040 - acc: 0.8140 - val_loss: 0.4953 - val_acc: 0.7960
Epoch 101/150
100/100 [==============================] - 19s 187ms/step - loss: 0.3666 - acc: 0.8290 - val_loss: 0.4610 - val_acc: 0.8010
Epoch 102/150
100/100 [==============================] - 19s 187ms/step - loss: 0.3710 - acc: 0.8355 - val_loss: 0.4853 - val_acc: 0.7760
Epoch 103/150
100/100 [==============================] - 19s 188ms/step - loss: 0.3703 - acc: 0.8325 - val_loss: 0.4059 - val_acc: 0.8150
Epoch 104/150
100/100 [==============================] - 18s 182ms/step - loss: 0.3659 - acc: 0.8400 - val_loss: 0.4376 - val_acc: 0.8060
Epoch 105/150
100/100 [==============================] - 17s 166ms/step - loss: 0.3755 - acc: 0.8300 - val_loss: 0.4163 - val_acc: 0.8110
Epoch 106/150
100/100 [==============================] - 17s 169ms/step - loss: 0.3671 - acc: 0.8385 - val_loss: 0.4459 - val_acc: 0.8050
Epoch 107/150
100/100 [==============================] - 17s 167ms/step - loss: 0.3856 - acc: 0.8330 - val_loss: 0.4134 - val_acc: 0.8260
Epoch 108/150
100/100 [==============================] - 17s 165ms/step - loss: 0.3619 - acc: 0.8350 - val_loss: 0.4778 - val_acc: 0.7970
Epoch 109/150
100/100 [==============================] - 17s 165ms/step - loss: 0.3665 - acc: 0.8230 - val_loss: 0.4582 - val_acc: 0.8020
Epoch 110/150
100/100 [==============================] - 17s 166ms/step - loss: 0.3732 - acc: 0.8345 - val_loss: 0.4388 - val_acc: 0.8110
Epoch 111/150
100/100 [==============================] - 17s 167ms/step - loss: 0.3597 - acc: 0.8400 - val_loss: 0.4187 - val_acc: 0.8130
Epoch 112/150
100/100 [==============================] - 17s 170ms/step - loss: 0.3707 - acc: 0.8290 - val_loss: 0.4684 - val_acc: 0.8090
Epoch 113/150
100/100 [==============================] - 19s 186ms/step - loss: 0.3772 - acc: 0.8345 - val_loss: 0.4184 - val_acc: 0.8210
Epoch 114/150
100/100 [==============================] - 19s 195ms/step - loss: 0.3721 - acc: 0.8370 - val_loss: 0.5399 - val_acc: 0.7630
Epoch 115/150
100/100 [==============================] - 21s 208ms/step - loss: 0.3797 - acc: 0.8290 - val_loss: 0.4255 - val_acc: 0.8120
Epoch 116/150
100/100 [==============================] - 21s 212ms/step - loss: 0.3502 - acc: 0.8500 - val_loss: 0.3919 - val_acc: 0.8400
Epoch 117/150
100/100 [==============================] - 21s 210ms/step - loss: 0.3637 - acc: 0.8335 - val_loss: 0.4310 - val_acc: 0.7990
Epoch 118/150
100/100 [==============================] - 23s 232ms/step - loss: 0.3648 - acc: 0.8410 - val_loss: 0.3958 - val_acc: 0.8300
Epoch 119/150
100/100 [==============================] - 22s 217ms/step - loss: 0.3428 - acc: 0.8435 - val_loss: 0.5620 - val_acc: 0.7790
Epoch 120/150
100/100 [==============================] - 21s 206ms/step - loss: 0.3681 - acc: 0.8375 - val_loss: 0.4655 - val_acc: 0.7970
Epoch 121/150
100/100 [==============================] - 20s 198ms/step - loss: 0.3466 - acc: 0.8420 - val_loss: 0.6645 - val_acc: 0.7470
Epoch 122/150
100/100 [==============================] - 19s 193ms/step - loss: 0.3563 - acc: 0.8410 - val_loss: 0.3849 - val_acc: 0.8390
Epoch 123/150
100/100 [==============================] - 20s 199ms/step - loss: 0.3490 - acc: 0.8505 - val_loss: 0.4024 - val_acc: 0.8350
Epoch 124/150
100/100 [==============================] - 19s 194ms/step - loss: 0.3584 - acc: 0.8375 - val_loss: 0.4305 - val_acc: 0.8200
Epoch 125/150
100/100 [==============================] - 19s 195ms/step - loss: 0.3553 - acc: 0.8485 - val_loss: 0.4188 - val_acc: 0.8100
Epoch 126/150
100/100 [==============================] - 20s 197ms/step - loss: 0.3624 - acc: 0.8460 - val_loss: 0.4694 - val_acc: 0.8140
Epoch 127/150
100/100 [==============================] - 20s 196ms/step - loss: 0.3517 - acc: 0.8525 - val_loss: 0.4633 - val_acc: 0.8290
Epoch 128/150
100/100 [==============================] - 21s 211ms/step - loss: 0.3429 - acc: 0.8550 - val_loss: 0.5451 - val_acc: 0.7910
Epoch 129/150
100/100 [==============================] - 20s 196ms/step - loss: 0.3626 - acc: 0.8425 - val_loss: 0.4010 - val_acc: 0.8180
Epoch 130/150
100/100 [==============================] - 20s 200ms/step - loss: 0.3215 - acc: 0.8685 - val_loss: 0.5456 - val_acc: 0.7860
Epoch 131/150
100/100 [==============================] - 21s 207ms/step - loss: 0.3530 - acc: 0.8440 - val_loss: 0.4460 - val_acc: 0.8000
Epoch 132/150
100/100 [==============================] - 22s 221ms/step - loss: 0.3537 - acc: 0.8335 - val_loss: 0.4433 - val_acc: 0.8190
Epoch 133/150
100/100 [==============================] - 22s 216ms/step - loss: 0.3253 - acc: 0.8580 - val_loss: 0.5426 - val_acc: 0.7980
Epoch 134/150
100/100 [==============================] - 21s 210ms/step - loss: 0.3424 - acc: 0.8395 - val_loss: 0.5706 - val_acc: 0.8010
Epoch 135/150
100/100 [==============================] - 23s 230ms/step - loss: 0.3354 - acc: 0.8465 - val_loss: 0.4248 - val_acc: 0.8260
Epoch 136/150
100/100 [==============================] - 25s 246ms/step - loss: 0.3489 - acc: 0.8485 - val_loss: 0.4579 - val_acc: 0.8140
Epoch 137/150
100/100 [==============================] - 21s 209ms/step - loss: 0.3220 - acc: 0.8530 - val_loss: 0.4860 - val_acc: 0.8090
Epoch 138/150
100/100 [==============================] - 21s 207ms/step - loss: 0.3288 - acc: 0.8605 - val_loss: 0.5140 - val_acc: 0.7950
Epoch 139/150
100/100 [==============================] - 20s 202ms/step - loss: 0.3242 - acc: 0.8570 - val_loss: 0.5453 - val_acc: 0.7870
Epoch 140/150
100/100 [==============================] - 20s 199ms/step - loss: 0.3245 - acc: 0.8545 - val_loss: 0.5518 - val_acc: 0.8160
Epoch 141/150
100/100 [==============================] - 21s 210ms/step - loss: 0.3247 - acc: 0.8550 - val_loss: 0.5010 - val_acc: 0.7970
Epoch 142/150
100/100 [==============================] - 26s 257ms/step - loss: 0.3341 - acc: 0.8635 - val_loss: 0.5568 - val_acc: 0.7920
Epoch 143/150
100/100 [==============================] - 30s 303ms/step - loss: 0.3434 - acc: 0.8560 - val_loss: 0.4338 - val_acc: 0.8170
Epoch 144/150
100/100 [==============================] - 30s 298ms/step - loss: 0.3468 - acc: 0.8540 - val_loss: 0.4489 - val_acc: 0.8030
Epoch 145/150
100/100 [==============================] - 24s 237ms/step - loss: 0.3220 - acc: 0.8565 - val_loss: 0.6322 - val_acc: 0.7850
Epoch 146/150
100/100 [==============================] - 31s 311ms/step - loss: 0.3234 - acc: 0.8630 - val_loss: 0.4366 - val_acc: 0.8210
Epoch 147/150
100/100 [==============================] - 28s 283ms/step - loss: 0.3145 - acc: 0.8660 - val_loss: 0.4855 - val_acc: 0.8050
Epoch 148/150
100/100 [==============================] - 22s 222ms/step - loss: 0.3156 - acc: 0.8675 - val_loss: 0.4345 - val_acc: 0.8280
Epoch 149/150
100/100 [==============================] - 22s 224ms/step - loss: 0.3297 - acc: 0.8585 - val_loss: 0.4707 - val_acc: 0.8330
Epoch 150/150
100/100 [==============================] - 21s 206ms/step - loss: 0.3179 - acc: 0.8615 - val_loss: 0.4427 - val_acc: 0.8140
In [11]:
model.save('cats_and_dogs_small_4.h5') 
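The summary above quotes an accuracy of about 83%; to verify it on the held-out test split, one could evaluate the trained model (a minimal sketch, reusing test_datagen and test_dir from earlier and assuming the test directory holds 1000 images):

test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
# 50 steps * 20 images per batch = 1000 test images
test_loss, test_acc = model.evaluate(test_generator, steps=50)
print('test acc:', test_acc)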
In [12]:
acc = history.history['acc'] 
val_acc = history.history['val_acc'] 
loss = history.history['loss'] 
val_loss = history.history['val_loss'] 
 
epochs = range(1, len(acc) + 1) 
 
plt.plot(epochs, acc, 'bo', label='Training acc') 
plt.plot(epochs, val_acc, 'b', label='Validation acc') 
plt.title('Training and validation accuracy') 
plt.legend() 
 
plt.figure() 
 
plt.plot(epochs, loss, 'bo', label='Training loss') 
plt.plot(epochs, val_loss, 'b', label='Validation loss') 
plt.title('Training and validation loss') 
plt.legend() 
 
plt.show()
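The raw curves above are quite noisy; an exponentially weighted moving average makes the trend easier to read (an optional helper in the style of the book's smooth_curve; the smoothing factor 0.8 is an arbitrary choice):

def smooth_curve(points, factor=0.8):
    # exponential moving average over a list of values
    smoothed = []
    for point in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + point * (1 - factor))
        else:
            smoothed.append(point)
    return smoothed

plt.plot(epochs, smooth_curve(acc), 'bo', label='Smoothed training acc')
plt.plot(epochs, smooth_curve(val_acc), 'b', label='Smoothed validation acc')
plt.title('Smoothed training and validation accuracy')
plt.legend()
plt.show()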
posted @ 2020-10-10 23:17  范仁义