
Tensorflow2 (Pre-course) --- 7.5 CIFAR-10 Classification (Layer Approach): VGG16 Convolutional Neural Network

I. Summary

In one sentence:

As the run below shows, classifying CIFAR-10 with this VGG16-style network reaches about 86.5% accuracy on the test set.
# Build the sequential container
model = tf.keras.Sequential()

# Conv block: CBA (Conv-BN-Activation)
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same', input_shape=(32, 32, 3))) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD (Conv-BN-Activation-Pool-Dropout)
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.2))
# Output layer
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Model structure
model.summary()

II. CIFAR-10 Classification (Layer Approach): VGG16 Convolutional Neural Network

Video location for the corresponding course lesson:

Steps

1. Load the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

CIFAR-10 (object classification)


The dataset contains 60,000 32x32 color images in 10 classes, 6,000 images per class. 50,000 images are used for training, split into five training batches of 10,000 images each; the remaining 10,000 form a single test batch. The test batch contains exactly 1,000 randomly selected images from each class; the leftover images, in random order, make up the training batches. Note that an individual training batch is not guaranteed to hold the same number of images from every class, but taken together the training batches contain exactly 5,000 images per class.
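
For reference, the 10 classes map to label indices 0-9 in the standard CIFAR-10 order; the list below is used again at the end of this post to print predictions by name:

# Standard CIFAR-10 class names, indexed by label (0-9)
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']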

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Load the dataset

The dataset can be loaded directly from tf.keras.datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()
print(train_x.shape, train_y.shape)
(50000, 32, 32, 3) (50000, 1)

These are 32x32 color images; how should the three RGB channels be handled? Conv2D treats the channel axis as input depth (each 3x3 kernel spans all three channels), so beyond scaling the pixel values no special handling is needed.

In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[[3]
 [8]
 [8]
 ...
 [5]
 [1]
 [7]]
In [6]:
# RGB pixel value range
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The previous step already did the split: load_data returns separate training and test sets.

In [7]:
# How do we normalize image data?
# Simply divide by 255.
train_x = train_x / 255.0
test_x = test_x / 255.0
In [8]:
# RGB pixel value range
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y=train_y.flatten()
test_y=test_y.flatten()
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
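
As a side note, the one-hot step is optional: the integer labels could be kept as-is and the model compiled with the sparse version of the loss instead. A minimal sketch:

# Alternative: keep integer labels (skip tf.one_hot) and compile with a sparse loss
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])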

3. Build the model

What kind of model should we build? A VGG16-style stack: five convolutional blocks (64, 128, 256, 512, and 512 filters), each ending in a 2x2 max pool, so the 32x32 input is halved five times down to a 1x1x512 feature map before the fully connected layers:

In [10]:
# Build the sequential container
model = tf.keras.Sequential()

# Conv block: CBA (Conv-BN-Activation)
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same', input_shape=(32, 32, 3))) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD (Conv-BN-Activation-Pool-Dropout)
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBA
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
# Conv block: CBAPD
model.add(tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')) # convolution
model.add(tf.keras.layers.BatchNormalization()) # batch normalization
model.add(tf.keras.layers.Activation('relu')) # activation
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling
model.add(tf.keras.layers.Dropout(0.2))

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.2))
# Output layer
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 32, 32, 64)        1792      
_________________________________________________________________
batch_normalization (BatchNo (None, 32, 32, 64)        256       
_________________________________________________________________
activation (Activation)      (None, 32, 32, 64)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 32, 32, 64)        36928     
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 32, 64)        256       
_________________________________________________________________
activation_1 (Activation)    (None, 32, 32, 64)        0         
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 64)        0         
_________________________________________________________________
dropout (Dropout)            (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 128)       73856     
_________________________________________________________________
batch_normalization_2 (Batch (None, 16, 16, 128)       512       
_________________________________________________________________
activation_2 (Activation)    (None, 16, 16, 128)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 16, 16, 128)       147584    
_________________________________________________________________
batch_normalization_3 (Batch (None, 16, 16, 128)       512       
_________________________________________________________________
activation_3 (Activation)    (None, 16, 16, 128)       0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 128)         0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 8, 8, 128)         0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 8, 8, 256)         295168    
_________________________________________________________________
batch_normalization_4 (Batch (None, 8, 8, 256)         1024      
_________________________________________________________________
activation_4 (Activation)    (None, 8, 8, 256)         0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 8, 8, 256)         590080    
_________________________________________________________________
batch_normalization_5 (Batch (None, 8, 8, 256)         1024      
_________________________________________________________________
activation_5 (Activation)    (None, 8, 8, 256)         0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 8, 8, 256)         590080    
_________________________________________________________________
batch_normalization_6 (Batch (None, 8, 8, 256)         1024      
_________________________________________________________________
activation_6 (Activation)    (None, 8, 8, 256)         0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 256)         0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 4, 4, 256)         0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 4, 4, 512)         1180160   
_________________________________________________________________
batch_normalization_7 (Batch (None, 4, 4, 512)         2048      
_________________________________________________________________
activation_7 (Activation)    (None, 4, 4, 512)         0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 4, 4, 512)         2359808   
_________________________________________________________________
batch_normalization_8 (Batch (None, 4, 4, 512)         2048      
_________________________________________________________________
activation_8 (Activation)    (None, 4, 4, 512)         0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 4, 4, 512)         2359808   
_________________________________________________________________
batch_normalization_9 (Batch (None, 4, 4, 512)         2048      
_________________________________________________________________
activation_9 (Activation)    (None, 4, 4, 512)         0         
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 2, 2, 512)         0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 2, 2, 512)         0         
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 2, 2, 512)         2359808   
_________________________________________________________________
batch_normalization_10 (Batc (None, 2, 2, 512)         2048      
_________________________________________________________________
activation_10 (Activation)   (None, 2, 2, 512)         0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 2, 2, 512)         2359808   
_________________________________________________________________
batch_normalization_11 (Batc (None, 2, 2, 512)         2048      
_________________________________________________________________
activation_11 (Activation)   (None, 2, 2, 512)         0         
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 2, 2, 512)         2359808   
_________________________________________________________________
batch_normalization_12 (Batc (None, 2, 2, 512)         2048      
_________________________________________________________________
activation_12 (Activation)   (None, 2, 2, 512)         0         
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 1, 1, 512)         0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 1, 1, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 512)               0         
_________________________________________________________________
dense (Dense)                (None, 512)               262656    
_________________________________________________________________
dropout_5 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               262656    
_________________________________________________________________
dropout_6 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5130      
=================================================================
Total params: 15,262,026
Trainable params: 15,253,578
Non-trainable params: 8,448
_________________________________________________________________

4. Train the model

In [11]:
# Configure the optimizer and loss function
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# Start training
history = model.fit(train_x, train_y, epochs=50, validation_data=(test_x, test_y))
Epoch 1/50
   2/1563 [..............................] - ETA: 40s - loss: 3.3212 - acc: 0.0781WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0180s vs `on_train_batch_end` time: 0.0329s). Check your callbacks.
1563/1563 [==============================] - 88s 56ms/step - loss: 1.9112 - acc: 0.2321 - val_loss: 1.8226 - val_acc: 0.3101
Epoch 2/50
1563/1563 [==============================] - 93s 59ms/step - loss: 1.5323 - acc: 0.3927 - val_loss: 1.4066 - val_acc: 0.4753
Epoch 3/50
1563/1563 [==============================] - 99s 63ms/step - loss: 1.2141 - acc: 0.5739 - val_loss: 1.2506 - val_acc: 0.5504
Epoch 4/50
1563/1563 [==============================] - 101s 64ms/step - loss: 0.9981 - acc: 0.6602 - val_loss: 0.9641 - val_acc: 0.6734
Epoch 5/50
1563/1563 [==============================] - 108s 69ms/step - loss: 0.8685 - acc: 0.7084 - val_loss: 1.0634 - val_acc: 0.6379
Epoch 6/50
1563/1563 [==============================] - 110s 71ms/step - loss: 0.7750 - acc: 0.7416 - val_loss: 1.1065 - val_acc: 0.6434
Epoch 7/50
1563/1563 [==============================] - 116s 74ms/step - loss: 0.6833 - acc: 0.7757 - val_loss: 0.9586 - val_acc: 0.6973
Epoch 8/50
1563/1563 [==============================] - 115s 73ms/step - loss: 0.6164 - acc: 0.7997 - val_loss: 0.7474 - val_acc: 0.7489
Epoch 9/50
1563/1563 [==============================] - 119s 76ms/step - loss: 0.5519 - acc: 0.8210 - val_loss: 0.9824 - val_acc: 0.6918
Epoch 10/50
1563/1563 [==============================] - 117s 75ms/step - loss: 0.5121 - acc: 0.8355 - val_loss: 0.5503 - val_acc: 0.8213
Epoch 11/50
1563/1563 [==============================] - 118s 75ms/step - loss: 0.4572 - acc: 0.8529 - val_loss: 0.6606 - val_acc: 0.7841
Epoch 12/50
1563/1563 [==============================] - 117s 75ms/step - loss: 0.4242 - acc: 0.8641 - val_loss: 0.6633 - val_acc: 0.7956
Epoch 13/50
1563/1563 [==============================] - 118s 75ms/step - loss: 0.3866 - acc: 0.8772 - val_loss: 0.5328 - val_acc: 0.8368
Epoch 14/50
1563/1563 [==============================] - 117s 75ms/step - loss: 0.3595 - acc: 0.8852 - val_loss: 0.5956 - val_acc: 0.8200
Epoch 15/50
1563/1563 [==============================] - 118s 75ms/step - loss: 0.3215 - acc: 0.8975 - val_loss: 0.5988 - val_acc: 0.8226
Epoch 16/50
1563/1563 [==============================] - 121s 78ms/step - loss: 0.2982 - acc: 0.9050 - val_loss: 0.4690 - val_acc: 0.8510
Epoch 17/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.2752 - acc: 0.9132 - val_loss: 0.5398 - val_acc: 0.8392
Epoch 18/50
1563/1563 [==============================] - 127s 81ms/step - loss: 0.2462 - acc: 0.9221 - val_loss: 0.6144 - val_acc: 0.8256
Epoch 19/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.2300 - acc: 0.9287 - val_loss: 0.5244 - val_acc: 0.8556
Epoch 20/50
1563/1563 [==============================] - 127s 81ms/step - loss: 0.2162 - acc: 0.9307 - val_loss: 0.5017 - val_acc: 0.8599
Epoch 21/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1958 - acc: 0.9383 - val_loss: 0.5361 - val_acc: 0.8489
Epoch 22/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1844 - acc: 0.9420 - val_loss: 0.6854 - val_acc: 0.8206
Epoch 23/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1776 - acc: 0.9459 - val_loss: 0.8824 - val_acc: 0.7656
Epoch 24/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1664 - acc: 0.9477 - val_loss: 0.5055 - val_acc: 0.8646
Epoch 25/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1442 - acc: 0.9551 - val_loss: 0.5287 - val_acc: 0.8553
Epoch 26/50
1563/1563 [==============================] - 124s 79ms/step - loss: 0.1457 - acc: 0.9552 - val_loss: 0.5265 - val_acc: 0.8602
Epoch 27/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1410 - acc: 0.9568 - val_loss: 0.5813 - val_acc: 0.8546
Epoch 28/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1252 - acc: 0.9628 - val_loss: 0.5836 - val_acc: 0.8447
Epoch 29/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1244 - acc: 0.9612 - val_loss: 0.5174 - val_acc: 0.8706
Epoch 30/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1128 - acc: 0.9666 - val_loss: 0.5781 - val_acc: 0.8572
Epoch 31/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1175 - acc: 0.9653 - val_loss: 0.5762 - val_acc: 0.8508
Epoch 32/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.1086 - acc: 0.9670 - val_loss: 0.5318 - val_acc: 0.8701
Epoch 33/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0943 - acc: 0.9719 - val_loss: 0.6562 - val_acc: 0.8565
Epoch 34/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0923 - acc: 0.9713 - val_loss: 0.5576 - val_acc: 0.8657
Epoch 35/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0971 - acc: 0.9712 - val_loss: 0.5937 - val_acc: 0.8640
Epoch 36/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0882 - acc: 0.9744 - val_loss: 0.6062 - val_acc: 0.8527
Epoch 37/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0849 - acc: 0.9751 - val_loss: 0.5711 - val_acc: 0.8561
Epoch 38/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0774 - acc: 0.9773 - val_loss: 0.5984 - val_acc: 0.8693
Epoch 39/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0843 - acc: 0.9747 - val_loss: 0.6511 - val_acc: 0.8597
Epoch 40/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0794 - acc: 0.9773 - val_loss: 0.6369 - val_acc: 0.8601
Epoch 41/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0776 - acc: 0.9775 - val_loss: 0.6581 - val_acc: 0.8675
Epoch 42/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0821 - acc: 0.9772 - val_loss: 0.5915 - val_acc: 0.8660
Epoch 43/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0743 - acc: 0.9779 - val_loss: 0.6092 - val_acc: 0.8636
Epoch 44/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0666 - acc: 0.9809 - val_loss: 0.6472 - val_acc: 0.8649
Epoch 45/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0678 - acc: 0.9812 - val_loss: 0.5751 - val_acc: 0.8675
Epoch 46/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0676 - acc: 0.9801 - val_loss: 0.6414 - val_acc: 0.8650
Epoch 47/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0757 - acc: 0.9791 - val_loss: 0.6101 - val_acc: 0.8698
Epoch 48/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0611 - acc: 0.9826 - val_loss: 0.6959 - val_acc: 0.8575
Epoch 49/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0702 - acc: 0.9811 - val_loss: 0.5780 - val_acc: 0.8735
Epoch 50/50
1563/1563 [==============================] - 125s 80ms/step - loss: 0.0584 - acc: 0.9832 - val_loss: 0.6632 - val_acc: 0.8656
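
Validation accuracy fluctuates noticeably from epoch to epoch (it peaks at 0.8735 at epoch 49 but ends at 0.8656), so in practice it helps to keep the best weights automatically. A minimal sketch using standard Keras callbacks; the checkpoint path is an arbitrary example:

# Keep the weights from the epoch with the best validation accuracy
callbacks = [
    tf.keras.callbacks.ModelCheckpoint('best_vgg16.h5', monitor='val_acc',
                                       save_best_only=True),
    tf.keras.callbacks.EarlyStopping(monitor='val_acc', patience=10,
                                     restore_best_weights=True),
]
history = model.fit(train_x, train_y, epochs=50,
                    validation_data=(test_x, test_y), callbacks=callbacks)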
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
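
The same history object can also be used to overlay the training and test curves in one figure, which makes the generalization gap easier to read:

# Overlay training and test accuracy
plt.plot(history.epoch, history.history.get('acc'), label='train acc')
plt.plot(history.epoch, history.history.get('val_acc'), label='test acc')
plt.title("train vs test accuracy")
plt.legend()
plt.show()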

5. Evaluate the model

In [16]:
# Take a look at the model's predictive ability
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[1.3860699e-06 1.5969115e-07 8.5872371e-06 ... 3.8204995e-05
  1.7430723e-07 4.9147188e-06]
 [8.5149146e-22 2.7103116e-13 2.4058401e-30 ... 1.1719342e-33
  1.0000000e+00 2.0469961e-15]
 [1.2890460e-05 1.4855738e-03 5.9510251e-07 ... 1.0485307e-07
  9.9828267e-01 1.6780110e-04]
 ...
 [1.4299101e-17 1.1163256e-15 3.1742367e-08 ... 6.0528778e-14
  1.7793608e-18 6.4473787e-16]
 [8.6224048e-07 9.9970168e-01 7.5743039e-09 ... 4.3463556e-13
  1.8568939e-04 1.1160478e-04]
 [1.6353867e-21 2.7299391e-24 9.5140450e-23 ... 1.0000000e+00
  6.7911336e-32 2.0024532e-17]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 1. 0.]
 [0. 0. 0. ... 0. 1. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 1. 0. 0.]], shape=(10000, 10), dtype=float32)
In [17]:
# Take the index of the largest value along each row of predict_y
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Convert the one-hot test labels back to class indices the same way
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([3 8 8 ... 5 1 7], shape=(10000,), dtype=int64)
tf.Tensor([3 8 8 ... 5 1 7], shape=(10000,), dtype=int64)
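
Since both tensors now hold integer class indices, the overall test accuracy follows directly; it should land near the final val_acc of 0.8656 reported during training:

# Fraction of test images whose predicted class matches the true label
print(np.mean(predict_y.numpy() == test_y.numpy()))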
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
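
Using the class_names list from the dataset description above, the predictions for these sample images can be printed as names rather than indices:

# Map the first few predicted label indices to class names
for i in range(4):
    print(i, class_names[int(predict_y[i])])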