
Tensorflow2 (Pre-course) --- 7.4 CIFAR-10 Classification - Layer API - Convolutional Neural Network - AlexNet8

I. Summary

One-sentence summary:

Training AlexNet on CIFAR-10 reaches a test-set accuracy of roughly 74%.
# Build the Sequential container
model = tf.keras.Sequential()

# Convolutional layer 1
model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(3, 3),input_shape=(32,32,3))) # convolutional layer
model.add(tf.keras.layers.BatchNormalization()) # BN layer
model.add(tf.keras.layers.Activation('relu')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)) # pooling layer
# no dropout layer here

# Convolutional layer 2
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3))) # convolutional layer
model.add(tf.keras.layers.BatchNormalization()) # BN layer
model.add(tf.keras.layers.Activation('relu')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)) # pooling layer
# no dropout layer here

# Convolutional layer 3
model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer

# Convolutional layer 4
model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer

# Convolutional layer 5
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)) # pooling layer

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(2048,activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(2048,activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()

 

 

1. Example of why a kernel's channel count must match the channel count of the input feature map:

First convolutional layer: model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(3, 3),input_shape=(32,32,3))) # convolutional layer
First convolutional layer parameters: 3*3*3*96+96=2688
Second convolutional layer: model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3))) # convolutional layer
Second convolutional layer parameters: 3*3*96*256+256=221440

 

 

II. CIFAR-10 classification - layer API - convolutional neural network - AlexNet8

Video location in the corresponding course:

 

Steps

1. Load the dataset
2. Split the dataset (into a training set and a test set)
3. Build the model
4. Train the model
5. Evaluate the model

Task

CIFAR-10 (object classification)


The dataset contains 60,000 color images of size 32*32, divided into 10 classes with 6,000 images per class. 50,000 of them are used for training, organized into 5 training batches of 10,000 images each; the remaining 10,000 are used for testing and form a single batch. The test batch contains exactly 1,000 randomly selected images from each of the 10 classes; the images left over are shuffled to form the training batches. Note that a single training batch does not necessarily contain the same number of images from each class, but across the training batches as a whole each class has 5,000 images.
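These per-class counts are easy to verify with a quick count. The following is a minimal sketch (it loads the same label arrays used in step 1 below):

import numpy as np
import tensorflow as tf

(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()

# Count how many images fall into each of the 10 classes
print(np.unique(train_y, return_counts=True))  # expect 5000 images per class
print(np.unique(test_y, return_counts=True))   # expect 1000 images per class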


 

 

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Load the dataset

Just load the dataset directly from TensorFlow's built-in datasets.

In [2]:
 (train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()
print(train_x.shape, train_y.shape)
(50000, 32, 32, 3) (50000, 1)

These are 32*32 color images; how should the three RGB channels be handled?

In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[[3]
 [8]
 [8]
 ...
 [5]
 [1]
 [7]]
In [6]:
# RGB pixel values
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into a training set and a test set)

The previous step already did the splitting.

In [7]:
# How should the image data be normalized?
# Just divide by 255
train_x = train_x/255.0
test_x = test_x/255.0
In [8]:
# RGB pixel values
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y=train_y.flatten()
test_y=test_y.flatten()
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
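For reference, tf.one_hot turns each integer label into a length-10 indicator vector. A tiny example (the label values here are arbitrary; it assumes tensorflow is imported as tf, as in cell In [1]):

# A label of 3 becomes a 10-dim vector with a 1 in position 3, a label of 8 gets a 1 in position 8
print(tf.one_hot([3, 8], depth=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]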

3. Build the model

What kind of model should we build:

In [10]:
# Build the Sequential container
model = tf.keras.Sequential()

# Convolutional layer 1
model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(3, 3),input_shape=(32,32,3))) # convolutional layer
model.add(tf.keras.layers.BatchNormalization()) # BN layer
model.add(tf.keras.layers.Activation('relu')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)) # pooling layer
# no dropout layer here

# Convolutional layer 2
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3))) # convolutional layer
model.add(tf.keras.layers.BatchNormalization()) # BN layer
model.add(tf.keras.layers.Activation('relu')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)) # pooling layer
# no dropout layer here

# Convolutional layer 3
model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer

# Convolutional layer 4
model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer

# Convolutional layer 5
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2)) # pooling layer

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(2048,activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(2048,activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 30, 30, 96)        2688      
_________________________________________________________________
batch_normalization (BatchNo (None, 30, 30, 96)        384       
_________________________________________________________________
activation (Activation)      (None, 30, 30, 96)        0         
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 96)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 12, 12, 256)       221440    
_________________________________________________________________
batch_normalization_1 (Batch (None, 12, 12, 256)       1024      
_________________________________________________________________
activation_1 (Activation)    (None, 12, 12, 256)       0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 256)         0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 5, 5, 384)         885120    
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 5, 5, 384)         1327488   
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 5, 5, 256)         884992    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 2, 2, 256)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 2048)              2099200   
_________________________________________________________________
dropout (Dropout)            (None, 2048)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 2048)              4196352   
_________________________________________________________________
dropout_1 (Dropout)          (None, 2048)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                20490     
=================================================================
Total params: 9,639,178
Trainable params: 9,638,474
Non-trainable params: 704
_________________________________________________________________

Parameters

First convolutional layer: 3*3*3*96+96=2688

Second convolutional layer: 3*3*96*256+256=221440

Third convolutional layer: 3*3*256*384+384=885120

The parameter counts of the other layers are just as easy to work out.

The channel dimension of a layer's kernels has to match the channel dimension of the previous layer's output. For example, the input is an RGB image with 3 channels, so a 3*3 kernel is really 3*3*3.

If the previous layer outputs 96 channels, then the same 3*3 kernel is 3*3*96.

Example: the kernel's channel count must match the input feature map's channel count

First convolutional layer: 3*3*3*96+96=2688
model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(3, 3),input_shape=(32,32,3))) # convolutional layer
Second convolutional layer: 3*3*96*256+256=221440
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3))) # convolutional layer
Third convolutional layer: 3*3*256*384+384=885120
model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3, 3),padding='same',activation='relu')) # convolutional layer
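As a quick sanity check, the hand-computed counts can be reproduced both from the formula and from the layers themselves. A minimal sketch, assuming the model built above is in scope:

# Parameter count of a Conv2D layer: weights + one bias per filter
def conv_params(kernel_h, kernel_w, in_channels, filters):
    return kernel_h * kernel_w * in_channels * filters + filters

print(conv_params(3, 3, 3, 96))     # 2688, matches the first Conv2D
print(conv_params(3, 3, 96, 256))   # 221440
print(conv_params(3, 3, 256, 384))  # 885120

# The same numbers straight from Keras
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.Conv2D):
        print(layer.name, layer.count_params())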

4. Train the model

In [11]:
# Configure the optimizer and the loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=100,validation_data=(test_x,test_y))
Epoch 1/100
1563/1563 [==============================] - 38s 24ms/step - loss: 1.6303 - acc: 0.3989 - val_loss: 1.5941 - val_acc: 0.4496
Epoch 2/100
1563/1563 [==============================] - 39s 25ms/step - loss: 1.2793 - acc: 0.5473 - val_loss: 1.7525 - val_acc: 0.4228
Epoch 3/100
1563/1563 [==============================] - 41s 26ms/step - loss: 1.1533 - acc: 0.5983 - val_loss: 1.1737 - val_acc: 0.5912
Epoch 4/100
1563/1563 [==============================] - 43s 27ms/step - loss: 1.0713 - acc: 0.6268 - val_loss: 1.5180 - val_acc: 0.5195
Epoch 5/100
1563/1563 [==============================] - 44s 28ms/step - loss: 1.0031 - acc: 0.6560 - val_loss: 1.0900 - val_acc: 0.6163
Epoch 6/100
1563/1563 [==============================] - 44s 28ms/step - loss: 0.9506 - acc: 0.6733 - val_loss: 1.0509 - val_acc: 0.6481
Epoch 7/100
1563/1563 [==============================] - 44s 28ms/step - loss: 0.8976 - acc: 0.6948 - val_loss: 1.0462 - val_acc: 0.6547
Epoch 8/100
1563/1563 [==============================] - 46s 29ms/step - loss: 0.8615 - acc: 0.7075 - val_loss: 0.9751 - val_acc: 0.6656
Epoch 9/100
1563/1563 [==============================] - 46s 30ms/step - loss: 0.8182 - acc: 0.7235 - val_loss: 1.0283 - val_acc: 0.6553
Epoch 10/100
1563/1563 [==============================] - 48s 31ms/step - loss: 0.7916 - acc: 0.7326 - val_loss: 1.3925 - val_acc: 0.5557
Epoch 11/100
1563/1563 [==============================] - 49s 31ms/step - loss: 0.7599 - acc: 0.7439 - val_loss: 1.1455 - val_acc: 0.6356
Epoch 12/100
1563/1563 [==============================] - 50s 32ms/step - loss: 0.7339 - acc: 0.7525 - val_loss: 0.9820 - val_acc: 0.6751
Epoch 13/100
1563/1563 [==============================] - 48s 30ms/step - loss: 0.7106 - acc: 0.7614 - val_loss: 0.9010 - val_acc: 0.7017
Epoch 14/100
1563/1563 [==============================] - 52s 33ms/step - loss: 0.6905 - acc: 0.7673 - val_loss: 0.8818 - val_acc: 0.6947
Epoch 15/100
1563/1563 [==============================] - 49s 32ms/step - loss: 0.6734 - acc: 0.7738 - val_loss: 0.9274 - val_acc: 0.6878
Epoch 16/100
1563/1563 [==============================] - 50s 32ms/step - loss: 0.6520 - acc: 0.7838 - val_loss: 0.8183 - val_acc: 0.7326
Epoch 17/100
1563/1563 [==============================] - 56s 36ms/step - loss: 0.6360 - acc: 0.7875 - val_loss: 0.9067 - val_acc: 0.6956
Epoch 18/100
1563/1563 [==============================] - 57s 36ms/step - loss: 0.6127 - acc: 0.7960 - val_loss: 1.0491 - val_acc: 0.6567
Epoch 19/100
1563/1563 [==============================] - 53s 34ms/step - loss: 0.6089 - acc: 0.7980 - val_loss: 0.9037 - val_acc: 0.7031
Epoch 20/100
1563/1563 [==============================] - 52s 33ms/step - loss: 0.5851 - acc: 0.8067 - val_loss: 0.9115 - val_acc: 0.6999
Epoch 21/100
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5792 - acc: 0.8106 - val_loss: 0.9337 - val_acc: 0.6876
Epoch 22/100
1563/1563 [==============================] - 56s 36ms/step - loss: 0.5661 - acc: 0.8152 - val_loss: 0.8442 - val_acc: 0.7264
Epoch 23/100
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5625 - acc: 0.8158 - val_loss: 0.9215 - val_acc: 0.7064
Epoch 24/100
1563/1563 [==============================] - 54s 35ms/step - loss: 0.5426 - acc: 0.8205 - val_loss: 0.8929 - val_acc: 0.7288
Epoch 25/100
1563/1563 [==============================] - 57s 36ms/step - loss: 0.5322 - acc: 0.8257 - val_loss: 0.8580 - val_acc: 0.7339
Epoch 26/100
1563/1563 [==============================] - 55s 35ms/step - loss: 0.5277 - acc: 0.8278 - val_loss: 0.8435 - val_acc: 0.7374
Epoch 27/100
1563/1563 [==============================] - 59s 38ms/step - loss: 0.5066 - acc: 0.8339 - val_loss: 0.8606 - val_acc: 0.7182
Epoch 28/100
1563/1563 [==============================] - 60s 38ms/step - loss: 0.4909 - acc: 0.8398 - val_loss: 0.8938 - val_acc: 0.7156
Epoch 29/100
1563/1563 [==============================] - 60s 38ms/step - loss: 0.5055 - acc: 0.8359 - val_loss: 1.0380 - val_acc: 0.6713
Epoch 30/100
1563/1563 [==============================] - 60s 38ms/step - loss: 0.4783 - acc: 0.8453 - val_loss: 0.8991 - val_acc: 0.7146
Epoch 31/100
1563/1563 [==============================] - 60s 39ms/step - loss: 0.4841 - acc: 0.8442 - val_loss: 1.1015 - val_acc: 0.6419
Epoch 32/100
1563/1563 [==============================] - 60s 39ms/step - loss: 0.4717 - acc: 0.8487 - val_loss: 0.8398 - val_acc: 0.7327
Epoch 33/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.4710 - acc: 0.8484 - val_loss: 0.8387 - val_acc: 0.7423
Epoch 34/100
1563/1563 [==============================] - 60s 39ms/step - loss: 0.4450 - acc: 0.8575 - val_loss: 0.9046 - val_acc: 0.7253
Epoch 35/100
1563/1563 [==============================] - 60s 39ms/step - loss: 0.4556 - acc: 0.8532 - val_loss: 1.3140 - val_acc: 0.6719
Epoch 36/100
1563/1563 [==============================] - 60s 39ms/step - loss: 0.4573 - acc: 0.8565 - val_loss: 1.1376 - val_acc: 0.6715
Epoch 37/100
1563/1563 [==============================] - 59s 37ms/step - loss: 0.4841 - acc: 0.8496 - val_loss: 0.8927 - val_acc: 0.7382
Epoch 38/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.4170 - acc: 0.8658 - val_loss: 0.9590 - val_acc: 0.7206
Epoch 39/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.4242 - acc: 0.8653 - val_loss: 0.9467 - val_acc: 0.7291
Epoch 40/100
1563/1563 [==============================] - 60s 39ms/step - loss: 0.4233 - acc: 0.8682 - val_loss: 0.9516 - val_acc: 0.7053
Epoch 41/100
1563/1563 [==============================] - 57s 37ms/step - loss: 0.4394 - acc: 0.8626 - val_loss: 1.1436 - val_acc: 0.7207
Epoch 42/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.4010 - acc: 0.8730 - val_loss: 0.9686 - val_acc: 0.6976
Epoch 43/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.4313 - acc: 0.8664 - val_loss: 0.8941 - val_acc: 0.7128
Epoch 44/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.3964 - acc: 0.8738 - val_loss: 0.9653 - val_acc: 0.7053
Epoch 45/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.5500 - acc: 0.8291 - val_loss: 0.8242 - val_acc: 0.7342
Epoch 46/100
1563/1563 [==============================] - 58s 37ms/step - loss: 0.3850 - acc: 0.8802 - val_loss: 0.8740 - val_acc: 0.7332
Epoch 47/100
1563/1563 [==============================] - 65s 41ms/step - loss: 0.3725 - acc: 0.8857 - val_loss: 0.8659 - val_acc: 0.7348
Epoch 48/100
1563/1563 [==============================] - 66s 42ms/step - loss: 0.3801 - acc: 0.8838 - val_loss: 0.8794 - val_acc: 0.7402
Epoch 49/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.4000 - acc: 0.8761 - val_loss: 0.8927 - val_acc: 0.7377
Epoch 50/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3691 - acc: 0.8875 - val_loss: 0.9222 - val_acc: 0.7472
Epoch 51/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.3794 - acc: 0.8841 - val_loss: 0.9681 - val_acc: 0.7401
Epoch 52/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.5097 - acc: 0.8408 - val_loss: 0.8992 - val_acc: 0.7303
Epoch 53/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.3487 - acc: 0.8927 - val_loss: 0.9432 - val_acc: 0.7361
Epoch 54/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3489 - acc: 0.8943 - val_loss: 0.9539 - val_acc: 0.7076
Epoch 55/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3547 - acc: 0.8912 - val_loss: 1.0314 - val_acc: 0.7215
Epoch 56/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3705 - acc: 0.8883 - val_loss: 0.9910 - val_acc: 0.7250
Epoch 57/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3252 - acc: 0.9019 - val_loss: 0.9889 - val_acc: 0.7246
Epoch 58/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3342 - acc: 0.9004 - val_loss: 0.9719 - val_acc: 0.7165
Epoch 59/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.3730 - acc: 0.8915 - val_loss: 0.9922 - val_acc: 0.7130
Epoch 60/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.3756 - acc: 0.8875 - val_loss: 1.0081 - val_acc: 0.6875
Epoch 61/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.3529 - acc: 0.8934 - val_loss: 0.9128 - val_acc: 0.7400
Epoch 62/100
1563/1563 [==============================] - 62s 39ms/step - loss: 0.3503 - acc: 0.8985 - val_loss: 1.0445 - val_acc: 0.7125
Epoch 63/100
1563/1563 [==============================] - 62s 39ms/step - loss: 0.3154 - acc: 0.9072 - val_loss: 1.4581 - val_acc: 0.6356
Epoch 64/100
1563/1563 [==============================] - 62s 39ms/step - loss: 0.3227 - acc: 0.9053 - val_loss: 0.9711 - val_acc: 0.7451
Epoch 65/100
1563/1563 [==============================] - 62s 39ms/step - loss: 0.3183 - acc: 0.9077 - val_loss: 0.9790 - val_acc: 0.7058
Epoch 66/100
1563/1563 [==============================] - 62s 39ms/step - loss: 0.2879 - acc: 0.9154 - val_loss: 1.1663 - val_acc: 0.7329
Epoch 67/100
1563/1563 [==============================] - 62s 39ms/step - loss: 0.4216 - acc: 0.8828 - val_loss: 0.9167 - val_acc: 0.7420
Epoch 68/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3215 - acc: 0.9061 - val_loss: 0.9708 - val_acc: 0.7354
Epoch 69/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2834 - acc: 0.9162 - val_loss: 1.1277 - val_acc: 0.7067
Epoch 70/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2922 - acc: 0.9157 - val_loss: 1.1370 - val_acc: 0.7279
Epoch 71/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3100 - acc: 0.9091 - val_loss: 1.0563 - val_acc: 0.7159
Epoch 72/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3017 - acc: 0.9132 - val_loss: 1.0066 - val_acc: 0.7017
Epoch 73/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3042 - acc: 0.9111 - val_loss: 1.2718 - val_acc: 0.7091
Epoch 74/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3416 - acc: 0.9056 - val_loss: 1.1925 - val_acc: 0.7359
Epoch 75/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3364 - acc: 0.9057 - val_loss: 0.9275 - val_acc: 0.7548
Epoch 76/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2749 - acc: 0.9211 - val_loss: 1.1070 - val_acc: 0.7485
Epoch 77/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2700 - acc: 0.9223 - val_loss: 1.1561 - val_acc: 0.7358
Epoch 78/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2921 - acc: 0.9180 - val_loss: 1.2479 - val_acc: 0.7493
Epoch 79/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2919 - acc: 0.9197 - val_loss: 1.0197 - val_acc: 0.6979
Epoch 80/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2826 - acc: 0.9177 - val_loss: 1.2195 - val_acc: 0.7426
Epoch 81/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2630 - acc: 0.9258 - val_loss: 1.1744 - val_acc: 0.7078
Epoch 82/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.2920 - acc: 0.9178 - val_loss: 1.0606 - val_acc: 0.7386
Epoch 83/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.4702 - acc: 0.8721 - val_loss: 1.1290 - val_acc: 0.7402
Epoch 84/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.3206 - acc: 0.9084 - val_loss: 1.0723 - val_acc: 0.7492
Epoch 85/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2507 - acc: 0.9278 - val_loss: 1.2565 - val_acc: 0.7351
Epoch 86/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2404 - acc: 0.9330 - val_loss: 1.4762 - val_acc: 0.7230
Epoch 87/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2784 - acc: 0.9209 - val_loss: 1.1823 - val_acc: 0.7553
Epoch 88/100
1563/1563 [==============================] - 64s 41ms/step - loss: 0.3047 - acc: 0.9153 - val_loss: 1.0362 - val_acc: 0.7492
Epoch 89/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.3435 - acc: 0.9060 - val_loss: 1.0214 - val_acc: 0.7390
Epoch 90/100
1563/1563 [==============================] - 60s 38ms/step - loss: 0.2444 - acc: 0.9317 - val_loss: 1.8323 - val_acc: 0.5924
Epoch 91/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.3034 - acc: 0.9166 - val_loss: 1.3562 - val_acc: 0.7563
Epoch 92/100
1563/1563 [==============================] - 61s 39ms/step - loss: 0.2506 - acc: 0.9296 - val_loss: 1.2950 - val_acc: 0.6997
Epoch 93/100
1563/1563 [==============================] - 64s 41ms/step - loss: 0.2904 - acc: 0.9200 - val_loss: 1.1188 - val_acc: 0.7430
Epoch 94/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.2706 - acc: 0.9276 - val_loss: 1.1878 - val_acc: 0.7454
Epoch 95/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2779 - acc: 0.9244 - val_loss: 1.1429 - val_acc: 0.7323
Epoch 96/100
1563/1563 [==============================] - 63s 40ms/step - loss: 0.3107 - acc: 0.9165 - val_loss: 0.9513 - val_acc: 0.7428
Epoch 97/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2383 - acc: 0.9352 - val_loss: 1.0517 - val_acc: 0.7466
Epoch 98/100
1563/1563 [==============================] - 62s 40ms/step - loss: 0.2745 - acc: 0.9269 - val_loss: 1.1517 - val_acc: 0.7222
Epoch 99/100
1563/1563 [==============================] - 65s 42ms/step - loss: 0.2710 - acc: 0.9259 - val_loss: 2.0091 - val_acc: 0.6598
Epoch 100/100
1563/1563 [==============================] - 63s 41ms/step - loss: 0.2288 - acc: 0.9381 - val_loss: 1.1315 - val_acc: 0.7347
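One observation from this log: training accuracy keeps climbing toward 0.93 while validation accuracy plateaus around 0.73-0.75 and the validation loss drifts upward, which points to overfitting. An optional tweak, not used in the run above and with an illustrative patience value, would be to stop early on validation loss:

# Optional sketch (not part of the original run): stop when val_loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',          # watch the validation loss
    patience=10,                 # allow 10 epochs without improvement
    restore_best_weights=True)   # roll back to the best epoch

# The fit call above could then be rerun as:
# history = model.fit(train_x, train_y, epochs=100,
#                     validation_data=(test_x, test_y),
#                     callbacks=[early_stop])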
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()

5. Evaluate the model

In [16]:
# Check the model's predictions on the test set
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[3.0895378e-03 9.6215597e-05 1.6071843e-03 ... 4.0405225e-03
  2.1688156e-02 6.0750626e-04]
 [8.0435174e-03 6.8792435e-03 7.0092967e-04 ... 8.1004895e-05
  9.7917753e-01 2.8321645e-03]
 [4.1994657e-02 3.7909746e-01 1.6969688e-02 ... 1.7185479e-02
  1.5301260e-01 2.7992797e-01]
 ...
 [1.5912231e-16 1.1275286e-16 1.5126912e-07 ... 8.6301055e-09
  9.9213949e-20 1.8491353e-16]
 [3.6481027e-02 9.5197400e-03 3.6190969e-01 ... 6.8093874e-02
  1.0013008e-02 5.0117713e-03]
 [3.1893183e-08 1.0194241e-11 9.3353165e-08 ... 9.9798810e-01
  1.5699721e-11 3.2265575e-07]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 1. 0.]
 [0. 0. 0. ... 0. 1. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 1. 0. 0.]], shape=(10000, 10), dtype=float32)
In [17]:
# Take the index of the maximum value in predict_y along each row (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Do the same for the one-hot true labels
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([3 8 1 ... 5 2 7], shape=(10000,), dtype=int64)
tf.Tensor([3 8 8 ... 5 1 7], shape=(10000,), dtype=int64)
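With both tensors reduced to class indices, the overall test accuracy can be computed directly. A small sketch, assuming the predict_y and test_y tensors from the cell above:

# Fraction of test images whose predicted class matches the true class
accuracy = tf.reduce_mean(tf.cast(tf.equal(predict_y, test_y), tf.float32))
print(accuracy.numpy())  # roughly 0.73-0.75, consistent with the final val_acc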
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
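To read the predictions more easily, the class indices can be mapped to the standard CIFAR-10 label names. A sketch, using the usual CIFAR-10 class ordering and the predict_y / test_y tensors from above:

# Standard CIFAR-10 class names, indexed 0-9
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

# Compare predicted and true labels for the first few test images
for i in range(4):
    print('predicted:', class_names[int(predict_y[i])],
          '| actual:', class_names[int(test_y[i])])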
posted @ 2020-09-19 22:27 by 范仁义