
Tensorflow2 (Pre-course) --- 5.2 Handwritten Digit Recognition - Layer Approach - Convolutional Neural Network

I. Summary

One-sentence summary:

When using a convolutional neural network, the color-channel dimension of the training and test inputs must be made explicit:

# when using a CNN, the channel dimension of train/test x must be explicit
train_x = tf.reshape(train_x, [-1, 28, 28, 1])
test_x = tf.reshape(test_x, [-1, 28, 28, 1])

# build the Sequential container
model = tf.keras.Sequential()
# convolutional block
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1))) # convolutional layer
model.add(tf.keras.layers.BatchNormalization()) # batch-normalization layer
model.add(tf.keras.layers.Activation('relu')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling layer
model.add(tf.keras.layers.Dropout(0.5)) # dropout layer

# fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
# output layer
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# model architecture
model.summary()
II. Handwritten Digit Recognition - Layer Approach - Convolutional Neural Network

Video location in the course corresponding to this blog post:

Steps

1. Read the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

Handwritten digit recognition

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Read the dataset

The dataset can be loaded directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
print(train_x.shape, train_y.shape)
(60000, 28, 28) (60000,)
In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[7 2 1 ... 4 5 6]
In [6]:
# maximum pixel value (grayscale images, 0-255)
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The split was already done in the previous step: load_data() returns separate training and test sets.

In [7]:
# how to normalize image data:
# simply divide by 255
train_x = train_x/255.0
test_x = test_x/255.0
In [8]:
# maximum pixel value after normalization
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
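As an aside, the one-hot step can be skipped entirely by switching the loss to sparse_categorical_crossentropy, which consumes integer class labels directly. A minimal sketch of that variant, using a hypothetical tiny model (the notebook below keeps one-hot labels with categorical_crossentropy):

import tensorflow as tf

# hypothetical minimal model, only to illustrate the sparse-loss variant
m = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
# sparse_categorical_crossentropy takes integer labels, so tf.one_hot is unnecessary
m.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])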

3. Build the model

In [10]:
# when using a CNN, the channel dimension of train/test x must be explicit
train_x = tf.reshape(train_x,[-1,28,28,1])
test_x = tf.reshape(test_x,[-1,28,28,1])

# build the Sequential container
model = tf.keras.Sequential()
# convolutional block
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1))) # convolutional layer
model.add(tf.keras.layers.BatchNormalization()) # batch-normalization layer
model.add(tf.keras.layers.Activation('relu')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')) # pooling layer
model.add(tf.keras.layers.Dropout(0.5)) # dropout layer

# fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
# output layer
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# model architecture
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 6)         60        
_________________________________________________________________
batch_normalization (BatchNo (None, 28, 28, 6)         24        
_________________________________________________________________
activation (Activation)      (None, 28, 28, 6)         0         
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 6)         0         
_________________________________________________________________
dropout (Dropout)            (None, 14, 14, 6)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1176)              0         
_________________________________________________________________
dense (Dense)                (None, 256)               301312    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 335,582
Trainable params: 335,570
Non-trainable params: 12
_________________________________________________________________
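The parameter counts in the summary can be checked by hand; a quick sketch of the arithmetic, with the layer shapes taken straight from the summary above:

# Conv2D: (kernel_h * kernel_w * in_channels + bias) * filters
conv_params = (3 * 3 * 1 + 1) * 6                 # = 60
# BatchNormalization: gamma, beta (trainable) + moving mean/variance (non-trainable)
bn_params = 4 * 6                                  # = 24, of which 12 are non-trainable
# Dense: inputs * units + bias; Flatten yields 14*14*6 = 1176 inputs
dense1 = 1176 * 256 + 256                          # = 301312
dense2 = 256 * 128 + 128                           # = 32896
dense3 = 128 * 10 + 10                             # = 1290
print(conv_params + bn_params + dense1 + dense2 + dense3)   # 335582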
In [13]:
print(train_x.shape)
print(test_x.shape)
(60000, 28, 28, 1)
(10000, 28, 28, 1)
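Equivalently, the channel axis could be added with tf.expand_dims or NumPy-style indexing instead of tf.reshape; a minimal, self-contained sketch:

import tensorflow as tf

x = tf.zeros([60000, 28, 28])        # stand-in for the grayscale image batch
x1 = tf.expand_dims(x, axis=-1)      # append a trailing channel axis
x2 = x[..., tf.newaxis]              # same effect via indexing
print(x1.shape, x2.shape)            # (60000, 28, 28, 1) (60000, 28, 28, 1)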

4. Train the model

In [14]:
# configure the optimizer and loss function
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# start training
history = model.fit(train_x, train_y, epochs=50, validation_data=(test_x, test_y))
Epoch 1/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2703 - acc: 0.9139 - val_loss: 0.0729 - val_acc: 0.9765
Epoch 2/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1320 - acc: 0.9577 - val_loss: 0.0620 - val_acc: 0.9794
Epoch 3/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1117 - acc: 0.9643 - val_loss: 0.0483 - val_acc: 0.9855
Epoch 4/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0921 - acc: 0.9698 - val_loss: 0.0492 - val_acc: 0.9856
Epoch 5/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0849 - acc: 0.9734 - val_loss: 0.0441 - val_acc: 0.9862
Epoch 6/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0742 - acc: 0.9771 - val_loss: 0.0433 - val_acc: 0.9870
Epoch 7/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0673 - acc: 0.9779 - val_loss: 0.0417 - val_acc: 0.9865
Epoch 8/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0621 - acc: 0.9808 - val_loss: 0.0394 - val_acc: 0.9878
Epoch 9/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0583 - acc: 0.9809 - val_loss: 0.0353 - val_acc: 0.9894
Epoch 10/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0571 - acc: 0.9811 - val_loss: 0.0383 - val_acc: 0.9876
Epoch 11/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0514 - acc: 0.9830 - val_loss: 0.0373 - val_acc: 0.9888
Epoch 12/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0495 - acc: 0.9837 - val_loss: 0.0388 - val_acc: 0.9892
Epoch 13/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0467 - acc: 0.9848 - val_loss: 0.0438 - val_acc: 0.9872
Epoch 14/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0467 - acc: 0.9849 - val_loss: 0.0387 - val_acc: 0.9891
Epoch 15/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0451 - acc: 0.9848 - val_loss: 0.0387 - val_acc: 0.9894
Epoch 16/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0409 - acc: 0.9866 - val_loss: 0.0362 - val_acc: 0.9891
Epoch 17/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0407 - acc: 0.9867 - val_loss: 0.0413 - val_acc: 0.9891
Epoch 18/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0406 - acc: 0.9866 - val_loss: 0.0365 - val_acc: 0.9888
Epoch 19/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0393 - acc: 0.9872 - val_loss: 0.0363 - val_acc: 0.9900
Epoch 20/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0369 - acc: 0.9877 - val_loss: 0.0383 - val_acc: 0.9890
Epoch 21/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0367 - acc: 0.9879 - val_loss: 0.0363 - val_acc: 0.9895
Epoch 22/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0340 - acc: 0.9885 - val_loss: 0.0338 - val_acc: 0.9909
Epoch 23/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0356 - acc: 0.9883 - val_loss: 0.0344 - val_acc: 0.9900
Epoch 24/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0344 - acc: 0.9883 - val_loss: 0.0347 - val_acc: 0.9897
Epoch 25/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0321 - acc: 0.9893 - val_loss: 0.0354 - val_acc: 0.9891
Epoch 26/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0322 - acc: 0.9889 - val_loss: 0.0397 - val_acc: 0.9895
Epoch 27/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0323 - acc: 0.9898 - val_loss: 0.0386 - val_acc: 0.9892
Epoch 28/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0305 - acc: 0.9899 - val_loss: 0.0326 - val_acc: 0.9904
Epoch 29/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0285 - acc: 0.9906 - val_loss: 0.0355 - val_acc: 0.9887
Epoch 30/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0298 - acc: 0.9899 - val_loss: 0.0395 - val_acc: 0.9891
Epoch 31/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0292 - acc: 0.9905 - val_loss: 0.0388 - val_acc: 0.9896
Epoch 32/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0288 - acc: 0.9905 - val_loss: 0.0364 - val_acc: 0.9899
Epoch 33/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0270 - acc: 0.9911 - val_loss: 0.0375 - val_acc: 0.9900
Epoch 34/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0261 - acc: 0.9909 - val_loss: 0.0326 - val_acc: 0.9898
Epoch 35/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0282 - acc: 0.9912 - val_loss: 0.0369 - val_acc: 0.9900
Epoch 36/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0261 - acc: 0.9914 - val_loss: 0.0349 - val_acc: 0.9902
Epoch 37/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0265 - acc: 0.9915 - val_loss: 0.0365 - val_acc: 0.9904
Epoch 38/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0258 - acc: 0.9916 - val_loss: 0.0368 - val_acc: 0.9900
Epoch 39/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0250 - acc: 0.9920 - val_loss: 0.0384 - val_acc: 0.9894
Epoch 40/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0254 - acc: 0.9911 - val_loss: 0.0363 - val_acc: 0.9902
Epoch 41/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0234 - acc: 0.9923 - val_loss: 0.0383 - val_acc: 0.9899
Epoch 42/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0234 - acc: 0.9921 - val_loss: 0.0350 - val_acc: 0.9902
Epoch 43/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0238 - acc: 0.9924 - val_loss: 0.0343 - val_acc: 0.9903
Epoch 44/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0226 - acc: 0.9922 - val_loss: 0.0347 - val_acc: 0.9899
Epoch 45/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0227 - acc: 0.9927 - val_loss: 0.0352 - val_acc: 0.9912
Epoch 46/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0206 - acc: 0.9930 - val_loss: 0.0407 - val_acc: 0.9902
Epoch 47/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0228 - acc: 0.9924 - val_loss: 0.0366 - val_acc: 0.9909
Epoch 48/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0226 - acc: 0.9923 - val_loss: 0.0357 - val_acc: 0.9906
Epoch 49/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0217 - acc: 0.9925 - val_loss: 0.0364 - val_acc: 0.9907
Epoch 50/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0215 - acc: 0.9927 - val_loss: 0.0342 - val_acc: 0.9908
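Note that the validation loss stops improving after roughly epoch 20 while the training loss keeps falling, so most of the remaining epochs add little. A hedged sketch of how training could stop automatically with Keras's standard EarlyStopping callback (not used in this run):

# stop once val_loss has not improved for 5 epochs, restoring the best weights
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=5, restore_best_weights=True)
history = model.fit(train_x, train_y, epochs=50,
                    validation_data=(test_x, test_y),
                    callbacks=[early_stop])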
In [15]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [16]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [17]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [18]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
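The four separate plots can also be drawn on shared axes, which makes the gap between training and validation curves easier to read; a minimal sketch using the same history object:

plt.plot(history.epoch, history.history.get('loss'), label='train loss')
plt.plot(history.epoch, history.history.get('val_loss'), label='val loss')
plt.legend()
plt.title("train vs. val loss")
plt.show()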

5. Evaluate the model

In [19]:
# check the model's predictions on the test set
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[1.1710568e-13 1.6193080e-10 8.3055004e-12 ... 9.9999988e-01
  1.4683314e-11 7.1416366e-08]
 [2.8014698e-14 4.2932507e-12 1.0000000e+00 ... 4.5650758e-10
  1.0476748e-12 4.6428832e-15]
 [1.7001256e-12 1.0000000e+00 6.7324246e-10 ... 2.2142102e-10
  2.5032607e-09 5.0770850e-14]
 ...
 [1.5704575e-14 4.0898483e-12 1.1391642e-15 ... 1.3346999e-10
  3.1469749e-09 7.1328909e-10]
 [4.5596516e-15 9.4885729e-14 7.3918566e-20 ... 3.1930221e-17
  1.2603473e-09 4.0514066e-13]
 [3.0351821e-15 6.5886786e-23 2.0204347e-21 ... 2.9252839e-29
  6.4592590e-19 9.0667954e-18]]
tf.Tensor(
[[0. 0. 0. ... 1. 0. 0.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 10), dtype=float32)
In [20]:
# take the index of the max value in each row of predict_y (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# do the same for the one-hot test labels
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
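Beyond eyeballing the first few labels, the overall test accuracy can be computed directly from these two integer tensors; a minimal sketch:

# fraction of test samples where predicted class == true class
correct = tf.equal(predict_y, test_y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print(accuracy.numpy())   # should roughly match the final val_acc (~0.99)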
In [22]:
# reshape back to (28, 28) so the images can be plotted
test_x = tf.reshape(test_x,[-1,28,28])

plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
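To make this visual check more informative, the predicted and true labels can be placed in each figure title; a minimal sketch over the same first few test images:

for i in range(4):
    plt.figure()
    plt.imshow(test_x[i])
    plt.title(f"predicted: {predict_y[i].numpy()}, actual: {test_y[i].numpy()}")
plt.show()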