
TensorFlow 2 (Pre-course) --- 5.3.2 Handwritten Digit Recognition - Layer API - Convolutional Neural Network - Slightly Modified LeNet-5

I. Summary

One-sentence summary:

Slightly modify LeNet: change the activation function to ReLU and add dropout layers. After 50 epochs the test-set accuracy reaches 99.4%+; with more training, accuracy goes higher.
# When using a convolutional neural network, the channel dimension of the training and test x must be made explicit
train_x = tf.reshape(train_x,[-1,28,28,1])
test_x = tf.reshape(test_x,[-1,28,28,1])

# Build the model container
model = tf.keras.Sequential()

# LeNet
model.add(tf.keras.layers.Conv2D(32,(5,5),strides=(1,1),input_shape=(28,28,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Dropout(0.5)) # dropout layer

model.add(tf.keras.layers.Conv2D(64,(5,5),strides=(1,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Dropout(0.5)) # dropout layer


# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512,activation='relu'))
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Print the model structure
model.summary()
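For reference, below is a minimal sketch of the classic LeNet-5 layout (tanh activations, average pooling, no dropout), so the changes above are easy to spot. The layer sizes (6/16 filters, 120/84 dense units) follow the original paper, but the adaptation to 28x28 input and the name lenet5 are illustrative assumptions, not code from this post.

# Classic LeNet-5 sketch for comparison only (illustrative; not part of this post's run)
lenet5 = tf.keras.Sequential()
lenet5.add(tf.keras.layers.Conv2D(6,(5,5),input_shape=(28,28,1),padding='same',activation='tanh'))
lenet5.add(tf.keras.layers.AveragePooling2D(pool_size=(2,2)))
lenet5.add(tf.keras.layers.Conv2D(16,(5,5),padding='valid',activation='tanh'))
lenet5.add(tf.keras.layers.AveragePooling2D(pool_size=(2,2)))
lenet5.add(tf.keras.layers.Flatten())
lenet5.add(tf.keras.layers.Dense(120,activation='tanh'))
lenet5.add(tf.keras.layers.Dense(84,activation='tanh'))
lenet5.add(tf.keras.layers.Dense(10,activation='softmax'))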

 

 

II. Handwritten Digit Recognition - Layer API - Convolutional Neural Network - Slightly Modified LeNet-5

Video for the course lesson corresponding to this post:

 

Steps

1. Load the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

Handwritten digit recognition

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Load the dataset

Load the dataset directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
print(train_x.shape, train_y.shape)
(60000, 28, 28) (60000,)
In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[7 2 1 ... 4 5 6]
In [6]:
# Maximum pixel value (MNIST images are grayscale, 0-255)
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The previous step already split the data (load_data returns separate train and test sets).

In [7]:
# How to normalize image data:
# simply divide by 255
train_x = train_x/255.0
test_x = test_x/255.0
In [8]:
# Maximum pixel value after normalization
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
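The one-hot labels pair with categorical_crossentropy in the compile step later. As a side note, an equivalent setup (a sketch, not what this post does) skips tf.one_hot entirely, keeps the integer labels, and compiles with the sparse loss, which accepts class indices directly:

# Alternative sketch: keep integer labels (no tf.one_hot) and use the sparse loss
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['acc'])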

3. Build the model

In [10]:
# When using a convolutional neural network, the channel dimension of the training and test x must be made explicit
train_x = tf.reshape(train_x,[-1,28,28,1])
test_x = tf.reshape(test_x,[-1,28,28,1])

# Build the model container
model = tf.keras.Sequential()

# LeNet
model.add(tf.keras.layers.Conv2D(32,(5,5),strides=(1,1),input_shape=(28,28,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Dropout(0.5)) # dropout layer

model.add(tf.keras.layers.Conv2D(64,(5,5),strides=(1,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Dropout(0.5)) # dropout layer


# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512,activation='relu'))
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Print the model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 24, 24, 32)        832       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 32)        0         
_________________________________________________________________
dropout (Dropout)            (None, 12, 12, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 8, 8, 64)          51264     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 4, 4, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               524800    
_________________________________________________________________
dense_1 (Dense)              (None, 256)               131328    
_________________________________________________________________
dense_2 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_3 (Dense)              (None, 10)                1290      
=================================================================
Total params: 742,410
Trainable params: 742,410
Non-trainable params: 0
_________________________________________________________________
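The counts in the summary can be checked by hand: a Conv2D layer has (kernel_h x kernel_w x in_channels + 1) x filters parameters, and with padding='valid' each 5x5 convolution shrinks the feature map by 4 (28 -> 24 -> pool -> 12 -> 8 -> pool -> 4), so Flatten yields 4 x 4 x 64 = 1024 features. A quick sanity check:

# Sanity-check the parameter counts reported by model.summary()
print((5*5*1+1)*32)    # conv2d:   832
print((5*5*32+1)*64)   # conv2d_1: 51264
print((1024+1)*512)    # dense:    524800
print((512+1)*256)     # dense_1:  131328
print((256+1)*128)     # dense_2:  32896
print((128+1)*10)      # dense_3:  1290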
In [11]:
print(train_x.shape)
print(test_x.shape)
(60000, 28, 28, 1)
(10000, 28, 28, 1)

4. Train the model

In [12]:
# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
   1/1875 [..............................] - ETA: 1s - loss: 2.3070 - acc: 0.0312WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0010s vs `on_train_batch_end` time: 0.0020s). Check your callbacks.
1875/1875 [==============================] - 8s 4ms/step - loss: 0.2078 - acc: 0.9328 - val_loss: 0.0469 - val_acc: 0.9862
Epoch 2/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0887 - acc: 0.9743 - val_loss: 0.0433 - val_acc: 0.9871
Epoch 3/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0745 - acc: 0.9787 - val_loss: 0.0339 - val_acc: 0.9891
Epoch 4/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0621 - acc: 0.9814 - val_loss: 0.0272 - val_acc: 0.9921
Epoch 5/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0595 - acc: 0.9834 - val_loss: 0.0285 - val_acc: 0.9913
Epoch 6/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0539 - acc: 0.9844 - val_loss: 0.0251 - val_acc: 0.9923
Epoch 7/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0508 - acc: 0.9855 - val_loss: 0.0279 - val_acc: 0.9935
Epoch 8/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0473 - acc: 0.9863 - val_loss: 0.0254 - val_acc: 0.9933
Epoch 9/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0449 - acc: 0.9872 - val_loss: 0.0256 - val_acc: 0.9928
Epoch 10/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0433 - acc: 0.9876 - val_loss: 0.0233 - val_acc: 0.9935
Epoch 11/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0441 - acc: 0.9878 - val_loss: 0.0265 - val_acc: 0.9920
Epoch 12/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0406 - acc: 0.9886 - val_loss: 0.0239 - val_acc: 0.9929
Epoch 13/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0421 - acc: 0.9884 - val_loss: 0.0213 - val_acc: 0.9938
Epoch 14/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0389 - acc: 0.9887 - val_loss: 0.0261 - val_acc: 0.9922
Epoch 15/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0362 - acc: 0.9896 - val_loss: 0.0258 - val_acc: 0.9920
Epoch 16/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0363 - acc: 0.9895 - val_loss: 0.0183 - val_acc: 0.9935
Epoch 17/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0339 - acc: 0.9897 - val_loss: 0.0252 - val_acc: 0.9934
Epoch 18/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0398 - acc: 0.9891 - val_loss: 0.0200 - val_acc: 0.9940
Epoch 19/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0357 - acc: 0.9904 - val_loss: 0.0230 - val_acc: 0.9940
Epoch 20/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0376 - acc: 0.9896 - val_loss: 0.0226 - val_acc: 0.9938
Epoch 21/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0309 - acc: 0.9914 - val_loss: 0.0199 - val_acc: 0.9941
Epoch 22/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0343 - acc: 0.9907 - val_loss: 0.0237 - val_acc: 0.9935
Epoch 23/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0326 - acc: 0.9907 - val_loss: 0.0223 - val_acc: 0.9935
Epoch 24/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0339 - acc: 0.9907 - val_loss: 0.0248 - val_acc: 0.9929
Epoch 25/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0341 - acc: 0.9909 - val_loss: 0.0194 - val_acc: 0.9937
Epoch 26/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0300 - acc: 0.9917 - val_loss: 0.0218 - val_acc: 0.9936
Epoch 27/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0326 - acc: 0.9912 - val_loss: 0.0290 - val_acc: 0.9919
Epoch 28/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0322 - acc: 0.9915 - val_loss: 0.0201 - val_acc: 0.9939
Epoch 29/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0323 - acc: 0.9917 - val_loss: 0.0225 - val_acc: 0.9935
Epoch 30/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0282 - acc: 0.9922 - val_loss: 0.0218 - val_acc: 0.9937
Epoch 31/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0292 - acc: 0.9921 - val_loss: 0.0216 - val_acc: 0.9946
Epoch 32/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0316 - acc: 0.9916 - val_loss: 0.0195 - val_acc: 0.9945
Epoch 33/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0295 - acc: 0.9924 - val_loss: 0.0229 - val_acc: 0.9942
Epoch 34/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0319 - acc: 0.9916 - val_loss: 0.0208 - val_acc: 0.9941
Epoch 35/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0284 - acc: 0.9927 - val_loss: 0.0199 - val_acc: 0.9940
Epoch 36/50
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0294 - acc: 0.9926 - val_loss: 0.0204 - val_acc: 0.9939
Epoch 37/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0286 - acc: 0.9925 - val_loss: 0.0183 - val_acc: 0.9941
Epoch 38/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0308 - acc: 0.9924 - val_loss: 0.0233 - val_acc: 0.9933
Epoch 39/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0263 - acc: 0.9937 - val_loss: 0.0224 - val_acc: 0.9933
Epoch 40/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0246 - acc: 0.9934 - val_loss: 0.0250 - val_acc: 0.9943
Epoch 41/50
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0281 - acc: 0.9929 - val_loss: 0.0287 - val_acc: 0.9933
Epoch 42/50
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0260 - acc: 0.9931 - val_loss: 0.0262 - val_acc: 0.9941
Epoch 43/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0255 - acc: 0.9933 - val_loss: 0.0193 - val_acc: 0.9939
Epoch 44/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0242 - acc: 0.9933 - val_loss: 0.0229 - val_acc: 0.9939
Epoch 45/50
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0266 - acc: 0.9934 - val_loss: 0.0224 - val_acc: 0.9937
Epoch 46/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0256 - acc: 0.9938 - val_loss: 0.0259 - val_acc: 0.9948
Epoch 47/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0280 - acc: 0.9934 - val_loss: 0.0284 - val_acc: 0.9937
Epoch 48/50
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0293 - acc: 0.9929 - val_loss: 0.0271 - val_acc: 0.9945
Epoch 49/50
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0266 - acc: 0.9936 - val_loss: 0.0265 - val_acc: 0.9934
Epoch 50/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0269 - acc: 0.9932 - val_loss: 0.0361 - val_acc: 0.9935
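Validation accuracy plateaus around 99.3-99.4% after roughly 20 epochs while fluctuating epoch to epoch, so the last epoch is not necessarily the best one. A common refinement (a sketch, not part of the run above; 'best_mnist.h5' is an illustrative filename) keeps the best weights via standard tf.keras callbacks:

# Optional sketch: retain the weights from the epoch with the best val_acc
callbacks = [
    tf.keras.callbacks.ModelCheckpoint('best_mnist.h5',monitor='val_acc',save_best_only=True),
    tf.keras.callbacks.EarlyStopping(monitor='val_acc',patience=10,restore_best_weights=True),
]
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y),callbacks=callbacks)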
In [13]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [16]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
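The train and test curves are easier to compare overlaid on one axes; a minimal sketch:

# Optional: overlay train and test accuracy in a single figure
plt.plot(history.epoch,history.history.get('acc'),label='train acc')
plt.plot(history.epoch,history.history.get('val_acc'),label='test acc')
plt.legend()
plt.show()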

5. Evaluate the model

In [17]:
# Check the model's predictions on the test set
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[5.41453682e-14 7.27399205e-17 4.34760317e-09 ... 1.00000000e+00
  1.21722277e-14 4.16180477e-08]
 [2.94957783e-08 7.29475759e-17 9.99993443e-01 ... 1.12780036e-07
  3.61859907e-06 9.80991291e-18]
 [0.00000000e+00 1.00000000e+00 4.24653289e-36 ... 0.00000000e+00
  2.81850896e-32 0.00000000e+00]
 ...
 [0.00000000e+00 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00
  7.47120688e-27 0.00000000e+00]
 [3.10926765e-24 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00
  1.69827789e-20 3.17124317e-18]
 [1.41145406e-21 9.62470104e-28 0.00000000e+00 ... 0.00000000e+00
  4.19468715e-15 0.00000000e+00]]
tf.Tensor(
[[0. 0. 0. ... 1. 0. 0.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 10), dtype=float32)
In [18]:
# Take the index of the maximum value along each row of predict_y (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Convert the one-hot test_y back to class indices the same way
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
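The first few predictions match the true labels; the full test accuracy is one line (a sketch; it should agree with the final val_acc above):

# Fraction of test samples where the predicted class equals the true class
accuracy = tf.reduce_mean(tf.cast(tf.equal(predict_y, test_y), tf.float32))
print(accuracy.numpy())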
In [19]:
# Reshape test_x back to (28, 28) images for display
test_x = tf.reshape(test_x,[-1,28,28])

plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
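To tie the images back to the model output, a sketch that titles each image with its predicted and true label:

# Sketch: show the first few test images with predicted vs. true labels
for i in range(4):
    plt.figure()
    plt.imshow(test_x[i])
    plt.title("pred: %d  true: %d" % (int(predict_y[i]), int(test_y[i])))
plt.show()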