
Tensorflow2 (Pre-course) --- 7.3 cifar10 Classification - Layer Approach - Convolutional Neural Network - Simplified LeNet

I. Summary

One-sentence summary:

Before the fully connected layers there are two convolutional layers, the first 6@5*5 and the second 16@5*5; the activation function is sigmoid, with no batch normalization and no dropout.
# Build the sequential container
model = tf.keras.Sequential()

# Convolutional block 1
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding='valid', input_shape=(32,32,3))) # convolutional layer
# no batch normalization
model.add(tf.keras.layers.Activation('sigmoid')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='valid')) # pooling layer
# no dropout layer

# Convolutional block 2 (input_shape is only needed on the first layer, so it is dropped here)
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding='valid')) # convolutional layer
# no batch normalization
model.add(tf.keras.layers.Activation('sigmoid')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='valid')) # pooling layer
# no dropout layer

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Model structure
model.summary()


II. cifar10 Classification - Layer Approach - Convolutional Neural Network - Simplified LeNet

Video location in the corresponding course:

 

Steps

1. Read the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

cifar10 (object classification)

The dataset contains 60,000 32*32 color images in 10 classes, with 6,000 images per class. 50,000 of them are used for training, organized as 5 training batches of 10,000 images each; the other 10,000 are used for testing and form a single batch. The test batch contains exactly 1,000 randomly selected images from each class; the remaining images are shuffled to form the training batches. Note that the classes within a single training batch are not necessarily equally represented, but across the training batches as a whole each class has exactly 5,000 images.



In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Read the dataset

The dataset can be loaded directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()
print(train_x.shape, train_y.shape)
(50000, 32, 32, 3) (50000, 1)
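To verify the class balance described above (5,000 training images per class), we can count the labels; a minimal check using numpy:

# Count training labels per class; each of the 10 classes should appear 5000 times.
classes, counts = np.unique(train_y, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))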

These are 32*32 color images; how should the three RGB channels be handled? Conv2D handles them natively: each filter spans all input channels, so it is enough to declare input_shape=(32,32,3) on the first layer (see the parameter-count check after the model summary below).

In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
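The integer labels map to class names in a fixed order (0=airplane through 9=truck). A small sketch that shows an image together with its class name; note that `class_names` is a list we define here for convenience, not part of the dataset API:

# CIFAR-10 label order; the dataset itself only stores integer labels 0-9.
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
plt.imshow(train_x[1])
plt.title(class_names[int(train_y[1][0])])  # train_y rows are length-1 arrays here
plt.show()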
In [5]:
print(test_y)
[[3]
 [8]
 [8]
 ...
 [5]
 [1]
 [7]]
In [6]:
# RGB pixel values
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The split was already done in the previous step: load_data() returns separate training and test sets.

In [7]:
# How do we normalize image data?
# Simply divide by 255
train_x = train_x/255.0
test_x = test_x/255.0
In [8]:
# RGB pixel values
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y=train_y.flatten()
test_y=test_y.flatten()
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
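As an aside, the one-hot encoding above is only needed because step 4 compiles with categorical_crossentropy. A sketch of the equivalent alternative that keeps the integer labels as-is (not run in this notebook; the compile call in step 4 would become):

# Alternative: skip tf.one_hot and let the loss work on integer labels directly.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])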

3. Build the model

What kind of model should we build? A simplified LeNet: two 5*5 convolution + max-pooling blocks (6 and 16 filters, sigmoid activations) followed by fully connected layers:

In [10]:
# Build the sequential container
model = tf.keras.Sequential()

# Convolutional block 1
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding='valid', input_shape=(32,32,3))) # convolutional layer
# no batch normalization
model.add(tf.keras.layers.Activation('sigmoid')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='valid')) # pooling layer
# no dropout layer

# Convolutional block 2 (input_shape is only needed on the first layer, so it is dropped here)
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding='valid')) # convolutional layer
# no batch normalization
model.add(tf.keras.layers.Activation('sigmoid')) # activation layer
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='valid')) # pooling layer
# no dropout layer

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 6)         456       
_________________________________________________________________
activation (Activation)      (None, 28, 28, 6)         0         
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 6)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 10, 10, 16)        2416      
_________________________________________________________________
activation_1 (Activation)    (None, 10, 10, 16)        0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 16)          0         
_________________________________________________________________
flatten (Flatten)            (None, 400)               0         
_________________________________________________________________
dense (Dense)                (None, 256)               102656    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 139,714
Trainable params: 139,714
Non-trainable params: 0
_________________________________________________________________
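The Param # column can be checked by hand: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters parameters (the +1 is the bias), and a Dense layer has (in_features + 1) * units. A quick verification that reproduces the table:

# Conv1: 5*5 kernel over 3 input channels, 6 filters (+6 biases)
print((5*5*3 + 1) * 6)      # 456
# Conv2: 5*5 kernel over 6 input channels, 16 filters
print((5*5*6 + 1) * 16)     # 2416
# Dense layers: flatten gives 5*5*16 = 400 features
print((400 + 1) * 256)      # 102656
print((256 + 1) * 128)      # 32896
print((128 + 1) * 10)       # 1290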

4. Train the model

In [11]:
# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# 开始训练
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
1563/1563 [==============================] - 7s 4ms/step - loss: 2.0595 - acc: 0.2229 - val_loss: 1.8737 - val_acc: 0.3082
Epoch 2/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.7982 - acc: 0.3443 - val_loss: 1.7143 - val_acc: 0.3764
Epoch 3/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.6844 - acc: 0.3875 - val_loss: 1.6099 - val_acc: 0.4181
Epoch 4/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.5837 - acc: 0.4257 - val_loss: 1.5129 - val_acc: 0.4510
Epoch 5/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.5030 - acc: 0.4551 - val_loss: 1.5018 - val_acc: 0.4594
Epoch 6/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.4397 - acc: 0.4809 - val_loss: 1.4057 - val_acc: 0.4874
Epoch 7/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.3910 - acc: 0.4970 - val_loss: 1.3685 - val_acc: 0.5048
Epoch 8/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.3559 - acc: 0.5098 - val_loss: 1.3799 - val_acc: 0.5038
Epoch 9/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.3150 - acc: 0.5283 - val_loss: 1.3588 - val_acc: 0.5139
Epoch 10/50
1563/1563 [==============================] - 8s 5ms/step - loss: 1.2849 - acc: 0.5391 - val_loss: 1.2992 - val_acc: 0.5283
Epoch 11/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.2559 - acc: 0.5500 - val_loss: 1.2966 - val_acc: 0.5302
Epoch 12/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.2322 - acc: 0.5569 - val_loss: 1.2696 - val_acc: 0.5418
Epoch 13/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.2078 - acc: 0.5672 - val_loss: 1.2618 - val_acc: 0.5504
Epoch 14/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.1901 - acc: 0.5734 - val_loss: 1.2636 - val_acc: 0.5495
Epoch 15/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.1704 - acc: 0.5817 - val_loss: 1.2443 - val_acc: 0.5591
Epoch 16/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.1531 - acc: 0.5857 - val_loss: 1.2532 - val_acc: 0.5553
Epoch 17/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.1362 - acc: 0.5932 - val_loss: 1.2245 - val_acc: 0.5674
Epoch 18/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.1179 - acc: 0.6013 - val_loss: 1.2290 - val_acc: 0.5755
Epoch 19/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.1017 - acc: 0.6047 - val_loss: 1.2291 - val_acc: 0.5655
Epoch 20/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.0921 - acc: 0.6084 - val_loss: 1.2557 - val_acc: 0.5669
Epoch 21/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.0777 - acc: 0.6164 - val_loss: 1.2056 - val_acc: 0.5746
Epoch 22/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.0625 - acc: 0.6205 - val_loss: 1.2263 - val_acc: 0.5738
Epoch 23/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.0534 - acc: 0.6245 - val_loss: 1.2124 - val_acc: 0.5788
Epoch 24/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.0393 - acc: 0.6297 - val_loss: 1.1826 - val_acc: 0.5873
Epoch 25/50
1563/1563 [==============================] - 7s 4ms/step - loss: 1.0275 - acc: 0.6309 - val_loss: 1.1944 - val_acc: 0.5757
Epoch 26/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.0170 - acc: 0.6373 - val_loss: 1.1782 - val_acc: 0.5917
Epoch 27/50
1563/1563 [==============================] - 7s 5ms/step - loss: 1.0044 - acc: 0.6416 - val_loss: 1.1946 - val_acc: 0.5870
Epoch 28/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9930 - acc: 0.6453 - val_loss: 1.2051 - val_acc: 0.5826
Epoch 29/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9855 - acc: 0.6484 - val_loss: 1.1930 - val_acc: 0.5883
Epoch 30/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9732 - acc: 0.6485 - val_loss: 1.2169 - val_acc: 0.5849
Epoch 31/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9650 - acc: 0.6548 - val_loss: 1.1990 - val_acc: 0.5879
Epoch 32/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9571 - acc: 0.6588 - val_loss: 1.1996 - val_acc: 0.5883
Epoch 33/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9478 - acc: 0.6624 - val_loss: 1.1997 - val_acc: 0.5836
Epoch 34/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9423 - acc: 0.6618 - val_loss: 1.1900 - val_acc: 0.5865
Epoch 35/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9317 - acc: 0.6674 - val_loss: 1.1949 - val_acc: 0.5955
Epoch 36/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9230 - acc: 0.6706 - val_loss: 1.2059 - val_acc: 0.5905
Epoch 37/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9131 - acc: 0.6739 - val_loss: 1.1959 - val_acc: 0.5932
Epoch 38/50
1563/1563 [==============================] - 8s 5ms/step - loss: 0.9084 - acc: 0.6740 - val_loss: 1.2171 - val_acc: 0.5914
Epoch 39/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.9010 - acc: 0.6781 - val_loss: 1.2106 - val_acc: 0.5896
Epoch 40/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8935 - acc: 0.6811 - val_loss: 1.2035 - val_acc: 0.5893
Epoch 41/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8832 - acc: 0.6864 - val_loss: 1.2339 - val_acc: 0.5858
Epoch 42/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8799 - acc: 0.6846 - val_loss: 1.2089 - val_acc: 0.5926
Epoch 43/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8729 - acc: 0.6872 - val_loss: 1.2065 - val_acc: 0.5978
Epoch 44/50
1563/1563 [==============================] - 8s 5ms/step - loss: 0.8668 - acc: 0.6902 - val_loss: 1.2333 - val_acc: 0.5842
Epoch 45/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8585 - acc: 0.6923 - val_loss: 1.2233 - val_acc: 0.5937
Epoch 46/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8541 - acc: 0.6939 - val_loss: 1.2260 - val_acc: 0.5960
Epoch 47/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8492 - acc: 0.6981 - val_loss: 1.2454 - val_acc: 0.5892
Epoch 48/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8432 - acc: 0.6973 - val_loss: 1.2392 - val_acc: 0.5949
Epoch 49/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8363 - acc: 0.7024 - val_loss: 1.2603 - val_acc: 0.5866
Epoch 50/50
1563/1563 [==============================] - 7s 5ms/step - loss: 0.8322 - acc: 0.7012 - val_loss: 1.2662 - val_acc: 0.5916
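After training, model.evaluate gives a single test-set score in one call; a minimal check (the exact numbers will vary slightly from run to run):

# Evaluate loss and accuracy on the test set.
test_loss, test_acc = model.evaluate(test_x, test_y, verbose=0)
print(test_loss, test_acc)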
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
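Plotting the training and validation curves on the same axes makes the overfitting easier to see (validation loss bottoms out around epoch 24 and then creeps up while training loss keeps falling); a sketch:

# Overlay train/validation accuracy to compare the two curves directly.
plt.plot(history.epoch, history.history.get('acc'), label='train acc')
plt.plot(history.epoch, history.history.get('val_acc'), label='val acc')
plt.legend()
plt.show()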

5. Evaluate the model

In [16]:
# Check the model's predictions
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[1.3584137e-03 4.8200751e-04 5.8215936e-03 ... 5.3081061e-03
  2.1438805e-02 5.4746214e-04]
 [5.8662478e-02 7.6073211e-01 4.4167458e-04 ... 2.1592616e-06
  1.6843069e-01 1.1687050e-02]
 [3.3079621e-03 6.9468123e-01 3.3711942e-05 ... 1.7604169e-05
  1.0326697e-01 1.9867758e-01]
 ...
 [1.4156199e-05 5.7651152e-07 8.9695854e-03 ... 4.5708120e-03
  2.1283584e-05 1.4811366e-05]
 [1.6959395e-02 2.6070757e-03 8.5781664e-03 ... 8.8634342e-02
  7.9935264e-05 3.1141951e-03]
 [6.6272560e-06 8.8053673e-07 8.4913532e-05 ... 9.9337065e-01
  3.3276342e-07 6.9845664e-06]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 1. 0.]
 [0. 0. 0. ... 0. 1. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 1. 0. 0.]], shape=(10000, 10), dtype=float32)
In [17]:
# Find the index of the maximum value in predict_y along each row (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Convert the one-hot test labels back to class indices the same way
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([3 1 1 ... 5 3 7], shape=(10000,), dtype=int64)
tf.Tensor([3 8 8 ... 5 1 7], shape=(10000,), dtype=int64)
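With both tensors reduced to class indices, test accuracy is just the fraction of matching entries; a one-line check:

# Fraction of test samples where the predicted class equals the true class.
acc = tf.reduce_mean(tf.cast(tf.equal(predict_y, test_y), tf.float32))
print(acc.numpy())  # should be close to the final val_acc (~0.59)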
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
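To eyeball a few predictions alongside the ground truth, the images can be titled with both labels; a sketch reusing the class_names list defined earlier:

# Show the first 4 test images with predicted vs. true class names.
for i in range(4):
    plt.figure()
    plt.imshow(test_x[i])
    plt.title(f"pred: {class_names[int(predict_y[i])]} / true: {class_names[int(test_y[i])]}")
plt.show()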