Forward Propagation in Practice
Tips:
I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines. - Claude Shannon
Preface
This post records the first hands-on exercise in my TensorFlow learning journey, forward propagation. The code is commented as thoroughly as possible, both to aid understanding and to make it easier to review later.
Concepts
Forward propagation: the computation a neural network performs to go from its input to its output.
The forward pass of a neural network is the process in which the data tensor (Tensor) flows (Flow) from the first layer to the output layer: starting from the input data, passing through each hidden layer, until the output is obtained and the error is computed. This is also where the TensorFlow framework gets its name.
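As a minimal sketch of a single forward step (the names below are just for illustration and are not part of the exercise), one fully connected layer is nothing more than a matrix multiplication, a bias add, and an activation:

import tensorflow as tf

# One fully connected layer: y = ReLU(x @ W + b)
x = tf.random.normal([4, 784])                                     # a batch of 4 flattened inputs
W = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))
b = tf.Variable(tf.zeros([256]))
y = tf.nn.relu(x @ W + b)
print(y.shape)                                                     # (4, 256)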
Implementation
out = ReLU{ReLU{ReLU[X@W1 + b1]@W2 + b2}@W3 + b3}
Procedure
The dataset is the MNIST handwritten digit images. The input has 784 nodes, the first layer outputs 256 nodes, the second layer outputs 128 nodes, and the third layer outputs 10 nodes, i.e. the probabilities that the current sample belongs to each of the 10 classes.
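To sanity-check that shape flow before running the full script, a quick sketch with an all-zero dummy batch (the batch size b = 2 is an arbitrary choice for this sketch) reproduces the [b, 784] => [b, 256] => [b, 128] => [b, 10] chain:

import tensorflow as tf

b = 2
x = tf.zeros([b, 784])                             # dummy batch of flattened images
w1, b1 = tf.zeros([784, 256]), tf.zeros([256])
w2, b2 = tf.zeros([256, 128]), tf.zeros([128])
w3, b3 = tf.zeros([128, 10]), tf.zeros([10])
out = tf.nn.relu(tf.nn.relu(x @ w1 + b1) @ w2 + b2) @ w3 + b3
print(out.shape)                                   # (2, 10)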
Code
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = '2'  # suppress TensorFlow warnings
import tensorflow as tf
from tensorflow.keras import datasets  # dataset management utilities
# Load the dataset
# x: [60k, 28, 28]
# y: [60k]
(x, y), _ = datasets.mnist.load_data()
# Convert to tensors
# x: [0~255] => [0~1.]
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(y, dtype=tf.int32)
print(x.shape, y.shape, x.dtype, y.dtype)
print(tf.reduce_min(x), tf.reduce_max(x))
print(tf.reduce_min(y), tf.reduce_max(y))
# Build the dataset, drawing 128 samples per batch
train_db = tf.data.Dataset.from_tensor_slices((x, y)).batch(128)
# Create an iterator
train_iter = iter(train_db)
sample = next(train_iter)  # take one batch
print('batch:', sample[0].shape, sample[1].shape)
# Create the weights
# Every layer's tensors need to be optimized, so they are declared as Variable,
# and the weight tensors are initialized from a truncated normal distribution
# The bias vectors can simply be initialized to 0
# [b, 784] => [b, 256] => [b, 128] => [b, 10]
# [dim_in, dim_out], [dim_out]
w1 = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))  # small stddev avoids exploding gradients (NaN)
b1 = tf.Variable(tf.zeros([256]))
w2 = tf.Variable(tf.random.truncated_normal([256, 128], stddev=0.1))
b2 = tf.Variable(tf.zeros([128]))
w3 = tf.Variable(tf.random.truncated_normal([128, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]))
# learning rate
lr = 1e-3
# Forward computation and training loop
for epoch in range(10):  # iterate over the whole dataset multiple times
    for step, (x, y) in enumerate(train_db):  # loop over every batch
        # x: [128, 28, 28]
        # y: [128]
        # Reshape
        # [b, 28, 28] => [b, 28*28]
        x = tf.reshape(x, [-1, 28*28])
        with tf.GradientTape() as tape:  # automatic differentiation; tf.Variable is tracked automatically
            # Forward computation (recorded by the tape for the gradients)
            # x: [b, 28*28]
            # h1 = x@w1 + b1
            # [b, 784]@[784, 256] + [256] => [b, 256] + [256] => [b, 256] + [b, 256]
            h1 = x@w1 + tf.broadcast_to(b1, [x.shape[0], 256])
            h1 = tf.nn.relu(h1)  # non-linear activation
            # [b, 256] => [b, 128]
            h2 = h1@w2 + b2
            h2 = tf.nn.relu(h2)
            # [b, 128] => [b, 10]
            out = h2@w3 + b3
            # Compute the error
            # out: [b, 10]
            # y: [b] => [b, 10]
            y_onehot = tf.one_hot(y, depth=10)
            # Mean squared error
            # squared error: [b, 10]
            loss = tf.square(y_onehot - out)
            # mean over all elements: scalar
            loss = tf.reduce_mean(loss)
        # Compute the gradients
        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        # w1 = w1 - lr * w1_grad
        # assign_sub updates in place (the Variable type is preserved)
        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])
        w3.assign_sub(lr * grads[4])
        b3.assign_sub(lr * grads[5])
        if step % 100 == 0:
            print(epoch, step, 'loss:', float(loss))
Output of the run:
(60000, 28, 28) (60000,) <dtype: 'float32'> <dtype: 'int32'>
tf.Tensor(0.0, shape=(), dtype=float32) tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(0, shape=(), dtype=int32) tf.Tensor(9, shape=(), dtype=int32)
batch: (128, 28, 28) (128,)
0 0 loss: 0.33976617455482483
0 100 loss: 0.1958806812763214
0 200 loss: 0.18718643486499786
0 300 loss: 0.15601249039173126
0 400 loss: 0.17516227066516876
1 0 loss: 0.1537063717842102
1 100 loss: 0.14795097708702087
1 200 loss: 0.15245798230171204
1 300 loss: 0.1325622797012329
1 400 loss: 0.14938321709632874
2 0 loss: 0.1319803148508072
2 100 loss: 0.13147899508476257
2 200 loss: 0.13439854979515076
2 300 loss: 0.1185288205742836
2 400 loss: 0.13274045288562775
3 0 loss: 0.1175210028886795
3 100 loss: 0.12028291076421738
3 200 loss: 0.12187065184116364
3 300 loss: 0.10856960713863373
3 400 loss: 0.12117012590169907
4 0 loss: 0.10719207674264908
4 100 loss: 0.11207938194274902
4 200 loss: 0.11259110271930695
4 300 loss: 0.10115315020084381
4 400 loss: 0.11270163208246231
5 0 loss: 0.09946105629205704
5 100 loss: 0.10572906583547592
5 200 loss: 0.10535679757595062
5 300 loss: 0.09530024230480194
5 400 loss: 0.10628950595855713
6 0 loss: 0.09349677711725235
6 100 loss: 0.10055620968341827
6 200 loss: 0.09956973791122437
6 300 loss: 0.09057899564504623
6 400 loss: 0.10117393732070923
7 0 loss: 0.08874528855085373
7 100 loss: 0.09627220034599304
7 200 loss: 0.09485194087028503
7 300 loss: 0.08672039955854416
7 400 loss: 0.0969923660159111
8 0 loss: 0.08484948426485062
8 100 loss: 0.09272643178701401
8 200 loss: 0.09091418981552124
8 300 loss: 0.08345989137887955
8 400 loss: 0.09351988136768341
9 0 loss: 0.08157537877559662
9 100 loss: 0.08971865475177765
9 200 loss: 0.08754660189151764
9 300 loss: 0.08066332340240479
9 400 loss: 0.09057702869176865
Process finished with exit code 0
Notes
- 1. Detailed walkthrough of loading the dataset
- 2. The principle behind fixing the exploding-gradient (NaN) problem
- 3. Explanation of the tf.GradientTape() code (see the sketch after this list)
- 4. Completing the compute-gradients step with concise code
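For items 3 and 4, a minimal standalone sketch (with made-up toy values, not taken from the script above) shows the record-then-differentiate pattern and the in-place update used in the training loop:

import tensorflow as tf

# Toy example: loss = (w*x - y)^2, so d(loss)/dw = 2*x*(w*x - y)
w = tf.Variable(2.0)
x, y = 3.0, 9.0
with tf.GradientTape() as tape:      # tf.Variable is watched automatically
    loss = tf.square(w * x - y)      # (6 - 9)^2 = 9
grad = tape.gradient(loss, w)        # 2 * 3 * (6 - 9) = -18
w.assign_sub(1e-3 * grad)            # one gradient-descent step, in place
print(float(grad), float(w))         # -18.0 2.018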
Conclusion
This exercise is built around recognizing the MNIST handwritten digits; walking through the whole workflow makes the forward-propagation process easier to understand.
