
200813_tensorflow2 --- 3. A simple neural network for iris classification

I. Summary

One-sentence summary:

(A) Concretely, each batch of data (32 samples here) is multiplied by w1 as a whole, then b1 is added; b1 has fewer dimensions, so broadcasting must be used.
(B) y = tf.matmul(x_train, w1) + b1  # neural-network multiply-and-add
(C) shape=(32, 4) * shape=(4, 3) + shape=(3,) (a quick shape check is sketched after the training code below)
# Training
for epoch in range(epoch):  # dataset-level loop: each epoch iterates over the whole dataset once
    for step, (x_train, y_train) in enumerate(train_db):  # batch-level loop: each step processes one batch
        with tf.GradientTape() as tape:  # the with block records gradient information
            y = tf.matmul(x_train, w1) + b1  # neural-network multiply-and-add
            y = tf.nn.softmax(y)  # make the output y a probability distribution (now on the same scale as the one-hot labels, so the loss can be computed from their difference)
            y_ = tf.one_hot(y_train, depth=3)  # convert labels to one-hot format for computing loss and accuracy
            loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)
            loss_all += loss.numpy()  # accumulate the loss of each step so the per-epoch average loss can be computed later, which is more accurate
        # compute gradients of the loss with respect to each parameter
        grads = tape.gradient(loss, [w1, b1])

        # gradient update: w1 = w1 - lr * w1_grad    b = b - lr * b_grad
        w1.assign_sub(lr * grads[0])  # update parameter w1
        b1.assign_sub(lr * grads[1])  # update parameter b1

    # print the loss once per epoch
    print("Epoch {}, loss: {}".format(epoch, loss_all/4))
    train_loss_results.append(loss_all / 4)  # record the average loss over the 4 steps
    loss_all = 0  # reset loss_all to prepare for recording the next epoch's loss
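The shape arithmetic in (C) can be checked with a minimal sketch; the tensors below are random placeholders used only to verify shapes, not the real iris data:

import tensorflow as tf

x = tf.random.normal([32, 4])   # one batch of 32 samples, 4 features each
w = tf.random.normal([4, 3])    # weights: 4 inputs -> 3 outputs
b = tf.random.normal([3])       # bias, broadcast across the batch dimension
out = tf.matmul(x, w) + b       # (32, 4) @ (4, 3) -> (32, 3), then + (3,) broadcasts
print(out.shape)                # (32, 3)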

 

 

1. How is the shuffled dataset split into a training set (the first 120 rows) and a test set (the last 30 rows)?

x_train = x_data[:-30]
y_train = y_data[:-30]
x_test = x_data[-30:]
y_test = y_data[-30:]

 

2. When shuffling the data, how do we keep x and y in one-to-one correspondence?

Use the same random seed, so the input features and labels stay in one-to-one correspondence:
np.random.seed(116)  # use the same seed so that input features and labels remain in one-to-one correspondence
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
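A minimal check (on a tiny made-up array, not the iris data) shows why this works: re-seeding with the same value reproduces the same permutation, so features and labels stay aligned:

import numpy as np

a = np.array([10, 20, 30, 40])
b = np.array([1, 2, 3, 4])   # b[i] is the "label" of a[i]
np.random.seed(116)
np.random.shuffle(a)
np.random.seed(116)          # re-seeding gives the identical permutation
np.random.shuffle(b)
print(a, b)                  # both arrays are permuted the same way, so the pairs stay aligned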

 

 

3. The from_tensor_slices function pairs each input feature with its label value (and .batch splits the dataset into batches of `batch` samples each)?

train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
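A small sketch (with toy tensors, not the real iris split) of how the resulting dataset iterates batch by batch:

import tensorflow as tf

features = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]])
labels = tf.constant([0, 1, 0, 1, 0])
db = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
for step, (x, y) in enumerate(db):
    print(step, x.shape, y.shape)  # batches of 2; the last batch holds the remaining 1 sample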

 

 

4. How are the network parameters generated? With 4 input features the input layer has 4 nodes; since this is a 3-class problem, the output layer has 3 neurons.

w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))

 

 

5. Marking parameters as trainable with tf.Variable()?

w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))

 

 

6. Using a seed so the generated random numbers are the same on every run (convenient for teaching so everyone gets identical results; in real use, do not set the seed)?

(I) w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))
(II) b1 = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))

 

 

7. How does parameter w1 update itself?

w1.assign_sub(lr * grads[0]) 
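assign_sub updates the variable in place, equivalent to w1 = w1 - lr * grad. A tiny hand-made example (the numbers are arbitrary, not taken from the training run):

import tensorflow as tf

lr = 0.1
w = tf.Variable([1.0, 2.0, 3.0])
grad = tf.constant([0.5, 0.5, 0.5])   # pretend this came from tape.gradient
w.assign_sub(lr * grad)               # in place: w <- w - lr * grad
print(w.numpy())                      # [0.95 1.95 2.95]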

 

8. In y = tf.matmul(x_train, w1) + b1, b1 does not have enough dimensions; what happens?

When computing y = tf.matmul(x_train, w1) + b1, b1's dimensions are insufficient, so b1 is broadcast automatically.
print([[1,2],[3,4]]+[5,6])
v1=tf.constant([[1,2],[3,4]])
v2=tf.constant([5,6])
print(v1)
print(v2)
print(v1+v2)

Result:
[[1, 2], [3, 4], 5, 6]
tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)
tf.Tensor([5 6], shape=(2,), dtype=int32)
tf.Tensor(
[[ 6  8]
 [ 8 10]], shape=(2, 2), dtype=int32)

 

 

9. When softmax is applied to y within each batch, what exactly is done?

Softmax makes the class probabilities of each sample sum to 1, i.e. the three class probabilities in each row add up to 1.
Result:
y = tf.nn.softmax(y)
 tf.Tensor(
[[0.28231245 0.13475922 0.58292836]
 [0.30029225 0.13068524 0.5690225 ]
 [0.5755953  0.04495765 0.37944704]], shape=(3, 3), dtype=float32)
y_ = tf.one_hot(y_train, depth=3)
 tf.Tensor(
[[1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)
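To see that each row really sums to 1, one can sum the softmax output along axis 1; the logits below are rough hand-copied approximations of the ones printed later in this post, used only for illustration:

import tensorflow as tf

logits = tf.constant([[0.35, -0.39, 1.08],
                      [0.39, -0.44, 1.03],
                      [1.17, -1.38, 0.76]])
probs = tf.nn.softmax(logits)                 # softmax is applied row by row (last axis)
print(tf.reduce_sum(probs, axis=1).numpy())   # approximately [1. 1. 1.]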

 

 

10. Both the softmax output and the one-hot code are probability-like, so they can be compared directly?

(①) y = tf.matmul(x_train, w1) + b1  # neural-network multiply-and-add
(②) y = tf.nn.softmax(y)  # make the output y a probability distribution (now on the same scale as the one-hot labels)
(③) y_ = tf.one_hot(y_train, depth=3)  # convert labels to one-hot format for computing loss and accuracy
(④) loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)
When computing the mean squared error, the prediction y is the probability distribution after softmax and y_ is the one-hot encoding,
so the differences lie between -1 and 1, with both signs. This is easy to understand: both the softmax output and the one-hot code are probability-like, so they can be subtracted directly.

Result:
y = tf.nn.softmax(y)
 tf.Tensor(
[[0.28231245 0.13475922 0.58292836]
 [0.30029225 0.13068524 0.5690225 ]
 [0.5755953  0.04495765 0.37944704]], shape=(3, 3), dtype=float32)
y_ = tf.one_hot(y_train, depth=3)
 tf.Tensor(
[[1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)
y_ - y
 tf.Tensor(
[[ 0.71768755 -0.13475922 -0.58292836]
 [ 0.69970775 -0.13068524 -0.5690225 ]
 [-0.5755953  -0.04495765  0.62055296]], shape=(3, 3), dtype=float32)
tf.square(y_ - y)
 tf.Tensor(
[[0.51507545 0.01816005 0.33980548]
 [0.48959094 0.01707863 0.3237866 ]
 [0.33130997 0.00202119 0.38508597]], shape=(3, 3), dtype=float32)
loss = tf.reduce_mean(tf.square(y_ - y))
 tf.Tensor(0.2691016, shape=(), dtype=float32)

 

 

11. When computing the mean squared error, is every element of the [batch, class] matrix squared and then averaged?

loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)
y = tf.nn.softmax(y)
 tf.Tensor(
[[0.28231245 0.13475922 0.58292836]
 [0.30029225 0.13068524 0.5690225 ]
 [0.5755953  0.04495765 0.37944704]], shape=(3, 3), dtype=float32)
y_ = tf.one_hot(y_train, depth=3)
 tf.Tensor(
[[1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)
y_ - y
 tf.Tensor(
[[ 0.71768755 -0.13475922 -0.58292836]
 [ 0.69970775 -0.13068524 -0.5690225 ]
 [-0.5755953  -0.04495765  0.62055296]], shape=(3, 3), dtype=float32)
tf.square(y_ - y)
 tf.Tensor(
[[0.51507545 0.01816005 0.33980548]
 [0.48959094 0.01707863 0.3237866 ]
 [0.33130997 0.00202119 0.38508597]], shape=(3, 3), dtype=float32)
loss = tf.reduce_mean(tf.square(y_ - y))
 tf.Tensor(0.2691016, shape=(), dtype=float32)
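tf.reduce_mean with no axis argument averages over all elements, so for a 3x3 batch the loss is the sum of the nine squared differences divided by 9. A quick check using the squared values printed above (hand-copied, so treat the result as approximate):

import tensorflow as tf

sq = tf.constant([[0.51507545, 0.01816005, 0.33980548],
                  [0.48959094, 0.01707863, 0.3237866 ],
                  [0.33130997, 0.00202119, 0.38508597]])
print(tf.reduce_mean(sq).numpy())       # ~0.2691, matching the loss above
print((tf.reduce_sum(sq) / 9).numpy())  # same value: mean = sum / number of elements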

 

 

 

II. A simple neural network for iris classification

Video location of the course this blog post corresponds to:

# -*- coding: UTF-8 -*-
# Using the iris dataset: implement forward propagation and back propagation, and visualize the loss curve

# import the required modules
import tensorflow as tf
from sklearn import datasets
from matplotlib import pyplot as plt
import numpy as np

# load the data: input features and labels
x_data = datasets.load_iris().data
y_data = datasets.load_iris().target

# shuffle the data randomly (the raw data is ordered by class; not shuffling it hurts accuracy)
# seed: an integer random seed; once set, the same random numbers are generated every run (for teaching, so everyone gets the same result)
np.random.seed(116)  # use the same seed so that input features and labels stay in one-to-one correspondence
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
tf.random.set_seed(116)

# split the shuffled dataset into a training set (first 120 rows) and a test set (last 30 rows)
x_train = x_data[:-30]
y_train = y_data[:-30]
x_test = x_data[-30:]
y_test = y_data[-30:]

# cast x to float32, otherwise the matrix multiplication later fails with a dtype mismatch
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)

# from_tensor_slices pairs input features with labels one-to-one (then the dataset is split into batches of `batch` samples each)
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

# create the network parameters: 4 input features, so the input layer has 4 nodes; 3 classes, so the output layer has 3 neurons
# tf.Variable() marks the parameters as trainable
# the seed makes the generated random numbers the same every run (for teaching, so everyone gets identical results; omit it in real use)
w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))
b1 = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))

lr = 0.1  # learning rate 0.1
train_loss_results = []  # record each epoch's loss in this list, to plot the loss curve later
test_acc = []  # record each epoch's accuracy in this list, to plot the accuracy curve later
epoch = 500  # train for 500 epochs
loss_all = 0  # each epoch has 4 steps; loss_all accumulates the 4 per-step losses

# Training
for epoch in range(epoch):  # dataset-level loop: each epoch iterates over the whole dataset once
    for step, (x_train, y_train) in enumerate(train_db):  # batch-level loop: each step processes one batch
        with tf.GradientTape() as tape:  # the with block records gradient information
            y = tf.matmul(x_train, w1) + b1  # neural-network multiply-and-add
            y = tf.nn.softmax(y)  # make the output y a probability distribution (now on the same scale as the one-hot labels)
            y_ = tf.one_hot(y_train, depth=3)  # convert labels to one-hot format for computing loss and accuracy
            loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)
            loss_all += loss.numpy()  # accumulate the loss of each step so the per-epoch average loss can be computed later, which is more accurate
        # compute gradients of the loss with respect to each parameter
        grads = tape.gradient(loss, [w1, b1])

        # gradient update: w1 = w1 - lr * w1_grad    b = b - lr * b_grad
        w1.assign_sub(lr * grads[0])  # update parameter w1
        b1.assign_sub(lr * grads[1])  # update parameter b1

    # print the loss once per epoch
    print("Epoch {}, loss: {}".format(epoch, loss_all/4))
    train_loss_results.append(loss_all / 4)  # record the average loss over the 4 steps
    loss_all = 0  # reset loss_all to prepare for recording the next epoch's loss

    # Testing
    # total_correct counts the correctly predicted samples, total_number counts the total test samples; both start at 0
    total_correct, total_number = 0, 0
    for x_test, y_test in test_db:
        # predict with the updated parameters
        y = tf.matmul(x_test, w1) + b1
        y = tf.nn.softmax(y)
        pred = tf.argmax(y, axis=1)  # index of the largest value in y, i.e. the predicted class
        # cast pred to the dtype of y_test
        pred = tf.cast(pred, dtype=y_test.dtype)
        # correct is 1 for a right prediction and 0 otherwise; cast the bool result to int
        correct = tf.cast(tf.equal(pred, y_test), dtype=tf.int32)
        # sum the correct predictions within this batch
        correct = tf.reduce_sum(correct)
        # accumulate the correct predictions over all batches
        total_correct += int(correct)
        # total_number is the total number of test samples, i.e. the number of rows of x_test; shape[0] returns the row count
        total_number += x_test.shape[0]
    # overall accuracy = total_correct / total_number
    acc = total_correct / total_number
    test_acc.append(acc)
    print("Test_acc:", acc)
    print("--------------------------")

# plot the loss curve
plt.title('Loss Function Curve')  # figure title
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Loss')  # y-axis label
plt.plot(train_loss_results, label="$Loss$")  # plot train_loss_results point by point, legend label Loss
plt.legend()  # draw the legend
plt.show()  # show the figure

# plot the accuracy curve
plt.title('Acc Curve')  # figure title
plt.xlabel('Epoch')  # x-axis label
plt.ylabel('Acc')  # y-axis label
plt.plot(test_acc, label="$Accuracy$")  # plot test_acc point by point, legend label Accuracy
plt.legend()
plt.show()
Epoch 0, loss: 0.2821310982108116
Test_acc: 0.16666666666666666
--------------------------
Epoch 1, loss: 0.25459614396095276
Test_acc: 0.16666666666666666
--------------------------
Epoch 2, loss: 0.22570250555872917
Test_acc: 0.16666666666666666
--------------------------
Epoch 3, loss: 0.21028399839997292
Test_acc: 0.16666666666666666
--------------------------
Epoch 4, loss: 0.19942265003919601
Test_acc: 0.16666666666666666
--------------------------
Epoch 5, loss: 0.18873637542128563
Test_acc: 0.5
--------------------------
Epoch 6, loss: 0.17851298674941063
Test_acc: 0.5333333333333333
--------------------------
Epoch 7, loss: 0.16922875493764877
Test_acc: 0.5333333333333333
--------------------------
Epoch 8, loss: 0.16107673197984695
Test_acc: 0.5333333333333333
--------------------------
Epoch 9, loss: 0.15404684469103813
Test_acc: 0.5333333333333333
--------------------------
Epoch 10, loss: 0.14802725985646248
Test_acc: 0.5333333333333333
--------------------------
Epoch 11, loss: 0.14287303388118744
Test_acc: 0.5333333333333333
--------------------------
Epoch 12, loss: 0.1384414117783308
Test_acc: 0.5333333333333333
--------------------------
Epoch 13, loss: 0.13460607267916203
Test_acc: 0.5333333333333333
--------------------------
Epoch 14, loss: 0.13126072101294994
Test_acc: 0.5333333333333333
--------------------------
Epoch 15, loss: 0.12831821851432323
Test_acc: 0.5333333333333333
--------------------------
Epoch 16, loss: 0.12570794485509396
Test_acc: 0.5333333333333333
--------------------------
Epoch 17, loss: 0.12337299063801765
Test_acc: 0.5333333333333333
--------------------------
Epoch 18, loss: 0.12126746773719788
Test_acc: 0.5333333333333333
--------------------------
Epoch 19, loss: 0.11935433186590672
Test_acc: 0.5333333333333333
--------------------------
Epoch 20, loss: 0.11760355159640312
Test_acc: 0.5333333333333333
--------------------------
Epoch 21, loss: 0.11599068343639374
Test_acc: 0.5333333333333333
--------------------------
Epoch 22, loss: 0.11449568346142769
Test_acc: 0.5333333333333333
--------------------------
Epoch 23, loss: 0.11310207471251488
Test_acc: 0.5333333333333333
--------------------------
Epoch 24, loss: 0.11179621331393719
Test_acc: 0.5333333333333333
--------------------------
Epoch 25, loss: 0.11056671850383282
Test_acc: 0.5333333333333333
--------------------------
Epoch 26, loss: 0.1094040758907795
Test_acc: 0.5333333333333333
--------------------------
Epoch 27, loss: 0.10830028168857098
Test_acc: 0.5333333333333333
--------------------------
Epoch 28, loss: 0.10724855028092861
Test_acc: 0.5333333333333333
--------------------------
Epoch 29, loss: 0.10624313168227673
Test_acc: 0.5333333333333333
--------------------------
Epoch 30, loss: 0.10527909733355045
Test_acc: 0.5333333333333333
--------------------------
Epoch 31, loss: 0.10435222275555134
Test_acc: 0.5333333333333333
--------------------------
Epoch 32, loss: 0.10345886088907719
Test_acc: 0.5333333333333333
--------------------------
Epoch 33, loss: 0.1025958750396967
Test_acc: 0.5333333333333333
--------------------------
Epoch 34, loss: 0.10176052898168564
Test_acc: 0.5333333333333333
--------------------------
Epoch 35, loss: 0.10095042549073696
Test_acc: 0.5333333333333333
--------------------------
Epoch 36, loss: 0.10016347467899323
Test_acc: 0.5333333333333333
--------------------------
Epoch 37, loss: 0.09939785115420818
Test_acc: 0.5333333333333333
--------------------------
Epoch 38, loss: 0.0986519306898117
Test_acc: 0.5333333333333333
--------------------------
Epoch 39, loss: 0.09792428836226463
Test_acc: 0.5333333333333333
--------------------------
Epoch 40, loss: 0.09721365198493004
Test_acc: 0.5333333333333333
--------------------------
Epoch 41, loss: 0.09651889465749264
Test_acc: 0.5333333333333333
--------------------------
Epoch 42, loss: 0.095839012414217
Test_acc: 0.5333333333333333
--------------------------
Epoch 43, loss: 0.09517310559749603
Test_acc: 0.5333333333333333
--------------------------
Epoch 44, loss: 0.09452036954462528
Test_acc: 0.5333333333333333
--------------------------
Epoch 45, loss: 0.0938800759613514
Test_acc: 0.5333333333333333
--------------------------
Epoch 46, loss: 0.09325155802071095
Test_acc: 0.5333333333333333
--------------------------
Epoch 47, loss: 0.09263424575328827
Test_acc: 0.5333333333333333
--------------------------
Epoch 48, loss: 0.09202759340405464
Test_acc: 0.5333333333333333
--------------------------
Epoch 49, loss: 0.09143111854791641
Test_acc: 0.5333333333333333
--------------------------
Epoch 50, loss: 0.09084436483681202
Test_acc: 0.5666666666666667
--------------------------
Epoch 51, loss: 0.09026693925261497
Test_acc: 0.5666666666666667
--------------------------
Epoch 52, loss: 0.08969846554100513
Test_acc: 0.5666666666666667
--------------------------
Epoch 53, loss: 0.08913860842585564
Test_acc: 0.6
--------------------------
Epoch 54, loss: 0.08858705312013626
Test_acc: 0.6
--------------------------
Epoch 55, loss: 0.08804351650178432
Test_acc: 0.6
--------------------------
Epoch 56, loss: 0.08750772662460804
Test_acc: 0.6
--------------------------
Epoch 57, loss: 0.0869794450700283
Test_acc: 0.6
--------------------------
Epoch 58, loss: 0.08645843341946602
Test_acc: 0.6
--------------------------
Epoch 59, loss: 0.08594449236989021
Test_acc: 0.6
--------------------------
Epoch 60, loss: 0.08543741330504417
Test_acc: 0.6
--------------------------
Epoch 61, loss: 0.08493702113628387
Test_acc: 0.6
--------------------------
Epoch 62, loss: 0.08444313704967499
Test_acc: 0.6333333333333333
--------------------------
Epoch 63, loss: 0.08395560085773468
Test_acc: 0.6333333333333333
--------------------------
Epoch 64, loss: 0.08347426541149616
Test_acc: 0.6333333333333333
--------------------------
Epoch 65, loss: 0.08299898356199265
Test_acc: 0.6333333333333333
--------------------------
Epoch 66, loss: 0.08252961002290249
Test_acc: 0.6333333333333333
--------------------------
Epoch 67, loss: 0.08206603676080704
Test_acc: 0.6333333333333333
--------------------------
Epoch 68, loss: 0.08160812966525555
Test_acc: 0.6333333333333333
--------------------------
Epoch 69, loss: 0.08115577884018421
Test_acc: 0.6333333333333333
--------------------------
Epoch 70, loss: 0.08070887811481953
Test_acc: 0.6333333333333333
--------------------------
Epoch 71, loss: 0.08026731014251709
Test_acc: 0.6333333333333333
--------------------------
Epoch 72, loss: 0.07983098737895489
Test_acc: 0.6666666666666666
--------------------------
Epoch 73, loss: 0.07939981110394001
Test_acc: 0.6666666666666666
--------------------------
Epoch 74, loss: 0.0789736956357956
Test_acc: 0.6666666666666666
--------------------------
Epoch 75, loss: 0.07855254411697388
Test_acc: 0.7
--------------------------
Epoch 76, loss: 0.078136270865798
Test_acc: 0.7
--------------------------
Epoch 77, loss: 0.07772480882704258
Test_acc: 0.7
--------------------------
Epoch 78, loss: 0.07731806114315987
Test_acc: 0.7
--------------------------
Epoch 79, loss: 0.07691597007215023
Test_acc: 0.7
--------------------------
Epoch 80, loss: 0.07651844993233681
Test_acc: 0.7
--------------------------
Epoch 81, loss: 0.07612543925642967
Test_acc: 0.7333333333333333
--------------------------
Epoch 82, loss: 0.07573685422539711
Test_acc: 0.7333333333333333
--------------------------
Epoch 83, loss: 0.07535265013575554
Test_acc: 0.7333333333333333
--------------------------
Epoch 84, loss: 0.07497274503111839
Test_acc: 0.7333333333333333
--------------------------
Epoch 85, loss: 0.07459708210080862
Test_acc: 0.7666666666666667
--------------------------
Epoch 86, loss: 0.07422559149563313
Test_acc: 0.7666666666666667
--------------------------
Epoch 87, loss: 0.0738582294434309
Test_acc: 0.7666666666666667
--------------------------
Epoch 88, loss: 0.0734949205070734
Test_acc: 0.7666666666666667
--------------------------
Epoch 89, loss: 0.0731356143951416
Test_acc: 0.7666666666666667
--------------------------
Epoch 90, loss: 0.07278026826679707
Test_acc: 0.7666666666666667
--------------------------
Epoch 91, loss: 0.07242879550904036
Test_acc: 0.7666666666666667
--------------------------
Epoch 92, loss: 0.07208117935806513
Test_acc: 0.7666666666666667
--------------------------
Epoch 93, loss: 0.0717373350635171
Test_acc: 0.8
--------------------------
Epoch 94, loss: 0.07139723561704159
Test_acc: 0.8
--------------------------
Epoch 95, loss: 0.07106081955134869
Test_acc: 0.8
--------------------------
Epoch 96, loss: 0.07072804030030966
Test_acc: 0.8
--------------------------
Epoch 97, loss: 0.07039883825927973
Test_acc: 0.8
--------------------------
Epoch 98, loss: 0.07007318455725908
Test_acc: 0.8333333333333334
--------------------------
Epoch 99, loss: 0.06975101493299007
Test_acc: 0.8666666666666667
--------------------------
Epoch 100, loss: 0.06943228747695684
Test_acc: 0.8666666666666667
--------------------------
Epoch 101, loss: 0.06911696959286928
Test_acc: 0.8666666666666667
--------------------------
Epoch 102, loss: 0.06880500074476004
Test_acc: 0.8666666666666667
--------------------------
Epoch 103, loss: 0.06849635019898415
Test_acc: 0.8666666666666667
--------------------------
Epoch 104, loss: 0.06819096114486456
Test_acc: 0.8666666666666667
--------------------------
Epoch 105, loss: 0.06788879726082087
Test_acc: 0.8666666666666667
--------------------------
Epoch 106, loss: 0.06758982129395008
Test_acc: 0.8666666666666667
--------------------------
Epoch 107, loss: 0.0672939820215106
Test_acc: 0.9
--------------------------
Epoch 108, loss: 0.06700124498456717
Test_acc: 0.9
--------------------------
Epoch 109, loss: 0.06671156641095877
Test_acc: 0.9
--------------------------
Epoch 110, loss: 0.0664249137043953
Test_acc: 0.9
--------------------------
Epoch 111, loss: 0.06614123564213514
Test_acc: 0.9
--------------------------
Epoch 112, loss: 0.06586050614714622
Test_acc: 0.9
--------------------------
Epoch 113, loss: 0.06558268330991268
Test_acc: 0.9
--------------------------
Epoch 114, loss: 0.06530773546546698
Test_acc: 0.9
--------------------------
Epoch 115, loss: 0.06503560114651918
Test_acc: 0.9
--------------------------
Epoch 116, loss: 0.06476627010852098
Test_acc: 0.9
--------------------------
Epoch 117, loss: 0.0644997013732791
Test_acc: 0.9333333333333333
--------------------------
Epoch 118, loss: 0.06423585209995508
Test_acc: 0.9333333333333333
--------------------------
Epoch 119, loss: 0.06397469434887171
Test_acc: 0.9333333333333333
--------------------------
Epoch 120, loss: 0.06371619366109371
Test_acc: 0.9333333333333333
--------------------------
Epoch 121, loss: 0.06346031092107296
Test_acc: 0.9333333333333333
--------------------------
Epoch 122, loss: 0.06320701260119677
Test_acc: 0.9333333333333333
--------------------------
Epoch 123, loss: 0.06295627076178789
Test_acc: 0.9333333333333333
--------------------------
Epoch 124, loss: 0.06270804442465305
Test_acc: 0.9333333333333333
--------------------------
Epoch 125, loss: 0.062462314032018185
Test_acc: 0.9333333333333333
--------------------------
Epoch 126, loss: 0.06221904046833515
Test_acc: 0.9333333333333333
--------------------------
Epoch 127, loss: 0.061978189274668694
Test_acc: 0.9333333333333333
--------------------------
Epoch 128, loss: 0.06173973251134157
Test_acc: 0.9333333333333333
--------------------------
Epoch 129, loss: 0.06150364316999912
Test_acc: 0.9333333333333333
--------------------------
Epoch 130, loss: 0.06126988120377064
Test_acc: 0.9333333333333333
--------------------------
Epoch 131, loss: 0.06103844102472067
Test_acc: 0.9333333333333333
--------------------------
Epoch 132, loss: 0.06080926302820444
Test_acc: 0.9333333333333333
--------------------------
Epoch 133, loss: 0.06058233417570591
Test_acc: 0.9333333333333333
--------------------------
Epoch 134, loss: 0.06035762373358011
Test_acc: 0.9333333333333333
--------------------------
Epoch 135, loss: 0.06013510935008526
Test_acc: 0.9333333333333333
--------------------------
Epoch 136, loss: 0.05991474911570549
Test_acc: 0.9333333333333333
--------------------------
Epoch 137, loss: 0.05969652906060219
Test_acc: 0.9333333333333333
--------------------------
Epoch 138, loss: 0.059480417519807816
Test_acc: 0.9333333333333333
--------------------------
Epoch 139, loss: 0.05926638934761286
Test_acc: 0.9333333333333333
--------------------------
Epoch 140, loss: 0.059054410085082054
Test_acc: 0.9333333333333333
--------------------------
Epoch 141, loss: 0.058844463899731636
Test_acc: 0.9333333333333333
--------------------------
Epoch 142, loss: 0.058636522851884365
Test_acc: 0.9333333333333333
--------------------------
Epoch 143, loss: 0.058430569246411324
Test_acc: 0.9333333333333333
--------------------------
Epoch 144, loss: 0.058226557448506355
Test_acc: 0.9333333333333333
--------------------------
Epoch 145, loss: 0.05802448280155659
Test_acc: 0.9333333333333333
--------------------------
Epoch 146, loss: 0.05782430898398161
Test_acc: 0.9333333333333333
--------------------------
Epoch 147, loss: 0.057626026682555676
Test_acc: 0.9333333333333333
--------------------------
Epoch 148, loss: 0.05742959212511778
Test_acc: 0.9333333333333333
--------------------------
Epoch 149, loss: 0.0572349950671196
Test_acc: 0.9333333333333333
--------------------------
Epoch 150, loss: 0.05704221874475479
Test_acc: 0.9333333333333333
--------------------------
Epoch 151, loss: 0.056851218454539776
Test_acc: 0.9333333333333333
--------------------------
Epoch 152, loss: 0.05666198953986168
Test_acc: 0.9333333333333333
--------------------------
Epoch 153, loss: 0.05647451616823673
Test_acc: 0.9333333333333333
--------------------------
Epoch 154, loss: 0.05628875829279423
Test_acc: 0.9333333333333333
--------------------------
Epoch 155, loss: 0.056104714050889015
Test_acc: 0.9333333333333333
--------------------------
Epoch 156, loss: 0.05592233967036009
Test_acc: 0.9333333333333333
--------------------------
Epoch 157, loss: 0.055741630494594574
Test_acc: 0.9333333333333333
--------------------------
Epoch 158, loss: 0.05556255951523781
Test_acc: 0.9333333333333333
--------------------------
Epoch 159, loss: 0.055385113693773746
Test_acc: 0.9333333333333333
--------------------------
Epoch 160, loss: 0.05520926974713802
Test_acc: 0.9333333333333333
--------------------------
Epoch 161, loss: 0.05503500811755657
Test_acc: 0.9333333333333333
--------------------------
Epoch 162, loss: 0.054862307384610176
Test_acc: 0.9333333333333333
--------------------------
Epoch 163, loss: 0.054691143333911896
Test_acc: 0.9333333333333333
--------------------------
Epoch 164, loss: 0.05452151130884886
Test_acc: 0.9666666666666667
--------------------------
Epoch 165, loss: 0.05435337871313095
Test_acc: 0.9666666666666667
--------------------------
Epoch 166, loss: 0.05418673623353243
Test_acc: 0.9666666666666667
--------------------------
Epoch 167, loss: 0.05402155593037605
Test_acc: 0.9666666666666667
--------------------------
Epoch 168, loss: 0.053857834078371525
Test_acc: 0.9666666666666667
--------------------------
Epoch 169, loss: 0.05369555111974478
Test_acc: 0.9666666666666667
--------------------------
Epoch 170, loss: 0.0535346744582057
Test_acc: 0.9666666666666667
--------------------------
Epoch 171, loss: 0.05337520223110914
Test_acc: 0.9666666666666667
--------------------------
Epoch 172, loss: 0.053217110224068165
Test_acc: 0.9666666666666667
--------------------------
Epoch 173, loss: 0.053060383535921574
Test_acc: 0.9666666666666667
--------------------------
Epoch 174, loss: 0.05290501844137907
Test_acc: 0.9666666666666667
--------------------------
Epoch 175, loss: 0.05275098141282797
Test_acc: 0.9666666666666667
--------------------------
Epoch 176, loss: 0.052598257549107075
Test_acc: 0.9666666666666667
--------------------------
Epoch 177, loss: 0.05244683753699064
Test_acc: 0.9666666666666667
--------------------------
Epoch 178, loss: 0.05229670647531748
Test_acc: 0.9666666666666667
--------------------------
Epoch 179, loss: 0.05214785132557154
Test_acc: 0.9666666666666667
--------------------------
Epoch 180, loss: 0.052000245079398155
Test_acc: 0.9666666666666667
--------------------------
Epoch 181, loss: 0.05185388308018446
Test_acc: 0.9666666666666667
--------------------------
Epoch 182, loss: 0.05170875042676926
Test_acc: 0.9666666666666667
--------------------------
Epoch 183, loss: 0.051564828492701054
Test_acc: 0.9666666666666667
--------------------------
Epoch 184, loss: 0.05142210703343153
Test_acc: 0.9666666666666667
--------------------------
Epoch 185, loss: 0.051280577667057514
Test_acc: 1.0
--------------------------
Epoch 186, loss: 0.05114021338522434
Test_acc: 1.0
--------------------------
Epoch 187, loss: 0.05100100859999657
Test_acc: 1.0
--------------------------
Epoch 188, loss: 0.05086294375360012
Test_acc: 1.0
--------------------------
Epoch 189, loss: 0.05072600208222866
Test_acc: 1.0
--------------------------
Epoch 190, loss: 0.05059019569307566
Test_acc: 1.0
--------------------------
Epoch 191, loss: 0.050455485470592976
Test_acc: 1.0
--------------------------
Epoch 192, loss: 0.05032186862081289
Test_acc: 1.0
--------------------------
Epoch 193, loss: 0.05018934141844511
Test_acc: 1.0
--------------------------
Epoch 194, loss: 0.050057862885296345
Test_acc: 1.0
--------------------------
Epoch 195, loss: 0.04992745164781809
Test_acc: 1.0
--------------------------
Epoch 196, loss: 0.04979807883501053
Test_acc: 1.0
--------------------------
Epoch 197, loss: 0.04966974351555109
Test_acc: 1.0
--------------------------
Epoch 198, loss: 0.04954242613166571
Test_acc: 1.0
--------------------------
Epoch 199, loss: 0.049416118301451206
Test_acc: 1.0
--------------------------
Epoch 200, loss: 0.04929080791771412
Test_acc: 1.0
--------------------------
Epoch 201, loss: 0.04916647635400295
Test_acc: 1.0
--------------------------
Epoch 202, loss: 0.04904312081634998
Test_acc: 1.0
--------------------------
Epoch 203, loss: 0.048920733854174614
Test_acc: 1.0
--------------------------
Epoch 204, loss: 0.04879929218441248
Test_acc: 1.0
--------------------------
Epoch 205, loss: 0.04867880139499903
Test_acc: 1.0
--------------------------
Epoch 206, loss: 0.048559242859482765
Test_acc: 1.0
--------------------------
Epoch 207, loss: 0.04844059329479933
Test_acc: 1.0
--------------------------
Epoch 208, loss: 0.04832287225872278
Test_acc: 1.0
--------------------------
Epoch 209, loss: 0.04820604436099529
Test_acc: 1.0
--------------------------
Epoch 210, loss: 0.04809010773897171
Test_acc: 1.0
--------------------------
Epoch 211, loss: 0.04797505680471659
Test_acc: 1.0
--------------------------
Epoch 212, loss: 0.04786087851971388
Test_acc: 1.0
--------------------------
Epoch 213, loss: 0.047747558914124966
Test_acc: 1.0
--------------------------
Epoch 214, loss: 0.0476350886747241
Test_acc: 1.0
--------------------------
Epoch 215, loss: 0.04752346873283386
Test_acc: 1.0
--------------------------
Epoch 216, loss: 0.0474126823246479
Test_acc: 1.0
--------------------------
Epoch 217, loss: 0.047302715480327606
Test_acc: 1.0
--------------------------
Epoch 218, loss: 0.047193581238389015
Test_acc: 1.0
--------------------------
Epoch 219, loss: 0.047085246071219444
Test_acc: 1.0
--------------------------
Epoch 220, loss: 0.04697771091014147
Test_acc: 1.0
--------------------------
Epoch 221, loss: 0.04687096457928419
Test_acc: 1.0
--------------------------
Epoch 222, loss: 0.04676500242203474
Test_acc: 1.0
--------------------------
Epoch 223, loss: 0.046659817919135094
Test_acc: 1.0
--------------------------
Epoch 224, loss: 0.046555391512811184
Test_acc: 1.0
--------------------------
Epoch 225, loss: 0.04645172692835331
Test_acc: 1.0
--------------------------
Epoch 226, loss: 0.046348823234438896
Test_acc: 1.0
--------------------------
Epoch 227, loss: 0.04624664504081011
Test_acc: 1.0
--------------------------
Epoch 228, loss: 0.04614521004259586
Test_acc: 1.0
--------------------------
Epoch 229, loss: 0.04604450147598982
Test_acc: 1.0
--------------------------
Epoch 230, loss: 0.0459445109590888
Test_acc: 1.0
--------------------------
Epoch 231, loss: 0.045845236629247665
Test_acc: 1.0
--------------------------
Epoch 232, loss: 0.045746659860014915
Test_acc: 1.0
--------------------------
Epoch 233, loss: 0.0456487787887454
Test_acc: 1.0
--------------------------
Epoch 234, loss: 0.04555159900337458
Test_acc: 1.0
--------------------------
Epoch 235, loss: 0.0454550925642252
Test_acc: 1.0
--------------------------
Epoch 236, loss: 0.04535926412791014
Test_acc: 1.0
--------------------------
Epoch 237, loss: 0.0452641062438488
Test_acc: 1.0
--------------------------
Epoch 238, loss: 0.04516960587352514
Test_acc: 1.0
--------------------------
Epoch 239, loss: 0.04507576208561659
Test_acc: 1.0
--------------------------
Epoch 240, loss: 0.044982570223510265
Test_acc: 1.0
--------------------------
Epoch 241, loss: 0.04489002004265785
Test_acc: 1.0
--------------------------
Epoch 242, loss: 0.04479810409247875
Test_acc: 1.0
--------------------------
Epoch 243, loss: 0.044706812128424644
Test_acc: 1.0
--------------------------
Epoch 244, loss: 0.044616157189011574
Test_acc: 1.0
--------------------------
Epoch 245, loss: 0.04452610295265913
Test_acc: 1.0
--------------------------
Epoch 246, loss: 0.04443667083978653
Test_acc: 1.0
--------------------------
Epoch 247, loss: 0.044347843155264854
Test_acc: 1.0
--------------------------
Epoch 248, loss: 0.04425960686057806
Test_acc: 1.0
--------------------------
Epoch 249, loss: 0.04417196847498417
Test_acc: 1.0
--------------------------
Epoch 250, loss: 0.044084908440709114
Test_acc: 1.0
--------------------------
Epoch 251, loss: 0.04399843607097864
Test_acc: 1.0
--------------------------
Epoch 252, loss: 0.043912540189921856
Test_acc: 1.0
--------------------------
Epoch 253, loss: 0.043827205896377563
Test_acc: 1.0
--------------------------
Epoch 254, loss: 0.043742443434894085
Test_acc: 1.0
--------------------------
Epoch 255, loss: 0.043658241629600525
Test_acc: 1.0
--------------------------
Epoch 256, loss: 0.043574584648013115
Test_acc: 1.0
--------------------------
Epoch 257, loss: 0.0434914818033576
Test_acc: 1.0
--------------------------
Epoch 258, loss: 0.04340892285108566
Test_acc: 1.0
--------------------------
Epoch 259, loss: 0.04332689754664898
Test_acc: 1.0
--------------------------
Epoch 260, loss: 0.0432454077526927
Test_acc: 1.0
--------------------------
Epoch 261, loss: 0.04316444229334593
Test_acc: 1.0
--------------------------
Epoch 262, loss: 0.04308399744331837
Test_acc: 1.0
--------------------------
Epoch 263, loss: 0.04300406388938427
Test_acc: 1.0
--------------------------
Epoch 264, loss: 0.04292464908212423
Test_acc: 1.0
--------------------------
Epoch 265, loss: 0.04284573905169964
Test_acc: 1.0
--------------------------
Epoch 266, loss: 0.04276733938604593
Test_acc: 1.0
--------------------------
Epoch 267, loss: 0.042689429596066475
Test_acc: 1.0
--------------------------
Epoch 268, loss: 0.042612007819116116
Test_acc: 1.0
--------------------------
Epoch 269, loss: 0.042535084299743176
Test_acc: 1.0
--------------------------
Epoch 270, loss: 0.042458645068109035
Test_acc: 1.0
--------------------------
Epoch 271, loss: 0.042382679879665375
Test_acc: 1.0
--------------------------
Epoch 272, loss: 0.042307195253670216
Test_acc: 1.0
--------------------------
Epoch 273, loss: 0.04223217163234949
Test_acc: 1.0
--------------------------
Epoch 274, loss: 0.04215762298554182
Test_acc: 1.0
--------------------------
Epoch 275, loss: 0.04208352975547314
Test_acc: 1.0
--------------------------
Epoch 276, loss: 0.04200989659875631
Test_acc: 1.0
--------------------------
Epoch 277, loss: 0.041936714202165604
Test_acc: 1.0
--------------------------
Epoch 278, loss: 0.04186398349702358
Test_acc: 1.0
--------------------------
Epoch 279, loss: 0.04179169703274965
Test_acc: 1.0
--------------------------
Epoch 280, loss: 0.041719851084053516
Test_acc: 1.0
--------------------------
Epoch 281, loss: 0.04164844751358032
Test_acc: 1.0
--------------------------
Epoch 282, loss: 0.04157747142016888
Test_acc: 1.0
--------------------------
Epoch 283, loss: 0.041506921872496605
Test_acc: 1.0
--------------------------
Epoch 284, loss: 0.04143680538982153
Test_acc: 1.0
--------------------------
Epoch 285, loss: 0.04136710148304701
Test_acc: 1.0
--------------------------
Epoch 286, loss: 0.04129781946539879
Test_acc: 1.0
--------------------------
Epoch 287, loss: 0.04122895281761885
Test_acc: 1.0
--------------------------
Epoch 288, loss: 0.04116049408912659
Test_acc: 1.0
--------------------------
Epoch 289, loss: 0.041092448867857456
Test_acc: 1.0
--------------------------
Epoch 290, loss: 0.04102479573339224
Test_acc: 1.0
--------------------------
Epoch 291, loss: 0.040957554243505
Test_acc: 1.0
--------------------------
Epoch 292, loss: 0.040890700183808804
Test_acc: 1.0
--------------------------
Epoch 293, loss: 0.04082423262298107
Test_acc: 1.0
--------------------------
Epoch 294, loss: 0.040758166462183
Test_acc: 1.0
--------------------------
Epoch 295, loss: 0.04069248400628567
Test_acc: 1.0
--------------------------
Epoch 296, loss: 0.04062717594206333
Test_acc: 1.0
--------------------------
Epoch 297, loss: 0.040562245063483715
Test_acc: 1.0
--------------------------
Epoch 298, loss: 0.040497698821127415
Test_acc: 1.0
--------------------------
Epoch 299, loss: 0.04043352138251066
Test_acc: 1.0
--------------------------
Epoch 300, loss: 0.04036971274763346
Test_acc: 1.0
--------------------------
Epoch 301, loss: 0.0403062729164958
Test_acc: 1.0
--------------------------
Epoch 302, loss: 0.04024319164454937
Test_acc: 1.0
--------------------------
Epoch 303, loss: 0.04018046706914902
Test_acc: 1.0
--------------------------
Epoch 304, loss: 0.0401181080378592
Test_acc: 1.0
--------------------------
Epoch 305, loss: 0.040056094992905855
Test_acc: 1.0
--------------------------
Epoch 306, loss: 0.039994434453547
Test_acc: 1.0
--------------------------
Epoch 307, loss: 0.03993312316015363
Test_acc: 1.0
--------------------------
Epoch 308, loss: 0.039872157853096724
Test_acc: 1.0
--------------------------
Epoch 309, loss: 0.039811530616134405
Test_acc: 1.0
--------------------------
Epoch 310, loss: 0.03975124517455697
Test_acc: 1.0
--------------------------
Epoch 311, loss: 0.039691298734396696
Test_acc: 1.0
--------------------------
Epoch 312, loss: 0.03963167453184724
Test_acc: 1.0
--------------------------
Epoch 313, loss: 0.039572388399392366
Test_acc: 1.0
--------------------------
Epoch 314, loss: 0.03951342450454831
Test_acc: 1.0
--------------------------
Epoch 315, loss: 0.03945478983223438
Test_acc: 1.0
--------------------------
Epoch 316, loss: 0.03939648298546672
Test_acc: 1.0
--------------------------
Epoch 317, loss: 0.03933848673477769
Test_acc: 1.0
--------------------------
Epoch 318, loss: 0.03928081365302205
Test_acc: 1.0
--------------------------
Epoch 319, loss: 0.03922344697639346
Test_acc: 1.0
--------------------------
Epoch 320, loss: 0.039166401606053114
Test_acc: 1.0
--------------------------
Epoch 321, loss: 0.03910966124385595
Test_acc: 1.0
--------------------------
Epoch 322, loss: 0.039053223095834255
Test_acc: 1.0
--------------------------
Epoch 323, loss: 0.03899709461256862
Test_acc: 1.0
--------------------------
Epoch 324, loss: 0.038941262755542994
Test_acc: 1.0
--------------------------
Epoch 325, loss: 0.03888573916628957
Test_acc: 1.0
--------------------------
Epoch 326, loss: 0.0388304959051311
Test_acc: 1.0
--------------------------
Epoch 327, loss: 0.03877556184306741
Test_acc: 1.0
--------------------------
Epoch 328, loss: 0.03872091509401798
Test_acc: 1.0
--------------------------
Epoch 329, loss: 0.0386665565893054
Test_acc: 1.0
--------------------------
Epoch 330, loss: 0.038612480740994215
Test_acc: 1.0
--------------------------
Epoch 331, loss: 0.03855870105326176
Test_acc: 1.0
--------------------------
Epoch 332, loss: 0.03850520076230168
Test_acc: 1.0
--------------------------
Epoch 333, loss: 0.0384519724175334
Test_acc: 1.0
--------------------------
Epoch 334, loss: 0.038399036042392254
Test_acc: 1.0
--------------------------
Epoch 335, loss: 0.03834636928513646
Test_acc: 1.0
--------------------------
Epoch 336, loss: 0.03829398099333048
Test_acc: 1.0
--------------------------
Epoch 337, loss: 0.03824185347184539
Test_acc: 1.0
--------------------------
Epoch 338, loss: 0.038189999759197235
Test_acc: 1.0
--------------------------
Epoch 339, loss: 0.038138418924063444
Test_acc: 1.0
--------------------------
Epoch 340, loss: 0.03808710351586342
Test_acc: 1.0
--------------------------
Epoch 341, loss: 0.03803604608401656
Test_acc: 1.0
--------------------------
Epoch 342, loss: 0.03798525454476476
Test_acc: 1.0
--------------------------
Epoch 343, loss: 0.037934715393930674
Test_acc: 1.0
--------------------------
Epoch 344, loss: 0.037884439807385206
Test_acc: 1.0
--------------------------
Epoch 345, loss: 0.03783441847190261
Test_acc: 1.0
--------------------------
Epoch 346, loss: 0.03778465138748288
Test_acc: 1.0
--------------------------
Epoch 347, loss: 0.03773513715714216
Test_acc: 1.0
--------------------------
Epoch 348, loss: 0.03768586413934827
Test_acc: 1.0
--------------------------
Epoch 349, loss: 0.037636840250343084
Test_acc: 1.0
--------------------------
Epoch 350, loss: 0.037588066421449184
Test_acc: 1.0
--------------------------
Epoch 351, loss: 0.03753953380510211
Test_acc: 1.0
--------------------------
Epoch 352, loss: 0.03749124752357602
Test_acc: 1.0
--------------------------
Epoch 353, loss: 0.03744319686666131
Test_acc: 1.0
--------------------------
Epoch 354, loss: 0.03739538788795471
Test_acc: 1.0
--------------------------
Epoch 355, loss: 0.03734781965613365
Test_acc: 1.0
--------------------------
Epoch 356, loss: 0.03730047354474664
Test_acc: 1.0
--------------------------
Epoch 357, loss: 0.037253367248922586
Test_acc: 1.0
--------------------------
Epoch 358, loss: 0.037206494715064764
Test_acc: 1.0
--------------------------
Epoch 359, loss: 0.03715984430164099
Test_acc: 1.0
--------------------------
Epoch 360, loss: 0.03711343323811889
Test_acc: 1.0
--------------------------
Epoch 361, loss: 0.03706724150106311
Test_acc: 1.0
--------------------------
Epoch 362, loss: 0.037021270021796227
Test_acc: 1.0
--------------------------
Epoch 363, loss: 0.03697552578523755
Test_acc: 1.0
--------------------------
Epoch 364, loss: 0.036930006463080645
Test_acc: 1.0
--------------------------
Epoch 365, loss: 0.03688469948247075
Test_acc: 1.0
--------------------------
Epoch 366, loss: 0.03683961136266589
Test_acc: 1.0
--------------------------
Epoch 367, loss: 0.03679474908858538
Test_acc: 1.0
--------------------------
Epoch 368, loss: 0.036750094033777714
Test_acc: 1.0
--------------------------
Epoch 369, loss: 0.03670564666390419
Test_acc: 1.0
--------------------------
Epoch 370, loss: 0.0366614181548357
Test_acc: 1.0
--------------------------
Epoch 371, loss: 0.036617396865040064
Test_acc: 1.0
--------------------------
Epoch 372, loss: 0.03657358791679144
Test_acc: 1.0
--------------------------
Epoch 373, loss: 0.03652997827157378
Test_acc: 1.0
--------------------------
Epoch 374, loss: 0.03648658888414502
Test_acc: 1.0
--------------------------
Epoch 375, loss: 0.03644338669255376
Test_acc: 1.0
--------------------------
Epoch 376, loss: 0.036400395911186934
Test_acc: 1.0
--------------------------
Epoch 377, loss: 0.036357597913593054
Test_acc: 1.0
--------------------------
Epoch 378, loss: 0.03631501505151391
Test_acc: 1.0
--------------------------
Epoch 379, loss: 0.036272619385272264
Test_acc: 1.0
--------------------------
Epoch 380, loss: 0.036230423487722874
Test_acc: 1.0
--------------------------
Epoch 381, loss: 0.03618842409923673
Test_acc: 1.0
--------------------------
Epoch 382, loss: 0.0361466147005558
Test_acc: 1.0
--------------------------
Epoch 383, loss: 0.036105004604905844
Test_acc: 1.0
--------------------------
Epoch 384, loss: 0.0360635737888515
Test_acc: 1.0
--------------------------
Epoch 385, loss: 0.036022352520376444
Test_acc: 1.0
--------------------------
Epoch 386, loss: 0.035981301218271255
Test_acc: 1.0
--------------------------
Epoch 387, loss: 0.03594044363126159
Test_acc: 1.0
--------------------------
Epoch 388, loss: 0.035899773240089417
Test_acc: 1.0
--------------------------
Epoch 389, loss: 0.03585928678512573
Test_acc: 1.0
--------------------------
Epoch 390, loss: 0.03581898845732212
Test_acc: 1.0
--------------------------
Epoch 391, loss: 0.035778870806097984
Test_acc: 1.0
--------------------------
Epoch 392, loss: 0.03573893290013075
Test_acc: 1.0
--------------------------
Epoch 393, loss: 0.035699174739420414
Test_acc: 1.0
--------------------------
Epoch 394, loss: 0.03565959073603153
Test_acc: 1.0
--------------------------
Epoch 395, loss: 0.035620191134512424
Test_acc: 1.0
--------------------------
Epoch 396, loss: 0.035580961499363184
Test_acc: 1.0
--------------------------
Epoch 397, loss: 0.03554190881550312
Test_acc: 1.0
--------------------------
Epoch 398, loss: 0.035503033082932234
Test_acc: 1.0
--------------------------
Epoch 399, loss: 0.035464323591440916
Test_acc: 1.0
--------------------------
Epoch 400, loss: 0.035425792913883924
Test_acc: 1.0
--------------------------
Epoch 401, loss: 0.03538742754608393
Test_acc: 1.0
--------------------------
Epoch 402, loss: 0.0353492246940732
Test_acc: 1.0
--------------------------
Epoch 403, loss: 0.03531120577827096
Test_acc: 1.0
--------------------------
Epoch 404, loss: 0.03527334099635482
Test_acc: 1.0
--------------------------
Epoch 405, loss: 0.03523564524948597
Test_acc: 1.0
--------------------------
Epoch 406, loss: 0.03519811574369669
Test_acc: 1.0
--------------------------
Epoch 407, loss: 0.03516074130311608
Test_acc: 1.0
--------------------------
Epoch 408, loss: 0.03512354148551822
Test_acc: 1.0
--------------------------
Epoch 409, loss: 0.035086498130112886
Test_acc: 1.0
--------------------------
Epoch 410, loss: 0.03504961263388395
Test_acc: 1.0
--------------------------
Epoch 411, loss: 0.03501288965344429
Test_acc: 1.0
--------------------------
Epoch 412, loss: 0.03497632220387459
Test_acc: 1.0
--------------------------
Epoch 413, loss: 0.03493991028517485
Test_acc: 1.0
--------------------------
Epoch 414, loss: 0.0349036636762321
Test_acc: 1.0
--------------------------
Epoch 415, loss: 0.03486755769699812
Test_acc: 1.0
--------------------------
Epoch 416, loss: 0.03483161563053727
Test_acc: 1.0
--------------------------
Epoch 417, loss: 0.0347958211787045
Test_acc: 1.0
--------------------------
Epoch 418, loss: 0.03476017527282238
Test_acc: 1.0
--------------------------
Epoch 419, loss: 0.034724689088761806
Test_acc: 1.0
--------------------------
Epoch 420, loss: 0.03468934912234545
Test_acc: 1.0
--------------------------
Epoch 421, loss: 0.034654161892831326
Test_acc: 1.0
--------------------------
Epoch 422, loss: 0.03461911762133241
Test_acc: 1.0
--------------------------
Epoch 423, loss: 0.034584223292768
Test_acc: 1.0
--------------------------
Epoch 424, loss: 0.03454947005957365
Test_acc: 1.0
--------------------------
Epoch 425, loss: 0.0345148635096848
Test_acc: 1.0
--------------------------
Epoch 426, loss: 0.03448040969669819
Test_acc: 1.0
--------------------------
Epoch 427, loss: 0.03444609045982361
Test_acc: 1.0
--------------------------
Epoch 428, loss: 0.034411918837577105
Test_acc: 1.0
--------------------------
Epoch 429, loss: 0.03437788691371679
Test_acc: 1.0
--------------------------
Epoch 430, loss: 0.0343439974822104
Test_acc: 1.0
--------------------------
Epoch 431, loss: 0.03431024681776762
Test_acc: 1.0
--------------------------
Epoch 432, loss: 0.034276628866791725
Test_acc: 1.0
--------------------------
Epoch 433, loss: 0.03424314875155687
Test_acc: 1.0
--------------------------
Epoch 434, loss: 0.03420981951057911
Test_acc: 1.0
--------------------------
Epoch 435, loss: 0.03417660994455218
Test_acc: 1.0
--------------------------
Epoch 436, loss: 0.0341435419395566
Test_acc: 1.0
--------------------------
Epoch 437, loss: 0.034110613632947206
Test_acc: 1.0
--------------------------
Epoch 438, loss: 0.03407781198620796
Test_acc: 1.0
--------------------------
Epoch 439, loss: 0.03404514491558075
Test_acc: 1.0
--------------------------
Epoch 440, loss: 0.03401260497048497
Test_acc: 1.0
--------------------------
Epoch 441, loss: 0.033980210311710835
Test_acc: 1.0
--------------------------
Epoch 442, loss: 0.03394793486222625
Test_acc: 1.0
--------------------------
Epoch 443, loss: 0.03391578374430537
Test_acc: 1.0
--------------------------
Epoch 444, loss: 0.03388377372175455
Test_acc: 1.0
--------------------------
Epoch 445, loss: 0.033851887565106153
Test_acc: 1.0
--------------------------
Epoch 446, loss: 0.033820133190602064
Test_acc: 1.0
--------------------------
Epoch 447, loss: 0.033788496162742376
Test_acc: 1.0
--------------------------
Epoch 448, loss: 0.033756986260414124
Test_acc: 1.0
--------------------------
Epoch 449, loss: 0.03372561139985919
Test_acc: 1.0
--------------------------
Epoch 450, loss: 0.03369434690102935
Test_acc: 1.0
--------------------------
Epoch 451, loss: 0.03366320999339223
Test_acc: 1.0
--------------------------
Epoch 452, loss: 0.03363220253959298
Test_acc: 1.0
--------------------------
Epoch 453, loss: 0.03360130963847041
Test_acc: 1.0
--------------------------
Epoch 454, loss: 0.03357054013758898
Test_acc: 1.0
--------------------------
Epoch 455, loss: 0.03353989636525512
Test_acc: 1.0
--------------------------
Epoch 456, loss: 0.03350935876369476
Test_acc: 1.0
--------------------------
Epoch 457, loss: 0.03347895201295614
Test_acc: 1.0
--------------------------
Epoch 458, loss: 0.03344865143299103
Test_acc: 1.0
--------------------------
Epoch 459, loss: 0.03341847471892834
Test_acc: 1.0
--------------------------
Epoch 460, loss: 0.03338842140510678
Test_acc: 1.0
--------------------------
Epoch 461, loss: 0.03335847705602646
Test_acc: 1.0
--------------------------
Epoch 462, loss: 0.03332865331321955
Test_acc: 1.0
--------------------------
Epoch 463, loss: 0.033298940397799015
Test_acc: 1.0
--------------------------
Epoch 464, loss: 0.03326933644711971
Test_acc: 1.0
--------------------------
Epoch 465, loss: 0.03323985682800412
Test_acc: 1.0
--------------------------
Epoch 466, loss: 0.03321048431098461
Test_acc: 1.0
--------------------------
Epoch 467, loss: 0.03318122588098049
Test_acc: 1.0
--------------------------
Epoch 468, loss: 0.0331520726904273
Test_acc: 1.0
--------------------------
Epoch 469, loss: 0.03312302753329277
Test_acc: 1.0
--------------------------
Epoch 470, loss: 0.03309410251677036
Test_acc: 1.0
--------------------------
Epoch 471, loss: 0.03306529065594077
Test_acc: 1.0
--------------------------
Epoch 472, loss: 0.0330365770496428
Test_acc: 1.0
--------------------------
Epoch 473, loss: 0.03300797380506992
Test_acc: 1.0
--------------------------
Epoch 474, loss: 0.03297948185354471
Test_acc: 1.0
--------------------------
Epoch 475, loss: 0.03295108396559954
Test_acc: 1.0
--------------------------
Epoch 476, loss: 0.03292280435562134
Test_acc: 1.0
--------------------------
Epoch 477, loss: 0.032894630916416645
Test_acc: 1.0
--------------------------
Epoch 478, loss: 0.03286655806005001
Test_acc: 1.0
--------------------------
Epoch 479, loss: 0.03283859323710203
Test_acc: 1.0
--------------------------
Epoch 480, loss: 0.0328107294626534
Test_acc: 1.0
--------------------------
Epoch 481, loss: 0.03278296673670411
Test_acc: 1.0
--------------------------
Epoch 482, loss: 0.03275530692189932
Test_acc: 1.0
--------------------------
Epoch 483, loss: 0.03272775374352932
Test_acc: 1.0
--------------------------
Epoch 484, loss: 0.032700295094400644
Test_acc: 1.0
--------------------------
Epoch 485, loss: 0.032672947738319635
Test_acc: 1.0
--------------------------
Epoch 486, loss: 0.03264568746089935
Test_acc: 1.0
--------------------------
Epoch 487, loss: 0.03261852730065584
Test_acc: 1.0
--------------------------
Epoch 488, loss: 0.032591485884040594
Test_acc: 1.0
--------------------------
Epoch 489, loss: 0.032564534805715084
Test_acc: 1.0
--------------------------
Epoch 490, loss: 0.032537671737372875
Test_acc: 1.0
--------------------------
Epoch 491, loss: 0.03251090971753001
Test_acc: 1.0
--------------------------
Epoch 492, loss: 0.03248425014317036
Test_acc: 1.0
--------------------------
Epoch 493, loss: 0.03245767951011658
Test_acc: 1.0
--------------------------
Epoch 494, loss: 0.032431216444820166
Test_acc: 1.0
--------------------------
Epoch 495, loss: 0.032404834404587746
Test_acc: 1.0
--------------------------
Epoch 496, loss: 0.03237855713814497
Test_acc: 1.0
--------------------------
Epoch 497, loss: 0.03235237207263708
Test_acc: 1.0
--------------------------
Epoch 498, loss: 0.03232626663520932
Test_acc: 1.0
--------------------------
Epoch 499, loss: 0.032300274819135666
Test_acc: 1.0
--------------------------

Detailed walkthrough

In [1]:
# -*- coding: UTF-8 -*-
# Using the iris dataset: implement forward propagation and back propagation, and visualize the loss curve

# import the required modules
import tensorflow as tf
from sklearn import datasets
from matplotlib import pyplot as plt
import numpy as np

# load the data: input features and labels
x_data = datasets.load_iris().data
y_data = datasets.load_iris().target

# shuffle the data randomly (the raw data is ordered by class; not shuffling it hurts accuracy)
# seed: an integer random seed; once set, the same random numbers are generated every run (for teaching, so everyone gets the same result)
np.random.seed(116)  # use the same seed so that input features and labels stay in one-to-one correspondence
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
tf.random.set_seed(116)

# split the shuffled dataset into a training set (first 120 rows) and a test set (last 30 rows)
x_train = x_data[:-30]
y_train = y_data[:-30]
x_test = x_data[-30:]
y_test = y_data[-30:]

# cast x to float32, otherwise the matrix multiplication later fails with a dtype mismatch
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)

# from_tensor_slices pairs input features with labels one-to-one (then the dataset is split into batches of `batch` samples each)
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

# create the network parameters: 4 input features, so the input layer has 4 nodes; 3 classes, so the output layer has 3 neurons
# tf.Variable() marks the parameters as trainable
# the seed makes the generated random numbers the same every run (for teaching, so everyone gets identical results; omit it in real use)
w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))
b1 = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))

lr = 0.1  # learning rate 0.1
train_loss_results = []  # record each epoch's loss in this list, to plot the loss curve later
test_acc = []  # record each epoch's accuracy in this list, to plot the accuracy curve later
epoch = 500  # train for 500 epochs
loss_all = 0  # each epoch has 4 steps; loss_all accumulates the 4 per-step losses
In [5]:
print(train_db)
<BatchDataset shapes: ((None, 4), (None,)), types: (tf.float32, tf.int32)>
In [7]:
# The x_train and y_train obtained this way are one batch each, i.e. 32 samples
# 120 - 32*3 = 24, so there are 3 + 1 = 4 batches in total; the last batch is not full and has 24 samples
for step, (x_train, y_train) in enumerate(train_db):  # batch-level loop: each step processes one batch
    print(step)
    print(x_train)
    print(y_train)
    print("===================")
0
tf.Tensor(
[[6.1 2.8 4.  1.3]
 [6.3 3.3 4.7 1.6]
 [6.8 2.8 4.8 1.4]
 [5.3 3.7 1.5 0.2]
 [5.4 3.4 1.7 0.2]
 [6.5 3.  5.8 2.2]
 [7.2 3.6 6.1 2.5]
 [6.3 3.3 6.  2.5]
 [6.7 3.  5.  1.7]
 [4.9 3.1 1.5 0.2]
 [5.4 3.9 1.7 0.4]
 [6.5 3.2 5.1 2. ]
 [5.7 4.4 1.5 0.4]
 [5.9 3.2 4.8 1.8]
 [5.1 3.3 1.7 0.5]
 [5.1 2.5 3.  1.1]
 [4.9 3.6 1.4 0.1]
 [6.2 2.9 4.3 1.3]
 [5.8 2.8 5.1 2.4]
 [6.8 3.2 5.9 2.3]
 [6.2 2.2 4.5 1.5]
 [4.9 3.1 1.5 0.1]
 [4.7 3.2 1.6 0.2]
 [4.6 3.4 1.4 0.3]
 [5.2 4.1 1.5 0.1]
 [6.3 2.5 4.9 1.5]
 [5.4 3.  4.5 1.5]
 [6.  2.2 4.  1. ]
 [4.6 3.2 1.4 0.2]
 [6.4 2.8 5.6 2.1]
 [5.  3.4 1.5 0.2]
 [6.7 3.1 5.6 2.4]], shape=(32, 4), dtype=float32)
tf.Tensor([1 1 1 0 0 2 2 2 1 0 0 2 0 1 0 1 0 1 2 2 1 0 0 0 0 1 1 1 0 2 0 2], shape=(32,), dtype=int32)
===================
1
tf.Tensor(
[[5.7 2.6 3.5 1. ]
 [7.7 2.8 6.7 2. ]
 [6.9 3.1 5.4 2.1]
 [4.9 2.5 4.5 1.7]
 [5.4 3.4 1.5 0.4]
 [6.3 2.9 5.6 1.8]
 [7.7 3.8 6.7 2.2]
 [6.9 3.1 4.9 1.5]
 [5.1 3.5 1.4 0.2]
 [6.3 2.8 5.1 1.5]
 [6.4 3.2 5.3 2.3]
 [6.1 3.  4.9 1.8]
 [6.5 3.  5.5 1.8]
 [5.7 3.8 1.7 0.3]
 [5.5 4.2 1.4 0.2]
 [5.4 3.9 1.3 0.4]
 [5.1 3.5 1.4 0.3]
 [5.  3.  1.6 0.2]
 [5.5 3.5 1.3 0.2]
 [4.4 3.2 1.3 0.2]
 [4.4 3.  1.3 0.2]
 [5.  3.2 1.2 0.2]
 [5.8 2.6 4.  1.2]
 [6.3 3.4 5.6 2.4]
 [6.2 3.4 5.4 2.3]
 [4.9 2.4 3.3 1. ]
 [5.6 2.7 4.2 1.3]
 [5.  3.5 1.6 0.6]
 [5.  3.5 1.3 0.3]
 [5.5 2.4 3.8 1.1]
 [4.8 3.4 1.9 0.2]
 [5.6 3.  4.5 1.5]], shape=(32, 4), dtype=float32)
tf.Tensor([1 2 2 2 0 2 2 1 0 2 2 2 2 0 0 0 0 0 0 0 0 0 1 2 2 1 1 0 0 1 0 1], shape=(32,), dtype=int32)
===================
2
tf.Tensor(
[[5.6 2.9 3.6 1.3]
 [7.  3.2 4.7 1.4]
 [7.7 2.6 6.9 2.3]
 [5.  3.4 1.6 0.4]
 [5.2 3.4 1.4 0.2]
 [4.6 3.6 1.  0.2]
 [6.2 2.8 4.8 1.8]
 [4.5 2.3 1.3 0.3]
 [5.6 2.5 3.9 1.1]
 [5.8 4.  1.2 0.2]
 [6.6 3.  4.4 1.4]
 [5.5 2.4 3.7 1. ]
 [7.2 3.  5.8 1.6]
 [7.1 3.  5.9 2.1]
 [5.9 3.  5.1 1.8]
 [5.8 2.7 5.1 1.9]
 [6.8 3.  5.5 2.1]
 [4.7 3.2 1.3 0.2]
 [5.  2.  3.5 1. ]
 [5.2 2.7 3.9 1.4]
 [6.5 3.  5.2 2. ]
 [5.  3.6 1.4 0.2]
 [6.4 2.9 4.3 1.3]
 [6.7 3.1 4.7 1.5]
 [4.8 3.1 1.6 0.2]
 [5.8 2.7 5.1 1.9]
 [6.3 2.5 5.  1.9]
 [6.1 2.6 5.6 1.4]
 [6.  2.9 4.5 1.5]
 [6.1 2.9 4.7 1.4]
 [7.6 3.  6.6 2.1]
 [6.9 3.1 5.1 2.3]], shape=(32, 4), dtype=float32)
tf.Tensor([1 1 2 0 0 0 2 0 1 0 1 1 2 2 2 2 2 0 1 1 2 0 1 1 0 2 2 2 1 1 2 2], shape=(32,), dtype=int32)
===================
3
tf.Tensor(
[[7.9 3.8 6.4 2. ]
 [5.6 3.  4.1 1.3]
 [7.7 3.  6.1 2.3]
 [5.7 2.5 5.  2. ]
 [5.1 3.4 1.5 0.2]
 [5.9 3.  4.2 1.5]
 [6.  3.4 4.5 1.6]
 [6.4 2.7 5.3 1.9]
 [7.2 3.2 6.  1.8]
 [6.9 3.2 5.7 2.3]
 [5.7 3.  4.2 1.2]
 [6.  3.  4.8 1.8]
 [6.  2.7 5.1 1.6]
 [5.2 3.5 1.5 0.2]
 [6.4 3.1 5.5 1.8]
 [6.7 3.  5.2 2.3]
 [6.4 2.8 5.6 2.2]
 [6.7 2.5 5.8 1.8]
 [6.7 3.3 5.7 2.5]
 [4.8 3.  1.4 0.1]
 [6.7 3.1 4.4 1.4]
 [5.1 3.8 1.5 0.3]
 [5.  2.3 3.3 1. ]
 [6.  2.2 5.  1.5]], shape=(24, 4), dtype=float32)
tf.Tensor([2 1 2 2 0 1 1 2 2 2 1 2 1 0 2 2 2 2 2 0 1 0 1 2], shape=(24,), dtype=int32)
===================
In [ ]:
# Concretely, each batch of data (32 samples here) is multiplied by w1 as a whole, then b1 is added; b1 has fewer dimensions, so broadcasting must be used
# y = tf.matmul(x_train, w1) + b1  # neural-network multiply-and-add
shape=(32, 4) * shape=(4, 3) + shape=(3,)
In [8]:
print(w1)
print(b1)
<tf.Variable 'Variable:0' shape=(4, 3) dtype=float32, numpy=
array([[ 0.08249953, -0.0683137 ,  0.19668601],
       [-0.05480815,  0.04570521,  0.1357149 ],
       [ 0.07750896, -0.16734955, -0.10294553],
       [ 0.15784004, -0.13311003,  0.06045312]], dtype=float32)>
<tf.Variable 'Variable:0' shape=(3,) dtype=float32, numpy=array([-0.09194934, -0.12376948, -0.05381497], dtype=float32)>
In [13]:
# When computing y = tf.matmul(x_train, w1) + b1, b1's dimensions are insufficient, so b1 is broadcast automatically
print([[1,2],[3,4]]+[5,6])
v1=tf.constant([[1,2],[3,4]])
v2=tf.constant([5,6])
print(v1)
print(v2)
print(v1+v2)
[[1, 2], [3, 4], 5, 6]
tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)
tf.Tensor([5 6], shape=(2,), dtype=int32)
tf.Tensor(
[[ 6  8]
 [ 8 10]], shape=(2, 2), dtype=int32)

To observe the intermediate process, set epoch to 1 and the batch size to 3

In [1]:
# -*- coding: UTF-8 -*-
# Using the iris dataset: implement forward propagation and back propagation, and visualize the loss curve

# import the required modules
import tensorflow as tf
from sklearn import datasets
from matplotlib import pyplot as plt
import numpy as np

# load the data: input features and labels
x_data = datasets.load_iris().data
y_data = datasets.load_iris().target

# shuffle the data randomly (the raw data is ordered by class; not shuffling it hurts accuracy)
# seed: an integer random seed; once set, the same random numbers are generated every run (for teaching, so everyone gets the same result)
np.random.seed(116)  # use the same seed so that input features and labels stay in one-to-one correspondence
np.random.shuffle(x_data)
np.random.seed(116)
np.random.shuffle(y_data)
tf.random.set_seed(116)

# split the shuffled dataset into a training set (first 120 rows) and a test set (last 30 rows)
x_train = x_data[:-30]
y_train = y_data[:-30]
x_test = x_data[-30:]
y_test = y_data[-30:]

# cast x to float32, otherwise the matrix multiplication later fails with a dtype mismatch
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)

# from_tensor_slices pairs input features with labels one-to-one (then the dataset is split into batches of `batch` samples each)
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(3)
test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(3)

# create the network parameters: 4 input features, so the input layer has 4 nodes; 3 classes, so the output layer has 3 neurons
# tf.Variable() marks the parameters as trainable
# the seed makes the generated random numbers the same every run (for teaching, so everyone gets identical results; omit it in real use)
w1 = tf.Variable(tf.random.truncated_normal([4, 3], stddev=0.1, seed=1))
b1 = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))

lr = 0.1  # learning rate 0.1
train_loss_results = []  # record each epoch's loss in this list, to plot the loss curve later
test_acc = []  # record each epoch's accuracy in this list, to plot the accuracy curve later
epoch = 1  # train for only 1 epoch here, to observe the intermediate process
loss_all = 0  # loss_all accumulates the per-step losses
# Training
for epoch in range(epoch):  # dataset-level loop: each epoch iterates over the whole dataset once
    for step, (x_train, y_train) in enumerate(train_db):  # batch-level loop: each step processes one batch
        # only print the first two steps
        if step==2:
            break
        print("==========each step============")
        with tf.GradientTape() as tape:  # the with block records gradient information
            print("x_train\n",x_train)
            print("w1\n",w1)
            print("b1\n",b1)
            y = tf.matmul(x_train, w1) + b1  # neural-network multiply-and-add
            print("y = tf.matmul(x_train, w1) + b1\n",y)
            y = tf.nn.softmax(y)  # make the output y a probability distribution (now on the same scale as the one-hot labels)
            print("y = tf.nn.softmax(y)\n",y)
            y_ = tf.one_hot(y_train, depth=3)  # convert labels to one-hot format for computing loss and accuracy
            print("y_ = tf.one_hot(y_train, depth=3)\n",y_)
            print("y_ - y\n",y_ - y)
            print("tf.square(y_ - y)\n",tf.square(y_ - y))
            loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)
            print("loss = tf.reduce_mean(tf.square(y_ - y))\n",loss)

            loss_all += loss.numpy()  # accumulate the loss of each step so the per-epoch average loss can be computed later
        # compute gradients of the loss with respect to each parameter
        grads = tape.gradient(loss, [w1, b1])

        # gradient update: w1 = w1 - lr * w1_grad    b = b - lr * b_grad
        w1.assign_sub(lr * grads[0])  # update parameter w1
        b1.assign_sub(lr * grads[1])  # update parameter b1

    # print the loss once per epoch
    print("Epoch {}, loss: {}".format(epoch, loss_all/4))
    train_loss_results.append(loss_all / 4)  # record the average loss over the steps
    loss_all = 0  # reset loss_all to prepare for recording the next epoch's loss

    # Testing
    # total_correct counts the correctly predicted samples, total_number counts the total test samples; both start at 0
    total_correct, total_number = 0, 0
    for x_test, y_test in test_db:
        # predict with the updated parameters
        y = tf.matmul(x_test, w1) + b1
        y = tf.nn.softmax(y)
        pred = tf.argmax(y, axis=1)  # index of the largest value in y, i.e. the predicted class
        # cast pred to the dtype of y_test
        pred = tf.cast(pred, dtype=y_test.dtype)
        # correct is 1 for a right prediction and 0 otherwise; cast the bool result to int
        correct = tf.cast(tf.equal(pred, y_test), dtype=tf.int32)
        # sum the correct predictions within this batch
        correct = tf.reduce_sum(correct)
        # accumulate the correct predictions over all batches
        total_correct += int(correct)
        # total_number is the total number of test samples, i.e. the number of rows of x_test; shape[0] returns the row count
        total_number += x_test.shape[0]
    # overall accuracy = total_correct / total_number
    acc = total_correct / total_number
    test_acc.append(acc)
    print("Test_acc:", acc)
    print("--------------------------")
==========each step============
x_train
 tf.Tensor(
[[6.1 2.8 4.  1.3]
 [6.3 3.3 4.7 1.6]
 [6.8 2.8 4.8 1.4]], shape=(3, 4), dtype=float32)
w1
 <tf.Variable 'Variable:0' shape=(4, 3) dtype=float32, numpy=
array([[ 0.08249953, -0.0683137 ,  0.19668601],
       [-0.05480815,  0.04570521,  0.1357149 ],
       [ 0.07750896, -0.16734955, -0.10294553],
       [ 0.15784004, -0.13311003,  0.06045312]], dtype=float32)>
b1
 <tf.Variable 'Variable:0' shape=(3,) dtype=float32, numpy=array([-0.09194934, -0.12376948, -0.05381497], dtype=float32)>
y = tf.matmul(x_train, w1) + b1
 tf.Tensor(
[[ 0.7730628  -1.2549498   1.1927782 ]
 [ 0.86376697 -1.4028375   1.2460471 ]
 [ 0.9086038  -1.44996     1.2541474 ]], shape=(3, 3), dtype=float32)
y = tf.nn.softmax(y)
 tf.Tensor(
[[0.376914   0.04960067 0.5734854 ]
 [0.38921314 0.04034722 0.57043964]
 [0.39883322 0.03771205 0.56345475]], shape=(3, 3), dtype=float32)
y_ = tf.one_hot(y_train, depth=3)
 tf.Tensor(
[[0. 1. 0.]
 [0. 1. 0.]
 [0. 1. 0.]], shape=(3, 3), dtype=float32)
y_ - y
 tf.Tensor(
[[-0.376914    0.95039934 -0.5734854 ]
 [-0.38921314  0.9596528  -0.57043964]
 [-0.39883322  0.96228796 -0.56345475]], shape=(3, 3), dtype=float32)
tf.square(y_ - y)
 tf.Tensor(
[[0.14206415 0.9032589  0.32888547]
 [0.15148687 0.9209335  0.32540137]
 [0.15906793 0.92599815 0.31748125]], shape=(3, 3), dtype=float32)
loss = tf.reduce_mean(tf.square(y_ - y))
 tf.Tensor(0.46384192, shape=(), dtype=float32)
==========each step============
x_train
 tf.Tensor(
[[5.3 3.7 1.5 0.2]
 [5.4 3.4 1.7 0.2]
 [6.5 3.  5.8 2.2]], shape=(3, 4), dtype=float32)
w1
 <tf.Variable 'Variable:0' shape=(4, 3) dtype=float32, numpy=
array([[ 0.0900598 , -0.04318555,  0.16399759],
       [-0.05128299,  0.05737336,  0.12052159],
       [ 0.08283258, -0.14975834, -0.12586035],
       [ 0.15954217, -0.12749009,  0.05313105]], dtype=float32)>
b1
 <tf.Variable 'Variable:0' shape=(3,) dtype=float32, numpy=array([-0.09076596, -0.11982609, -0.05894174], dtype=float32)>
y = tf.matmul(x_train, w1) + b1
 tf.Tensor(
[[ 0.35296127 -0.38656357  1.078011  ]
 [ 0.39391863 -0.4380458   1.0330824 ]
 [ 1.1721956  -1.3774886   0.7555057 ]], shape=(3, 3), dtype=float32)
y = tf.nn.softmax(y)
 tf.Tensor(
[[0.28231245 0.13475922 0.58292836]
 [0.30029225 0.13068524 0.5690225 ]
 [0.5755953  0.04495765 0.37944704]], shape=(3, 3), dtype=float32)
y_ = tf.one_hot(y_train, depth=3)
 tf.Tensor(
[[1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)
y_ - y
 tf.Tensor(
[[ 0.71768755 -0.13475922 -0.58292836]
 [ 0.69970775 -0.13068524 -0.5690225 ]
 [-0.5755953  -0.04495765  0.62055296]], shape=(3, 3), dtype=float32)
tf.square(y_ - y)
 tf.Tensor(
[[0.51507545 0.01816005 0.33980548]
 [0.48959094 0.01707863 0.3237866 ]
 [0.33130997 0.00202119 0.38508597]], shape=(3, 3), dtype=float32)
loss = tf.reduce_mean(tf.square(y_ - y))
 tf.Tensor(0.2691016, shape=(), dtype=float32)
Epoch 0, loss: 0.18323587626218796
Test_acc: 0.0
--------------------------

With softmax, the values for each sample naturally sum to 1, i.e. the three class probabilities in each row add up to 1.

y_ = tf.one_hot(y_train, depth=3)  # convert labels to one-hot format for computing loss and accuracy

loss = tf.reduce_mean(tf.square(y_ - y))  # mean squared error loss: mse = mean(sum(y-out)^2)

When computing the mean squared error, the prediction y is the probability distribution after softmax and y_ is the one-hot encoding,

so the differences lie between -1 and 1, with both signs. This is easy to understand: both the softmax output and the one-hot code are probability-like, so they can be subtracted directly.

When computing the mean squared error, every element of the [batch, class] matrix of differences is squared and then averaged: loss = tf.reduce_mean(tf.square(y_ - y))  # mse = mean(sum(y-out)^2)

y = tf.nn.softmax(y)
 tf.Tensor(
[[0.28231245 0.13475922 0.58292836]
 [0.30029225 0.13068524 0.5690225 ]
 [0.5755953  0.04495765 0.37944704]], shape=(3, 3), dtype=float32)
y_ = tf.one_hot(y_train, depth=3)
 tf.Tensor(
[[1. 0. 0.]
 [1. 0. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)
y_ - y
 tf.Tensor(
[[ 0.71768755 -0.13475922 -0.58292836]
 [ 0.69970775 -0.13068524 -0.5690225 ]
 [-0.5755953  -0.04495765  0.62055296]], shape=(3, 3), dtype=float32)
tf.square(y_ - y)
 tf.Tensor(
[[0.51507545 0.01816005 0.33980548]
 [0.48959094 0.01707863 0.3237866 ]
 [0.33130997 0.00202119 0.38508597]], shape=(3, 3), dtype=float32)
loss = tf.reduce_mean(tf.square(y_ - y))
 tf.Tensor(0.2691016, shape=(), dtype=float32)
posted @ 2020-08-14 16:46  范仁义