Backpropagation 2019.07.20

Backpropagation: minimize the loss function over the training data

Loss function (loss): the gap between the predicted value (y) and the known answer (y_)

Mean squared error (MSE): MSE(y_, y) = $\frac{\sum_{i=1}^{n}(y - y\_)^{2}}{n}$

loss = tf.reduce_mean(tf.square(y - y_))
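
As a sanity check on the formula, the same mean squared error can be computed directly in NumPy; the values of y and y_ below are made up purely for illustration:

import numpy as np

y  = np.array([0.9, 0.2, 0.4])          # hypothetical predictions
y_ = np.array([1.0, 0.0, 0.5])          # hypothetical known answers
mse = np.sum((y - y_) ** 2) / len(y)    # sum of squared errors divided by n
print(mse)                              # ≈ 0.02, matching tf.reduce_mean(tf.square(y - y_))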

 

Backpropagation training methods: the optimization objective is to reduce the loss value

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
train_step = tf.train.MomentumOptimizer(learning_rate, momentum).minimize(loss)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Learning rate: determines the magnitude of each parameter update (starting with a fairly small value is a safe choice)
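
A minimal sketch of what the learning rate controls, using plain-Python gradient descent on the toy loss (w - 5)^2 (the numbers are chosen only for illustration):

w = 0.0                           # initial parameter value
learning_rate = 0.1               # step size for each update
for step in range(50):
    grad = 2 * (w - 5)            # gradient of the toy loss (w - 5)^2
    w = w - learning_rate * grad  # update rule: new w = old w - learning_rate * gradient
print(w)                          # approaches 5; a smaller learning_rate converges more slowly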

 

 

Study notes:

Generating a random array

import numpy as np

seed = 1                                # any fixed seed makes the array reproducible
rng = np.random.RandomState(seed)
X = rng.uniform(0, 1, (32, 2))          # a 32-row, 2-column array of random numbers in [0, 1)
# X = rng.rand(32, 2)                   # equivalent shorthand: a random 32x2 array in [0, 1)
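
A short usage note (sketch): fixing the seed makes the generated array reproducible, which is why the training script further down sets seed = 1:

import numpy as np

a = np.random.RandomState(1).rand(32, 2)
b = np.random.RandomState(1).rand(32, 2)
print((a == b).all())   # True: the same seed always yields the same array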

 

Python list comprehensions

X = [i for i in range(0, 10) if i % 2 == 0]

print(X)

Output:

[0, 2, 4, 6, 8]
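
The same comprehension pattern is what builds the labels Y in the training script below: each row of X is unpacked into (x0, x1) and mapped to 0 or 1. A standalone sketch with a made-up 3x2 input:

X = [[0.2, 0.3], [0.6, 0.7], [0.9, 0.05]]
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]   # 1 if the two features sum to less than 1
print(Y)                                     # [[1], [0], [1]]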

 


Computing the mean of a tensor

tf.reduce_mean()
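
A minimal example of tf.reduce_mean in the same TF 1.x session style as the script below; it averages every element of the tensor:

import tensorflow as tf

v = tf.constant([1.0, 2.0, 3.0, 4.0])
mean = tf.reduce_mean(v)         # mean over all elements
with tf.Session() as sess:
    print(sess.run(mean))        # 2.5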

 

#coding:utf-8
import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
seed = 1
# Generate random numbers based on the seed
rd = np.random.RandomState(seed)
# The random numbers form a 32-row, 2-column matrix
X = rd.rand(32, 2)
# Take each row of the 32x2 matrix in turn: if the two elements sum to less than 1,
# label it 1, otherwise label it 0
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print("X:\n", X)
print()
print("Y:\n", Y)

# Define the network's input, parameters, and output, and the forward-propagation pass.
x  = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2, 3]))
w2 = tf.Variable(tf.random_normal([3, 1]))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

# Define the loss function and the training (backpropagation) method
loss = tf.reduce_mean(tf.square(y_ - y))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

# Create a session and train for STEPS rounds
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Print the parameter values before any training
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("\n")

    # Train the model
    STEPS = 3000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 500 == 0:  # print the current loss every 500 steps
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print(i, total_loss)

 

posted @ 2019-07-20 11:36  WTSRUVF