Neural Networks and Machine Learning, Lecture 6: The Backpropagation Algorithm

Neural Networks and Machine Learning

 

 

Neural networks are applied across many fields and form a core technique of machine learning and data mining. Yet their development once stalled for more than a decade, and biological neural networks remain an unsolved frontier of research. Learning the subject is likewise no easy road: the initial sense of wonder fades quickly as the mathematics deepens, and challenges arise in both theory and application. This course offers one learning path, neither an exhaustive survey nor a shortcut.

Chapter 6 Backpropagation: Training Multilayer Feedforward Networks

We generalize stochastic gradient descent to train multilayer networks: the chain rule yields the gradient of the objective function with respect to the parameters of every layer, so the sensitivity of the error to the network parameters is propagated backward and the parameters converge toward an optimum. The algorithm was presented in Rumelhart's book Parallel Distributed Processing and spurred sustained research on neural networks; feedforward multilayer networks trained by backpropagation became the most widely used neural networks.

§6.1 Feedforward Multilayer Networks

A single-layer perceptron can only solve linearly separable problems and is impractical for nonlinear classification; this observation held back neural network research for nearly 20 years. Multilayer perceptrons overcame many of these limitations and rekindled interest in the field. A three-layer feedforward network is shown in Figure 6.1.

Figure 6.1 A three-layer feedforward network

Here the superscripts 1, 2, 3 denote the layer. The net input of neuron $i$ in layer $m$ is

\[n_i^m=\sum_{j=1}^{s^{m-1}}w_{i,j}^ma_j^{m-1}+b_i^m\]

and the output of neuron $i$ in layer $m$ is

\[a_i^m=f^m(n_i^m)=f^m\left( \sum_{j=1}^{s^{m-1}}w_{i,j}^ma_j^{m-1}+b_i^m\right)\]

This chapter applies multilayer feedforward networks to pattern recognition and function approximation. In matrix form,

\[a^{m+1}=f^{m+1}(W^{m+1}a^m+b^{m+1}),m=0,1,2,\cdots,M-1\]

\[a^0=p\]

$a^M$ is called the network output.
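As a quick illustration of this forward recursion, a minimal Matlab sketch follows; the layer sizes, random weights, and choice of activations here are assumptions for illustration, not the example networks used later.

% Forward pass a^{m+1} = f^{m+1}(W^{m+1}a^m + b^{m+1}), m = 0,...,M-1
M = 3;                           % number of layers (assumed)
S = [1 4 3 2];                   % S(1)=R inputs; S(m+1)=neurons in layer m (assumed)
f = {@(n)1./(1+exp(-n)), @(n)1./(1+exp(-n)), @(n)n}; % logsig, logsig, linear (assumed)
for m = 1:M
    W{m} = 2*rand(S(m+1),S(m))-1; % weight matrix W^m, size S^m x S^{m-1}
    b{m} = 2*rand(S(m+1),1)-1;    % bias vector b^m
end
a = rand(S(1),1);                % a^0 = p
for m = 1:M
    a = f{m}(W{m}*a + b{m});     % a^m = f^m(W^m a^{m-1} + b^m)
end
% a now holds the network output a^M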

Figure 6.2 A three-layer feedforward network, abbreviated notation

§6.2 The Backpropagation Algorithm

To train such networks we adapt the stochastic gradient descent of the previous chapter and take the squared error as the objective function (performance index). Suppose there are $S^M$ outputs; at the $k$-th training iteration the squared error is

\[F(w,b)=\sum_s[t_s(k)-a_s(k)]^2=e^{\mathrm{T}}(k)e(k)\]

With learning rate $\alpha$, the stochastic gradient descent updates for the weights $w_{i,j}^m$ and biases $b_i^m$ of layer $m$ are

\[w_{i,j}^m(k+1)=w_{i,j}^m(k)-\alpha \frac{\partial F}{\partial w_{i,j}^m}\]

\[b_i^m(k+1)=b_i^m(k)-\alpha \frac{\partial F}{\partial b_i^m}\]

The key difficulty lies in computing the gradients $\frac{\partial F}{\partial w_{i,j}^m}$ and $\frac{\partial F}{\partial b_i^m}$.

The chain rule

To obtain $\frac{\partial F}{\partial w_{i,j}^m}$, first differentiate through the layer-$m$ net input $n_i^m=\sum_{j=1}^{s^{m-1}}w_{i,j}^ma_j^{m-1}+b_i^m$:

\[\frac{\partial F}{\partial w_{i,j}^m}=\frac{\partial F}{\partial n_i^m}\frac{\partial n_i^m}{\partial w_{i,j}^m}=\frac{\partial F}{\partial n_i^m}a_j^{m-1}\]

(Only the output $a_j^{m-1}$ of neuron $j$ in layer $m-1$ is multiplied by the weight $w_{i,j}^m$; the outputs of the other layer-$(m-1)$ neurons do not involve $w_{i,j}^m$.)

Define the sensitivity as the derivative of the objective function with respect to the layer-$m$ net input:

\[\delta_i^{m}=\frac{\partial F}{\partial n_i^m}\]

Then the stochastic gradient updates for the layer-$m$ weights $w_{i,j}^m$ and biases $b_i^m$ become

\[w_{i,j}^m(k+1)=w_{i,j}^m(k)-\alpha \delta_i^ma_j^{m-1}\]

\[b_i^m(k+1)=b_i^m(k)-\alpha \delta_i^m\]

Matrix (vector) form:

\[W^m(k+1)=W^m(k)-\alpha \delta^m(a^{m-1})^{\mathrm{T}}\]

\[b^m(k+1)=b^m(k)-\alpha \delta^m\]

\[\delta^m=\begin{bmatrix} \delta_1^m\\ \delta_2^m\\ \vdots\\ \delta_{s^m}^m \end{bmatrix}=\begin{bmatrix} \frac{\partial F}{\partial n_1^m}\\ \frac{\partial F}{\partial n_2^m}\\ \vdots\\ \frac{\partial F}{\partial n_{s^m}^m} \end{bmatrix}=\frac{\partial F}{\partial \bm{n}^m}\]

Computing the sensitivity $\delta^m$

Note that the objective $F$ is formed at the last layer: it is the sum of squared errors between the network output $a^M$ and the target. The layer-$m$ sensitivity follows from the chain rule (the Jacobian is transposed because $\delta^m$ is a column vector):

\[\delta^m=\frac{\partial F}{\partial n^m}=\left(\frac{\partial n^{m+1}}{\partial n^m}\right)^{\mathrm{T}}\frac{\partial F}{\partial n^{m+1}}=\left(\frac{\partial n^{m+1}}{\partial n^m}\right)^{\mathrm{T}}\delta^{m+1}\]

So we only need the Jacobian

\[\frac{\partial n^{m+1}}{\partial n^m}=\begin{bmatrix} \frac{\partial n_1^{m+1}}{\partial n_1^m} & \frac{\partial n_1^{m+1}}{\partial n_2^m} & \cdots & \frac{\partial n_1^{m+1}}{\partial n_{s^m}^m}\\ \frac{\partial n_2^{m+1}}{\partial n_1^m} & \frac{\partial n_2^{m+1}}{\partial n_2^m} & \cdots & \frac{\partial n_2^{m+1}}{\partial n_{s^m}^m}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial n_{s^{m+1}}^{m+1}}{\partial n_1^m} & \frac{\partial n_{s^{m+1}}^{m+1}}{\partial n_2^m} & \cdots & \frac{\partial n_{s^{m+1}}^{m+1}}{\partial n_{s^m}^m} \end{bmatrix}\]

This matrix is called the Jacobian matrix; its entry in row $i$, column $j$ is

\[\frac{\partial n_i^{m+1}}{\partial n_j^m}=\frac{\partial \left ( \sum_{l=1}^{s^{m}}w_{i,l}^{m+1}a_l^m+b_i^{m+1} \right )}{\partial n_j^m}=w_{i,j}^{m+1}\frac{\partial a_j^m}{\partial n_j^m}=w_{i,j}^{m+1}\frac{\partial f^m(n_j^m)}{\partial n_j^m}=w_{i,j}^{m+1}\dot{f}^m(n_j^m)\]

(only the $l=j$ term survives), where $\dot{f}^m$ denotes the derivative of $f^m$.

The Jacobian matrix can then be written as

\[\frac{\partial n^{m+1}}{\partial n^m}=\begin{bmatrix} w_{1,1}^{m+1} & w_{1,2}^{m+1} & \cdots & w_{1,s^m}^{m+1}\\ w_{2,1}^{m+1} & w_{2,2}^{m+1} & \cdots & w_{2,s^m}^{m+1}\\ \vdots & \vdots & \ddots & \vdots\\ w_{s^{m+1},1}^{m+1} & w_{s^{m+1},2}^{m+1} & \cdots & w_{s^{m+1},s^m}^{m+1} \end{bmatrix}\begin{bmatrix} \dot{f}^m(n_1^m) & 0 & \cdots & 0\\ 0 & \dot{f}^m(n_2^m) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \dot{f}^m(n_{s^m}^m) \end{bmatrix}\]

\[=W^{m+1}\dot{F}^m(n^m)\]

where $\dot{F}^m(n^m)$ denotes the diagonal matrix of activation derivatives.
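The Jacobian formula can be verified numerically by finite differences; the sketch below uses an assumed logsig layer with arbitrary sizes and weights.

% Verify dn^{m+1}/dn^m = W^{m+1}*diag(f'(n^m)) numerically (illustrative sizes)
sm = 3; smp1 = 2;                        % S^m and S^{m+1} (assumed)
W2 = randn(smp1,sm); b2 = randn(smp1,1); % W^{m+1}, b^{m+1}
f  = @(n) 1./(1+exp(-n));                % logsig
df = @(n) (1-f(n)).*f(n);                % its derivative
nm = randn(sm,1);                        % some net input n^m
J  = W2*diag(df(nm));                    % analytic Jacobian
Jn = zeros(smp1,sm); h = 1e-6;           % finite-difference Jacobian
for j = 1:sm
    ej = zeros(sm,1); ej(j) = h;
    Jn(:,j) = ((W2*f(nm+ej)+b2) - (W2*f(nm-ej)+b2))/(2*h); % central difference
end
max(abs(J(:)-Jn(:)))                     % should be near zero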

 

Therefore

\[\delta^m=\frac{\partial F}{\partial n^m}=\left(\frac{\partial n^{m+1}}{\partial n^m}\right)^{\mathrm{T}}\frac{\partial F}{\partial n^{m+1}}=\left(\frac{\partial n^{m+1}}{\partial n^m}\right)^{\mathrm{T}}\delta^{m+1}=\dot{F}^m(n^m)(W^{m+1})^{\mathrm{T}}\delta^{m+1}\]

The layer-$m$ sensitivity is always computed from the layer-$(m+1)$ sensitivity above it, propagating from the last layer $M$ back to the first:

\[\delta^M\rightarrow \delta^{M-1}\rightarrow \cdots \rightarrow \delta^1\]

hence the name backpropagation; the relation is recursive.

It remains to find the starting point of the recursion, the sensitivity of the last layer $M$:

\[\delta_i^M=\frac{\partial F}{\partial n_i^M}=\frac{\partial \sum_{j=1}^{s^M}(t_j-a_j)^2}{\partial n_i^M}=-2(t_i-a_i)\frac{\partial a_i}{\partial n_i^M}=-2(t_i-a_i)\dot{f}^M(n_i^M)\]

In matrix form,

\[\delta^M=-2\dot{F}^M(n^M)(t-a)\]

Summary of the backpropagation algorithm

Performance index: mean squared error

\[F(w,b)=E[e^{\mathrm{T}}e]=E[(t-a)^{\mathrm{T}}(t-a)]\]

Approximate performance index: instantaneous squared error (iteration $k$)

\[F(w,b)=e^{\mathrm{T}}(k)e(k)=(t(k)-a(k))^{\mathrm{T}}(t(k)-a(k))\]

Sensitivity definition

\[\delta^m=\begin{bmatrix} \delta_1^m\\ \delta_2^m\\ \vdots\\ \delta_{s^m}^m \end{bmatrix}=\begin{bmatrix} \frac{\partial F}{\partial n_1^m}\\ \frac{\partial F}{\partial n_2^m}\\ \vdots\\ \frac{\partial F}{\partial n_{s^m}^m} \end{bmatrix}=\frac{\partial F}{\partial n^m}\]

1 Forward pass

\[\left\{\begin{matrix} a^0=p\\ a^{m+1}=f^{m+1}(W^{m+1}a^m+b^{m+1}),m=0,1,\cdots,M-1\\ a=a^M \end{matrix}\right.\] 

2 Backward pass

\[\left\{\begin{matrix} \delta^M=-2\dot{F}^M(n^M)(t-a),\\ \delta^m=\dot{F}^m(n^m)(W^{m+1})^{\mathrm{T}}\delta^{m+1},\ m=M-1,M-2,\cdots,2,1, \end{matrix}\right.\]

with

\[\dot{F}^m(n^m)=\begin{bmatrix} \dot{f}^m(n_1^m) & 0 & \cdots & 0\\ 0 & \dot{f}^m(n_2^m) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \dot{f}^m(n_{s^m}^m) \end{bmatrix}\]

3 Parameter update by stochastic gradient descent (learning rate $\alpha$)

\[\left\{\begin{matrix} W^m(k+1)=W^m(k)-\alpha\delta^m(a^{m-1})^{\mathrm{T}}\\ b^m(k+1)=b^m(k)-\alpha\delta^m \end{matrix}\right.\]
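Steps 1 through 3 can be collected into one stochastic-gradient iteration for a general $M$-layer network. The sketch below is one possible Matlab arrangement; the cell-array layout and the handles f, df for the activations and their derivatives are assumptions of this sketch, not part of the text above.

function [W,b] = backprop_step(W,b,f,df,p,t,alpha)
% One backpropagation training iteration for an M-layer network.
% W,b : cell arrays of weights/biases; f,df : cell arrays of activation
% function handles and their derivatives; p,t : one training pair.
M = numel(W);
a = cell(M+1,1); n = cell(M,1);
a{1} = p;                                % a^0 = p
for m = 1:M                              % 1. forward pass
    n{m}   = W{m}*a{m} + b{m};
    a{m+1} = f{m}(n{m});
end
delta = cell(M,1);                       % 2. backward pass
delta{M} = -2*df{M}(n{M}).*(t - a{M+1}); % delta^M = -2 F'^M(n^M)(t-a)
for m = M-1:-1:1
    delta{m} = df{m}(n{m}).*(W{m+1}'*delta{m+1});
end
for m = 1:M                              % 3. stochastic gradient update
    W{m} = W{m} - alpha*delta{m}*a{m}';
    b{m} = b{m} - alpha*delta{m};
end
end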

 

§6.3 Backpropagation Examples

The previous section derived the backpropagation algorithm in detail; here two examples illustrate function approximation and pattern classification.

Example 1: function approximation.

Figure 6.3 Network for function approximation

Figure 6.4 Adaptive function approximation with the network

Network design: we use an $R\times S^1\times S^2=1\times 2\times 1$ network, i.e. one input, two hidden neurons, and one output neuron, as shown in Figure 6.3. The input $p$ is a scalar ($R=1$). Layer $m=1$ has two neurons, so $S^1=2$, and the $S^1\times R$ weight matrix is

\[w^1=\begin{bmatrix} w_{1,1}^1\\ w_{2,1}^1 \end{bmatrix}_{S^1\times R=2\times 1}\]

The bias vector is

\[b^1=\begin{bmatrix} b_1^1\\ b_2^1 \end{bmatrix}_{S^1\times 1}\]

Both layer-1 neurons use the logsig activation:

\[a^1=\begin{bmatrix} a_1^1\\ a_2^1 \end{bmatrix}_{S^1\times 1}=f^1(W^1p+b^1)\]

\[f^1(x)=\frac{1}{1+e^{-x}}\]

Layer $m=2$ has one neuron, so $S^2=1$, and the weight matrix is $S^2\times S^1=1\times 2$:

\[w^2=[w_{1,1}^2\quad w_{1,2}^2]_{1\times 2}\] 

The bias is a scalar $b^2$.

We choose the target function

\[g(p)=1+\sin\left(\frac{\pi p}{4}\right),\quad -2\leq p\leq 2\]

Suppose we are given 21 points, $p=-2,-1.8,\cdots,+2$, with known targets $t=g(p)=1+\sin(\frac{\pi p}{4})$; we train the network above with stochastic gradient descent and then test it.

For example, take the sample $p=1$, $t=g(1)=1+\sin(\frac{\pi}{4})$, and choose the initial values

\[w^1=\begin{bmatrix} -0.27\\ -0.41 \end{bmatrix},b^1=\begin{bmatrix} -0.48\\ -0.13 \end{bmatrix},w^2=\begin{bmatrix} 0.09 & -0.17 \end{bmatrix}_{1\times 2},b^2=0.48\] 

1 Forward pass

\[\left\{\begin{matrix} a^0=p=1\\ a^1=f^1(W^1a^0+b^1)=f^1\left ( \begin{bmatrix} -0.27\\ -0.41 \end{bmatrix}\times 1+\begin{bmatrix} -0.48\\ -0.13 \end{bmatrix} \right )=\begin{bmatrix} 0.321\\ 0.368 \end{bmatrix},\\ f^1(x)=\frac{1}{1+e^{-x}}\\ a^2=f^2(W^2a^1+b^2)=W^2a^1+b^2=\begin{bmatrix} 0.09 & -0.17 \end{bmatrix}\begin{bmatrix} 0.321\\ 0.368 \end{bmatrix}+0.48=0.446\\ f^2(x)=x\\ a=a^2 \end{matrix}\right.\]

2 Backward pass

\[\left\{\begin{matrix} \delta^2=-2\dot{f}^2(n^2)(t-a)=-2\times 1\times (1+\sin(\frac{\pi}{4})-0.446)=-2.522,\\ \dot{f}^2(x)=1\\ \delta^1=\dot{F}^1(n^1)(W^2)^{\mathrm{T}}\delta^2=\begin{bmatrix} (1-a_1^1)a_1^1 & 0\\ 0 & (1-a_2^1)a_2^1 \end{bmatrix}\begin{bmatrix} w_{1,1}^2\\ w_{1,2}^2 \end{bmatrix}\delta^2=\begin{bmatrix} -0.0495\\ 0.0997 \end{bmatrix}, \end{matrix}\right.\]

\[\dot{f}^1(x)=\left ( 1-\frac{1}{1+e^{-x}} \right )\frac{1}{1+e^{-x}},\qquad \dot{F}^1(n^1)=\begin{bmatrix} (1-a_1^1)a_1^1 & 0\\ 0 & (1-a_2^1)a_2^1 \end{bmatrix}\]

3 Parameter update by stochastic gradient descent (learning rate $\alpha$)

\[\left\{\begin{matrix} W^m(k+1)=W^m(k)-\alpha\delta^m(a^{m-1})^{\mathrm{T}}\\ b^m(k+1)=b^m(k)-\alpha\delta^m \end{matrix}\right.\]

\[\left\{\begin{matrix} W^2(1)=W^2(0)-\alpha\delta^2(a^1)^{\mathrm{T}}\\ b^2(1)=b^2(0)-\alpha\delta^2\\ W^1(1)=W^1(0)-\alpha\delta^1(a^0)^{\mathrm{T}}\\ b^1(1)=b^1(0)-\alpha\delta^1 \end{matrix}\right.\]
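The hand calculation above can be replayed in a few lines of Matlab; the numbers are exactly those of the text, and alpha = 0.1 is the value used in the program below.

% Replay the worked example: p = 1, t = 1+sin(pi/4), initial weights from the text
p  = 1; t = 1 + sin(pi/4);
W1 = [-0.27; -0.41]; b1 = [-0.48; -0.13];
W2 = [0.09 -0.17];   b2 = 0.48;
a1 = 1./(1+exp(-(W1*p + b1)));     % [0.321; 0.368]
a2 = W2*a1 + b2;                   % 0.446
delta2 = -2*1*(t - a2);            % -2.522
delta1 = (1-a1).*a1.*(W2'*delta2); % [-0.0495; 0.0997]
alpha = 0.1;
W2 = W2 - alpha*delta2*a1'; b2 = b2 - alpha*delta2;
W1 = W1 - alpha*delta1*p'; b1 = b1 - alpha*delta1;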

Figure 6.5 Learning curve of the squared error and the resulting function approximation

Program

% Function approximation g(p) = 1 + sin(pi/4*p), 1x2x1 three-layer feedforward network

R  = 1;              % dimension of the input p
S1 = 2;              % number of layer-1 (hidden) neurons
S2 = 1;              % number of layer-2 (output) neurons
W1 = 2*rand(S1,R)-1;
b1 = 2*rand(S1,1)-1;
W2 = 2*rand(S2,S1)-1;
b2 = 2*rand(S2,1)-1; % initialize weights and biases in [-1,1]
alpha = 0.1;         % learning rate
%------------
P = [-2:0.2:2];      % input data
T = 1+sin(pi/4*P);   % target values
F = 0;               % accumulated squared error
N = 100;             % number of training epochs
for times = 1:N
    fprintf('Training times is %d\n',times);
    for k = 1:length(P)
        p = P(k);    % take one input (in order here; random order also works)
        t = T(k);    % its target value
        % forward pass
        a1 = 1./(1+exp(-(W1*p+b1)));
        a2 = W2*a1+b2;
        e = t-a2;
        F = F+e.^2;  % instantaneous squared error
        % backpropagation
        delta2 = -2*e;
        delta1 = (1-a1).*a1.*(W2'*delta2);
        % weight and bias updates
        W1 = W1-alpha*delta1*p';
        b1 = b1-alpha*delta1;
        W2 = W2-alpha*delta2*a1';
        b2 = b2-alpha*delta2;
    end
    Fcum(times) = F;
    F = 0;
end
%% plot the learning curve
figure
plot(log10(Fcum),'mo-','LineWidth',2)

% testing
P = [-2.2:0.01:2.2]; % test inputs
T = 1+sin(pi/4*P);   % true target values
figure
plot(P,T,'r-','LineWidth',2);
hold on
for k = 1:length(P)
    p = P(k);        % one test input
    % function approximation by the trained network
    a1 = 1./(1+exp(-(W1*p+b1)));
    a2 = W2*a1+b2;
    Y(k) = a2;
end
plot(P,Y,'b-.','LineWidth',2)

*********************************************

Exercise: train and test a $1\times 5\times 1$ three-layer feedforward network on $g(p)=1+\sin(\frac{6\pi p}{4})$.

Note:

delta1 = (1-a1).*a1.*(W2'*delta2);

The symbol .* denotes the Hadamard (elementwise) product in Matlab notation; some books and papers write it as $\odot$, for example

\[\begin{bmatrix} 1\\ 2 \end{bmatrix}.*\begin{bmatrix} 3\\ 4 \end{bmatrix}=\begin{bmatrix} 1\times 3\\ 2\times 4 \end{bmatrix},\qquad \begin{bmatrix} 1\\ 2 \end{bmatrix}\odot \begin{bmatrix} 3\\ 4 \end{bmatrix}=\begin{bmatrix} 3\\ 8 \end{bmatrix}\]

Observe that

\[\delta^1=\dot{F}^1(n^1)(W^2)^{\mathrm{T}}\delta^2=\begin{bmatrix} (1-a_1^1)a_1^1 & 0\\ 0 & (1-a_2^1)a_2^1 \end{bmatrix}\begin{bmatrix} w_{1,1}^2\\ w_{1,2}^2 \end{bmatrix}\delta^2\]

is exactly the statement above written with the Hadamard product:

\[\delta^1=\dot{f}^1(n^1)\odot\left((W^2)^{\mathrm{T}}\delta^2\right)=\begin{bmatrix} (1-a_1^1)a_1^1 \\ (1-a_2^1)a_2^1 \end{bmatrix}\odot \begin{bmatrix} w_{1,1}^2\\ w_{1,2}^2 \end{bmatrix}\delta^2\]
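The equivalence is easy to check numerically (the values here are the ones from the worked example):

% diag(v)*u and v.*u give the same vector
v = [0.218; 0.233];         % (1-a1).*a1 from the worked example
u = [0.09; -0.17]*(-2.522); % (W2)'*delta2
[diag(v)*u, v.*u]           % two identical columns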

Hence the backward pass can be rewritten as

2 Backward pass

\[\left\{\begin{matrix} \delta^M=-2(t-a)\odot \dot{f}^M(n^M),\\ \delta^m=\dot{f}^m(n^m)\odot\left((W^{m+1})^{\mathrm{T}}\delta^{m+1}\right),\ m=M-1,M-2,\cdots,2,1,\\ \dot{f}^m(n^m)=\begin{bmatrix} \dot{f}^m(n_1^m)\\ \dot{f}^m(n_2^m)\\ \vdots\\ \dot{f}^m(n_{s^m}^m) \end{bmatrix} \end{matrix}\right.\]

where $\dot{f}^m(n^m)$ now denotes the vector of elementwise derivatives.

**There is also a more laborious, but perhaps easier to understand, way to derive the backpropagation algorithm.

Take the sample $p=1$, $t=g(1)=1+\sin(\frac{\pi}{4})$, with initial values

\[w^1=\begin{bmatrix} -0.27\\ -0.41 \end{bmatrix},b^1=\begin{bmatrix} -0.48\\ -0.13 \end{bmatrix},w^2=\begin{bmatrix} 0.09 & -0.17 \end{bmatrix}_{1\times 2},b^2=0.48\] 

1 Forward pass

\[\left\{\begin{matrix} a^0=p=1\\ a^1=f^1(W^1a^0+b^1)=f^1\left ( \begin{bmatrix} -0.27\\ -0.41 \end{bmatrix}\times 1+\begin{bmatrix} -0.48\\ -0.13 \end{bmatrix} \right )\\ f^1(x)=\frac{1}{1+e^{-x}}\\ a^2=f^2(W^2a^1+b^2)=W^2a^1+b^2\\ f^2(x)=x,\ a=a^2\\ e=t-a \end{matrix}\right.\]

2 Backward pass

To update the weights $W^2=[w_{1,1}^2 \quad w_{1,2}^2]$, with $e=t-a$, differentiate the error function with respect to each weight:

\[\frac{\partial F}{\partial w_{1,1}^2}=-2e\frac{\partial a^2}{\partial w_{1,1}^2}=-2ea_1^1\]

\[\frac{\partial F}{\partial w_{1,2}^2}=-2e\frac{\partial a^2}{\partial w_{1,2}^2}=-2ea_2^1\]

\[\frac{\partial F}{\partial w_{1,1}^1}=-2e\frac{\partial a^2}{\partial w_{1,1}^1}=-2e\,\frac{\partial a^2}{\partial n^2}\,\frac{\partial n^2}{\partial a_1^1}\,\frac{\partial a_1^1}{\partial n_1^1}\,\frac{\partial n_1^1}{\partial w_{1,1}^1}=-2e\times 1\times w_{1,1}^2\times \dot{f}^1(n_1^1)\times p\]

\[\frac{\partial F}{\partial w_{2,1}^1}=-2e\frac{\partial a^2}{\partial w_{2,1}^1}=-2e\,\frac{\partial a^2}{\partial n^2}\,\frac{\partial n^2}{\partial a_2^1}\,\frac{\partial a_2^1}{\partial n_2^1}\,\frac{\partial n_2^1}{\partial w_{2,1}^1}=-2e\times 1\times w_{1,2}^2\times \dot{f}^1(n_2^1)\times p\]

Finally, these can be assembled into the matrix-vector form as well.

Example 2: pattern classification experiment

Figure 6.6 The half-moon data set

As shown in the figure, the two classes occupy regions A and B, each a half-moon of width width and central radius r, i.e. the region enclosed between two semicircles of radii r+width/2 and r-width/2, and the two regions are separated from the x1 axis by a vertical distance d. A negative d means the two regions interlock, and the smaller d is, the harder the two classes are to separate. The task is to produce a decision boundary that separates the two classes.

Network design: $R\times S^1\times S^2=2\times 20\times 1$, i.e. the input layer takes the two coordinates ($R=2$), the hidden layer has 20 neurons ($S^1=20$), and the output layer has one neuron ($S^2=1$); the two classes are labeled $+1$ and $-1$ (matching the code below). Every neuron uses the activation function

\[f(x)=\tanh(x)=\frac{1-e^{-2x}}{1+e^{-2x}}\]
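This is the derivative coded later in d_hyperb; a quick check that $\frac{4e^{2x}}{(1+e^{2x})^2}=1-\tanh^2(x)$:

% The tanh derivative used by d_hyperb equals 1 - tanh(x)^2
x  = linspace(-3,3,7);
d1 = (4*exp(2*x))./((1 + exp(2*x)).^2); % formula coded in d_hyperb
d2 = 1 - tanh(x).^2;                    % standard identity
max(abs(d1 - d2))                       % near zero up to rounding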

 

Step 1: generate the half-moon data

function [data, data_shuffled] = halfmoon(rad,width,d,n_samp)
% A function to generate the halfmoon data
%         rad  - central radius of the half moon
%        width - width of the half moon
%           d  - distance between the two half moons
%      n_samp  - total number of the samples
%       Output:
%         data - output data
%data_shuffled - shuffled data

if rad < width/2,
    error('The radius should be at least larger than half the width');
end

if mod(n_samp,2)~=0,
    error('Please make sure the number of samples is even');
end

aa     = rand(2,n_samp/2);
radius = (rad-width/2) + width*aa(1,:);
theta  = pi*aa(2,:);

x     = radius.*cos(theta);
y     = radius.*sin(theta);
label = 1*ones(1,length(x));  % label for Class 1

x1    = radius.*cos(-theta) + rad;
y1    = radius.*sin(-theta) - d;
label1= -1*ones(1,length(x)); % label for Class 2

data  = [x, x1;
         y, y1;
         label, label1];

[n_row, n_col] = size(data);

shuffle_seq = randperm(n_col); % shuffle the two classes together; labels move with their samples

for i = (1:n_col),
    data_shuffled(:,i) = data(:,shuffle_seq(i));
end;


function y = d_hyperb(x)
% y = d_hyperb(x)
% derivative of the hyperbolic tangent function
y = (4*exp(2*x))./((1 + exp(2*x)).^2);
end
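A quick way to see what halfmoon produces (the parameter values match the main program below):

% Generate and plot a half-moon data set
[data, data_shuffled] = halfmoon(10, 6, -4, 3000);
figure; hold on
plot(data(1,data(3,:)==1),  data(2,data(3,:)==1),  'rx'); % Class +1
plot(data(1,data(3,:)==-1), data(2,data(3,:)==-1), 'k+'); % Class -1
xlabel('x'); ylabel('y'); title('Half-moon data');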

 

 

Main program

% Multiple Layer Perceptron Classifier using Backpropagation
%%=========== Step 0: generate the half-moon data ============================
rad    = 10;   % radius
width  = 6;    % width
dist   = -4;   % distance between the two half moons
num_tr = 1000; % number of training samples
num_te = 2000; % number of test samples
num_samp = num_tr+num_te; % total number of samples
[data, data_shuffled] = halfmoon(rad,width,dist,num_samp);

%%=== Step 1: Initialization of Multilayer Perceptron (MLP)
fprintf('Initializing the MLP ...\n');
n_in = 2;  % number of input neurons
n_hd = 20; % number of hidden neurons
n_ou = 1;  % number of output neurons
%w = cell(2,1);
%w1{1} = rand(n_hd,n_in+1)./2-0.25; % initial weights between input layer to hidden layer
w1{1} = rand(n_hd,n_in+1);   % 20x3 matrix, one column for the bias; cell array w1
dw0{1} = zeros(n_hd,n_in+1); % weight-increment initialization
%w1{2} = rand(n_ou,n_hd+1)./2 - 0.25; % initial weights between hidden layer to output layer
w1{2} = rand(n_ou,n_hd+1);   % 1x21
dw0{2} = zeros(n_ou,n_hd+1); % weight-increment initialization

num_Epoch = 50;   % number of training epochs
mse_thres = 1E-3; % MSE threshold
mse_train = Inf;  % MSE for training data
epoch = 1;
err = 0;          % a counter for the number of error outputs
%eta2 = 0.1; % learning-rate for output weights
%eta1 = 0.1; % learning-rate for hidden weights
eta1 = annealing(0.1,1E-5,num_Epoch); % learning rate decreasing linearly from 0.1 to 1e-5
eta2 = annealing(0.1,1E-5,num_Epoch);

%%== Preprocess the input data: remove mean and normalize ==
mean1 = [mean(data(1:2,:)')';0];
for i = 1:num_samp,
    nor_data(:,i) = data_shuffled(:,i) - mean1; % subtract the mean
end
max1 = [max(abs(nor_data(1:2,:)'))';1];
for i = 1:num_samp,
    nor_data(:,i) = nor_data(:,i)./max1; % normalize
end

%%==== Main loop for training ====
st = cputime;
fprintf('Training the MLP using back-propagation ...\n');
fprintf(' ------------------------------------\n');
while mse_train > mse_thres && epoch <= num_Epoch
    fprintf(' Epoch #: %d ->',epoch);
    %% shuffle the training data for every epoch
    [n_row, n_col] = size(nor_data);
    shuffle_seq = randperm(num_tr);
    nor_data1 = nor_data(:,shuffle_seq); % randomly reorder the training data

    %% use all training data in this epoch
    for i = 1:num_tr
        %% forward pass
        x = [nor_data1(1:2,i);1]; % fetch one input from the database
        %d = myint2vec(nor_data1(3,i));% fetching from database
        d = nor_data1(3,i);       % fetch the desired response from the database
        hd = [tanh(w1{1}*x);1];   % hidden neurons are nonlinear
        o = tanh(w1{2}*hd);       % output neuron is nonlinear
        e(:,i) = d - o;

        %% backward pass; d_hyperb (defined separately) is the tanh derivative
        %%%(4*exp(2*x))./((1 + exp(2*x)).^2);
        delta_ou = e(:,i).*d_hyperb(w1{2}*hd);
        delta_hd = d_hyperb(w1{1}*x).*(w1{2}(:,1:n_hd)'*delta_ou);

        dw1{1} = eta1(epoch)*delta_hd*x';
        dw1{2} = eta2(epoch)*delta_ou*hd';

        %% update
        w2{1} = w1{1} + dw1{1}; % weights input -> hidden
        w2{2} = w1{2} + dw1{2}; % weights hidden -> output

        %%
        dw0 = dw1;
        w1 = w2;
    end

    mse(epoch) = sum(mean(e'.^2));
    mse_train = mse(epoch);
    fprintf('MSE = %f\n',mse_train);
    epoch = epoch + 1;
end
fprintf(' Points trained : %d\n',num_tr);
fprintf(' Epochs conducted: %d\n',epoch-1);
fprintf(' Time cost : %4.2f seconds\n',cputime - st);
fprintf(' ------------------------------------\n');

%%=============== Plotting the learning curve ==
figure;
plot(mse,'k');
title('Learning curve');
xlabel('Number of epochs');ylabel('MSE');
%%===== Plot the decision surface ====
figure;
hold on;
xmin = min(data_shuffled(1,:));
xmax = max(data_shuffled(1,:));
ymin = min(data_shuffled(2,:));
ymax = max(data_shuffled(2,:));
[x_b,y_b] = meshgrid(xmin:(xmax-xmin)/100:xmax,ymin:(ymax-ymin)/100:ymax);
z_b = 0*ones(size(x_b));
%wh = waitbar(0,'Plotting testing result...');
for x1 = 1 : size(x_b,1)
    for y1 = 1 : size(x_b,2)
        input = [(x_b(x1,y1)-mean1(1))/max1(1);(y_b(x1,y1)-mean1(2))/max1(2);1];
        hd = [tanh(w1{1}*input);1];
        z_b(x1,y1) = tanh(w1{2}*hd);
    end
    %waitbar((x1)/size(x,1),wh)
    %set(wh,'name',['Progress = ' sprintf('%2.1f',(x1)/size(x,1)*100) '%']);
end
%% Adding colormap to the final figure
%figure;
sp = pcolor(x_b,y_b,z_b);
load red_black_colmap;
colormap(red_black);
shading flat;
set(gca,'XLim',[xmin xmax],'YLim',[ymin ymax]);
%%========================== Testing after training ========
fprintf('Testing the MLP ...\n');
for i = num_tr+1:num_samp,
    x = [nor_data(1:2,i);1];
    hd = [tanh(w1{1}*x);1];
    o(:,i) = tanh(w1{2}*hd);
    xx = max1(1:2,:).*x(1:2,:) + mean1(1:2,:);
    if o(:,i)>0 %myvec2int(o(:,i)) == 1,
        plot(xx(1),xx(2),'rx'); % Class 1
    end
    if o(:,i)<0 %myvec2int(o(:,i)) == -1,
        plot(xx(1),xx(2),'k+'); % Class 2
    end
end
xlabel('x');ylabel('y');
title(['Classification using MLP with dist = ',num2str(dist), ', radius = ',...
    num2str(rad), ' and width = ',num2str(width)]);
% Calculate testing error rate
for i = num_tr+1:num_samp,
    if abs(mysign(o(i)) - nor_data(3,i)) > 1E-6,
        err = err + 1;
    end
end
fprintf(' ------------------------------------\n');
fprintf(' Points tested : %d\n',num_te);
fprintf(' Error points : %d (%5.2f%%)\n',err,(err/num_te)*100);
fprintf(' ------------------------------------\n');

fprintf('Mission accomplished!\n');
fprintf('_________________________________________\n');
%%======= Plot the decision boundary ====
%% Add a contour to show the boundary
contour(x_b,y_b,z_b,[0 0],'k','Linewidth',1);
%contour(x_b,y_b,z_b,[-1 -1],'k:','Linewidth',2);
%contour(x_b,y_b,z_b,[1 1],'k:','Linewidth',2);
set(gca,'XLim',[xmin xmax],'YLim',[ymin ymax]);
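The main program calls two helper functions, annealing and mysign, that are not listed here. Minimal sketches consistent with how they are used (the comment above says annealing returns a learning-rate schedule decreasing linearly from its first to its second argument over num_Epoch steps, and mysign must map outputs to the ±1 class labels) might look like:

function y = annealing(eta_start, eta_end, n)
% Linearly spaced learning-rate schedule (sketch; the original is not listed in the text)
y = linspace(eta_start, eta_end, n);
end

function y = mysign(x)
% Map network outputs to +/-1 class labels (sketch; the original is not listed in the text)
y = ones(size(x));
y(x < 0) = -1;
end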

Figure 6.7 Decision boundary for the half-moon classification
