1. Sparse Autoencoder

  I didn't expect that, after all these years, I would be working through UFLDL again. Let's get through it quickly.

  I never understood this first exercise before. Looking at it now, the main problem is that the derivation in the notes is incomplete and appears rather abruptly; it only makes sense when read together with Tom Mitchell's "Machine Learning". In fact, the backpropagation material in UFLDL is essentially drawn from that book. The key thing I had never grasped was the concept of the residual (delta); after reading Mitchell I finally get it: it is a quantity defined with respect to a unit's total input. Still, the UFLDL derivation of this part feels weak, and it is probably exactly where I got stuck back then.
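  For my own reference, here is the definition that finally made the concept click, written in UFLDL's notation (this is my summary, not the notes' full derivation): the residual of unit $i$ in layer $l$ is the derivative of the cost with respect to that unit's total input, computed backwards from the output layer:

\[
\delta_i^{(l)} = \frac{\partial}{\partial z_i^{(l)}} J(W,b;x,y), \qquad
\delta^{(n_l)} = -\bigl(y - a^{(n_l)}\bigr)\odot f'\!\bigl(z^{(n_l)}\bigr), \qquad
\delta^{(l)} = \Bigl(\bigl(W^{(l)}\bigr)^{\top}\delta^{(l+1)}\Bigr)\odot f'\!\bigl(z^{(l)}\bigr).
\]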

  • Problems:

1) Writing the BP code myself is still somewhat difficult, especially writing it fully in vectorized (matrix) form; I should rewrite it from scratch later.

2) Since I am not familiar with MATLAB, there is still a lot in the overall code that I do not fully understand.


  UFLDL Sparse Autoencoder

  The code for this exercise is sparseae_exercise.zip; you need to fill in the code in sampleIMAGES.m, sparseAutoencoderCost.m, and computeNumericalGradient.m.

The complete code is as follows:

  lxsparseae_exercise.m is my own practice script for inspecting the provided resources.

  IMAGES contains 10 grayscale 512x512 images (already whitened). They are stored as doubles, which is why negative values appear.

  I used to feel the derivation was problematic; summarizing now, there were two main reasons. 1. UFLDL's description of BP is too brief, so the complete derivation is hard to follow. 2. I was not familiar with MATLAB, so some statements could not be understood even at a surface level.

  Although I still do not understand every detailed statement, it will have to do for now.

  Below are the code files for this exercise. First, sampleIMAGES.m:

function patches = sampleIMAGES()
% sampleIMAGES
% Returns 10000 patches for training

load IMAGES;    % load images from disk 

patchsize = 8;  % we'll use 8x8 patches 
numpatches = 10000;

% Initialize patches with zeros.  Your code will fill in this matrix--one
% column per patch, 10000 columns. 
patches = zeros(patchsize*patchsize, numpatches);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Fill in the variable called "patches" using data 
%  from IMAGES.  
%  
%  IMAGES is a 3D array containing 10 images
%  For instance, IMAGES(:,:,6) is a 512x512 array containing the 6th image,
%  and you can type "imagesc(IMAGES(:,:,6)), colormap gray;" to visualize
%  it. (The contrast on these images look a bit off because they have
%  been preprocessed using using "whitening."  See the lecture notes for
%  more details.) As a second example, IMAGES(21:30,21:30,1) is an image
%  patch corresponding to the pixels in the block (21,21) to (30,30) of
%  Image 1
%These images have already been whitened, so they contain both positive and negative values

% tic;

for imageNum=1:size(IMAGES,3)%iterate over every image
    %the image size is already known, but a general-purpose routine is preferred
    [sizeRow,sizeCol]=size(IMAGES(:,:,imageNum));
    for patchNum=1:numpatches/size(IMAGES,3)%sample an equal number of patches from each image
        
%{        
        randx=randi(512-8+1);%random start index; the image size is known here, so the value is hard-coded
        randy=randi(512-8+1);
%1.
%         tempPatch=IMAGES(randx:randx+7,randy:randy+7,imageNum);%extract an 8x8 patch
%         patches(:,(imageNum-1)*1000+patchNum)=reshape(tempPatch,64,1);%reshape must be given both output dimensions

%2.
%         patches(:,(imageNum-1)*1000+patchNum)=reshape(IMAGES(randx:randx+7,randy:randy+7,imageNum),64,1);%reshape must be given both output dimensions
%       Compared with 1, version 2 drops the intermediate variable tempPatch and runs slightly faster, so avoid creating unnecessary temporaries.
        
%}
        
%3.
%        patches(:,(imageNum-1)*1000+patchNum)=reshape(IMAGES(randi(512-8+1):randi(512-8+1)+7,randi(512-8+1):randi(512-8+1)+7,imageNum),64,1);%reshape must be given both output dimensions
%       Version 3 does not work: each call to randi returns a different value, so the row/column ranges are usually not 8 pixels long and the reshape to 64x1 fails.

%4.
    %one could hard-code 512-8 here, but a general-purpose version is preferred
    xPos=randi(sizeRow-patchsize+1);
    yPos=randi(sizeCol-patchsize+1);
    patches(:,(imageNum-1)*(numpatches/size(IMAGES,3))+patchNum)=reshape(IMAGES(xPos:xPos+patchsize-1,yPos:yPos+patchsize-1,imageNum),patchsize*patchsize,1);
    end
end

% toc;
% disp(['sampleIMAGES running time: ',num2str(toc)]);

%% ---------------------------------------------------------------
% For the autoencoder to work well we need to normalize the data
% Specifically, since the output of the network is bounded between [0,1]
% (due to the sigmoid activation function), we have to make sure 
% the range of pixel values is also bounded between [0,1]
patches = normalizeData(patches);

end


%% ---------------------------------------------------------------
function patches = normalizeData(patches)

% Squash data to [0.1, 0.9] since we use sigmoid as the activation
% function in the output layer

% Remove DC (mean of images). 
patches = bsxfun(@minus, patches, mean(patches));

% Truncate to +/-3 standard deviations and scale to -1 to 1
pstd = 3 * std(patches(:));
patches = max(min(patches, pstd), -pstd) / pstd;

% Rescale from [-1,1] to [0.1,0.9]
patches = (patches + 1) * 0.4 + 0.1;

end

  sparseAutoencoderCost.m: this function is easier to follow together with the blog post "SparseAutoEncoder 稀疏编码详解 (Andrew Ng course series)". I am still not fluent with the matrix operations, but it will do for now.

STEP 2: Implement sparseAutoencoderCost, the sparse cost function

1) Unroll the parameter vector back into the weight matrices and bias vectors

2) Compute the cost function and the gradients

Because of the figures and matrix layouts, my original write-up was made in PowerPoint and can only be shown here as pasted images; message me if you need the PPT.
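  For completeness, the objective being implemented (transcribed from the UFLDL notes; the notation here is my own shorthand) is

\[
J_{\text{sparse}}(W,b) \;=\; \frac{1}{m}\sum_{i=1}^{m}\tfrac{1}{2}\bigl\|h_{W,b}(x^{(i)})-x^{(i)}\bigr\|^{2}
\;+\;\lambda\cdot\tfrac{1}{2}\Bigl(\|W^{(1)}\|_{F}^{2}+\|W^{(2)}\|_{F}^{2}\Bigr)
\;+\;\beta\sum_{j=1}^{\text{hiddenSize}}\mathrm{KL}\bigl(\rho\,\|\,\hat\rho_{j}\bigr),
\]
\[
\mathrm{KL}\bigl(\rho\,\|\,\hat\rho_{j}\bigr)=\rho\log\frac{\rho}{\hat\rho_{j}}+(1-\rho)\log\frac{1-\rho}{1-\hat\rho_{j}},
\qquad
\hat\rho_{j}=\frac{1}{m}\sum_{i=1}^{m}a_{j}^{(2)}\bigl(x^{(i)}\bigr),
\]

where \(m\) is the number of training patches and \(\rho\) is sparsityParam. The three terms correspond to J_cost, lambda*J_weight and beta*J_sparse in the code below.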

  sparseAutoencoderCost.m

function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64) 
% hiddenSize: the number of hidden units (probably 25) 
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                           notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data.  So, data(:,i) is the i-th training example. 
  
% The input theta is a vector (because minFunc expects the parameters to be a vector). 
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this 
% follows the notation convention of the lecture notes. 



%W1 is [hiddenSize, visibleSize], W2 is [visibleSize, hiddenSize]
%b1 is [hiddenSize, 1], b2 is [visibleSize, 1]
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);



% Cost and gradient variables (your code needs to compute these values). 
% Here, we initialize them to zeros. 
cost = 0;
W1grad = zeros(size(W1)); 
W2grad = zeros(size(W2));
b1grad = zeros(size(b1)); 
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
%                and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc.  Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1.  I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b) 
% with respect to the input parameter W1(i,j).  Thus, W1grad should be equal to the term 
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2 
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
% 
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2. 
% 



%I still could not write this entirely on my own and had to consult other people's blogs.
%The tricky part is keeping track of how the matrix dimensions change; the page
%http://www.cnblogs.com/yymn/articles/4956333.html
%describes those dimension changes very well.

J_cost=0;   %squared reconstruction error
J_weight=0; %weight decay
J_sparse=0; %sparsity penalty

%get the data dimension and the number of training examples
[~,dataNum]=size(data);

%forward propagation
%W1 is [hiddenSize, visibleSize], data is [visibleSize, dataNum]
%b1 is [hiddenSize, 1]
%z2 is [hiddenSize, dataNum]
z2=W1*data+repmat(b1,1,dataNum);
a2=sigmoid(z2);
%W2 is [visibleSize, hiddenSize], b2 is [visibleSize, 1]
%z3 is [visibleSize, dataNum]
z3=W2*a2+repmat(b2,1,dataNum);
a3=sigmoid(z3);

%squared reconstruction error term
%sum over a matrix adds along columns first; the double sum below adds the squared differences over every dimension of every example
J_cost=(0.5/dataNum)*sum(sum((a3-data).^2));

%weight decay term
%since this is a sum of squares, the order of summation does not matter
%W1 is [hiddenSize, visibleSize], W2 is [visibleSize, hiddenSize]
%sum adds along columns first and then rows, i.e. it sums the squared weights of every connection,
%which matches the double sum in the UFLDL formula
J_weight=(1/2)*(sum(sum(W1.^2))+sum(sum(W2.^2)));

%sparsity penalty
%first compute the average activation of the hidden units; for this three-layer network that is layer 2
%(for a deeper network it would be all the hidden layers)
%see the earlier "Autoencoders and Sparsity" section of the notes for the details

%a2 is [hiddenSize, dataNum], so summing along dimension 2 and dividing by the number of examples gives each hidden unit's average activation
%rho is [hiddenSize, 1]
%some implementations write (1/m).*sum(a2,2); the form below seems simpler
rho=sum(a2,2)/dataNum;
%I first reached for .* and ./, then realized that plain * and / are fine between a scalar and a matrix.
%I originally wrote the whole formula inline, but that was messy and hard to read, so calling a helper function is cleaner:
%J_sparse=sum(sparsityParam*log(sparsityParam/rho)+(1-sparsityParam)*log((1-sparsityParam)/(1-rho)));

J_sparse=KL(sparsityParam,rho);

%total cost
cost=J_cost+lambda*J_weight+beta*J_sparse;

%backpropagation
%first compute the delta (residual) of the output layer
%delta is the derivative of the total error with respect to each neuron's total input; by the chain rule
%that derivative also picks up the derivative of the activation function, which is where the two factors below come from
%I still feel the UFLDL derivation is incomplete and its definition of delta is loose; Mitchell's book is clearer
%delta3 is [visibleSize, dataNum]
delta3=-(data-a3).*sigmoidDer(z3);

%sparsity term added to the hidden-layer residual
%UFLDL does not give the detailed derivation once the sparsity penalty is added; I wrote a small script in this file
%to plot the sparsity term of the cost and of delta. The cost-term plot matches the one in UFLDL, but the delta term does not,
%so I do not fully understand the line below yet; I will revisit it when I learn more about sparsity.
deltaSparsity=beta*(-sparsityParam./rho+(1-sparsityParam)./(1-rho));

%W2 is [visibleSize, hiddenSize], delta3 is [visibleSize, dataNum]
%deltaSparsity is [hiddenSize, 1], z2 is [hiddenSize, dataNum]
%delta2 is [hiddenSize, dataNum]
delta2=(W2'*delta3+repmat(deltaSparsity,1,dataNum)).*sigmoidDer(z2);
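%(My own note, not from the assignment handout: the sparsity term above comes from differentiating
% beta*KL(sparsityParam, rho) through rho = sum(a2,2)/dataNum. The chain rule produces the factor
% beta*(-sparsityParam./rho + (1-sparsityParam)./(1-rho)) -- i.e. deltaSparsity -- together with a
% 1/dataNum; that 1/dataNum is exactly the averaging applied to W1grad and b1grad below, so folding
% deltaSparsity into the per-example delta2, as done here, still yields the correct gradient.)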

%compute W1grad
%W1grad is [hiddenSize, visibleSize]
W1grad=W1grad+delta2*data';
W1grad = (1/dataNum)*W1grad+lambda*W1;

%compute b1grad
b1grad = b1grad+sum(delta2,2);
b1grad = (1/dataNum)*b1grad;%the gradient of b is a vector, so the deltas are summed across examples (along each row) above

%compute W2grad
W2grad=W2grad+delta3*a2';
W2grad = (1/dataNum)*W2grad+lambda*W2;

%compute b2grad
b2grad = b2grad+sum(delta3,2);
b2grad = (1/dataNum)*b2grad;%the gradient of b is a vector, so the deltas are summed across examples (along each row) above


%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc).  Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)). 

function sigm = sigmoid(x)
  
    sigm = 1 ./ (1 + exp(-x));
end



%The sigmoid above comes with the starter code; the two functions below are my own.
function kl = KL(p,pj)
    %p is the target sparsity (a scalar); pj is the vector of average hidden-unit activations
    kl=sum(p.*log(p./pj)+(1.-p).*log((1.-p)./(1.-pj)));
end

function sigmDer = sigmoidDer(x)
  
    sigmDer = sigmoid(x).*(1-sigmoid(x));
end

  computeNumericalGradient.m: at first I could not follow this function because I had not understood that J is itself a function (a function handle).

function numgrad = computeNumericalGradient(J, theta)
% numgrad = computeNumericalGradient(J, theta)
% theta: a vector of parameters
% J: a function that outputs a real-number. Calling y = J(theta) will return the
% function value at theta. 
  
% Initialize numgrad with zeros
numgrad = zeros(size(theta));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: 
% Implement numerical gradient checking, and return the result in numgrad.  
% (See Section 2.3 of the lecture notes.)
% You should write code so that numgrad(i) is (the numerical approximation to) the 
% partial derivative of J with respect to the i-th input argument, evaluated at theta.  
% I.e., numgrad(i) should be the (approximately) the partial derivative of J with 
% respect to theta(i).
%                
% Hint: You will probably want to compute the elements of numgrad one at a time. 


%running the code and inspecting the dimensions makes this section clearer
epsilon=1e-4;
%theta is a 3289x1 vector (64*25*2+64+25), hence size(theta,1) below
n=size(theta,1);
E=eye(n);
for i=1:n
    delta=E(:,i)*epsilon;
    numgrad(i)=(J(theta+delta)-J(theta-delta))/(epsilon*2.0);
end



%% ---------------------------------------------------------------
end
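
  As a quick sanity check of computeNumericalGradient (my own example, not part of the assignment), a simple function with a known gradient can be used:

% For J(x) = x'*x the analytic gradient is 2*x, so the two columns printed
% below should agree to many decimal places.
testTheta   = [1; 2; 3];
numgradTest = computeNumericalGradient(@(x) x'*x, testTheta);
disp([numgradTest, 2*testTheta]);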

  train.m: I just added a few comments.

clear;clc;close all;
%% CS294A/CS294W Programming Assignment Starter Code

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  programming assignment. You will need to complete the code in sampleIMAGES.m,
%  sparseAutoencoderCost.m and computeNumericalGradient.m. 
%  For the purpose of completing the assignment, you do not need to
%  change the code in this file. 
%
%%======================================================================
%% STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to 
%  change the parameters below.

visibleSize = 8*8;   % number of input units 
hiddenSize = 25;     % number of hidden units 
sparsityParam = 0.01;   % desired average activation of the hidden units.
                     % (This was denoted by the Greek alphabet rho, which looks like a lower-case "p",
		     %  in the lecture notes). 
  
% lambda = 0;             
lambda = 0.0001;     % weight decay parameter  

% beta = 0;
beta = 3;            % weight of sparsity penalty term       

%%======================================================================
%% STEP 1: Implement sampleIMAGES
%
%  After implementing sampleIMAGES, the display_network command should
%  display a random sample of 200 patches from the dataset

patches = sampleIMAGES;
display_network(patches(:,randi(size(patches,2),200,1)),8);
set(gcf,'NumberTitle','off');
set(gcf,'Name','Patches sampled from the original images');

%  Obtain random parameters theta
%theta is a 3289x1 vector: the unrolled parameters of this three-layer autoencoder
%3289 = 64*25*2 + 64 + 25
theta = initializeParameters(hiddenSize, visibleSize);

%%======================================================================
%% STEP 2: Implement sparseAutoencoderCost
%
%  You can implement all of the components (squared error cost, weight decay term,
%  sparsity penalty) in the cost function at once, but it may be easier to do 
%  it step-by-step and run gradient checking (see STEP 3) after each step.  We 
%  suggest implementing the sparseAutoencoderCost function using the following steps:
%
%  (a) Implement forward propagation in your neural network, and implement the 
%      squared error term of the cost function.  Implement backpropagation to 
%      compute the derivatives.   Then (using lambda=beta=0), run Gradient Checking 
%      to verify that the calculations corresponding to the squared error cost 
%      term are correct.
%
%  (b) Add in the weight decay term (in both the cost function and the derivative
%      calculations), then re-run Gradient Checking to verify correctness. 
%
%  (c) Add in the sparsity penalty term, then re-run Gradient Checking to 
%      verify correctness.
%
%  Feel free to change the training settings when debugging your
%  code.  (For example, reducing the training set size or 
%  number of hidden units may make your code run faster; and setting beta 
%  and/or lambda to zero may be helpful for debugging.)  However, in your 
%  final submission of the visualized weights, please use parameters we 
%  gave in Step 0 above.

[costBegin, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, lambda, ...
                                     sparsityParam, beta, patches);

%%======================================================================

%The block below checks the computeNumericalGradient and sparseAutoencoderCost functions against each other.
%Once the check passes, it can be commented out and need not be run again.
%{
%% STEP 3: Gradient Checking
%
% Hint: If you are debugging your code, performing gradient checking on smaller models 
% and smaller training sets (e.g., using only 10 training examples and 1-2 hidden 
% units) may speed things up.

% First, lets make sure your numerical gradient computation is correct for a
% simple function.  After you have implemented computeNumericalGradient.m,
% run the following: 

%checkNumericalGradient() validates computeNumericalGradient.m on a simple function; run it while first writing that file,
%after which it need not be run again
%checkNumericalGradient();

% Now we can use it to check your cost function and derivative calculations
% for the sparse autoencoder. 

%The syntax of the call below confused me at first; after asking senior classmate Wang Xin it became clear.
%computeNumericalGradient(J, theta) takes two arguments: the first, J, is a function, the second a parameter vector.
%Here an anonymous function is passed as the first argument, with x standing for the parameter being varied;
%whenever that handle is later called, the supplied argument replaces x.
%Figuring this out was not easy, and back then I barely knew MATLAB, so this code was genuinely hard to read.
numgrad = computeNumericalGradient( @(x) sparseAutoencoderCost(x, visibleSize, ...
                                                  hiddenSize, lambda, ...
                                                  sparsityParam, beta, ...
                                                  patches), theta);

%prints the numerical and analytical gradients side by side; both have the same size as theta (3289x1)
% Use this to visually compare the gradients side by side
disp([numgrad grad]); 

% Compare numerically computed gradients with the ones obtained from backpropagation
diff = norm(numgrad-grad)/norm(numgrad+grad);

fprintf('Norm of the difference between numerical and analytical gradient (should be < 1e-9)\n\n');

disp(diff); % Should be small. In our implementation, these values are
            % usually less than 1e-9.

            % When you got this working, Congratulations!!! 

%%======================================================================
%}

%% STEP 4: After verifying that your implementation of
%  sparseAutoencoderCost is correct, You can start training your sparse
%  autoencoder with minFunc (L-BFGS).

%  Randomly initialize the parameters
theta = initializeParameters(hiddenSize, visibleSize);

%  Use minFunc to minimize the function
addpath minFunc/
options.Method = 'lbfgs'; % Here, we use L-BFGS to optimize our cost
                          % function. Generally, for minFunc to work, you
                          % need a function pointer with two outputs: the
                          % function value and the gradient. In our problem,
                          % sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;	  % Maximum number of iterations of L-BFGS to run 
options.display = 'on';


[opttheta, costEnd] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                   visibleSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, patches), ...
                              theta, options);

%%======================================================================
%% STEP 5: Visualization 

W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
figure;
display_network(W1', 12); 
set(gcf,'NumberTitle','off');
set(gcf,'Name','First-layer weights learned by the sparse autoencoder');

print -djpeg weights.jpg   % save the visualization to a file 

  The experimental result: the learned features resemble Gabor filters (localized edge detectors).
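
  A small extra check I find helpful (not part of the assignment): each row of W1 holds one hidden unit's 64 weights, so reshaping a row to 8x8 shows that unit's preferred input patch. The unit index 1 below is arbitrary.

figure;
imagesc(reshape(W1(1,:), 8, 8));   % weights of hidden unit 1 viewed as an 8x8 patch
colormap gray; axis image; axis off;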

