8. Learning color features with Sparse Autoencoders
- Summary:
1) The activation of the autoencoder's final output layer is changed from sigmoid to linear, so the decoder can reproduce arbitrary output values instead of being restricted to the input range [0,1]; this in turn requires changing the output-layer residual in backpropagation.
2) The experiment also saves the full network weights optTheta, the ZCA whitening matrix ZCAWhite, and the patch mean meanPatch to STL10Features.mat for use in the next exercise.
3)
4)
5)
- Questions:
1)
2)
3)
4)
5)
- Ideas:
1)
2)
3)
4)
5)
Code to download for this exercise: linear_decoder_exercise.zip stl10_patches_100k.zip
linearDecoderExercise.m
clear;close all;clc;
disp('Currently running script:');
disp([mfilename('fullpath'),'.m']);
%% CS294A/CS294W Linear Decoder Exercise
% Instructions
% ------------
%
% This file contains code that helps you get started on the
% linear decoder exercise. For this exercise, you will only need to modify
% the code in sparseAutoencoderLinearCost.m. You will not need to modify
% any code in this file.
%%======================================================================
%% STEP 0: Initialization
% Here we initialize some parameters used for the exercise.
imageChannels = 3; % number of channels (rgb, so 3)
patchDim = 8; % patch dimension
numPatches = 100000; % number of patches
visibleSize = patchDim * patchDim * imageChannels; % number of input units 192
outputSize = visibleSize; % number of output units
hiddenSize = 400; % number of hidden units
sparsityParam = 0.035; % desired average activation of the hidden units.
lambda = 3e-3; % weight decay parameter
beta = 5; % weight of sparsity penalty term
epsilon = 0.1; % epsilon for ZCA whitening
%%======================================================================
%{
%% STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear decoder,
% and check gradients
% You should copy sparseAutoencoderCost.m from your earlier exercise
% and rename it to sparseAutoencoderLinearCost.m.
% Then you need to rename the function from sparseAutoencoderCost to
% sparseAutoencoderLinearCost, and modify it so that the sparse autoencoder
% uses a linear decoder instead. Once that is done, you should check
% your gradients to verify that they are correct.
% NOTE: Modify sparseAutoencoderCost first!
% To speed up gradient checking, we will use a reduced network and some
% dummy patches
debugHiddenSize = 5;
debugvisibleSize = 8;
patches = rand([8 10]);
theta = initializeParameters(debugHiddenSize, debugvisibleSize);
[cost, grad] = sparseAutoencoderLinearCost(theta, debugvisibleSize, debugHiddenSize, ...
lambda, sparsityParam, beta, ...
patches);
% Check gradients
numGrad = computeNumericalGradient( @(x) sparseAutoencoderLinearCost(x, debugvisibleSize, debugHiddenSize, ...
lambda, sparsityParam, beta, ...
patches), theta);
% Use this to visually compare the gradients side by side
disp([numGrad grad]);
diff = norm(numGrad-grad)/norm(numGrad+grad);
% Should be small. In our implementation, these values are usually less than 1e-9.
disp(diff);
assert(diff < 1e-9, 'Difference too large. Check your gradient computation again');
% NOTE: Once your gradients check out, you should run step 0 again to
% reinitialize the parameters
%}
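The (commented-out) Step 1 above compares the analytic gradient against a central-difference estimate via a relative norm. A minimal NumPy sketch of the same idea, using a toy quadratic cost (the function and names here are illustrative, not from the exercise code):

```python
import numpy as np

def numerical_gradient(f, theta, eps=1e-4):
    """Central-difference approximation of the gradient of f at theta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return grad

# Toy cost f(theta) = sum(theta.^2), whose analytic gradient is 2*theta
f = lambda t: np.sum(t ** 2)
theta = np.array([1.0, -2.0, 0.5])
num_grad = numerical_gradient(f, theta)
ana_grad = 2 * theta

# Same relative-difference criterion as the exercise; should be far below 1e-9
diff = np.linalg.norm(num_grad - ana_grad) / np.linalg.norm(num_grad + ana_grad)
```

The relative norm, rather than an absolute difference, makes the check insensitive to the overall scale of the gradient.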
%%======================================================================
%% STEP 2: Learn features on small patches
% In this step, you will use your sparse autoencoder (which now uses a
% linear decoder) to learn features on small patches sampled from related
% images.
%% STEP 2a: Load patches
% In this step, we load 100k patches sampled from the STL10 dataset and
% visualize them. Note that these patches have been scaled to [0,1]
%a set of 100,000 small 8x8 patches sampled from the larger 96x96 STL-10 images
load stlSampledPatches.mat
figure;
displayColorNetwork(patches(:, 1:100));
set(gcf,'NumberTitle','off');
set(gcf,'Name','Raw image patches');
%% STEP 2b: Apply preprocessing
% In this sub-step, we preprocess the sampled patches, in particular,
% ZCA whitening them.
%
% In a later exercise on convolution and pooling, you will need to replicate
% exactly the preprocessing steps you apply to these patches before
% using the autoencoder to learn features on them. Hence, we will save the
% ZCA whitening and mean image matrices together with the learned features
% later on.
% Subtract mean patch (hence zeroing the mean of the patches)
%the earlier natural-image exercises removed the mean per sample; here the
%mean is removed per dimension instead
%patches holds the patch samples, with size [192,100000]
%below: the mean of the patches along each dimension
%meanPatch has size [192,1]
meanPatch = mean(patches, 2);
patches = bsxfun(@minus, patches, meanPatch);
% Apply ZCA whitening
sigma = patches * patches' / numPatches;
[u, s, v] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u';
patches = ZCAWhite * patches;
figure;
displayColorNetwork(patches(:, 1:100));
set(gcf,'NumberTitle','off');
set(gcf,'Name','Image patches after ZCA whitening');
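The preprocessing above (per-dimension mean removal, then whitening with U(S+eps)^(-1/2)U') can be mirrored in NumPy. A sketch on synthetic data (the random data and the tiny epsilon are stand-ins; the exercise uses real patches and epsilon = 0.1):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10000
patches = rng.standard_normal((d, n))      # synthetic stand-in for the 192 x 100000 patches

# Subtract the mean of each dimension (row), as in the exercise
meanPatch = patches.mean(axis=1, keepdims=True)
patches = patches - meanPatch

# ZCA whitening: sigma = X*X'/n, whitening matrix U*(S + eps)^(-1/2)*U'
epsilon = 1e-6                             # tiny here so the identity check below is tight
sigma = patches @ patches.T / n
u, s, _ = np.linalg.svd(sigma)
ZCAWhite = u @ np.diag(1.0 / np.sqrt(s + epsilon)) @ u.T
whitened = ZCAWhite @ patches

# Covariance of the whitened data is (up to epsilon) the identity
cov = whitened @ whitened.T / n
```

Unlike PCA whitening, the ZCA matrix is symmetric and maps back into the original pixel space, which is why the whitened patches are still displayable as images.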
%% STEP 2c: Learn features
% You will now use your sparse autoencoder (with linear decoder) to learn
% features on the preprocessed patches. This should take around 45 minutes.
% theta has size [154192,1]; 154192 = 2*192*400 + 192 + 400
theta = initializeParameters(hiddenSize, visibleSize);
tic;
% Use minFunc to minimize the function
addpath minFunc/
options = struct;
options.Method = 'lbfgs';
options.maxIter = 5; % note: the exercise calls for 400 iterations; reduced here for a quick run
options.display = 'on';
[optTheta, cost] = minFunc( @(p) sparseAutoencoderLinearCost(p, ...
visibleSize, hiddenSize, ...
lambda, sparsityParam, ...
beta, patches), ...
theta, options);
disp(['L-BFGS training of the linear-decoder autoencoder took: ',num2str(toc),' s']);
% Save the learned features and the preprocessing matrices for use in
% the later exercise on convolution and pooling
fprintf('Saving learned features and preprocessing matrices...\n');
%save the full network weights optTheta, the ZCA whitening matrix ZCAWhite, and the patch mean meanPatch
save('STL10Features.mat', 'optTheta', 'ZCAWhite', 'meanPatch');
fprintf('Saved\n');
%% STEP 2d: Visualize learned features
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
figure;
%we display W*ZCAWhite because every sample fed to the network is whitened
%first, so the network's response to a raw input x is W*ZCAWhite*x;
%W*ZCAWhite alone is therefore the effective weight matrix on raw samples
displayColorNetwork( (W*ZCAWhite)');
set(gcf,'NumberTitle','off');
set(gcf,'Name','Features learned by the linear-decoder autoencoder');
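The reshape and slicing in Step 2d assume the parameter vector layout [W1(:); W2(:); b1; b2]. A NumPy mirror of that layout, using Fortran (column-major) order to match MATLAB's reshape; the sizes are the ones from this exercise:

```python
import numpy as np

hiddenSize, visibleSize = 400, 192
# Layout of theta: [W1(:); W2(:); b1; b2]
total = 2 * hiddenSize * visibleSize + hiddenSize + visibleSize  # 154192

theta = np.arange(total, dtype=float)
hv = hiddenSize * visibleSize
W1 = theta[:hv].reshape(hiddenSize, visibleSize, order="F")
W2 = theta[hv:2 * hv].reshape(visibleSize, hiddenSize, order="F")
b1 = theta[2 * hv:2 * hv + hiddenSize]
b2 = theta[2 * hv + hiddenSize:]

# Unrolling in the same order reconstructs theta exactly
roundtrip = np.concatenate([W1.ravel(order="F"), W2.ravel(order="F"), b1, b2])
```

The round trip confirms that unrolling with grad = [W1grad(:); W2grad(:); b1grad(:); b2grad(:)] is the exact inverse of the reshapes at the top of the cost function.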
sparseAutoencoderLinearCost.m
function [cost,grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
lambda, sparsityParam, beta, data)
%{
function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
lambda, sparsityParam, beta, data)
% -------------------- YOUR CODE HERE --------------------
% Instructions:
% Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
% earlier exercise onto this file, renaming the function to
% sparseAutoencoderLinearCost, and changing the autoencoder to use a
% linear decoder.
% -------------------- YOUR CODE HERE --------------------
end
%}
% visibleSize: the number of input units (192 in this exercise)
% hiddenSize: the number of hidden units (400 in this exercise)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
% notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 192x100000 matrix containing the training data. So, data(:,i) is the i-th training example.
% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.
%W1 has size [hiddenSize, visibleSize], W2 has size [visibleSize, hiddenSize]
%b1 has size [hiddenSize, 1], b2 has size [visibleSize, 1]
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);
% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));
%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
% and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc. Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1. I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j). Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.
%
%Relative to sparseAutoencoderCost.m, only two lines change:
% the decoder activation, from a3=sigmoid(z3); to a3=z3;
% the output residual, from delta3=-(data-a3).*sigmoidDer(z3); to delta3=-(data-a3);
%I still could not write this from scratch and had to consult someone else's blog.
%The tricky part is really the dimension bookkeeping, which I do not yet fully grasp;
%the page http://www.cnblogs.com/yymn/articles/4956333.html
%describes the dimension changes very well.
J_cost=0;%reconstruction error
J_weight=0;%weight decay
J_sparse=0;%sparsity penalty
%get the data dimension and the number of samples
[~,dataNum]=size(data);
%forward propagation
%W1 has size [hiddenSize, visibleSize], data has size [visibleSize, dataNum]
%b1 has size [hiddenSize, 1]
%z2 has size [hiddenSize, dataNum]
z2=bsxfun(@plus,W1*data,b1);
a2=sigmoid(z2);
%W2 has size [visibleSize, hiddenSize], b2 has size [visibleSize, 1]
%z3 has size [visibleSize, dataNum]
z3=bsxfun(@plus,W2*a2,b2);
%linear decoder
a3=z3;
%reconstruction error
%sum over a matrix adds down the columns first; here the per-sample,
%per-dimension squared differences are summed and then accumulated
J_cost=(0.5/dataNum)*sum(sum((a3-data).^2));
%weight decay term
%since this is a sum of squares, the order of accumulation does not matter
%W1 has size [hiddenSize, visibleSize], W2 has size [visibleSize, hiddenSize]
%sum adds down columns first, then across rows, i.e. it accumulates the
%coefficients of every connection between the two layers,
%which matches the formula in UFLDL
J_weight=(1/2)*(sum(sum(W1.^2))+sum(sum(W2.^2)));
%sparsity penalty
%first compute the average activation of the network's hidden units; for this
%three-layer network that is layer 2 (for an n-layer network, layers 2 to n-1)
%see the earlier chapter on autoencoders and sparsity for the background
%a2 has size [hiddenSize, dataNum], so sum along the rows and divide by the
%number of samples to get each hidden unit's average activation
%rho has size [hiddenSize, 1]
%some code writes (1/m).*sum(a2,2); but the form below seems simpler
rho=sum(a2,2)/dataNum;
%at first I reached for .* and ./, but multiplying or dividing a matrix by a scalar only needs * and /
%I also first wrote the whole formula inline, but that was messy and hard to read; calling a helper function is cleaner
%J_sparse=sum(sparsityParam*log(sparsityParam./rho)+(1-sparsityParam)*log((1-sparsityParam)./(1-rho)));
J_sparse=KL(sparsityParam,rho);
%total cost
cost=J_cost+lambda*J_weight+beta*J_sparse;
%now backpropagation
%first compute the delta of the last layer
%delta is the derivative of the total error with respect to each neuron's input;
%by the chain rule it picks up the derivative of the activation function,
%which is where the terms below come from
%(the UFLDL derivation still feels incomplete to me and its definition of
%delta loose; a textbook treatment is worth reading)
%delta3 has size [visibleSize, dataNum]
delta3=-(data-a3);
%add the sparsity term to the residual
%UFLDL does not give a detailed derivation after adding sparsity; I wrote a
%small script in this file to plot the sparsity term in the cost and in delta
%the cost-term plot matches the figure in UFLDL, but the delta term does not,
%so I do not fully understand the line below; I will revisit it after
%learning more about sparsity
deltaSparsity=beta*(-sparsityParam./rho+(1-sparsityParam)./(1-rho));
%W2 has size [visibleSize, hiddenSize], delta3 has size [visibleSize, dataNum]
%deltaSparsity has size [hiddenSize, 1], z2 has size [hiddenSize, dataNum]
%delta2 has size [hiddenSize, dataNum]
delta2=(W2'*delta3+repmat(deltaSparsity,1,dataNum)).*sigmoidDer(z2);
%compute W1grad
%W1grad has size [hiddenSize, visibleSize]
W1grad=W1grad+delta2*data';
W1grad = (1/dataNum)*W1grad+lambda*W1;
%compute b1grad
b1grad = b1grad+sum(delta2,2);
b1grad = (1/dataNum)*b1grad;%the bias gradient is a vector, so the values in each row are summed (over samples)
%compute W2grad
W2grad=W2grad+delta3*a2';
W2grad = (1/dataNum)*W2grad+lambda*W2;
%compute b2grad
b2grad = b2grad+sum(delta3,2);
b2grad = (1/dataNum)*b2grad;%the bias gradient is a vector, so the values in each row are summed (over samples)
%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];
end
%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).
function sigm = sigmoid(x)
sigm = 1 ./ (1 + exp(-x));
end
%the function above ships with the starter code; the two below are my own
function kl = KL(p,pj)
%p is the target sparsity constant; pj is the average activation of the hidden units
kl=sum(p.*log(p./pj)+(1.-p).*log((1.-p)./(1.-pj)));
end
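As a sanity check on the KL helper above: the penalty is exactly zero when every hidden unit's average activation equals the target, and positive otherwise. The same formula in NumPy with hypothetical activation values:

```python
import numpy as np

def kl_penalty(p, pj):
    """Sum over hidden units of KL(Bernoulli(p) || Bernoulli(pj))."""
    pj = np.asarray(pj, dtype=float)
    return np.sum(p * np.log(p / pj) + (1 - p) * np.log((1 - p) / (1 - pj)))

p = 0.035                                     # sparsityParam from the exercise
at_target = kl_penalty(p, np.full(5, p))      # average activation hits the target
off_target = kl_penalty(p, np.full(5, 0.5))   # activations far from the target
```

This is why the penalty only vanishes when the hidden units are as sparse as requested; any deviation, in either direction, increases the cost.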
function sigmDer = sigmoidDer(x)
sigmDer = sigmoid(x).*(1-sigmoid(x));
end
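Putting the whole file together, the cost and gradient above can be ported to NumPy and verified against a numerical gradient on a tiny network, as in Step 1. This is a sketch for checking the math (C-order reshapes, self-consistent but not byte-compatible with the MATLAB parameter vector), not a drop-in replacement:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_linear_cost(theta, v, h, lam, rho_t, beta, data):
    """Cost and unrolled gradient of a sparse autoencoder with a linear decoder."""
    hv = h * v
    W1 = theta[:hv].reshape(h, v)
    W2 = theta[hv:2 * hv].reshape(v, h)
    b1, b2 = theta[2 * hv:2 * hv + h], theta[2 * hv + h:]
    m = data.shape[1]

    z2 = W1 @ data + b1[:, None]
    a2 = sigmoid(z2)
    a3 = W2 @ a2 + b2[:, None]                  # linear decoder: a3 = z3

    rho = a2.sum(axis=1) / m                    # average hidden activations
    kl = np.sum(rho_t * np.log(rho_t / rho)
                + (1 - rho_t) * np.log((1 - rho_t) / (1 - rho)))
    cost = (0.5 / m) * np.sum((a3 - data) ** 2) \
        + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) + beta * kl

    delta3 = -(data - a3)                       # no sigmoid' factor at the output
    sparse = beta * (-rho_t / rho + (1 - rho_t) / (1 - rho))
    delta2 = (W2.T @ delta3 + sparse[:, None]) * a2 * (1 - a2)

    W1g = delta2 @ data.T / m + lam * W1
    W2g = delta3 @ a2.T / m + lam * W2
    grad = np.concatenate([W1g.ravel(), W2g.ravel(),
                           delta2.sum(axis=1) / m, delta3.sum(axis=1) / m])
    return cost, grad

# Gradient check on a tiny network, mirroring Step 1 of the exercise
rng = np.random.default_rng(0)
v, h = 8, 5
data = rng.random((v, 10))
theta = rng.standard_normal(2 * h * v + h + v) * 0.01
args = (v, h, 3e-3, 0.035, 5.0, data)
cost, grad = sparse_linear_cost(theta, *args)

eps = 1e-4
num = np.zeros_like(theta)
for i in range(theta.size):
    e = np.zeros_like(theta)
    e[i] = eps
    num[i] = (sparse_linear_cost(theta + e, *args)[0]
              - sparse_linear_cost(theta - e, *args)[0]) / (2 * eps)
diff = np.linalg.norm(num - grad) / np.linalg.norm(num + grad)   # should be tiny
```

The only differences from the earlier sigmoid-decoder cost are the two lines marked above: a3 = z3 and the missing sigmoid-derivative factor in delta3, exactly as noted at the top of this file.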
