Exercise: Self-Taught Learning
References:
http://deeplearning.stanford.edu/wiki/index.php/Self-Taught_Learning
http://deeplearning.stanford.edu/wiki/index.php/Exercise:Self-Taught_Learning
http://deeplearning.stanford.edu/wiki/index.php/Exercise:Sparse_Autoencoder
http://deeplearning.stanford.edu/wiki/index.php/Exercise:Softmax_Regression
Experiment overview:
The task is handwritten digit recognition on the MNIST database: digits 0-9, 60,000 training samples, 10,000 test samples. Each sample is a 28×28 image.
Environment: MATLAB 2010a
Method:
This section is mainly a combined application of the material from the previous sections.
1. Use the samples with digit labels 5-9 as the unlabeled input for self-taught learning, learning a feature representation with a sparse autoencoder.
2. Split the samples with digit labels 0-4 in half: one half trains the softmax regression model, the other half is held out for testing.
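The split described in the two steps above can be sketched in NumPy (an illustrative sketch with synthetic labels; the actual exercise does this in MATLAB, as shown in stlExercise.m further down):

```python
import numpy as np

# Synthetic stand-in for mnistLabels: a handful of digits 0-9.
labels = np.array([0, 5, 3, 9, 4, 7, 1, 6, 2, 8])

# Digits 5-9 form the unlabeled set used for feature learning;
# digits 0-4 form the labeled set, split in half for softmax train/test.
unlabeled_idx = np.flatnonzero(labels >= 5)
labeled_idx   = np.flatnonzero(labels <= 4)

n_train   = round(labeled_idx.size / 2)
train_idx = labeled_idx[:n_train]
test_idx  = labeled_idx[n_train:]

# As in the MATLAB code, shift digit labels 0-4 to class labels 1-5.
train_labels = labels[train_idx] + 1

print(unlabeled_idx.size, train_idx.size, test_idx.size)
```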
Background:
1. Feature learning
Self-taught learning is an unsupervised method that learns a feature representation via a sparse autoencoder. The sparse autoencoder is a three-layer network: the first layer takes the input sample, the second layer is the feature (hidden) layer, and the third layer reconstructs the input.

From the learned parameters W1, b1, W2, b2, we can compute an activation vector a for any input sample, which gives a better representation of the image. Feeding the features learned by this model into the downstream classifier for training and testing therefore yields better classification results.
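As a minimal sketch of this step (in NumPy rather than MATLAB, with made-up toy dimensions), the activation vector a for each input column x is sigmoid(W1 x + b1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_activation(W1, b1, data):
    """a = sigmoid(W1 @ x + b1), applied to every column of data."""
    return sigmoid(W1 @ data + b1[:, None])

# Toy sizes: 4 visible units, 3 hidden units, 2 samples.
rng  = np.random.default_rng(0)
W1   = rng.normal(size=(3, 4))
b1   = np.zeros(3)
data = rng.normal(size=(4, 2))

a = hidden_activation(W1, b1, data)
print(a.shape)  # (3, 2): one feature vector per input sample
```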

2. Unsupervised learning
Two important concepts need to be distinguished here: self-taught learning and semi-supervised learning. Self-taught learning does not require the unlabeled data to come from the same distribution as the labeled data, and in a classification problem the classes of the unlabeled samples need not belong to any of the target categories.
Semi-supervised learning, by contrast, requires the unlabeled data to come from the same distribution as the labeled data, and the classes of the unlabeled samples must lie within the set of classification categories. The feature-learning stage of self-taught learning is thus fully unsupervised. In this section we learn features from the samples labeled 5-9 and then train and test the classifier on samples labeled 0-4; this poses no problem, because the feature-learning step does not need data covering all the target labels, which gives the model stronger generalization.
Notes:
1. We train the sparse autoencoder with L-BFGS, but this method seems quite memory-hungry: on my 32-bit machine with under 4 GB of RAM it ran out of memory around iteration 40, and even shrinking the dataset to a few hundred samples did not help. I ended up running it on a colleague's 64-bit machine with 4 GB of RAM, where it took roughly 30-40 minutes. The likely explanation for the difference is that a 32-bit MATLAB process is limited to about 2 GB of address space regardless of installed RAM, while a 64-bit process is not.
2. Since this section reuses the sparse autoencoder and softmax regression material from earlier exercises, see the pages linked above for that code.
Some functions:
unique: returns the distinct elements of a matrix, sorted in ascending order
numel: returns the number of elements in a matrix
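For readers following along in Python, the rough NumPy equivalents of these two functions (an illustrative aside, not part of the exercise) are:

```python
import numpy as np

A = np.array([[3, 1, 2],
              [1, 3, 2]])

print(np.unique(A))  # distinct elements in ascending order: [1 2 3]
print(A.size)        # total number of elements, like MATLAB's numel: 6
```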
Results:
Below is a visualization of the weights learned by the autoencoder (figure not included in this extract).
Test accuracy: about 98.3%, consistent with the value reported in the exercise.

Code:
stlExercise.m
%% CS294A/CS294W Self-taught Learning Exercise

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  self-taught learning. You will need to complete code in feedForwardAutoencoder.m
%  You will also need to have implemented sparseAutoencoderCost.m and
%  softmaxCost.m from previous exercises.
%
%% ======================================================================
%  STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to
%  change the parameters below.

inputSize  = 28 * 28;
numLabels  = 5;
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
                     % (This was denoted by the Greek alphabet rho, which
                     % looks like a lower-case "p", in the lecture notes).
lambda = 3e-3;       % weight decay parameter
beta = 3;            % weight of sparsity penalty term
maxIter = 400;

%% ======================================================================
%  STEP 1: Load data from the MNIST database
%
%  This loads our training and test data from the MNIST database files.
%  We have sorted the data for you in this so that you will not have to
%  change it.

% Load MNIST database files
mnistData   = loadMNISTImages('train-images-idx3-ubyte');
mnistLabels = loadMNISTLabels('train-labels-idx1-ubyte');

% Set Unlabeled Set (All Images)

% Simulate a Labeled and Unlabeled set
labeledSet   = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5);

numTrain = round(numel(labeledSet)/2);
trainSet = labeledSet(1:numTrain);
testSet  = labeledSet(numTrain+1:end);

unlabeledData = mnistData(:, unlabeledSet);

trainData   = mnistData(:, trainSet);
trainLabels = mnistLabels(trainSet)' + 1; % Shift Labels to the Range 1-5

testData   = mnistData(:, testSet);
testLabels = mnistLabels(testSet)' + 1;   % Shift Labels to the Range 1-5

% Output Some Statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));

%% ======================================================================
%  STEP 2: Train the sparse autoencoder
%  This trains the sparse autoencoder on the unlabeled training
%  images.

% Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);

%% ----------------- YOUR CODE HERE ----------------------
% Find opttheta by running the sparse autoencoder on
% unlabeledTrainingImages
opttheta = theta;

addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = 35;   % Maximum number of iterations of L-BFGS to run
options.display = 'on';
[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                   inputSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, unlabeledData), ...
                            theta, options);

%% -----------------------------------------------------

% Visualize weights
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');

%%======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%
%  You need to complete the code in feedForwardAutoencoder.m so that the
%  following command will extract features from the data.

trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);

testFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                      testData);

%%======================================================================
%% STEP 4: Train the softmax classifier

softmaxModel = struct;
%% ----------------- YOUR CODE HERE ----------------------
% Use softmaxTrain.m from the previous exercise to train a multi-class
% classifier.
% Use lambda = 1e-4 for the weight regularization for softmax
% You need to compute softmaxModel using softmaxTrain on trainFeatures and
% trainLabels

% function [softmaxModel] = softmaxTrain(inputSize, numClasses, lambda, inputData, labels, options)
options.maxIter = 100;
lambda = 1e-4;
numClasses = numel(unique(trainLabels));
softmaxModel = softmaxTrain(hiddenSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);
%% -----------------------------------------------------

%%======================================================================
%% STEP 5: Testing

%% ----------------- YOUR CODE HERE ----------------------
% Compute Predictions on the test set (testFeatures) using softmaxPredict
% and softmaxModel

% function [pred] = softmaxPredict(softmaxModel, data)
pred = softmaxPredict(softmaxModel, testFeatures);

%% -----------------------------------------------------

% Classification Score
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));

% (note that we shift the labels by 1, so that digit 0 now corresponds to
%  label 1)
%
% Accuracy is the proportion of correctly classified images
% The results for our implementation was:
%
% Accuracy: 98.3%
feedForwardAutoencoder.m
function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% data: Our matrix containing the training data as columns.  So, data(:,i) is the i-th training example.

% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.
activation = sigmoid(bsxfun(@plus, W1 * data, b1));

%-------------------------------------------------------------------

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
