Cerebral Cortex: Principles of Operation, Appendix 4 (Part 1)
Autoassociation or Attractor Networks
(This material is based on Appendix 4 of Cerebral Cortex: Principles of Operation (Rolls 2016a), "Simulation software for neuronal network models", and describes how to implement several of the book's neural networks in MATLAB. These are personal study notes written for my own review; corrections of any errors are welcome.)
First, assign initial values to some variables: the number of neurons N, the number of synapses per neuron nSyn, the synaptic weight matrix SynMat, the number of training/testing patterns nPatts, the learning rate, the sparseness, nFlipBits (the number of bits flipped to produce the distorted recall cues, i.e. how much each test pattern differs from its training pattern), and the number of times the network is allowed to update during testing.
clear all; close all hidden; format compact; % format bank;
fig = 1;
% rng(1); % comment this in only to set the random number generator for code development
N = 100;                 % number of neurons in the fully connected autoassociation net
nSyn = N;                % number of synapses on each neuron
SynMat = zeros(nSyn, N); % (Synapses = Rows, neurons = Columns)
nPatts = 10;             % the number of training and testing patterns. Suggest 10 for sparseness = 0.5, and 30 for sparseness = 0.1
Learnrate = 1 / nPatts;  % Not of especial significance in an autoassociation network, but this keeps the weights within a range
Sparseness = 0.5;        % the sparseness of the trained representation, i.e. the proportion of neurons that have high firing rates of 1
                         % Investigate values of 0.5 and 0.1
display = 1;             % 0 no display; 1 display the network
nFlipBits = 14;          % The number of bits that are flipped to produce distorted recall cues.
                         % It is suggested that this be set to close to 0.2 * N * Sparseness
nepochs = 9;             % the number of times that the network is allowed to update during test
(1) Create 10 training patterns with sparseness 0.5 (this training set is also the target output of the neurons)
Implementation: initialize the whole matrix to 0, set the first half of each column (50 elements) to 1, and then shuffle the row order of each column so that the 1s are scattered randomly within that column.
*** The code that plots the TrainPatts matrix in grayscale is omitted here, as is all of the plotting code below.
TrainPatts = zeros(N, nPatts); % This matrix stores the training patterns. Each pattern vector has N elements
for patt = 1 : nPatts
    TrainPatts(1 : N * Sparseness, patt) = 1; % the number of bits set to 1 in each pattern
    p = randperm(N); % rearrange the elements of this pattern vector in random order
    TrainPatts(:, patt) = TrainPatts(p, patt);
end
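As a quick sanity check (my addition, not part of the book's listing), every column of TrainPatts should contain exactly N * Sparseness ones:

% Sanity check (my addition, not in Rolls' listing): every training pattern
% should contain exactly N * Sparseness active neurons (50 for these settings).
assert(all(sum(TrainPatts, 1) == N * Sparseness), 'Unexpected pattern sparseness');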
(2) Transform TrainPatts to generate the distorted test set (i.e. perturb the ideal output slightly, in order to test recall accuracy). These distorted cues are used later as the input at epoch = 1 during testing.
(In each column, first flip nFlipBits = 14 randomly chosen 1s to 0, then flip 14 randomly chosen 0s to 1.)
TrainPattsFlipped = TrainPatts;
for patt = 1 : nPatts
    synarray = randperm(nSyn);
    el = 1;
    for bit = 1 : nFlipBits % flip nFlipBits bits from 1 to 0
        while TrainPatts(synarray(el), patt) ~= 1
            el = el + 1;
            if el > nSyn
                disp('Error: too many bits being flipped');
                el = 1;
            end
        end
        TrainPattsFlipped(synarray(el), patt) = 0;
        el = el + 1;
    end
    synarray = randperm(nSyn);
    el = 1;
    for bit = 1 : nFlipBits % flip nFlipBits bits from 0 to 1
        while TrainPatts(synarray(el), patt) ~= 0
            el = el + 1;
            if el > nSyn
                disp('Error: too many bits being flipped');
                el = 1;
            end
        end
        TrainPattsFlipped(synarray(el), patt) = 1;
        el = el + 1;
    end
end
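Again as an optional check (my addition), each distorted cue should differ from its training pattern in exactly 2 * nFlipBits positions, with the sparseness unchanged:

% Optional check (my addition): nFlipBits ones were turned off and nFlipBits
% zeros were turned on, so the Hamming distance is 2 * nFlipBits per pattern
% and the number of active bits per pattern is unchanged.
HammingDist = sum(TrainPatts ~= TrainPattsFlipped, 1);
assert(all(HammingDist == 2 * nFlipBits));
assert(all(sum(TrainPattsFlipped, 1) == N * Sparseness));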
(3) Train the weight matrix SynMat
Each neuron has a single postSynRate, and the preSynRate differs from synapse to synapse within a neuron; but the synapses of all the different neurons draw on the same column vector of presynaptic rates, TrainPatts(:, patt). (I am not yet entirely clear what these two variables represent; a fuller explanation will be added later.)
(syn ~= neuron, the recurrent collaterals, and the covariance rule will likewise be explained later.)
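In the meantime, for reference, the covariance rule used in the code below can be written as (my notation, matching the code rather than a specific equation in the book):

\delta w_{ij} = k \, (y_i - a)(x_j - a)

where y_i is postSynRate (the firing of the postsynaptic neuron i), x_j is preSynRate (the firing arriving on the recurrent collateral from neuron j), a is Sparseness (the mean firing rate of these binary patterns), k is Learnrate, and w_ij is stored as SynMat(j, i).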
for patt = 1 : nPatts
    for neuron = 1 : N
        postSynRate = TrainPatts(neuron, patt); % postsynaptic firing rate. The external input to the neurons.
        for syn = 1 : nSyn
            if syn ~= neuron % avoid self connections of a recurrent collateral axon onto its sending neuron
                preSynRate = TrainPatts(syn, patt); % the presynaptic rate is the same as the postsynaptic rate because of the recurrent collaterals
                weight_change = Learnrate * (postSynRate - Sparseness) * (preSynRate - Sparseness); % use a covariance rule.
                % The sparseness is the average firing rate
                % weight_change = Learnrate * (postSynRate) * (preSynRate); % OR use a Hebb rule
                % weight_change = Learnrate * (postSynRate) * (preSynRate - Sparseness); % OR use a Hebb rule but with also heterosynaptic LTD see Rolls (2008) B.3.3.6.
                SynMat(syn, neuron) = SynMat(syn, neuron) + weight_change;
            end
        end
    end % of loop over neurons
end % of loop over patterns
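Since the covariance rule only involves the outer product of the mean-subtracted pattern with itself, the triple loop above can be collapsed into a few matrix operations. A behaviour-equivalent sketch (my vectorization, not the book's listing):

% Vectorized covariance learning (my sketch; should produce the same matrix
% as the loops above, up to floating-point rounding).
SynMatVec = zeros(nSyn, N);
for patt = 1 : nPatts
    v = TrainPatts(:, patt) - Sparseness; % mean-subtracted pattern vector
    dW = Learnrate * (v * v');            % outer product gives dW(syn, neuron)
    dW(1 : nSyn + 1 : end) = 0;           % zero the diagonal: no self-connections
    SynMatVec = SynMatVec + dW;
end

Zeroing the diagonal with linear indexing replaces the syn ~= neuron test in the loop version.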
(4) Test the recall accuracy of the network
At epoch = 1, TrainPattsFlipped is used as the input; at epoch > 1, the output Rate from the previous update is fed back as the input. After computing each neuron's activation Actvn as a dot product, the activations are normalized to [0, 1]; the half of the neurons with the highest activations are set to fire at 1 and the rest at 0 (sparseness 0.5), and the result is stored in the Rate vector (a 1 x N matrix). Comparing the output Rate with the stored patterns in TrainPatts then measures the recall accuracy of the network.
disp('Testing');
nCorrect = 0; % the number of correctly recalled patterns
for patt = 1 : nPatts
    Rate = zeros(1, N); % output firing rates
    for epoch = 1 : nepochs
        if epoch <= 1
            clamp = 1; % the recall cue or pattern is applied only at the beginning of retrieval
        else
            clamp = 0;
        end
        % calculate activation
        Actvn = zeros(1, N); % the activations of the N output neurons
        for neuron = 1 : N
            PreSynInput = Rate' + (TrainPattsFlipped(:, patt) * clamp); % the firing on the recurrent collaterals is the firing rate of the neurons
            % plus at the beginning of retrieval the recall cue pattern
            Actvn(neuron) = dot(PreSynInput, SynMat(:, neuron)); % dot product of input pattern and synaptic weight vector for one neuron
        end
        scale = 1.0 / (max(Actvn) - min(Actvn));
        Actvn = (Actvn - min(Actvn)) * scale; % scale Activation to 0-1.
        % Now convert the activation into a firing rate using a binary threshold activation function.
        % The threshold is selected in this case artificially by sorting
        % the array to determine where the threshold should be set to achieve the required sparseness.
        % In the brain, inhibitory neurons are important in setting the proportion of excitatory neurons that are active.
        tmp = sort(Actvn, 'descend');
        el = floor(length(Rate) * Sparseness);
        if el < 1
            el = 1;
        end
        Threshold = tmp(el);
        Rate = Actvn;
        for neuron = 1 : length(Rate)
            if Rate(neuron) >= Threshold % threshold binary activation function
                Rate(neuron) = 1;
            else
                Rate(neuron) = 0;
            end
        end
    end % of loop over epochs
    R = corr(TrainPatts(:, patt), Rate'); % correlation coefficient
    if R > 0.98 % the criterion of correct is a correlation of 0.98 between a recalled pattern and the stored pattern
        nCorrect = nCorrect + 1; % accumulate the number of correct recalls across patterns
    end
end % of loop over patterns
PercentCorrect = 100 * nCorrect / nPatts % display the percentage of correctly recalled patterns
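The inner neuron loop during recall is likewise just a matrix-vector product, so one retrieval update can be written compactly. A hedged sketch, assuming Rate, clamp and patt are set as in the loop above (my vectorization, not the book's listing):

% One vectorized retrieval update (my sketch).
PreSynInput = Rate' + TrainPattsFlipped(:, patt) * clamp;  % recurrent firing plus the (clamped) recall cue
Actvn = PreSynInput' * SynMat;                             % 1 x N vector of activations
Actvn = (Actvn - min(Actvn)) / (max(Actvn) - min(Actvn));  % rescale to [0, 1]
tmp = sort(Actvn, 'descend');
Threshold = tmp(max(1, floor(N * Sparseness)));            % threshold that yields the target sparseness
Rate = double(Actvn >= Threshold);                         % binary threshold activation function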
