
Learning Vector Quantization (LVQ)

I. Self-organizing competitive neural network: net=newc([0 1;0 1],2)

1. Network structure

A single-layer network of neurons; every input node is connected to every output node.

The competition takes place among the neurons themselves: when a neuron wins, its output is 1, otherwise it is 0.

2. Training process

Weight adjustment, the Kohonen learning rule: dw=learnk(w,p,[],[],a,[],[],[],[],[],lp,[]);

Only the weights of the winning neuron are adjusted, so that the network's weights move toward the input vectors. As a result, the winning neuron becomes more likely to win again when similar vectors (those accommodated by its bias b) appear later. In the end the input vectors are classified.

Bias (threshold) adjustment, the conscience learning rule: [dB,LS]=learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS)

It makes the biases of frequently active neurons smaller and smaller, so that rarely active neurons become active more often.
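A minimal sketch of the above (the data, neuron count, and epoch count are made up for illustration; newc, train, sim, and vec2ind belong to the older Neural Network Toolbox API used throughout this post):

P = rand(2,100);                % 100 random 2-D input vectors in [0,1]x[0,1]
net = newc([0 1; 0 1], 2);      % competitive layer with 2 neurons
net.trainParam.epochs = 50;     % made-up epoch count
net = train(net, P);            % learnk pulls the winners' weights toward the inputs,
                                % learncon adjusts the biases of rarely active neurons
a = sim(net, P);                % one-hot outputs: the winning neuron outputs 1
classes = vec2ind(a);           % class index assigned to each input vector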

II. Self-organizing feature map (SOFM) neural network

1. Network structure

Its structure mimics the two-dimensional lattice arrangement of neurons in the cerebral cortex.

The input layer and the competitive layer form a single-layer neural network:

Input layer: a one-dimensional array of n neurons

Competitive layer: neurons arranged in a two-dimensional topology, possibly with local connections between them

Topology functions: rectangular grid gridtop(); hexagonal grid hextop(); random arrangement randtop()

Inter-neuron distance functions: Euclidean distance dist(); box distance boxdist(); link distance linkdist(); Manhattan distance mandist(); see the sketch below for how these fit together.
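As an illustration, here is how the topology and distance functions combine in an SOFM built with newsom (older toolbox API; the grid size and data are arbitrary):

P = rand(2,400);                                        % 400 random 2-D input vectors
pos = hextop(3,4);                                      % 2x12 neuron positions on a hexagonal grid
D = linkdist(pos);                                      % 12x12 link distances between those neurons
net = newsom([0 1; 0 1], [3 4], 'hextop', 'linkdist');  % 3x4 SOFM using the same topology and distance
net.trainParam.epochs = 100;
net = train(net, P);                                    % trained with learnsom (see the next subsection)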

2. Training process

Weights are adjusted for the winning node and for all nodes within a radius k of it, and k shrinks over training until it contains only the winning node itself. In this way, for a given class of patterns, the winning node produces the strongest response while neighboring nodes respond more weakly.

Weight adjustment with learnsom():

Ordering phase: the learning rate decays from its initial value down to the tuning-phase learning rate; the neighborhood size shrinks from the maximum inter-neuron distance to 1.

Tuning phase: the learning rate decreases slowly toward 0; the neighborhood size stays at 1.
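For reference, the learnsom learning parameters that control these two phases are listed below with their documented defaults (parameter names and values come from the toolbox documentation of learnsom, not from this article, so treat them as an assumption):

lp.order_lr    = 0.9;     % ordering-phase learning rate, decays toward tune_lr
lp.order_steps = 1000;    % length of the ordering phase in steps
lp.tune_lr     = 0.02;    % tuning-phase learning rate, decays slowly toward 0
lp.tune_nd     = 1;       % tuning-phase neighborhood distance stays at 1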

III. Learning Vector Quantization (LVQ) neural network

1. Network structure

A competitive layer (the hidden layer) followed by a linear layer.

One target class of the linear layer corresponds to several subclasses in the competitive layer.

2. Learning rule

The competitive layer automatically learns to classify the input vectors, and this classification depends only on the distances between the input vectors. If two input vectors are very close to each other, the competitive layer puts them in the same class.
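A minimal sketch of such a network built with newlvq (the cluster data, the number of competitive subclasses, and the class percentages are made up for illustration):

P  = [rand(2,50)*0.4, rand(2,50)*0.4+0.6];   % two clusters of 2-D points
Tc = [ones(1,50), 2*ones(1,50)];             % desired class index for each point
T  = ind2vec(Tc);                            % one-of-two target vectors for the linear layer
net = newlvq(minmax(P), 4, [0.5 0.5]);       % 4 competitive subclasses, split evenly between 2 classes
net.trainParam.epochs = 100;
net = train(net, P, T);
Yc = vec2ind(sim(net, P));                   % predicted class index for each input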

A detailed introduction is available at: http://www.doc88.com/p-8495503025413.html

function [dw,ls] = learnlv3(w,p,z,n,a,t,e,gW,gA,d,lp,ls)
%LEARNLV3 LVQ3 weight learning function.
%
%   Syntax
%   
%     [dW,LS] = learnlv3(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
%     info = learnlv3(code)
%
%   Description
%
%     LEARNLV3 is the OLVQ weight learning function.
%
%     LEARNLV3(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
%       W  - SxR weight matrix (or Sx1 bias vector).
%       P  - RxQ input vectors (or ones(1,Q)).
%       Z  - SxQ weighted input vectors.
%       N  - SxQ net input vectors.
%       A  - SxQ output vectors.
%       T  - SxQ layer target vectors.
%       E  - SxQ layer error vectors.
%       gW - SxR weight gradient with respect to performance.
%       gA - SxQ output gradient with respect to performance.
%       D  - SxS neuron distances.
%       LP - Learning parameters: LP.lr and LP.window.
%       LS - Learning state, initially should be = [].
%     and returns,
%       dW - SxR weight (or bias) change matrix.
%       LS - New learning state.
%
%     Learning occurs according to LEARNLV3's learning parameters,
%     shown here with their default values.
%       LP.lr     - 0.01 - Learning rate
%       LP.window - 0.25 - Window size
%
%     LEARNLV3(CODE) returns useful information for each CODE string:
%       'pnames'    - Returns names of learning parameters.
%       'pdefaults' - Returns default learning parameters.
%       'needg'     - Returns 1 if this function uses gW or gA.
%
%   Examples
%
%     Here we define a sample input P, output A, weight matrix W, and
%     output gradient gA for a layer with a 2-element input and 3 neurons.
%     We also define the learning rate LR.
%
%       p = rand(2,1);
%       w = rand(3,2);
%       n = negdist(w,p);
%       a = compet(n);
%       gA = [-1;1; 1];
%       lp.lr = 0.5;
%       lp.window = 0.25;
%
%     Since LEARNLV3 only needs these values to calculate a weight
%     change (see Algorithm below), we will use them to do so.
%
%       dW = learnlv3(w,p,[],n,a,[],[],[],gA,[],lp,[])
%
%   Network Use
%
%     You can create a standard LVQ network with NEWLVQ, and then set its
%     weight learning functions to LEARNLV3.
%
%     To prepare the weights of layer i of a custom network
%     to learn with LEARNLV3:
%     1) Set NET.trainFcn to 'trainwb1'.
%        (NET.trainParam will automatically become TRAINWB1's default parameters.)
%     2) Set NET.adaptFcn to 'adaptwb'.
%        (NET.adaptParam will automatically become ADAPTWB's default parameters.)
%     3) Set each NET.inputWeights{i,j}.learnFcn to 'learnlv3'.
%        Set each NET.layerWeights{i,j}.learnFcn to 'learnlv3'.
%        (Each weight learning parameter property will automatically
%        be set to LEARNLV3's default parameters.)
%
%     To train the network (or enable it to adapt):
%     1) Set NET.trainParam (or NET.adaptParam) properties as desired.
%     2) Call TRAIN (or ADAPT).
%
%   Algorithm
%
%     LEARNLV3 calculates the weight change dW for a given neuron from
%     the neuron's input P, output A, training vector target Ttrain, output
%     connection matrix C and learning rate LR
%     according to the OLVQ rule, given i the index of the neuron whose
%     output a(i) is 1:
%
%       dw(i,:) = +lr*(p-w(i,:)) if C(:,i) = Ttrain
%               = -lr*(p-w(i,:)) if C(:,i) ~= Ttrain
%
%     if C(:,i) ~= Ttrain then the index j is found of the neuron with the
%     greatest net input n(k), from the neurons whose C(:,k)=Ttrain.  This
%     neuron's weights are updated as follows:
%
%       dw(j,:) = +lr*(p-w(j,:))
%
%   See also LEARNLV1, ADAPTWB, TRAINWB, ADAPT, TRAIN.

% Mark Beale, 11-31-97
% Copyright (c) 1992-1998 by The MathWorks, Inc.
% $Revision: 1.1.1.1 $

% FUNCTION INFO
% =============
if isstr(w)
  switch lower(w)
  case 'name'
      dw = 'Learning Vector Quantization 3';
  case 'pnames'
    dw = {'lr';'window'};
  case 'pdefaults'
    lp.lr = 0.01;
    lp.window = 0.25;
    dw = lp;
  case 'needg'
    dw = 1;
  otherwise
    error('NNET:Arguments','Unrecognized code.')
  end
  return
end


% CALCULATION
% ===========

[S,R] = size(w);
Q = size(p,2);
pt = p';
dw = zeros(S,R);
% For each q...
for q=1:Q

  % Find closest neuron k1 (the winning neuron)
  nq = n(:,q);
  k1 = find(nq == max(nq));
  k1 = k1(1);

  % Find next closest neuron k2 (the runner-up neuron)
  nq(k1) = -inf;
  k2 = find(nq == max(nq));
  k2 = k2(1);


  % and if x falls into the window...
  d1 = abs(n(k1,q)); % Shorter distance
  d2 = abs(n(k2,q)); % Greater distance

  if d2/d1 > ((1-lp.window)/(1+lp.window))

      % then move incorrect neuron away from input,
      % and the correct neuron towards the input
      ptq = pt(q,:);
      if gA(k1,q) ~= gA(k2,q)
          % indicate the incorrect neuron with i, the other with j
          if gA(k1,q) ~= 0
              i = k1;
              j = k2;
          else
              i = k2;
              j = k1;
          end
          dw(i,:) = dw(i,:) - lp.lr*(ptq - w(i,:));
          dw(j,:) = dw(j,:) + lp.lr*(ptq - w(j,:));
      else
          dw(k1,:) = dw(k1,:) + 0.11*lp.window*(ptq-w(k1,:));
       %   dw(k2,:) = dw(k2,:) + 0.11*lp.window*(ptq-w(k2,:));
      end
  end
end
           

The code above is reposted from: http://blog.csdn.net/cxf7394373/article/details/6400372
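As a hypothetical usage sketch (not part of the original post): with learnlv3.m on the MATLAB path, it can be attached to an LVQ network roughly as the Network Use notes in the help text describe; the data and parameter values below are assumptions.

P   = [rand(2,50)*0.4, rand(2,50)*0.4+0.6];      % made-up two-cluster data
T   = ind2vec([ones(1,50), 2*ones(1,50)]);
net = newlvq(minmax(P), 4, [0.5 0.5]);
net.inputWeights{1,1}.learnFcn = 'learnlv3';     % swap in learnlv3 for the default LVQ learning function
net.inputWeights{1,1}.learnParam.lr = 0.01;      % assumed values, matching the defaults above
net.inputWeights{1,1}.learnParam.window = 0.25;
net.trainParam.epochs = 100;
net = train(net, P, T);

Depending on the toolbox version, NET.trainFcn and NET.adaptFcn may also need to be set as described in the Network Use section of the help text.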
