
[Machine Learning Experiment 5] Naive Bayes (Spam Filtering)

This experiment uses a generative learning algorithm to process the data (filtering spam email).

Discriminative learning algorithm: learns p(y|x) directly (for example, logistic regression), i.e., a direct mapping from the input to {0,1}.

Generative learning algorithm: models p(x|y) (and p(y)). Examples are Gaussian Discriminant Analysis (GDA) and Naive Bayes; the former handles continuous data, the latter discrete data.

Simply put, a discriminative algorithm learns a boundary that separates the two classes, while a generative algorithm builds a separate model for each possible outcome, matches the input against each model, and computes the corresponding probabilities.

Take the benign vs. malignant tumor problem as an example: we build model1 for benign tumors (y=0) and model2 for malignant tumors (y=1). p(x|y=0) is the probability of the features given a benign tumor, and p(x|y=1) the probability given a malignant one; Bayes' rule then yields the probability that a tumor is malignant, p(y=1|x). Bayes' rule is as follows:

p(y=1|x) = \frac{p(x|y=1)\,p(y=1)}{p(x)} = \frac{p(x|y=1)\,p(y=1)}{p(x|y=1)\,p(y=1) + p(x|y=0)\,p(y=0)}
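As a quick numeric illustration of the rule (all the numbers below are hypothetical, not from the experiment):

% Hypothetical numbers, for illustration only
p_y1 = 0.3;                 % prior p(y=1)
px_y1 = 0.8;                % likelihood p(x|y=1) from model2
px_y0 = 0.1;                % likelihood p(x|y=0) from model1
px = px_y1*p_y1 + px_y0*(1 - p_y1);   % p(x) by total probability
p_y1_x = px_y1*p_y1 / px              % Bayes' rule: about 0.774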

This experiment mainly uses Naive Bayes to handle discrete data; GDA works similarly, only the parameters are computed differently.

The exercise is as follows:

[Figure: screenshot of the exercise statement; see the original link below]

Data link:

http://openclassroom.stanford.edu/MainFolder/courses/MachineLearning/exercises/ex6materials/ex6DataPrepared.zip

Original exercise link:

http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex6/ex6.html

Theory:

Detailed definitions and derivations are omitted here (consult other references if interested); the formulas for estimating the Naive Bayes parameters are given directly, with an explanation:

\phi_{j|y=1} = \frac{\sum_{i=1}^{m} 1\{x_j^{(i)}=1 \wedge y^{(i)}=1\}}{\sum_{i=1}^{m} 1\{y^{(i)}=1\}}, \qquad
\phi_{j|y=0} = \frac{\sum_{i=1}^{m} 1\{x_j^{(i)}=1 \wedge y^{(i)}=0\}}{\sum_{i=1}^{m} 1\{y^{(i)}=0\}}, \qquad
\phi_y = \frac{\sum_{i=1}^{m} 1\{y^{(i)}=1\}}{m}

Here the indicator 1{·} means: 1{true} = 1, 1{false} = 0.

m is the number of training examples; φ_{j|y} denotes the probability that the j-th feature x_j equals 1 given the class; ^ means "and". So in practice we count the examples within each class in which x_j appears and divide by the matching denominator to get φ_{j|y=1} and φ_{j|y=0}.
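As a minimal sketch of these estimates on a toy dataset (the matrix X, the labels y, and all values below are made up for illustration):

% Toy data: 4 training examples, 3 binary features
X = [1 0 1;
     1 1 0;
     0 0 1;
     1 0 0];
y = [1; 1; 0; 0];

phi_y  = sum(y == 1) / length(y);             % p(y=1)
phi_j1 = sum(X(y == 1, :), 1) / sum(y == 1)   % phi_{j|y=1}: [1.0 0.5 0.5]
phi_j0 = sum(X(y == 0, :), 1) / sum(y == 0)   % phi_{j|y=0}: [0.5 0.0 0.5]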

Then, by the formula:

p(y=1|x) = \frac{\left(\prod_{j=1}^{n} p(x_j|y=1)\right) p(y=1)}{\left(\prod_{j=1}^{n} p(x_j|y=1)\right) p(y=1) + \left(\prod_{j=1}^{n} p(x_j|y=0)\right) p(y=0)}

we combine the probabilities of the n features into the posterior. Note that the x in this formula is the vector x; because Naive Bayes assumes the features are conditionally independent given the class, the likelihood can be computed as a product.

During classification we may encounter features that never appeared in training. The formulas above would assign them probability 0, which is unreasonable, so we introduce Laplace smoothing (for a variable z taking one of k possible values, add 1 to each count):

\phi_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)}=j\} + 1}{m + k}
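For example, here is a tiny sketch of how smoothing removes the zero (the counts are hypothetical; a binary feature has k = 2 outcomes):

counts = [2 0 1];              % how often each feature was 1 among 2 spam examples
n_pos  = 2;                    % number of examples with y=1
phi_raw      = counts / n_pos              % [1.00 0    0.50]: the zero kills the product
phi_smoothed = (counts + 1) / (n_pos + 2)  % [0.75 0.25 0.50]: no zeros remain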

Finally, here are the formulas used in this experiment:

\phi_{k|y=1} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n_i} 1\{x_j^{(i)}=k \wedge y^{(i)}=1\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)}=1\}\, n_i + |V|}, \qquad
\phi_{k|y=0} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n_i} 1\{x_j^{(i)}=k \wedge y^{(i)}=0\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)}=0\}\, n_i + |V|}, \qquad
\phi_y = \frac{\sum_{i=1}^{m} 1\{y^{(i)}=1\}}{m}

Here m is the number of documents (this experiment has 700 training documents), k indexes the dictionary tokens, n_i is the number of tokens in the i-th document, and |V| is the size of the dictionary.
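A minimal sketch of the φ_{k|y=1} estimate on a toy word-count matrix (all values hypothetical; the variable names are my own):

% Toy word-count matrix: 3 documents x 4 dictionary tokens
counts = [2 0 1 0;
          0 3 0 1;
          1 1 1 1];
y = [1; 0; 1];                      % documents 1 and 3 are spam
V = size(counts, 2);                % dictionary size |V|

n = sum(counts, 2);                 % n_i: tokens per document
spam_wc = sum(n(y == 1));           % total tokens across spam documents
phi_k1 = (sum(counts(y == 1, :), 1) + 1) / (spam_wc + V)   % entries sum to 1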

Finally, convert to logarithms for the computation (x_k below denotes the number of times token k appears in the document):

\log\big(p(x|y=1)\,p(y=1)\big) = \sum_{k=1}^{|V|} x_k \log \phi_{k|y=1} + \log p(y=1)

The training code:

% train.m
% Exercise 6: Naive Bayes text classifier

clear all; close all; clc

% store the number of training examples
numTrainDocs = 700;

% store the dictionary size
numTokens = 2500;

% read the features matrix
M = dlmread('train-features.txt', ' ');
spmatrix = sparse(M(:,1), M(:,2), M(:,3), numTrainDocs, numTokens);
train_matrix = full(spmatrix);

% train_matrix now contains information about the words within the emails
% the i-th row of train_matrix represents the i-th training email
% for a particular email, the entry in the j-th column tells
% you how many times the j-th dictionary word appears in that email

% read the training labels
train_labels = dlmread('train-labels.txt');
% the i-th entry of train_labels now indicates whether document i is spam

% Find the indices for the spam and nonspam labels
spam_indices = find(train_labels);
nonspam_indices = find(train_labels == 0);

% Calculate probability of spam
prob_spam = length(spam_indices) / numTrainDocs;

% Sum the number of words in each email by summing along each row of
% train_matrix
email_lengths = sum(train_matrix, 2);   % n_i: number of tokens in each email
% Now find the total word counts of all the spam emails and nonspam emails
spam_wc = sum(email_lengths(spam_indices));       % sum of n_i over emails with y(i)=1
nonspam_wc = sum(email_lengths(nonspam_indices)); % sum of n_i over emails with y(i)=0

% Calculate the probability of the tokens in spam emails
% numerator corresponds to sum_i sum_j 1{x_j(i)=k and y(i)=1} + 1
prob_tokens_spam = (sum(train_matrix(spam_indices, :)) + 1) ./ ...
    (spam_wc + numTokens);
% Now the k-th entry of prob_tokens_spam represents phi_(k|y=1)

% Calculate the probability of the tokens in non-spam emails
prob_tokens_nonspam = (sum(train_matrix(nonspam_indices, :)) + 1) ./ ...
    (nonspam_wc + numTokens);
% Now the k-th entry of prob_tokens_nonspam represents phi_(k|y=0)
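One optional sanity check after training (my own addition, not part of the original exercise): under this model the token probabilities for each class form a distribution, so each vector should sum to 1:

% Sanity check: each class-conditional token distribution sums to 1
assert(abs(sum(prob_tokens_spam)    - 1) < 1e-10);
assert(abs(sum(prob_tokens_nonspam) - 1) < 1e-10);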

The classification (test) code:

% test.m
% Exercise 6: Naive Bayes text classifier

% read the test matrix in the same way we read the training matrix
N = dlmread('test-features.txt', ' ');
spmatrix = sparse(N(:,1), N(:,2), N(:,3));
test_matrix = full(spmatrix);

% Store the number of test documents and the size of the dictionary
numTestDocs = size(test_matrix, 1);
numTokens = size(test_matrix, 2);

% The output vector is a vector that will store the spam/nonspam prediction
% for the documents in our test set.
output = zeros(numTestDocs, 1);

% Calculate log p(x|y=1) + log p(y=1)
% and log p(x|y=0) + log p(y=0)
% for every document
% make your prediction based on what value is higher
% (note that this is a vectorized implementation and there are other
%  ways to calculate the prediction)
log_a = test_matrix*(log(prob_tokens_spam))' + log(prob_spam);
log_b = test_matrix*(log(prob_tokens_nonspam))' + log(1 - prob_spam);
output = log_a > log_b;

% Read the correct labels of the test set
test_labels = dlmread('test-labels.txt');

% Compute the error on the test set
% A document is misclassified if its predicted label is different from
% the actual label, so count the number of 1's from an exclusive "or"
numdocs_wrong = sum(xor(output, test_labels))

% Print out error statistics on the test set
fraction_wrong = numdocs_wrong/numTestDocs
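An optional breakdown of the errors (my own addition, assuming the output and test_labels variables from the script above):

% Split the errors: false positives (nonspam flagged as spam) vs.
% false negatives (spam that slipped through)
false_positives = sum(output == 1 & test_labels == 0)
false_negatives = sum(output == 0 & test_labels == 1)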


           
p(x|y=1) = \prod_{k=1}^{|V|} \phi_{k|y=1}^{\,x_k}

Note that test_matrix here holds our test data: the entry in row i, column k is x_k, the number of times token k appears in document i. Because the features are independent given the class, p(x|y=1) is the product over tokens of φ_{k|y=1} raised to the power x_k, as in the formula above (the x there is the whole count vector). Taking logarithms turns this product into the matrix product test_matrix * log(prob_tokens_spam)' used in the code.
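A tiny demonstration of that equivalence (hypothetical values; phi and x stand in for one class's probabilities and one document's counts):

phi = [0.5 0.3 0.2];            % token probabilities for one class
x   = [2 0 1];                  % token counts in one test document
p_direct = prod(phi .^ x)       % prod_k phi_k ^ x_k            -> 0.05
p_logged = exp(x * log(phi)')   % same value via the log form   -> 0.05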

Finally, compare the predictions against the hand-labeled ground truth.


Error rate: 1.9%
