
[Machine Learning in Action] -- Filtering Spam with Naive Bayes

We will make full use of Python's text-processing capabilities to split documents into word vectors, then use those word vectors to classify documents. We will also build a classifier and observe how well it filters a real spam e-mail dataset.

Classification based on Bayesian decision theory

Suppose we have a dataset made up of two classes of data, distributed as shown in Figure 4-1.

(Figure 4-1: scatter plot of the two classes, one drawn as dots and the other as triangles)

We use p1(x,y) to denote the probability that the data point (x,y) belongs to class 1 (the class drawn as dots in the figure), and p2(x,y) to denote the probability that it belongs to class 2 (the class drawn as triangles). For a new data point (x,y), we can then decide its class with the following rule (sketched in code after the list):

  • If p1(x,y) > p2(x,y), the class is 1.
  • If p2(x,y) > p1(x,y), the class is 2.
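
A minimal sketch of this rule in Python (p1 and p2 here are hypothetical functions standing in for whatever model supplies the two class probabilities):

def classifyPoint(x, y, p1, p2):
    # Pick whichever class has the higher probability at (x, y).
    return 1 if p1(x, y) > p2(x, y) else 2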

To compute p1 and p2 we apply Bayes' rule:

p(ci | x,y) = p(x,y | ci) p(ci) / p(x,y)
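
As a quick worked example, with made-up numbers for illustration only: suppose the prior p(c1) = 0.3, the likelihood p(x,y | c1) = 0.2, and the evidence p(x,y) = 0.1. Then the posterior is p(c1 | x,y) = 0.2 * 0.3 / 0.1 = 0.6, so the point is more likely than not to belong to class 1.

prior = 0.3        # p(c1), illustrative value only
likelihood = 0.2   # p(x,y | c1)
evidence = 0.1     # p(x,y)
posterior = likelihood * prior / evidence   # 0.6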

In this experiment we use naive Bayes for document classification, taking spam e-mail detection as the example.

Problem background: consider the message board of an online community. To keep the community healthy, we want to block abusive comments, so we need to build a fast filter: if a message uses negative or insulting language, it is flagged as inappropriate. Filtering this kind of content is a very common requirement. For this problem we define two classes, abusive and non-abusive, labeled 1 and 0 respectively.

Prepare the data: building word vectors from text

Given a piece of text, we build a word vector from the presence or absence of each word.

def loadDataSet():
    # Toy message-board posts; each inner list holds one raw sentence.
    oldPostingList = [['my dog has flea problems help please'], ['maybe not take him to dog park stupid'],
                ['my dalmation is so cute I love him'], ['stop posting stupid worthless garbage'],
                ['mr licks ate my steak how to stop him'], ['quit buying worthless dog food stupid']]
    postingList = []
    for line in oldPostingList:
        newline = line[0].split()        # split each sentence into a list of tokens
        postingList.append(newline)
    classVec = [0, 1, 0, 1, 0, 1]        # 1 = abusive, 0 = not abusive
    return postingList, classVec

def createVocabList(dataSet):
    vocabSet = set([])                        # start from an empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document)   # set union: collect every unique word
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0]*len(vocabList)       # one slot per vocabulary word
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1   # mark presence (set-of-words model)
        else:
            print "the word: %s is not in my Vocabulary" % word
    return returnVec
           
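A quick interactive check of the three functions (the exact ordering of the vocabulary list varies between runs, since it comes from a set):

listOPosts, listClasses = loadDataSet()
myVocabList = createVocabList(listOPosts)
print myVocabList                                  # every unique word across all posts
print setOfWords2Vec(myVocabList, listOPosts[0])   # 0/1 vector, one slot per vocabulary word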

Train the algorithm: computing probabilities from word vectors

p(ci | w) = p(w | ci) p(ci) / p(w), where w is the word vector of a document.


The naive Bayes classifier training function

from numpy import *

def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory)/float(numTrainDocs)   # p(c1): fraction of abusive docs
    # Naive initialization -- a single unseen word would zero out the whole product:
    # p0Num = zeros(numWords); p1Num = zeros(numWords)
    # p0Denom = 0.0; p1Denom = 0.0
    # Laplace smoothing instead:
    p0Num = ones(numWords); p1Num = ones(numWords)
    p0Denom = 2.0; p1Denom = 2.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]          # per-word counts in abusive docs
            p1Denom += sum(trainMatrix[i])   # total word count in abusive docs
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    # p1Vect = p1Num/p1Denom
    # p0Vect = p0Num/p0Denom
    p1Vect = log(p1Num/p1Denom)    # log probabilities guard against underflow
    p0Vect = log(p0Num/p0Denom)
    return p0Vect, p1Vect, pAbusive
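
Two details in trainNB0 deserve a note. Initializing the counts to ones and the denominators to 2.0 is Laplace smoothing: without it, one word that never occurs in a class would make p(w|ci) = 0 and zero out the whole product p(w0|ci)p(w1|ci)... Taking logs turns that product into a sum and avoids floating-point underflow. A minimal sketch of the underflow problem:

from math import log
probs = [0.01] * 200       # 200 small conditional probabilities
product = 1.0
for p in probs:
    product *= p           # 0.01**200 underflows to 0.0 in a float
logSum = sum(log(p) for p in probs)   # about -921, a perfectly usable number
print product, logSum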

The naive Bayes classification function:

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # log p(w|c) + log p(c); the elementwise product keeps only the words present
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0

def testingNB():
    listOPosts, listClasses = loadDataSet()
    myVocabList = createVocabList(listOPosts)
    trainMat = []
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
    p0V, p1V, pAb = trainNB0(array(trainMat), array(listClasses))
    testEntry = ['love', 'my', 'dalmation']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb)
    testEntry = ['stupid', 'garbage']
    thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
    print testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb)
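
Running testingNB() should print something like the following (the book's example run gives the same two labels):

['love', 'my', 'dalmation'] classified as:  0
['stupid', 'garbage'] classified as:  1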

Prepare the data: the bag-of-words document model

If a word appears more than once in a document, that repetition may carry information that mere presence or absence cannot express. A representation that counts occurrences is called the bag-of-words model.

def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1   # count occurrences, not just presence
    return returnVec
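
The only change from setOfWords2Vec is += in place of =, so repeated words accumulate counts. A small comparison, assuming a toy vocabulary:

vocab = ['stupid', 'garbage', 'dog']                         # toy vocabulary for illustration
print setOfWords2Vec(vocab, ['stupid', 'stupid', 'dog'])     # [1, 0, 1]
print bagOfWords2VecMN(vocab, ['stupid', 'stupid', 'dog'])   # [2, 0, 1]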

Using naive Bayes to filter spam e-mail

def textParse(bigString):
    import re
    listOfTokens = re.split(r'\W+', bigString)   # split on any run of non-word characters
    return [tok.lower() for tok in listOfTokens if len(tok) > 2]

def spamTest():
    docList = []; classList = []; fullText = []
    for i in range(1, 26):
        wordList = textParse(open('email/spam/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(1)              # 1 = spam
        wordList = textParse(open('email/ham/%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(0)              # 0 = ham
    vocabList = createVocabList(docList)
    trainingSet = range(50); testSet = []
    for i in range(10):                  # hold out 10 random e-mails for testing
        randIndex = int(random.uniform(0, len(trainingSet)))
        testSet.append(trainingSet[randIndex])
        del(trainingSet[randIndex])
    trainMat = []; trainClasses = []
    for docIndex in trainingSet:
        trainMat.append(setOfWords2Vec(vocabList, docList[docIndex]))
        trainClasses.append(classList[docIndex])
    p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
    errorCount = 0
    for docIndex in testSet:
        wordVector = setOfWords2Vec(vocabList, docList[docIndex])
        if classifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
            errorCount += 1
            print 'classification error', docList[docIndex]
    print 'the error rate is :', float(errorCount)/len(testSet)
    return float(errorCount)/len(testSet)   # returned so repeated trials can be averaged

The spamTest() function outputs the classification error rate on 10 randomly chosen e-mails. Since the e-mails are selected at random, the result may differ from run to run. When a mistake occurs, the function also prints the word list of the misclassified document, so you can see exactly which e-mail went wrong. For a better estimate of the error rate, repeat the whole procedure several times, say ten, and average the results, as in the sketch below.
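
A minimal sketch of that averaging loop, relying on the return value added to spamTest() above:

def averageErrorRate(trials=10):
    total = 0.0
    for _ in range(trials):
        total += spamTest()              # each call re-splits train/test at random
    print 'average error rate over %d trials: %f' % (trials, total/trials)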