
py2.7: Machine Learning in Action (《機器學習實戰》), Decision Trees 12.5: Constructing the Annotated Tree

Shannon entropy: a way to measure the information in a set.

It is defined as the expected value of the information.

The expected information over all classes and all their possible values is H = sum(-p(xi) * log2(p(xi))) for 1 <= i <= n, where n is the number of classes.

# -*- coding: utf-8 -*-
from math import log
def calcShannonEnt(dataset): # compute the Shannon entropy
    numEntries = len(dataset) # number of samples
    labelCounts = {} # dict counting occurrences of each class label
    for featVec in dataset:
        currentLabel = featVec[-1] # by convention the last column is the class label
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0 # Shannon entropy
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries # probability of this class among all samples
        shannonEnt -= prob * log(prob, 2) # log(x, 2) is the base-2 logarithm of x
    return shannonEnt

def createDataSet(): # build a small toy data set
    dataSet = [
        [1,1,'yes'],
        [1,1,'yes'],
        [1,0,'no'],
        [0,1,'no'],
        [0,1,'no']
    ]
    labels = ['no surfacing' , 'flippers']
    return dataSet,labels

           

Test output:

# -*- coding: utf-8 -*-
import trees
myDat , labels = trees.createDataSet()
print myDat
print trees.calcShannonEnt(myDat)
           

Result:

[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
0.970950594455
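
This matches a hand calculation: the set has 2 'yes' and 3 'no' labels, so H = -(2/5)*log2(2/5) - (3/5)*log2(3/5) ≈ 0.971. A quick sanity check in the interpreter (using only the counts from createDataSet):

from math import log
p_yes, p_no = 2/5.0, 3/5.0
print -(p_yes*log(p_yes, 2) + p_no*log(p_no, 2)) # 0.970950594455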
           

3.1.2 Splitting the data set: there may be many features, so we have to decide which feature gives the best split.

The basic idea is to extract the subset of rows whose feature in column n equals m.

def splitDataSet(dataSet, axis, value): # args: data set to split, feature index to split on, feature value to keep
    # extract the rows whose feature in column axis equals value
    retDataSet = [] # new list collecting the matching rows
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis] # copy everything before column axis
            reducedFeatVec.extend(featVec[axis+1:]) # extend: add the remaining elements one by one, dropping column axis
            retDataSet.append(reducedFeatVec) # append: add the reduced row as a single element
    return retDataSet
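
A quick aside on extend versus append, since the two comments above are easy to mix up (a minimal sketch):

a = [1, 2]; b = [1, 2]
a.extend([3, 4]) # a is now [1, 2, 3, 4]: the elements are added one by one
b.append([3, 4]) # b is now [1, 2, [3, 4]]: the whole list is added as a single element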
           

Test result:

print myDat
print trees.splitDataSet(myDat,0,1) # split myDat on featVec[0] == 1, i.e. keep rows whose feature 0 equals 1, with that column removed
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
[[1, 'yes'], [1, 'yes'], [0, 'no']]
           

3.3 

PS: I had been stuck for a while on what Shannon entropy actually is, so I burned the midnight oil working through the decision-tree theory in 《統計學習方法》 (Statistical Learning Methods); now the code reads very clearly. It turns out that reading theory and practicing side by side is a pretty good approach. A small gripe: across the school's two big-data labs I couldn't find a single senior student who could explain decision trees to me.

def chooseBestFeatureToSplit(dataSet): # choose the best feature to split on
    numFeatures = len(dataSet[0]) - 1 # number of features (the last column is the class label)
    baseEntropy = calcShannonEnt(dataSet) # entropy of the whole data set
    bestInfoGain = 0.0; bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet] # all values of feature i
        uniqueVals = set(featList) # deduplicate
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value) # split on feature i == value
            prob = len(subDataSet) / float(len(dataSet)) # |Di| / |D|
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy # information gain of splitting on feature i
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
           

PS: Shannon entropy is not itself the information gain; the gain is the reduction in entropy achieved by a split (baseEntropy - newEntropy). Just follow the formula and you are fine.
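
For the toy data the numbers work out as follows (hand-checked, using H(D) ≈ 0.971 from above). Splitting on feature 0: the value-1 subset has labels ('yes', 'yes', 'no') with entropy ≈ 0.918, and the value-0 subset has labels ('no', 'no') with entropy 0, so newEntropy = (3/5)*0.918 + (2/5)*0 ≈ 0.551 and the gain is ≈ 0.420. Splitting on feature 1 gives newEntropy = (4/5)*1.0 + (1/5)*0 = 0.8, a gain of only ≈ 0.171, so feature 0 wins.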

Test:

import trees
myDat , labels = trees.createDataSet()
print(trees.chooseBestFeatureToSplit(myDat))
print(myDat)
Test output: 0 # meaning feature 0 (column 0) is the best feature to split on
[[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
# print myDat to inspect it
           

3-3, 3-4

Before building the tree we need to handle one more case: if all features are used up but the class labels are still not unique, we have to decide how to label the leaf node, so we use majority voting.

Recursive tree-building keeps splitting on the best remaining feature, and a branch stops once every instance in it has the same class.

import operator

def majorityCnt(classList): # majority vote
    classCount = {}
    for vote in classList:
        if vote not in classCount: classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.iteritems(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0] # return the most common label as the leaf node

def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet] # all class labels in this data set
    if classList.count(classList[0]) == len(classList): # stop splitting if every label is the same
        return classList[0]
    if len(dataSet[0]) == 1: # all features used up: fall back to majority vote
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet) # best way to split this data set
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat]) # remove the used feature label before recursing
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues) # all values the best feature takes
    for value in uniqueVals:
        subLabels = labels[:] # copy so the recursion does not clobber the caller's labels
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
           

Result:

import trees
myDat , labels = trees.createDataSet()
myTree = trees.createTree(myDat,labels)
print(myTree)
Result: {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
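
Once the tree is built, classifying a new sample is just a walk down the nested dict. Here is a minimal sketch of such a walk (this classify helper is my own illustration of using the structure; note that createTree deletes entries from labels, so rebuild the labels list first):

def classify(inputTree, featLabels, testVec):
    firstStr = inputTree.keys()[0] # feature name stored at this node
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr) # map the feature name back to its column
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if type(secondDict[key]).__name__ == 'dict':
                return classify(secondDict[key], featLabels, testVec) # descend into the subtree
            else:
                return secondDict[key] # reached a leaf: this is the class label

myDat, labels = trees.createDataSet() # fresh labels, since createTree consumed the old list
print classify(myTree, labels, [1, 0]) # 'no'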
           

3.2.1: Drawing mock tree nodes with arrow annotations:

import matplotlib.pyplot as plt

# box and arrow styles for the annotations
decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

# draw a node with an arrow pointing to it from its parent
def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)

def createPlot():
    fig = plt.figure(1, facecolor='white')
    fig.clf() # clear the figure before drawing
    createPlot.ax1 = plt.subplot(111, frameon=False) # ticks left in for demo purposes
    plotNode('a decision node', (0.5, 0.1), (0.1, 0.5), decisionNode) # draw the two node types
    plotNode('a leaf node', (0.8, 0.1), (0.3, 0.8), leafNode)
    plt.show()
           

Preview:

[Figure: a decision node and a leaf node drawn with arrow annotations]

3.2.2 Constructing the annotated tree

To draw a tree we first need to know the number of leaf nodes and the depth of the tree; in Python the tree is usually stored as a nested dictionary.

These give us the length of the x-axis and the height of the y-axis.

3-6: getting the number of leaf nodes and the number of tree levels:

def getNumLeafs(myTree): # count the leaf nodes of a tree stored as a nested dict
    numLeafs = 0
    firstStr = myTree.keys()[0] # starting from the root key we can walk the whole tree
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict': # a dict value means an internal node
            numLeafs += getNumLeafs(secondDict[key]) # recurse and accumulate the leaf count
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = myTree.keys()[0] # root key
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = 1 + getTreeDepth(secondDict[key]) # recurse: depth grows by one per level
        else:
            thisDepth = 1
        if thisDepth > maxDepth: maxDepth = thisDepth # keep the running maximum depth
    return maxDepth

def retrieveTree(i): # canned trees for testing
    listOfTrees = [{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}},
                   {'no surfacing': {0: 'no', 1: {'flippers': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}
                   ]
    return listOfTrees[i]
           

Output:

# -*- coding: utf-8 -*-
import treeplot
print(treeplot.retrieveTree(1))
mytree = treeplot.retrieveTree(0)
print treeplot.getTreeDepth(mytree)
print treeplot.getNumLeafs(mytree)


{'no surfacing': {0: 'no', 1: {'flippers': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}
2
3
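
This is what we expect for retrieveTree(0): its leaves are 'no', 'no' and 'yes' (three in all), and the longest path passes through the two decision nodes 'no surfacing' and 'flippers', hence depth 2.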
           

Using these plotting routines to draw the decision tree:

def plotMidText(cntrPt, parentPt, txtString): # label the edge between parent and child
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0] # midpoint on the x-axis
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1] # midpoint on the y-axis
    createPlot.ax1.text(xMid, yMid, txtString)

def plotTree(myTree, parentPt, nodeTxt): # main drawing routine
    numLeafs = getNumLeafs(myTree) # width of this subtree, in leaves
    depth = getTreeDepth(myTree) # depth of this subtree
    firstStr = myTree.keys()[0] # root label of this subtree
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs))/2.0/plotTree.totalW, plotTree.yOff) # center the node over its leaves
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0/plotTree.totalD # move one level down
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict': # a dict child is a subtree: recurse
            plotTree(secondDict[key], cntrPt, str(key))
        else: # otherwise it is a leaf: draw it
            plotTree.xOff = plotTree.xOff + 1.0/plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0/plotTree.totalD # move back up before returning

def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalW = float(getNumLeafs(inTree)) # total width in leaves
    plotTree.totalD = float(getTreeDepth(inTree)) # total depth in levels
    plotTree.xOff = -0.5/plotTree.totalW; plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()
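
To reproduce the figure, something like the following should work (assuming, as in the earlier test, that this module is saved as treeplot):

import treeplot
myTree = treeplot.retrieveTree(0)
treeplot.createPlot(myTree)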
           

Graphical output:

[Figure: the decision tree rendered by createPlot()]
