Basic outline:
- Decision tree model and learning
- A decision tree classifies data by applying a sequence of rules
- Advantages
- The inference process is easy to understand
- Inference depends only on the values of the attribute variables
- Attribute variables that contribute nothing can be ignored
- The core is an induction algorithm
- Important decision-tree algorithms
- CLS
- ID3
- C4.5
- CART
- Feature selection
- The CLS algorithm for decision trees
- Information gain
- Entropy
- A measure of the amount of information in a message
- $I(a_i) = p(a_i) \log_2 \frac{1}{p(a_i)}$
- The larger the entropy, the greater the uncertainty of the random variable (see the worked example after this outline)
- Conditional entropy
- $H(Y|X) = \sum_{i=1}^{n} p_i H(Y|X=x_i)$
- Information gain
- $g(D,A) = H(D) - H(D|A)$
- The degree to which the uncertainty about class Y is reduced by learning the value of feature X
- Algorithm
- Input: training dataset D and feature A
- Output: the information gain g(D,A) of feature A on dataset D
- Compute the empirical entropy H(D) of dataset D
- $H(D) = -\sum_{k=1}^{K} \frac{|C_k|}{|D|} \log_2 \frac{|C_k|}{|D|}$
- Compute the empirical conditional entropy H(D|A) of feature A on dataset D
- $H(D|A) = \sum_{i=1}^{n} \frac{|D_i|}{|D|} H(D_i) = -\sum_{i=1}^{n} \frac{|D_i|}{|D|} \sum_{k=1}^{K} \frac{|D_{ik}|}{|D_i|} \log_2 \frac{|D_{ik}|}{|D_i|}$
- Compute the information gain
- $g(D,A) = H(D) - H(D|A)$
- Decision tree generation
- Decision tree pruning
- The CART algorithm
- Classification tree (the target variable is categorical)
- Regression tree (the target variable is continuous)
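A quick numeric illustration of the claim that larger entropy means greater uncertainty: for a binary variable with $P(X=1)=p$, the entropy $H(p) = -p\log_2 p - (1-p)\log_2(1-p)$ is largest when the outcome is least predictable and zero when it is certain:

$H(0.5) = 1 \text{ bit}, \qquad H(0.9) \approx 0.469 \text{ bits}, \qquad H(1) = H(0) = 0 \text{ bits}$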
Code exercises:
- ID3 (based on information gain)
- C4.5 (based on information gain ratio)
- CART (Gini index)
entropy: $H(X) = -\sum_{i=1}^{n} p_i \log p_i$
conditional entropy: $H(Y|X) = \sum_{i=1}^{n} p_i H(Y|X=x_i)$
information gain: $g(D, A) = H(D) - H(D|A)$
information gain ratio: $g_R(D, A) = \frac{g(D, A)}{H_A(D)}$, where $H_A(D)$ is the entropy of D with respect to the values of feature A
gini index: $Gini(D) = \sum_{k=1}^{K} p_k (1 - p_k) = 1 - \sum_{k=1}^{K} p_k^2$
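The code below only implements the ID3 criterion (entropy and information gain). For the C4.5 and CART criteria listed above, here is a minimal self-contained sketch; the helper names (`info_gain_ratio`, `gini`) are mine, not part of the original exercise:

from collections import Counter
from math import log

def entropy(labels):
    # H(D) over a list of class labels
    n = len(labels)
    return -sum((c / n) * log(c / n, 2) for c in Counter(labels).values())

def info_gain_ratio(datasets, axis):
    # g_R(D, A) = g(D, A) / H_A(D), with A the feature in column `axis`
    n = len(datasets)
    groups = Counter(row[axis] for row in datasets)
    h_a = -sum((c / n) * log(c / n, 2) for c in groups.values())
    cond = sum(
        (groups[v] / n) * entropy([row[-1] for row in datasets if row[axis] == v])
        for v in groups
    )
    gain = entropy([row[-1] for row in datasets]) - cond
    return gain / h_a if h_a > 0 else 0.0

def gini(labels):
    # Gini(D) = 1 - sum_k p_k^2
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

For example, gini(['yes', 'yes', 'no']) gives 1 - (2/3)^2 - (1/3)^2 ≈ 0.444.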
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from collections import Counter
import math
from math import log
import pprint
1. Create the data
# Problem 5.1 from the book
def create_data():
    datasets = [['young', 'no', 'no', 'fair', 'no'],
                ['young', 'no', 'no', 'good', 'no'],
                ['young', 'yes', 'no', 'good', 'yes'],
                ['young', 'yes', 'yes', 'fair', 'yes'],
                ['young', 'no', 'no', 'fair', 'no'],
                ['middle-aged', 'no', 'no', 'fair', 'no'],
                ['middle-aged', 'no', 'no', 'good', 'no'],
                ['middle-aged', 'yes', 'yes', 'good', 'yes'],
                ['middle-aged', 'no', 'yes', 'very good', 'yes'],
                ['middle-aged', 'no', 'yes', 'very good', 'yes'],
                ['elderly', 'no', 'yes', 'very good', 'yes'],
                ['elderly', 'no', 'yes', 'good', 'yes'],
                ['elderly', 'yes', 'no', 'good', 'yes'],
                ['elderly', 'yes', 'no', 'very good', 'yes'],
                ['elderly', 'no', 'no', 'fair', 'no'],
                ]
    labels = ['age', 'has job', 'owns house', 'credit status', 'class']
    # return the dataset and the name of each column
    return datasets, labels
datasets, labels = create_data()
train_data = pd.DataFrame(datasets, columns=labels)
train_data
| | age | has job | owns house | credit status | class |
|---|---|---|---|---|---|
| 0 | young | no | no | fair | no |
| 1 | young | no | no | good | no |
| 2 | young | yes | no | good | yes |
| 3 | young | yes | yes | fair | yes |
| 4 | young | no | no | fair | no |
| 5 | middle-aged | no | no | fair | no |
| 6 | middle-aged | no | no | good | no |
| 7 | middle-aged | yes | yes | good | yes |
| 8 | middle-aged | no | yes | very good | yes |
| 9 | middle-aged | no | yes | very good | yes |
| 10 | elderly | no | yes | very good | yes |
| 11 | elderly | no | yes | good | yes |
| 12 | elderly | yes | no | good | yes |
| 13 | elderly | yes | no | very good | yes |
| 14 | elderly | no | no | fair | no |
Entropy
def calc_ent(datasets):
    """Empirical entropy H(D) of the class labels (last column)."""
    data_length = len(datasets)
    label_count = {}
    for i in range(data_length):
        label = datasets[i][-1]
        if label not in label_count:
            label_count[label] = 0
        label_count[label] += 1
    ent = -sum([(p / data_length) * log(p / data_length, 2) for p in label_count.values()])
    return ent
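With 9 'yes' and 6 'no' labels among the 15 training rows, the empirical entropy works out to about 0.971 bits:

calc_ent(datasets)  # -(9/15)*log2(9/15) - (6/15)*log2(6/15) ≈ 0.971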
Conditional entropy
def cond_ent(datasets, axis=0):
    """Empirical conditional entropy H(D|A) for the feature in column `axis`."""
    data_length = len(datasets)
    feature_sets = {}
    for i in range(data_length):
        feature = datasets[i][axis]
        if feature not in feature_sets:
            feature_sets[feature] = []
        feature_sets[feature].append(datasets[i])
    cond_ent = sum([(len(p) / data_length) * calc_ent(p) for p in feature_sets.values()])
    return cond_ent
Information gain
def info_gain(ent, cond_ent):
    return ent - cond_ent
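Putting the pieces together for the first feature (age) reproduces the 0.08301 value printed by the training run below:

info_gain(calc_ent(datasets), cond_ent(datasets, axis=0))  # ≈ 0.08301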
Training
def info_gain_train(datasets):
    count = len(datasets[0]) - 1
    ent = calc_ent(datasets)
    best_feature = []
    for c in range(count):
        c_info_gain = info_gain(ent, cond_ent(datasets, axis=c))
        best_feature.append((c, c_info_gain))
        print('info_gain of feature({}): {:.5f}'.format(labels[c], c_info_gain))
    best_ = max(best_feature, key=lambda x: x[-1])
    return 'feature({}) has the largest information gain, so choose it as the root node'.format(labels[best_[0]])
info_gain_train(np.array(datasets))
info_gain of feature(age): 0.08301
info_gain of feature(has job): 0.32365
info_gain of feature(owns house): 0.41997
info_gain of feature(credit status): 0.36299
'feature(owns house) has the largest information gain, so choose it as the root node'
Generating the decision tree with the ID3 algorithm
class Node:
    def __init__(self, root=True, label=None, feature_name=None, feature=None):
        self.root = root          # True for leaf nodes
        self.label = label        # class label (meaningful for leaves)
        self.feature_name = feature_name
        self.feature = feature    # column index used to split at this node
        self.tree = {}            # feature value -> child Node
        self.result = {'label': self.label, 'feature': self.feature, 'tree': self.tree}

    def __repr__(self):
        return '{}'.format(self.result)

    def add_node(self, val, node):
        self.tree[val] = node

    def predict(self, features):
        if self.root is True:
            return self.label
        return self.tree[features[self.feature]].predict(features)
class DTree:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self._tree = {}

    # entropy
    @staticmethod
    def calc_ent(datasets):
        data_length = len(datasets)
        label_count = {}
        for i in range(data_length):
            label = datasets[i][-1]
            if label not in label_count:
                label_count[label] = 0
            label_count[label] += 1
        ent = -sum([(p / data_length) * log(p / data_length, 2) for p in label_count.values()])
        return ent

    # empirical conditional entropy
    def cond_ent(self, datasets, axis=0):
        data_length = len(datasets)
        feature_sets = {}
        for i in range(data_length):
            feature = datasets[i][axis]
            if feature not in feature_sets:
                feature_sets[feature] = []
            feature_sets[feature].append(datasets[i])
        cond_ent = sum([(len(p) / data_length) * self.calc_ent(p) for p in feature_sets.values()])
        return cond_ent

    # information gain
    @staticmethod
    def info_gain(ent, cond_ent):
        return ent - cond_ent

    def info_gain_train(self, datasets):
        count = len(datasets[0]) - 1
        ent = self.calc_ent(datasets)
        best_feature = []
        for c in range(count):
            c_info_gain = self.info_gain(ent, self.cond_ent(datasets, axis=c))
            best_feature.append((c, c_info_gain))
        # pick the feature with the largest information gain
        best_ = max(best_feature, key=lambda x: x[-1])
        return best_
    def train(self, train_data):
        _, y_train, features = train_data.iloc[:, :-1], train_data.iloc[:, -1], train_data.columns[:-1]
        # 1. If all instances in D belong to the same class Ck, T is a single-node
        #    tree; mark the node with class Ck and return T
        if len(y_train.value_counts()) == 1:
            return Node(root=True, label=y_train.iloc[0])
        # 2. If the feature set A is empty, T is a single-node tree; mark the node
        #    with the majority class Ck in D and return T
        if len(features) == 0:
            return Node(root=True, label=y_train.value_counts().sort_values(ascending=False).index[0])
        # 3. As in 5.1, compute the information gain of each feature;
        #    Ag is the feature with the largest gain
        max_feature, max_info_gain = self.info_gain_train(np.array(train_data))
        max_feature_name = features[max_feature]
        # 4. If the gain of Ag is below the threshold epsilon, T is a single-node
        #    tree; mark it with the majority class Ck in D and return T
        if max_info_gain < self.epsilon:
            return Node(root=True, label=y_train.value_counts().sort_values(ascending=False).index[0])
        # 5. Otherwise split D into subsets by the values of Ag
        node_tree = Node(root=False, feature_name=max_feature_name, feature=max_feature)
        feature_list = train_data[max_feature_name].value_counts().index
        for f in feature_list:
            sub_train_df = train_data.loc[train_data[max_feature_name] == f].drop([max_feature_name], axis=1)
            # 6. Recursively build the subtree on each subset
            sub_tree = self.train(sub_train_df)
            node_tree.add_node(f, sub_tree)
        # pprint.pprint(node_tree.tree)
        return node_tree

    def fit(self, train_data):
        self._tree = self.train(train_data)
        return self._tree

    def predict(self, X_test):
        return self._tree.predict(X_test)
datasets, labels = create_data()
data_df = pd.DataFrame(datasets, columns=labels)
dt = DTree()
tree = dt.fit(data_df)
tree
{'label': None, 'feature': 2, 'tree': {'yes': {'label': 'yes', 'feature': None, 'tree': {}}, 'no': {'label': None, 'feature': 1, 'tree': {'yes': {'label': 'yes', 'feature': None, 'tree': {}}, 'no': {'label': 'no', 'feature': None, 'tree': {}}}}}}
dt.predict(['elderly', 'no', 'no', 'fair'])
'no'
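As a quick sanity check (this snippet is mine, not in the original), the tree should reproduce every label in the 15 training rows, since only the 'owns house' and 'has job' splits are ever consulted:

# fraction of training rows the hand-rolled tree labels correctly
sum(dt.predict(row[:-1]) == row[-1] for row in datasets) / len(datasets)  # 1.0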
sklearn.tree.DecisionTreeClassifier
criterion : string, optional (default=”gini”)
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
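The default criterion is "gini"; to mirror the information-gain approach used above, pass criterion='entropy' (a usage sketch, not from the original notebook):

from sklearn.tree import DecisionTreeClassifier

# entropy-based splits, i.e. the ID3/C4.5-style criterion
clf_ent = DecisionTreeClassifier(criterion='entropy')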
# data
def create_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df['label'] = iris.target
    df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
    # keep the first 100 rows (two classes) and the first two features
    data = np.array(df.iloc[:100, [0, 1, -1]])
    # print(data)
    return data[:, :2], data[:, -1]
X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
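The matplotlib import at the top is otherwise unused; a quick scatter of the two retained features (my own snippet) gives a feel for how separable the two classes are in this projection:

# visualize the two features returned by create_data()
plt.scatter(X[y == 0, 0], X[y == 0, 1], label='0')
plt.scatter(X[y == 1, 0], X[y == 1, 1], label='1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend()
plt.show()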
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.tree import export_graphviz
import graphviz
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best')
# export_graphviz writes DOT source, so give the file a .dot extension
tree_pic = export_graphviz(clf, out_file="mytree.dot")
with open('mytree.dot') as f:
    dot_graph = f.read()
graphviz.Source(dot_graph)
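If graphviz is not installed, scikit-learn 0.21+ can render the tree directly through matplotlib; a minimal alternative sketch:

# matplotlib-based rendering, no graphviz needed (scikit-learn >= 0.21)
from sklearn.tree import plot_tree
plot_tree(clf, feature_names=['sepal length', 'sepal width'], filled=True)
plt.show()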
clf.score(X_test, y_test)
0.90000000000000002