Watermelon book exercise 4.3: decision trees

The exercise asks for an implementation of a decision-tree algorithm that selects splits by information entropy, and for a decision tree generated from watermelon dataset 3.0.

Put plainly, a decision tree is just a recursively built tree: every internal node splits on an attribute, and every leaf node yields a classification result.

Looking at the algorithm flow, we clearly need a Node class to store the tree structure. It carries four attributes: the node's class (-1 for an internal node, 0 for the negative class, 1 for the positive class), the node's list of child branches, the node's best split attribute, and the value of the parent's split attribute that led to this branch.

class Node:
    def __init__(self, ntype):
        self.ntype = ntype    # -1: internal node, 0: negative class, 1: positive class
        self.children = []    # child branches
        self.a = -1           # best split attribute (internal nodes only)
        self.limit = "None"   # value of the parent's split attribute on this branch
    def __str__(self):
        return "node->a : %s node->limit : %s node->ntype : %s" % (self.a, self.limit, self.ntype)

Then comes the main flow of the algorithm:

def treeGenerate(x,y,a):
    node = Node(-1)
    if(sum(y)==len(y)):  # do all samples belong to the positive class?
        node.ntype = 1   # if so, stop recursing and return a leaf
        return node
    if(sum(y)==0):       # likewise, do all samples belong to the negative class?
        node.ntype = 0
        return node
    if(len(a)==0 or judge(x,a)==1): # judge: do all samples take the same value on every attribute?
        node.ntype = maxy(y)        # if so, label with the majority class of the dataset
        return node
    a_ = getMaxAv(x,y,a)            # choose the best split attribute a*
    d_ = getdict(x,a_)              # dict mapping each value of a* to its count in the dataset
    node.ntype = -1                 # internal node
    node.a = a_
    for ai in set(x[a_]):
        child = Node(-1)            # create a branch node
        child.a = a_
        if(d_[ai] == 0):            # the subset taking value ai on a* is empty:
            child.ntype = maxy(y)   # label the branch with the majority class
            node.children.append(child)
            continue                # and keep processing the remaining values
        else:
            xt,yt = getdata(x,y,a_,ai)        # take the sample subset
            xt.reset_index(drop = True,inplace = True) # rebuild the index
            at = [av for av in a if av != a_]
            child = treeGenerate(xt,yt,at)             # recurse
            child.limit = ai
            node.children.append(child)
    return node

Next come the individual helper functions.

judge compares the first element of each column against every element of that column; if they all match in every listed column, all samples take the same attribute values.

def judge(x,a):
    for ar in a:
        if((x[ar].iloc[0]==x[ar]).all() == False):  # some column holds more than one value
            return 0
    return 1
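As an aside (my sketch, not part of the original code), the same check can be written more idiomatically with pandas' nunique:

```python
import pandas as pd

def judge_alt(x, a):
    # 1 if every listed column holds a single distinct value, else 0
    return int(all(x[ar].nunique() == 1 for ar in a))
```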

maxy is straightforward: it returns the class with the most samples (ties go to the positive class).

def maxy(y):
    if(sum(y)*2 >= len(y)):
        return 1
    else:
        return 0

getMaxAv finds the attribute with the largest information gain, where the gain itself is computed by calGain:

def getMaxAv(x,y,a):
    Max = -1
    MaxAv = a[0]
    for av in a:
        temp = calGain(x,y,av)
        if(temp > Max):
            Max = temp
            MaxAv = av
    return MaxAv

Information gain is defined as the parent node's entropy minus the weighted sum of the child nodes' entropies. Entropy measures the purity of a sample set: the lower the entropy, the higher the purity. Splitting the parent never increases this weighted child entropy, so the gain is non-negative; it quantifies how much a split purifies the data, which is why we pick the attribute with the largest gain.
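As a quick worked check (the numbers here are mine, computed from the dataset at the end of the post): watermelon dataset 3.0 has 8 positive and 9 negative samples, so the root entropy comes out to roughly 0.998:

```python
import math

def ent(p1):
    # binary entropy in bits, same formula as calEnt below
    p0 = 1 - p1
    if p1 == 0 or p0 == 0:
        return 0.0
    return -(p1 * math.log(p1, 2) + p0 * math.log(p0, 2))

root = ent(8 / 17)  # 8 of the 17 samples are positive
# root ≈ 0.9975, usually rounded to 0.998
```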

def calGain(x,y,av):
    d = getdict(x,av)
    p1 = getp1(x,y,av)
    
    ans = calEnt(sum(y)/len(y))
    for key,value in d.items():
        ans -= (value/len(y))*calEnt(p1[key]/value) # Gain(D,a) = Ent(D) - sum(|Dv|/|D| * Ent(Dv))
    return ans


# compute the binary information entropy
def calEnt(p1):  #p1 = sum(y)/len(y)
    p0 = 1 - p1
    if(p1 == 0 or p0 == 0):
        return 0
    return -1*(math.log(p1,2)*p1+math.log(p0,2)*p0)

# helper: count how many times each value of av appears
def getdict(x,av):
    d = dict.fromkeys(set(x[av]),0)
    arr = np.array(x[av])
    for i in range(len(arr)):
        d[arr[i]] += 1
    return d
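getdict is effectively pandas' built-in value_counts; a toy check (my example, not from the post):

```python
import pandas as pd

x = pd.DataFrame({"v1": ["青绿", "乌黑", "青绿"]})
d = x["v1"].value_counts().to_dict()  # same mapping getdict would build
```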

# helper: count the positive samples for each value of av
def getp1(x,y,av):
    p1 = dict.fromkeys(set(x[av]),0)
    arr = np.array(x[av])
    for i in range(len(arr)):
        if(y[i] == 1):
            p1[arr[i]] += 1
    return p1

Finally comes getdata, a simple function that extracts the sample subset used when recursing.

def getdata(x,y,av,ai):
    xt = copy.deepcopy(x)
    yt = [] 
    for i in range(x.shape[0]):
        if(x[av][i] != ai):                # drop samples whose value on a* is not ai
            xt.drop(i,axis = 0,inplace = True)
        else:
            yt.append(y[i])
    yt = pd.Series(yt,name = "target",dtype = "int64")
    return xt,yt
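A vectorized alternative (my sketch; the hypothetical getdata_masked below is not part of the post) takes the same subset with a boolean mask instead of row-by-row drops:

```python
import pandas as pd

def getdata_masked(x, y, av, ai):
    # keep only the rows where attribute av takes value ai
    mask = x[av] == ai
    xt = x[mask].reset_index(drop=True)
    yt = y[mask].reset_index(drop=True)
    return xt, yt
```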

Last of all come the imported libraries and the main function.

import numpy as np
import pandas as pd
import math
import copy


def display(node):              # print the tree, depth-first
    print(node)
    if(len(node.children) > 0):
        for i in range(len(node.children)):
            display(node.children[i])
    return 0

def main():
    data = pd.read_csv("edata.txt",sep = ',')
    y = data.target
    x = data.drop(['target'],axis = 1)
    a = np.array(x.columns)
    #print(x)
    #print(y)
    node = treeGenerate(x,y,a)
    display(node)
if __name__ == "__main__":
    main()


Printing follows a depth-first traversal, so whenever a branch node with ntype -1 is met, printing continues straight down its children.
A node whose a is -1 never chose a split attribute, i.e. it is a leaf.

Final result:
node->a : v4 node->limit : None node->ntype : -1
node->a : -1 node->limit : 模糊 node->ntype : 0
node->a : v6 node->limit : 稍糊 node->ntype : -1
node->a : -1 node->limit : 硬滑 node->ntype : 0
node->a : -1 node->limit : 软粘 node->ntype : 1
node->a : v2 node->limit : 清晰 node->ntype : -1
node->a : v1 node->limit : 稍蜷 node->ntype : -1
node->a : -1 node->limit : 青绿 node->ntype : 1
node->a : v6 node->limit : 乌黑 node->ntype : -1
node->a : -1 node->limit : 软粘 node->ntype : 0
node->a : -1 node->limit : 硬滑 node->ntype : 1
node->a : -1 node->limit : 蜷缩 node->ntype : 1
node->a : -1 node->limit : 硬挺 node->ntype : 0

Note that this tree does not handle continuous attributes; when the data contains continuous attributes, the code needs some adaptation, which is straightforward.

def getTb(x,av):
    Ta = []
    Tb = []
    for i in range(x.shape[0]):
        Ta.append(x[av][i])
    Ta.sort()                  # sort the attribute values

    for i in range(len(Ta)-1): # candidate thresholds: midpoints of adjacent values
        Tb.append((Ta[i] + Ta[i+1])/2)
    return Tb
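For example, the threshold 0.38149999999999995 seen in the run's output is the midpoint of the two adjacent density values 0.36 and 0.403:

```python
# toy check with three density (v7) values from the dataset
vals = sorted([0.697, 0.36, 0.403])
tb = [(vals[i] + vals[i + 1]) / 2 for i in range(len(vals) - 1)]
# tb[0] is (0.36 + 0.403) / 2, the 0.38149999999999995 threshold of the run
```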

def divideByT(x,y,av,tb):
    t = f = 0       # counts of samples above / below threshold tb
    pt1 = pf1 = 0   # positive-sample counts in each half
    for i in range(x.shape[0]):
        if(x[av][i] >= tb):
            t += 1
            if(y[i] == 1):
                pt1 += 1
        else:
            f += 1
            if(y[i] == 1):
                pf1 +=1
    return t,f,pt1,pf1

def calGain(x,y,av):
    if(x[av].dtype != "float64"):  # discrete attribute: as before, but return (gain, None)
        d = getdict(x,av)
        p1 = getp1(x,y,av)
    
        ans = calEnt(sum(y)/len(y))
        for key,value in d.items():
            ans -= (value/len(y))*calEnt(p1[key]/value)
        return ans,None
    else:                          # continuous attribute: evaluate every candidate threshold
        Tb = getTb(x,av)
        Max = -1
        Maxt = Tb[0]
        for tb in Tb:
            ans = calEnt(sum(y)/len(y))
            t,f,pt1,pf1 = divideByT(x,y,av,tb)
            ans -= ((t/len(y))*calEnt(pt1/t) + (f/len(y))*calEnt(pf1/f))
            if ans > Max:
                Max = ans
                Maxt = tb
        return Max,Maxt  
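The post does not show the matching change to getMaxAv, but since calGain now returns a (gain, threshold) pair, it presumably has to unpack both; a sketch (the calGain here is only a toy stand-in so the snippet runs on its own):

```python
def calGain(x, y, av):
    # toy stand-in: the real updated calGain returns (gain, threshold)
    toy = {"v1": (0.3, None), "v7": (0.5, 0.38)}
    return toy[av]

def getMaxAv(x, y, a):
    # sketch of the updated getMaxAv: track the threshold alongside the gain
    Max = -1
    MaxAv = a[0]
    Maxt = None
    for av in a:
        gain, t = calGain(x, y, av)
        if gain > Max:
            Max = gain
            MaxAv = av
            Maxt = t
    return MaxAv, Maxt
```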
  
def getdata(x,y,av,ai):
    xt = copy.deepcopy(x)
    yt = [] 
    for i in range(x.shape[0]):
        if(x[av][i] != ai):
            xt.drop(i,axis = 0,inplace = True)
        if(x[av][i] == ai):
            yt.append(y[i])
    yt = pd.Series(yt,name = "target",dtype = "int64")
    return xt,yt

def getcontinuousdata(x,y,av,ai,flag):
    xt = copy.deepcopy(x)  # will keep samples with value <= ai
    xf = copy.deepcopy(x)  # will keep samples with value > ai
    yt = []
    yf = []
    for i in range(x.shape[0]):
        if(x[av][i] > ai):
            xt.drop(i,axis = 0,inplace = True)
            yf.append(y[i])
        else:
            yt.append(y[i])
            xf.drop(i,axis = 0,inplace = True)
    yt = pd.Series(yt,name = "target",dtype = "int64")
    yf = pd.Series(yf,name = "target",dtype = "int64")
    if(flag == 0):
        return xt,yt
    if(flag == 1):
        return xf,yf
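Since getcontinuousdata is called twice per split (once per flag), a single-pass variant is possible; a sketch of mine (split_by_threshold is hypothetical, not the post's code):

```python
import pandas as pd

def split_by_threshold(x, y, av, t):
    # return both halves of a continuous split in one pass
    le = x[av] <= t
    xt, yt = x[le].reset_index(drop=True), y[le].reset_index(drop=True)
    xf, yf = x[~le].reset_index(drop=True), y[~le].reset_index(drop=True)
    return (xt, yt), (xf, yf)
```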

def treeGenerate(x,y,a):
    node = Node(-1)
    if(sum(y)==len(y)):
        node.ntype = 1
        return node
    if(sum(y)==0):
        node.ntype = 0
        return node
    if(len(a)==0 or judge(x,a)==1):
        node.ntype = maxy(y)
        return node
    a_,t_ = getMaxAv(x,y,a)
    d_ = getdict(x,a_)
    node.ntype = -1
    node.a = a_
    if(x[a_].dtype != "float64"):
        for ai in set(x[a_]):
            child = Node(-1)
            child.a = a_
            if(d_[ai] == 0):
                child.ntype = maxy(y)
                node.children.append(child)
                return node
            else:
                xt,yt = getdata(x,y,a_,ai)
                xt.reset_index(drop = True,inplace = True)
                at = [av for av in a if av != a_]
                child = treeGenerate(xt,yt,at)
                child.limit = ai
                node.children.append(child)
        return node
    else:
        t,f,pt1,pf1 = divideByT(x,y,a_,t_)
        if(t == 0 or f == 0):        # one side of the split would be empty
            child = Node(maxy(y))    # so make this node a majority-class leaf
            node.children.append(child)
            return node
        else:
            for i in range(2):
                xt,yt = getcontinuousdata(x,y,a_,t_,i)
                xt.reset_index(drop = True,inplace = True)
                at = [av for av in a]
                child = treeGenerate(xt,yt,at)
                if(i == 0):
                    child.limit = "<=" + str(t_)
                else :
                    child.limit = ">" + str(t_)
                node.children.append(child)
        return node
Result:
node->a : v4 node->limit : None node->ntype : -1
node->a : -1 node->limit : 模糊 node->ntype : 0
node->a : v7 node->limit : 清晰 node->ntype : -1
node->a : -1 node->limit : <=0.38149999999999995 node->ntype : 0
node->a : -1 node->limit : >0.38149999999999995 node->ntype : 1
node->a : v6 node->limit : 稍糊 node->ntype : -1
node->a : -1 node->limit : 软粘 node->ntype : 1
node->a : -1 node->limit : 硬滑 node->ntype : 0
The dataset used in the first (discrete-only) run:
v1,v2,v3,v4,v5,v6,target
青绿,蜷缩,浊响,清晰,凹陷,硬滑,1
乌黑,蜷缩,沉闷,清晰,凹陷,硬滑,1
乌黑,蜷缩,浊响,清晰,凹陷,硬滑,1
青绿,蜷缩,沉闷,清晰,凹陷,硬滑,1
浅白,蜷缩,浊响,清晰,凹陷,硬滑,1
青绿,稍蜷,浊响,清晰,稍凹,软粘,1
乌黑,稍蜷,浊响,稍糊,稍凹,软粘,1
乌黑,稍蜷,浊响,清晰,稍凹,硬滑,1
乌黑,稍蜷,沉闷,稍糊,稍凹,硬滑,0
青绿,硬挺,清脆,清晰,平坦,软粘,0
浅白,硬挺,清脆,模糊,平坦,硬滑,0
浅白,蜷缩,浊响,模糊,平坦,软粘,0
青绿,稍蜷,浊响,稍糊,凹陷,硬滑,0
浅白,稍蜷,沉闷,稍糊,凹陷,硬滑,0
乌黑,稍蜷,浊响,清晰,稍凹,软粘,0
浅白,蜷缩,浊响,模糊,平坦,硬滑,0
青绿,蜷缩,沉闷,稍糊,稍凹,硬滑,0

Watermelon dataset 3.0:
v1,v2,v3,v4,v5,v6,v7,v8,target
青绿,蜷缩,浊响,清晰,凹陷,硬滑,0.697,0.46,1
乌黑,蜷缩,沉闷,清晰,凹陷,硬滑,0.774,0.376,1
乌黑,蜷缩,浊响,清晰,凹陷,硬滑,0.634,0.264,1
青绿,蜷缩,沉闷,清晰,凹陷,硬滑,0.608,0.318,1
浅白,蜷缩,浊响,清晰,凹陷,硬滑,0.556,0.215,1
青绿,稍蜷,浊响,清晰,稍凹,软粘,0.403,0.237,1
乌黑,稍蜷,浊响,稍糊,稍凹,软粘,0.481,0.149,1
乌黑,稍蜷,浊响,清晰,稍凹,硬滑,0.437,0.211,1
乌黑,稍蜷,沉闷,稍糊,稍凹,硬滑,0.666,0.091,0
青绿,硬挺,清脆,清晰,平坦,软粘,0.243,0.267,0
浅白,硬挺,清脆,模糊,平坦,硬滑,0.245,0.057,0
浅白,蜷缩,浊响,模糊,平坦,软粘,0.343,0.099,0
青绿,稍蜷,浊响,稍糊,凹陷,硬滑,0.639,0.161,0
浅白,稍蜷,沉闷,稍糊,凹陷,硬滑,0.657,0.198,0
乌黑,稍蜷,浊响,清晰,稍凹,软粘,0.36,0.37,0
浅白,蜷缩,浊响,模糊,平坦,硬滑,0.593,0.042,0
青绿,蜷缩,沉闷,稍糊,稍凹,硬滑,0.719,0.103,0
