Decision Tree Beginner Example: Titanic Survival Prediction

After downloading the data from Kaggle and looking over the available features, we keep things simple for this introductory decision-tree example and roughly assume that survival is related to Pclass, Sex, and Age.

from sklearn.tree import DecisionTreeClassifier
import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split

df = pd.read_csv("D:/data/titanic/train.csv")
# print(df.head(1))
# print(df.info())

# Prepare the data (.copy() avoids SettingWithCopyWarning when we fill Age below)
x = df.loc[:, ["Pclass", "Sex", "Age"]].copy()
y = df.loc[:, "Survived"]
# Fill missing ages with the column mean (a simple imputation choice);
# assigning back avoids the chained-assignment pitfall of inplace=True
x["Age"] = x["Age"].fillna(x["Age"].mean())
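Before imputing, it helps to check how many values are actually missing. A minimal sketch on a tiny stand-in frame (the values are hypothetical, not the real Titanic data):

```python
import pandas as pd

# Tiny stand-in frame with two missing ages (hypothetical values)
toy = pd.DataFrame({"Age": [22.0, None, 30.0, None]})
print(toy["Age"].isnull().sum())   # -> 2 missing values
# The mean of the observed ages (22 and 30) is 26, so both gaps become 26
toy["Age"] = toy["Age"].fillna(toy["Age"].mean())
print(toy["Age"].tolist())         # -> [22.0, 26.0, 30.0, 26.0]
```

The same idea applies to the real frame; the median is a common alternative when the column has outliers.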

# Feature engineering: first split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)

# One-hot encode the categorical features so the model can use them. DictVectorizer
# expects each row as a dict, hence the to_dict(orient="records") conversion.
dic = DictVectorizer()
x_train = dic.fit_transform(x_train.to_dict(orient="records"))
x_test = dic.transform(x_test.to_dict(orient="records"))

# Train the decision tree and evaluate accuracy on the test set
dec = DecisionTreeClassifier(criterion="entropy")
dec.fit(x_train, y_train)
print(dec.score(x_test, y_test))
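Beyond the accuracy score, the fitted tree can be inspected directly. A minimal sketch using sklearn's `export_text`, on a tiny hand-made dataset (hypothetical, just to keep it self-contained):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy one-hot encoded rows: [Sex=female, Sex=male, Age] (hypothetical data)
X = [[1, 0, 5.0], [0, 1, 40.0], [1, 0, 8.0], [0, 1, 60.0]]
y = [1, 0, 1, 0]  # survived / did not survive
clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
# Prints the learned splits as indented if/else rules
print(export_text(clf, feature_names=["Sex=female", "Sex=male", "Age"]))
```

In the real script, `dic.get_feature_names_out()` would supply the feature names.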

The script prints the final accuracy on the held-out test set; the exact value varies from run to run, since train_test_split shuffles the data randomly.