Background: The Dianping Must-Eat List (必吃榜) is a food ranking compiled from Dazhong Dianping's massive user data and verified by users' on-site experiences. As one of the "Must" series of lists, it digs into urban consumption across four areas (eating, strolling, playing, staying) to give users fast, efficient, and trustworthy dining and travel references, a quality-of-life guide to the city.
How to find it: open the Dazhong Dianping app → the "Must" series on the home page → 必吃榜, or search for 必吃榜 directly.
Because this page only exists in the mobile app, the data was collected by emulating a mobile browser; the scraped data is then loaded into Python for analysis.
import pandas as pd
import numpy as np
dishes = pd.read_csv('dish.csv', index_col=0)  # Chengdu must-eat dishes
# index_col=0: the file already carries a per-category shop ranking,
# so reuse it as the index instead of generating a new one, which would be confusing
dishes.head()
smallten is the rank within each category, pinlei is the category name (generated later), vendorname is the shop name, price-wrap is the per-capita spend, dishName is the dish name, recomm is the number of recommendations, site is the business district, and dishdec holds the dish reviews. Per-capita spend and reviews both contain missing values. Since the analysis is by category, missing prices are filled with each category's mean (see the sketch below), while missing reviews are left empty.
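A minimal sketch of that per-category fill, assuming price-wrap still needs to be coerced to a numeric dtype:
dishes['price-wrap'] = pd.to_numeric(dishes['price-wrap'], errors='coerce')
# transform('mean') broadcasts each category's mean back to the row level
dishes['price-wrap'] = dishes['price-wrap'].fillna(
    dishes.groupby('pinlei')['price-wrap'].transform('mean'))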
# Number of recommended shops per category (31 categories in total)
店铺 = dishes.groupby('pinlei')['dishName'].count()
from pyecharts import Bar
bar = Bar("Recommended shops per category", "Chengdu")
bar.add("Shop count", 店铺.index, 店铺.values)
Most categories have 100 recommended shops. Chengdu has 31 recommended categories in total: '串串', '伤心凉粉', '兔头', '冒菜', '冰粉', '凉面', '口水鸡', '回锅肉', '夫妻肺片', '宫保鸡丁', '川北凉粉', '怪味面', '抄手', '担担面', '樟茶鸭', '水煮肉片', '火锅', '甜水面', '粉蒸牛肉', '红油兔丁', '肥肠粉', '蒜泥白肉', '蛋烘糕', '豆花', '蹄花', '钟水饺', '钵钵鸡', '锅巴肉片', '锅盔', '鱼香肉丝', '麻婆豆腐'.
# The top-ranked shop in each category
dishes[dishes.index == 1]
Collecting the top-ranked shop of each category into a single table gives a checklist for eating the most authentic local snacks in Chengdu.
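One way to materialize that checklist, keeping just the useful columns (a sketch; the column names follow the description above, and the output filename is illustrative):
top1 = dishes[dishes.index == 1][['pinlei', 'vendorname', 'price-wrap', 'site']]
top1.to_csv('top1_per_pinlei.csv')  # one top-ranked shop per category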
# Mean price per category --- a mean over shops, so only a rough proxy for the category
qian = dishes.groupby('pinlei')['price-wrap'].mean()
bar = Bar("Per-capita spend")
bar.add("Mean shop price", qian.index, qian.values)
Per-capita spend is an imperfect proxy for a category: the field is averaged per shop, and shops that serve many categories introduce considerable bias.
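One hedged way to soften that bias is to weight each shop's price by its recommendation count, so popular shops dominate the category mean (a sketch, assuming price-wrap was filled as above and recomm has no missing values):
# np.average with weights= computes a recommendation-weighted mean price
weighted = dishes.groupby('pinlei').apply(
    lambda g: np.average(g['price-wrap'], weights=g['recomm']))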
# Mean recommendation count per category --- more recommendations, more buzz
a = dishes.groupby('pinlei')['recomm'].mean()
from pyecharts import Line
line = Line("Mean recommendation count per category")
line.add("Recommendations", a.index, a.values, mark_point=["max", "min"], mark_line=["average"])
The recommendation count is indirect evidence of a category's popularity, though it may also just reflect that a category's diners use Dianping more heavily. The chart shows 冰粉 with the highest mean and 口水鸡 with the lowest.
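The same extremes can be read straight off the series instead of the chart:
a.idxmax(), a.idxmin()  # expected to return ('冰粉', '口水鸡') per the chart above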
# Recommended shops per business district (107 districts in total)
b = dishes.groupby('site')['recomm'].count()
line = Line("Recommended shops per business district")
line.add("Shop count", b.index, b.values, mark_line=["max", "average"])
107 business districts made the list, concentrated mainly in 高新区 and around 春熙路.
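A quick way to list the busiest districts directly:
b.sort_values(ascending=False).head(10)  # top 10 districts by shop count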
Review word clouds
To build one word cloud per category, the table is split into per-category subsets, as shown below.
# Split the table once: one sub-DataFrame per category
subsets = {pinlei: df for pinlei, df in dishes.groupby('pinlei')}
chuanchuan = subsets['串串']  # 串串 (skewers) is the worked example below
chuanchuan.head()
chuanchuan['dishdec']
import jieba
# Load the stopword list into a dict for fast membership tests
stopwords = {}
def stopword(filename=''):
    global stopwords
    with open(filename, 'r', encoding='utf-8') as f:
        for line in f:
            stopwords.setdefault(line.rstrip(), 0)

stopword(filename='stopwords.txt')
a = chuanchuan['dishdec']
a.to_csv('chuanchuan.csv')  # round-trip the review column through a CSV file
with open('chuanchuan.csv', 'r', encoding='utf-8') as novel:
    content = novel.read()
n = [str(i) for i in np.arange(100)]  # the strings '0'..'99', filtered out as noise below
Given the table's format, stopwords and digits are removed and the remaining keywords are rendered as a word cloud.
# Chinese word segmentation plus stopword cleanup
def cleancntxt(txt, stopwords):
    seg_generator = jieba.cut(txt, cut_all=False)  # accurate (non-full) mode
    seg_list = [i for i in seg_generator if i not in stopwords]
    seg_list = [i for i in seg_list if i != ' ']
    seg_list = [i for i in seg_list if i not in n]  # drop the digit strings built above
    return seg_list

seg_list = cleancntxt(content, stopwords)
seg = pd.value_counts(seg_list)  # word frequencies
from pyecharts import WordCloud
wordcloud = WordCloud(width=1000, height=600)
wordcloud.add("", seg.index, seg.values, word_size_range=[12, 150], is_more_utils=True)
For 串串, the reviews highlight: fresh ingredients, flavors that are 巴适 (Sichuan slang for excellent), a wide variety of dishes, and beef as the clear favorite.
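The same pipeline extends to the other categories by looping over the subsets dict built earlier; a sketch (the output filenames are hypothetical):
for pinlei, df in subsets.items():
    text = ' '.join(df['dishdec'].dropna().astype(str))
    seg = pd.value_counts(cleancntxt(text, stopwords))
    wc = WordCloud(width=1000, height=600)
    wc.add("", seg.index, seg.values, word_size_range=[12, 150])
    wc.render('wordcloud_{}.html'.format(pinlei))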