1. Download the Hive installation package.
2. Go into the conf directory: cp hive-default.xml.template hive-site.xml, then vi hive-site.xml
1) Find the following properties and change their values to match your environment (e.g., search for /javax.jdo.option.ConnectionURL in vi):
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>Gw_sp1226</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://gw-sp.novalocal:3306/hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
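The ConnectionURL above points at a MySQL database named hive, which must exist before Hive starts. A minimal sketch, assuming MySQL 5.x on gw-sp.novalocal and the root password configured above (adjust names and password to your environment):

# on the MySQL host; assumes MySQL 5.x and the credentials from hive-site.xml above
mysql -u root -p
-- inside the mysql shell:
CREATE DATABASE hive;
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY 'Gw_sp1226';
FLUSH PRIVILEGES;

If the metastore schema has not been initialized yet, Hive 2.x ships a schematool for that (run from the Hive bin directory):

# optional: initialize the metastore schema against the MySQL database
./schematool -dbType mysql -initSchema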
3. cp hive-env.sh.template hive-env.sh, vi hive-env.sh
export HADOOP_HOME=/home/hadoop/hadoop                        ## Hadoop installation path
export HIVE_CONF_DIR=/home/hadoop/hive-2.1.1/conf             ## Hive configuration directory
export HIVE_AUX_JARS_PATH=/usr/hive/apache-hive-2.2.0-bin/lib ## Hive lib directory
4. Copy mysql-connector-java-5.1.7-bin.jar into Hive's lib directory. (To parse JSON you also need two extra jars, json-serde-1.3.8-jar-with-dependencies.jar and json-udf-1.3.8-jar-with-dependencies.jar; see the "storing Flume data into Hive" steps below. A copy-command sketch follows the download link.)
Download link: https://pan.baidu.com/s/1suPzGJmtJlsROC6SVpcztQ  password: zlgg
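A minimal sketch of the copy step, assuming the jars were downloaded to the current directory and that the Hive lib directory is the HIVE_AUX_JARS_PATH configured above (adjust the path to your actual installation):

# copy the JDBC driver and the two JSON serde jars into Hive's lib directory (paths are assumptions)
cp mysql-connector-java-5.1.7-bin.jar /usr/hive/apache-hive-2.2.0-bin/lib/
cp json-serde-1.3.8-jar-with-dependencies.jar /usr/hive/apache-hive-2.2.0-bin/lib/
cp json-udf-1.3.8-jar-with-dependencies.jar /usr/hive/apache-hive-2.2.0-bin/lib/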
Goal: receive HTTP request messages on port 1084 and store them in Hive. osgiweb2.db is the database created in Hive, and periodic_report5 is the table created in it.

1. Flume configuration:

a1.sources=r1
a1.channels=c1
a1.sinks=k1
a1.sources.r1.type = http
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 1084
a1.sources.r1.handler=jkong.Test.HTTPSourceDPIHandler
#a1.sources.r1.interceptors=i1 i2
#a1.sources.r1.interceptors.i1.type=regex_filter
#a1.sources.r1.interceptors.i1.regex=\\{.*\\}
#a1.sources.r1.interceptors.i2.type=timestamp
a1.channels.c1.type=memory
a1.channels.c1.capacity=10000
a1.channels.c1.transactionCapacity=1000
a1.channels.c1.keep-alive=30
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.path=hdfs://gw-sp.novalocal:1086/user/hive/warehouse/osgiweb2.db/periodic_report5
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollSize=10240
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.idleTimeout=60
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1

2. Create the data table:

create table periodic_report5(id BIGINT, deviceId STRING, report_time STRING, information STRING)
row format serde "org.openx.data.jsonserde.JsonSerDe"
WITH SERDEPROPERTIES("id"="$.id","deviceId"="$.deviceId","report_time"="$.report_time","information"="$.information");

2.1 A create-table statement that also splits the fields inside information into separate columns (not yet tested, not in use for now):

create table periodic_report4(id BIGINT, deviceId STRING, report_time STRING,
  information STRUCT<actualTime:BIGINT,dpiVersionInfo:STRING,subDeviceInfo:STRING,wanTrafficData:STRING,ponInfo:STRING,eventType:STRING,potsInfo:STRING,deviceInfo:STRING,deviceStatus:STRING>)
row format serde "org.openx.data.jsonserde.JsonSerDe"
WITH SERDEPROPERTIES("input.invalid.ignore"="true","id"="$.id","deviceId"="$.deviceId","report_time"="$.report_time",
  "requestParams.actualTime"="$.requestParams.actualTime","requestParams.dpiVersionInfo"="$.requestParams.dpiVersionInfo",
  "requestParams.subDeviceInfo"="$.requestParams.subDeviceInfo","requestParams.wanTrafficData"="$.requestParams.wanTrafficData",
  "requestParams.ponInfo"="$.requestParams.ponInfo","requestParams.eventType"="$.requestParams.eventType",
  "requestParams.potsInfo"="$.requestParams.potsInfo","requestParams.deviceInfo"="$.requestParams.deviceInfo",
  "requestParams.deviceStatus"="$.requestParams.deviceStatus");

3. Start Flume (run from the Flume root directory):
bin/flume-ng agent --conf ./conf/ -f ./conf/flume.conf --name a1 -Dflume.root.logger=DEBUG,console

4. Start Hive (run from the Hive bin directory):
hive
or, to start with log output:
./hive -hiveconf hive.root.logger=DEBUG,console
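Once Flume and Hive are both running, the pipeline can be smoke-tested end to end. A minimal sketch: the request body below is only illustrative, because the exact payload format expected by the custom handler jkong.Test.HTTPSourceDPIHandler is not shown here; the field names follow the periodic_report5 table definition.

# post one test event to the Flume HTTP source (payload format is an assumption)
curl -X POST http://gw-sp.novalocal:1084 \
  -H "Content-Type: application/json" \
  -d '{"id":1,"deviceId":"dev-001","report_time":"2018-01-01 00:00:00","information":"{}"}'

Then check from the Hive CLI that the row landed in the table:

-- run in the hive shell
use osgiweb2;
select id, deviceId, report_time from periodic_report5 limit 10;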