Before installing Spark, set up a Hadoop cluster first.
Spark download address:

```shell
https://downloads.apache.org/spark/
```
Download the package:

```shell
wget https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
```
Copy the package to each node:

```shell
scp spark-2.4.6-bin-hadoop2.7.tgz root@hadoop-node1:/root
scp spark-2.4.6-bin-hadoop2.7.tgz root@hadoop-node2:/root
```
Extract and install:

```shell
tar -xf spark-2.4.6-bin-hadoop2.7.tgz -C /usr/local/
cd /usr/local/
ln -sv spark-2.4.6-bin-hadoop2.7/ spark
```
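The version-suffixed directory plus a `spark` symlink makes a later upgrade a matter of re-pointing one link, so paths like `/usr/local/spark` stay valid. A small sketch of the pattern in a throwaway directory (the 2.4.7 directory is hypothetical, for illustration only):

```shell
# Sketch of the version-symlink pattern, run in a temporary directory.
tmp=$(mktemp -d)
mkdir "$tmp/spark-2.4.6-bin-hadoop2.7"
ln -s spark-2.4.6-bin-hadoop2.7 "$tmp/spark"
readlink "$tmp/spark"    # spark-2.4.6-bin-hadoop2.7

# Upgrading later: swap the link target; -n replaces the link itself
# instead of descending into the directory it points to.
mkdir "$tmp/spark-2.4.7-bin-hadoop2.7"
ln -sfn spark-2.4.7-bin-hadoop2.7 "$tmp/spark"
readlink "$tmp/spark"    # spark-2.4.7-bin-hadoop2.7
```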
Configure environment variables:

```shell
cat > /etc/profile.d/spark.sh <<'EOF'
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
EOF
. /etc/profile.d/spark.sh
```
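Note that the heredoc delimiter should be quoted (`<<'EOF'`): with an unquoted delimiter the shell expands `$PATH` and `$SPARK_HOME` at the moment the file is written (when `SPARK_HOME` is not yet set), instead of writing them literally for later login shells to expand. A runnable demonstration in a temporary directory:

```shell
tmp=$(mktemp -d)
SPARK_HOME=/opt/demo-spark   # stand-in value so the expansion is visible

# Unquoted delimiter: variables are expanded at write time.
cat > "$tmp/unquoted.sh" <<EOF
export PATH=$PATH:$SPARK_HOME/bin
EOF

# Quoted delimiter: variables are written literally, expanded at login.
cat > "$tmp/quoted.sh" <<'EOF'
export PATH=$PATH:$SPARK_HOME/bin
EOF

unquoted_hits=$(grep -c 'SPARK_HOME' "$tmp/unquoted.sh" || true)
quoted_hits=$(grep -c 'SPARK_HOME' "$tmp/quoted.sh")
echo "literal \$SPARK_HOME occurrences: unquoted=$unquoted_hits quoted=$quoted_hits"
```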
Configure the worker nodes (here the master node also serves as a worker):

```shell
cat > /usr/local/spark/conf/slaves <<EOF
hadoop-master
hadoop-node1
hadoop-node2
EOF
```
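The `slaves` file must contain exactly one worker hostname per line. A hypothetical helper that builds such a file from a host list, useful when the node set changes often:

```shell
# Hypothetical helper: write a slaves-style file, one hostname per line.
write_slaves() {
  out=$1; shift
  printf '%s\n' "$@" > "$out"
}

slaves_file=$(mktemp)
write_slaves "$slaves_file" hadoop-master hadoop-node1 hadoop-node2
cat "$slaves_file"
```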
Copy the configuration template:

```shell
cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh
```
Modify the environment file (Spark sources this file before starting its daemons):

```shell
cat >> /usr/local/spark/conf/spark-env.sh <<EOF
export SPARK_MASTER_HOST=hadoop-master
EOF
```
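Other commonly tuned `spark-env.sh` settings for standalone mode (these are standard Spark 2.x variables; the values shown are illustrative defaults, not taken from this cluster):

```shell
# Optional spark-env.sh knobs (values are illustrative).
export SPARK_MASTER_PORT=7077        # master RPC port (default 7077)
export SPARK_MASTER_WEBUI_PORT=8080  # master web UI port (default 8080)
export SPARK_WORKER_CORES=2          # cores each worker offers to executors
export SPARK_WORKER_MEMORY=2g        # memory each worker offers to executors
```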
Change the owner and group (`-h` changes the symlink itself rather than following it):

```shell
cd /usr/local/
chown -R hadoop:hadoop spark-2.4.6-bin-hadoop2.7/
chown -h hadoop:hadoop spark
```
Copy the configuration to the other nodes:

```shell
cd /usr/local/spark/conf/
scp ./* root@hadoop-node1:/usr/local/spark/conf/
scp ./* root@hadoop-node2:/usr/local/spark/conf/
```
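With more nodes the per-host `scp` lines get repetitive. A hypothetical loop, shown here as a dry run that only prints the commands it would execute:

```shell
# Hypothetical dry-run helper: print the scp command for each worker node.
push_conf() {
  for node in "$@"; do
    echo scp '/usr/local/spark/conf/*' "root@${node}:/usr/local/spark/conf/"
  done
}

out=$(push_conf hadoop-node1 hadoop-node2)
echo "$out"
```

Dropping the `echo` inside the loop would perform the actual copies.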
Start the master node (as the hadoop user; the start scripts live in `sbin/`, which is not on `PATH`):

```shell
su - hadoop
~]$ $SPARK_HOME/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop-master.out
```
Check the processes running on the master node:

```shell
~]$ jps
5078 Master
5163 Worker
...
```
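That `jps` output can be turned into a pass/fail check. A small sketch (the sample output is hard-coded here for illustration; on the cluster you would pipe `jps` itself):

```shell
# Hypothetical check: confirm Master and Worker appear in jps-style output.
jps_output='5078 Master
5163 Worker
5321 Jps'

status=$(for daemon in Master Worker; do
  if printf '%s\n' "$jps_output" | grep -q " ${daemon}$"; then
    echo "$daemon: running"
  else
    echo "$daemon: NOT running"
  fi
done)
echo "$status"
```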
Start the worker nodes:

```shell
~]$ $SPARK_HOME/sbin/start-slaves.sh
hadoop-node1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node1.out
hadoop-node2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node2.out
```
On node1:

```shell
~]$ jps
2898 Worker
...
```
Start the master and all worker nodes together:

```shell
~]$ $SPARK_HOME/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop-master.out
hadoop-master: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-master.out
hadoop-node2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node2.out
hadoop-node1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node1.out
```
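Behind the scenes, `start-all.sh` starts the master locally, then `start-slaves.sh` connects over ssh to every host listed in `conf/slaves` and runs `start-slave.sh` with the master URL `spark://<master>:7077`. A dry-run sketch of that loop (it prints the ssh commands instead of executing them):

```shell
# Dry-run sketch of what start-slaves.sh does with conf/slaves.
slaves=$(mktemp)
printf 'hadoop-master\nhadoop-node1\nhadoop-node2\n' > "$slaves"

cmds=$(while read -r host; do
  echo ssh "$host" /usr/local/spark/sbin/start-slave.sh spark://hadoop-master:7077
done < "$slaves")
echo "$cmds"
```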
Web UI (the master serves it on port 8080 by default):

```shell
http://192.168.0.54:8080/
```