HDFS Dual-NameNode HA Deployment

1.1 System Environment Initialization

Disable the firewall:

service iptables stop
chkconfig iptables off

Disable SELinux:

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

 

Create the service users:

vim yonghu.txt

hbase hdfs hive impala kudu spark wxl zookeeper

 

#!/bin/sh
# Create each user listed in yonghu.txt
for i in `cat /root/hadoop-cdh/yonghu.txt`;
do
    echo $i
    useradd $i
done

 

Passwordless SSH login (for a non-root user, replace root below with that user's name):

config-ssh-root.sh

#!/bin/sh

# Generate an RSA key pair non-interactively
expect -c "
spawn ssh-keygen -t rsa
expect {
\".ssh/id_rsa): \" {send \"\r\";exp_continue }
\"Enter passphrase (empty for no passphrase): \" {send \"\r\";exp_continue }
\"Enter same passphrase again: \" {send \"\r\" }
}
expect eof
"

# Copy the public key to each master/monitor node.
# NOTE: YOUR_ROOT_PASSWORD is a placeholder; substitute the real root password.
for host in zk-master-la01 zk-master-la02 zk-la03 nagios_server
do
expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $host
    expect {
        yes/no { send \"yes\r\"; exp_continue }
        *assword* { send \"YOUR_ROOT_PASSWORD\r\" }
    }
    expect {
        *assword* { send \"YOUR_ROOT_PASSWORD\r\" }
    }
expect eof
"
done

# Same key distribution for the worker nodes listed in /tmp/hadoop-slaves
for slave in $(</tmp/hadoop-slaves)
do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $slave
expect {
yes/no { send \"yes\r\"; exp_continue }
*assword* { send \"YOUR_ROOT_PASSWORD\r\" }
}
expect {
*assword* { send \"YOUR_ROOT_PASSWORD\r\" }
}
expect eof
"
done
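Before moving on, it is worth confirming that key-based login actually works to every host. A minimal sketch that prints a non-interactive check command per host (the host list matches the nodes used in this document; pipe the output to sh to execute):

```shell
# BatchMode forbids password prompts, so the printed command fails fast
# if key-based login is not actually working on a host.
ssh_check_cmd() {
  echo "ssh -o BatchMode=yes -o ConnectTimeout=5 root@$1 hostname"
}

for h in zk-master-la01 zk-master-la02 zk-la03 nagios_server; do
  ssh_check_cmd "$h"      # print the command; pipe the loop to sh to run it
done
```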

 

1.2 ZooKeeper Installation and Configuration

Install JDK 1.8 and put it on the PATH:

tar -zxvf jdk-8u121-linux-x64.gz -C /usr/local/

vim /home/hadoop/.bash_profile

export JAVA_HOME=/usr/local/jdk1.8.0_121
export PATH=$PATH:$JAVA_HOME/bin

source /home/hadoop/.bash_profile

On node zk-master-la01:

tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local/
cd /usr/local/zookeeper-3.4.6/conf

 

Edit the ZooKeeper configuration file:

vim zoo.cfg

 

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=zk-master-la01:2888:3888
server.2=zk-master-la02:2888:3888
server.3=zk-la03:2888:3888

 

# Create the data directory and write this node's myid
mkdir -p /opt/zookeeper/data/
echo "1" > /opt/zookeeper/data/myid

# Distribute the ZooKeeper directory to every node with scp
vim host.txt

zk-master-la01
zk-master-la02
zk-la03
slave-01
slave-02

for i in `cat /root/host.txt`;do echo $i;scp -r /usr/local/zookeeper-3.4.6 root@$i:/usr/local/ ;done

 

On zk-master-la02:

# note: myid must be 2 on this node
mkdir -p /opt/zookeeper/data/
echo "2" > /opt/zookeeper/data/myid

 

On zk-la03:

# note: myid must be 3 on this node
mkdir -p /opt/zookeeper/data/
echo "3" > /opt/zookeeper/data/myid
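The three myid writes above must stay consistent with the server.N lines in zoo.cfg. A small sketch that derives the id from the hostname, so the mapping lives in one place (hostnames are the ones used in this document):

```shell
# Map each ensemble hostname to the myid it must hold, matching the
# server.1/2/3 entries in zoo.cfg.
myid_for_host() {
  case "$1" in
    zk-master-la01) echo 1 ;;
    zk-master-la02) echo 2 ;;
    zk-la03)        echo 3 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# On each ensemble node, run something like:
#   mkdir -p /opt/zookeeper/data
#   myid_for_host "$(hostname)" > /opt/zookeeper/data/myid
myid_for_host zk-master-la02
```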

 

# On zk-master-la01
/usr/local/zookeeper-3.4.6/bin/zkServer.sh start
/usr/local/zookeeper-3.4.6/bin/zkServer.sh stop
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status

 

# On zk-master-la02
/usr/local/zookeeper-3.4.6/bin/zkServer.sh start

 

# On zk-la03
/usr/local/zookeeper-3.4.6/bin/zkServer.sh start
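Once all three servers are started, the ensemble should elect one leader, with the other two members as followers. A hedged sketch that prints a four-letter-word status command for each member; run the printed commands from any machine that can reach port 2181:

```shell
# Build the "srvr" status probe for one ensemble member; its output
# includes a "Mode: leader" or "Mode: follower" line.
zk_status_cmd() {
  echo "echo srvr | nc $1 2181"
}

for h in zk-master-la01 zk-master-la02 zk-la03; do
  zk_status_cmd "$h"
done
```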

 

1.3 Deploying Hadoop Dual-NameNode HA

Extract Hadoop to /usr/local:

tar -zxvf hadoop-2.6.0-cdh5.7.5.tar.gz -C /usr/local/

cd /usr/local/hadoop-2.6.0-cdh5.7.5/etc/hadoop

Edit the configuration files as follows:

vim core-site.xml

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/temp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>zk-master-la01:2181,zk-master-la02:2181,zk-la03:2181</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.groups</name>
    <value>*</value>
  </property>
</configuration>

 

vim hdfs-site.xml

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- The HDFS nameservice is "ns"; it must match fs.defaultFS in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>namenode1,namenode2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.namenode1</name>
    <value>zk-master-la01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.namenode1</name>
    <value>zk-master-la01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns.namenode2</name>
    <value>zk-master-la02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns.namenode2</name>
    <value>zk-master-la02:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>zk-master-la01:50090</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://zk-la03:8485;zk-master-la02:8485;zk-master-la01:8485/ns</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

 

vim mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

 

vim yarn-site.xml

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>zk-master-la01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>zk-master-la02</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk-master-la01:2181,zk-master-la02:2181,zk-la03:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

 

cat yarn-env.sh

# Change the log directory here only if you need a custom location; otherwise keep the default.
export JAVA_HOME=/usr/local/jdk1.8.0_121
export YARN_LOG_DIR=/var-logs/hadoop/logs

 

cat hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_121
export HADOOP_SSH_OPTS="-p 22"
export HADOOP_LOG_DIR=/var/hadoop/logs
export HADOOP_SECURE_DN_LOG_DIR=$HADOOP_LOG_DIR


 

Distribute the Hadoop directory to every node with scp:

vim host.txt

zk-master-la01

zk-master-la02

zk-la03

slave-01

slave-02

 

for i in `cat /root/host.txt`;do echo $i;scp -r /usr/local/hadoop-2.6.0-cdh5.7.5 root@$i:/usr/local/ ;done
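After the distribution loop, a quick integrity spot-check avoids chasing configuration drift later. A hedged sketch that prints, per host, a command comparing the md5 fingerprint of one config file (hostnames are those in host.txt above; run the printed commands, or pipe to sh, and compare against the local md5sum):

```shell
# Build a remote md5 probe for one host; identical fingerprints across
# hosts mean the file arrived intact everywhere.
check_cmd() {
  echo "ssh root@$1 md5sum /usr/local/hadoop-2.6.0-cdh5.7.5/etc/hadoop/hdfs-site.xml"
}

for i in zk-master-la01 zk-master-la02 zk-la03 slave-01 slave-02; do
  check_cmd "$i"
done
```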

 

HDFS start-up commands

Note that the first-time initialization sequence differs from routine start-up. The first start is the fragile part and fails if the steps are run out of order; after that, day-to-day operation is simple.

First-time start-up sequence

1. First start ZooKeeper on each ensemble node:
bin/zkServer.sh start
2. On one NameNode, create the HA znode in ZooKeeper:
hdfs zkfc -formatZK
3. On every JournalNode, start the journalnode daemon:
sbin/hadoop-daemon.sh start journalnode
4. On the primary NameNode, format the namenode and journal directories:
hdfs namenode -format ns
5. On the primary NameNode, start the namenode process:
sbin/hadoop-daemon.sh start namenode
6. On the standby NameNode, run the first command below; it formats the standby's directories and copies the metadata over from the primary without reformatting the JournalNode directories. Then start the standby namenode with the second command:
hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
7. On both NameNodes, start the ZKFC:
sbin/hadoop-daemon.sh start zkfc
8. On every DataNode, start the datanode:
sbin/hadoop-daemon.sh start datanode
9. On both NameNodes, start the ResourceManager:
yarn-daemon.sh start resourcemanager
10. On every DataNode, start the NodeManager:
yarn-daemon.sh start nodemanager
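The ten steps above condense into a one-screen cheat sheet. A sketch that just prints the ordered command list together with where each command runs:

```shell
# Print the first-start order; this is a memo, not an executor --
# each command must be run by hand on the node(s) noted in the comment.
first_start_order() {
  cat <<'EOF'
zkServer.sh start                     # 1. every ZooKeeper node
hdfs zkfc -formatZK                   # 2. one NameNode only
hadoop-daemon.sh start journalnode    # 3. every JournalNode
hdfs namenode -format ns              # 4. primary NameNode only
hadoop-daemon.sh start namenode       # 5. primary NameNode
hdfs namenode -bootstrapStandby       # 6. standby NameNode only
hadoop-daemon.sh start namenode       # 6. standby NameNode
hadoop-daemon.sh start zkfc           # 7. both NameNodes
hadoop-daemon.sh start datanode       # 8. every DataNode
yarn-daemon.sh start resourcemanager  # 9. both NameNodes
yarn-daemon.sh start nodemanager      # 10. every DataNode
EOF
}

first_start_order
```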

Routine start/stop commands:

sbin/start-dfs.sh
sbin/stop-dfs.sh
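After start-up it helps to check with jps that each node is running the daemons expected for its role. A sketch for this document's topology (the role names below are informal labels for this layout, not Hadoop terms):

```shell
# Expected jps process names per node role in this deployment:
# "master" = zk-master-la01/02, "journal" = zk-la03, "worker" = slave-0x.
expected_daemons() {
  case "$1" in
    master)  echo "NameNode DFSZKFailoverController JournalNode QuorumPeerMain ResourceManager" ;;
    journal) echo "JournalNode QuorumPeerMain" ;;
    worker)  echo "DataNode NodeManager" ;;
    *) echo "unknown role: $1" >&2; return 1 ;;
  esac
}

# On a node, e.g.:
#   for d in $(expected_daemons worker); do jps | grep -qw "$d" || echo "missing: $d"; done
expected_daemons worker
```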

 

Verification

Open each NameNode's web UI in a browser: one should report active and the other standby.

Then, on the node hosting the active NameNode, run jps and kill the NameNode process.

The NameNode that was standby takes over and becomes active.
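The same active/standby check can be done from the command line with hdfs haadmin, using the NameNode IDs (namenode1/namenode2) defined in hdfs-site.xml. A sketch that prints the commands; run them on a NameNode host, where each prints active or standby:

```shell
# Build the HA state query for one NameNode ID.
state_cmd() {
  echo "hdfs haadmin -getServiceState $1"
}

for nn in namenode1 namenode2; do
  state_cmd "$nn"
done
```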
