Download and Extract

[xiaobai@xiaobai ~]$ tar -zvxf zookeeper-3.4.9.tar.gz

Local IP Address Mapping

[xiaobai@xiaobai /]$ su - root
Password:
Last login: Sat Aug 4 18:59:38 EDT 2018 on :0

[root@xiaobai ~]# cd /etc

[root@xiaobai etc]# vim hosts

Map the local IP to a hostname; this hostname can then be used instead of the IP in zoo.cfg:

Add the following line:

172.20.10.60 xiaobai-rhel7

Save, exit, and switch back to the regular user.

Detailed Configuration

[xiaobai@xiaobai conf]$ cp zoo_sample.cfg zoo.cfg

[xiaobai@xiaobai conf]$ vim zoo.cfg

Configuration items and their meanings:

tickTime=2000    heartbeat interval in milliseconds

initLimit=10    maximum number of ticks a follower may take to connect and sync to the leader when the cluster initializes

syncLimit=5    maximum number of ticks allowed between a request and its answer between a follower and the leader

dataDir=/home/xiaobai/zookeeper-3.4.9/data    data directory of this node

dataLogDir=/home/xiaobai/zookeeper-3.4.9/logs    transaction log directory of this node

clientPort=2181    port that clients connect to; in a pseudo-cluster this port must be different on every node, otherwise the ports conflict and only one node can start. Clients are then configured with each node's ip:port.

 

Cluster server entries:

server.1=xiaobai-rhel7:2888:3888
server.2=xiaobai-rhel7:4888:5888
server.3=xiaobai-rhel7:6888:7888

A cluster needs at least 3 nodes, and in a pseudo-cluster all the ports must be distinct to avoid conflicts. Each server.N entry must match the corresponding node's IP and ports, and the number N after server. must equal the content of that node's myid file.
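Putting the items above together, node 1's zoo.cfg in this pseudo-cluster would look roughly like this (a sketch following this article's directory layout; the server.* lines are identical on all three nodes, while dataDir, dataLogDir, and clientPort must differ per node):

```properties
# zoo.cfg for node 1 (nodes 2 and 3 use clientPort 2182/2183 and their own directories)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/xiaobai/zookeeper-3.4.9/data
dataLogDir=/home/xiaobai/zookeeper-3.4.9/logs
clientPort=2181
server.1=xiaobai-rhel7:2888:3888
server.2=xiaobai-rhel7:4888:5888
server.3=xiaobai-rhel7:6888:7888
```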

ZooKeeper Single-Machine Pseudo-Cluster Setup and Startup

 


In each node's configured data directory, create a file named myid whose content is the number corresponding to that node.
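For example, the three myid files can be created like this (a sketch: BASE is set to /tmp here for illustration; in this article the node directories live under /home/xiaobai, with names matching the ps output shown later):

```shell
# Create each node's data directory and write its myid file.
# The number must equal N in that node's server.N line in zoo.cfg.
BASE=/tmp/zk-demo          # stands in for /home/xiaobai
i=1
for node in zookeeper-3.4.9 zookeeper-3.4.9_2 zookeeper-3.4.9_3; do
    mkdir -p "$BASE/$node/data"
    echo "$i" > "$BASE/$node/data/myid"
    i=$((i + 1))
done
cat "$BASE/zookeeper-3.4.9_3/data/myid"   # prints 3
```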

Environment Variables

Run in the user's home directory:

[xiaobai@xiaobai ~]$ vim .bash_profile

Contents to add:

#zookeeper env
export ZOOKEEPER_HOME=/home/xiaobai/zookeeper-3.4.9

PATH=$ZOOKEEPER_HOME/bin:$PATH:$HOME/.local/bin:$HOME/bin

export PATH

Note: in the first line, write the variable being defined as ZOOKEEPER_HOME, not $ZOOKEEPER_HOME.

 

Apply the changes:
[xiaobai@xiaobai ~]$ source .bash_profile

 

This configuration mainly makes sense in a real cluster, where it lets you run ZooKeeper commands from any directory on each machine. Here it can actually mislead: in a pseudo-cluster you should run ZooKeeper commands in each node's directory with a fully qualified path or ./xxx (for example, when checking status after startup); otherwise the bare command always resolves to the one node whose path was configured here. It is shown only to demonstrate the principle.
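The effect can be illustrated with two dummy scripts (a sketch under an assumed /tmp/pathdemo directory; the zkServer.sh files here are stand-ins, not the real ZooKeeper script):

```shell
# Two fake node directories, each with its own zkServer.sh
mkdir -p /tmp/pathdemo/node1/bin /tmp/pathdemo/node2/bin
printf '#!/bin/sh\necho started node1\n' > /tmp/pathdemo/node1/bin/zkServer.sh
printf '#!/bin/sh\necho started node2\n' > /tmp/pathdemo/node2/bin/zkServer.sh
chmod +x /tmp/pathdemo/node1/bin/zkServer.sh /tmp/pathdemo/node2/bin/zkServer.sh

# Simulate the .bash_profile setting: node1's bin directory is on PATH
PATH=/tmp/pathdemo/node1/bin:$PATH

cd /tmp/pathdemo/node2/bin
zkServer.sh     # bare name resolves through PATH -> "started node1"
./zkServer.sh   # relative path resolves to the current node -> "started node2"
```

This is exactly why the status checks below use ./zkServer.sh inside each node's own bin directory.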

 

RHEL7 Firewall Settings

The firewall must open every port used by the nodes of the ZooKeeper cluster, 9 in total here. RHEL7 changed its default firewall tooling considerably; this article still uses the iptables approach from RHEL6 and earlier. The prompts and the systemctl redirection are shown below:

[xiaobai@xiaobai ~]$ su - root
Password:
Last login: Sat Aug 4 19:39:17 EDT 2018 on pts/0

 

[root@xiaobai ~]# chkconfig iptables on
Note: Forwarding request to 'systemctl enable iptables.service'.
ln -s '/usr/lib/systemd/system/iptables.service' '/etc/systemd/system/basic.target.wants/iptables.service'
[root@xiaobai ~]# systemctl enable iptables.service
[root@xiaobai ~]# service iptables start
Redirecting to /bin/systemctl start iptables.service
[root@xiaobai ~]# systemctl start iptables.service
[root@xiaobai ~]# vim /etc/sysconfig/iptables
[root@xiaobai ~]# service iptables restart
Redirecting to /bin/systemctl restart iptables.service
[root@xiaobai ~]# systemctl restart iptables.service
[root@xiaobai ~]# service iptables status
Redirecting to /bin/systemctl status iptables.service
iptables.service - IPv4 firewall with iptables
Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
Active: active (exited) since Sat 2018-08-04 21:21:57 EDT; 17s ago
Process: 9208 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=0/SUCCESS)
Process: 9259 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)
Main PID: 9259 (code=exited, status=0/SUCCESS)

Aug 04 21:21:57 xiaobai.rhel7 systemd[1]: Starting IPv4 firewall with iptables...
Aug 04 21:21:57 xiaobai.rhel7 iptables.init[9259]: iptables: Applying firewall rules: [ OK ]
Aug 04 21:21:57 xiaobai.rhel7 systemd[1]: Started IPv4 firewall with iptables.
[root@xiaobai ~]# systemctl status iptables.service
iptables.service - IPv4 firewall with iptables
Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
Active: active (exited) since Sat 2018-08-04 21:21:57 EDT; 1min 32s ago
Process: 9208 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=0/SUCCESS)
Process: 9259 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)
Main PID: 9259 (code=exited, status=0/SUCCESS)

Aug 04 21:21:57 xiaobai.rhel7 systemd[1]: Starting IPv4 firewall with iptables...
Aug 04 21:21:57 xiaobai.rhel7 iptables.init[9259]: iptables: Applying firewall rules: [ OK ]
Aug 04 21:21:57 xiaobai.rhel7 systemd[1]: Started IPv4 firewall with iptables.

Final lines added to the iptables file:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 2181 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2182 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2183 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2888 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 3888 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4888 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5888 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6888 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 7888 -j ACCEPT

Starting the Cluster

Copy the whole ZooKeeper directory of the configured node twice to create the other two nodes. In a real cluster, every node's configuration is identical. In this pseudo-cluster the server.* entries need no changes, since all the IPs and ports are already set; what must be edited per node are dataDir, dataLogDir, and clientPort, so that each copy points at its own directories and listens on its own port.
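The copy-and-edit step can be sketched as follows (illustrative only: it builds a minimal mock zoo.cfg under an assumed /tmp directory and uses sed edits matching this article's ports and directory names; adapt the paths for a real layout):

```shell
BASE=/tmp/zk-copy-demo     # stands in for /home/xiaobai
rm -rf "$BASE"

# Assume node 1 is already fully configured under $BASE/zookeeper-3.4.9
mkdir -p "$BASE/zookeeper-3.4.9/conf"
printf 'dataDir=%s/zookeeper-3.4.9/data\ndataLogDir=%s/zookeeper-3.4.9/logs\nclientPort=2181\n' \
    "$BASE" "$BASE" > "$BASE/zookeeper-3.4.9/conf/zoo.cfg"

for n in 2 3; do
    # Copy the whole node directory, then point dataDir/dataLogDir at the copy
    # and give it its own client port; server.* lines would stay unchanged.
    cp -r "$BASE/zookeeper-3.4.9" "$BASE/zookeeper-3.4.9_$n"
    sed -i "s|zookeeper-3.4.9/|zookeeper-3.4.9_$n/|; s|clientPort=2181|clientPort=218$n|" \
        "$BASE/zookeeper-3.4.9_$n/conf/zoo.cfg"
done
grep clientPort "$BASE/zookeeper-3.4.9_3/conf/zoo.cfg"   # clientPort=2183
```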

Start each node:

As the regular user, go into the node's bin directory and start it with ./xxx rather than the PATH-based command:

[xiaobai@xiaobai bin]$ cd ../../zookeeper-3.4.9/bin
[xiaobai@xiaobai bin]$ ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/xiaobai/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Checking the Java processes:

[xiaobai@xiaobai bin]$ ps -ef|grep java
xiaobai 10643 1 1 21:50 pts/0 00:00:00 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /home/xiaobai/zookeeper-3.4.9_2/bin/../build/classes:/home/xiaobai/zookeeper-3.4.9_2/bin/../build/lib/*.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../lib/slf4j-api-1.6.1.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../lib/netty-3.10.5.Final.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../lib/log4j-1.2.16.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../lib/jline-0.9.94.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../zookeeper-3.4.9.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../src/java/lib/*.jar:/home/xiaobai/zookeeper-3.4.9_2/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /home/xiaobai/zookeeper-3.4.9_2/bin/../conf/zoo.cfg
xiaobai 10672 1 1 21:51 pts/0 00:00:00 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /home/xiaobai/zookeeper-3.4.9/bin/../build/classes:/home/xiaobai/zookeeper-3.4.9/bin/../build/lib/*.jar:/home/xiaobai/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/xiaobai/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/home/xiaobai/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/home/xiaobai/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/home/xiaobai/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/home/xiaobai/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/home/xiaobai/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/home/xiaobai/zookeeper-3.4.9/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /home/xiaobai/zookeeper-3.4.9/bin/../conf/zoo.cfg
xiaobai 10715 1 5 21:51 pts/0 00:00:00 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /home/xiaobai/zookeeper-3.4.9_3/bin/../build/classes:/home/xiaobai/zookeeper-3.4.9_3/bin/../build/lib/*.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../lib/slf4j-api-1.6.1.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../lib/netty-3.10.5.Final.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../lib/log4j-1.2.16.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../lib/jline-0.9.94.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../zookeeper-3.4.9.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../src/java/lib/*.jar:/home/xiaobai/zookeeper-3.4.9_3/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /home/xiaobai/zookeeper-3.4.9_3/bin/../conf/zoo.cfg
xiaobai 10755 9691 0 21:51 pts/0 00:00:00 grep --color=auto java

Checking with jps:

[xiaobai@xiaobai zookeeper-3.4.9_3]$ jps
10715 QuorumPeerMain
10672 QuorumPeerMain
10765 Jps
10643 QuorumPeerMain

Each QuorumPeerMain is the main ZooKeeper process of one node.

Status Monitoring

This uses the ZooKeeper commands. Because ZOOKEEPER_HOME in the environment configuration above points at just one node, the bare command cannot be used here: it would always report on that one node. Instead, go into each node's directory and use a relative path such as ./xxx, or a fully qualified path:

One of the nodes:

[xiaobai@xiaobai bin]$ pwd
/home/xiaobai/zookeeper-3.4.9/bin
[xiaobai@xiaobai bin]$ cd ../../zookeeper-3.4.9_2/bin
[xiaobai@xiaobai bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/xiaobai/zookeeper-3.4.9_2/bin/../conf/zoo.cfg
Mode: leader

Client Access (ZkClient, single-instance version)

Dependencies:

    <!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.9</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.github.sgroschupf/zkclient -->
    <dependency>
        <groupId>com.github.sgroschupf</groupId>
        <artifactId>zkclient</artifactId>
        <version>0.1</version>
    </dependency>

 

Code:

package com.xiaobai.zkclient;

import java.util.List;

import org.I0Itec.zkclient.IZkChildListener;
import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;

/**
 * ZkClient demo: creating nodes, watching child and data changes,
 * and reading/writing node data.
 */
public class ZClient 
{
    
    private void client() {
        ZkClient client = new ZkClient("127.0.0.1:2181", 5000);
        String path = "/zk-book/cl";
        client.createPersistent(path, true);
    }
    
    private void listener() throws InterruptedException {
        ZkClient client = new ZkClient("127.0.0.1:2181", 5000);
        String path = "/zk-book11";
        client.subscribeChildChanges(path, new IZkChildListener() {
            
            public void handleChildChange(String parentPath, List<String> currentChildren) throws Exception {
                System.out.println(parentPath + "'s children changed, current children: " + currentChildren);
            }
        });
        
        // Does the node exist?
        System.out.println("Node Exists:" + client.exists(path));
        
        // The listener fires even though the node did not exist when it was registered:
        client.createPersistent(path);
        Thread.sleep(1000);
        
        System.out.println("Node Exists:" + client.exists(path));
        
        // Get the node's children
        System.out.println(client.getChildren(path));
        
        client.createPersistent(path + "/cl");
        Thread.sleep(1000);
        
        client.delete(path + "/cl");
        Thread.sleep(1000);
        
        client.delete(path);
        Thread.sleep(Integer.MAX_VALUE);
    }
    
    private void read() throws InterruptedException {
        ZkClient client = new ZkClient("127.0.0.1:2181", 5000);
        String path = "/zk-book222";
        // Does the node exist?
        System.out.println("Node Exists:" + client.exists(path));
        client.createEphemeral(path, "123"); // node data is "123"
        System.out.println("Node Exists:" + client.exists(path));
        client.subscribeDataChanges(path, new IZkDataListener() {
            
            public void handleDataDeleted(String dataPath) throws Exception {
                System.out.println("Node " + dataPath + " deleted");
            }
            
            public void handleDataChange(String dataPath, Object data) throws Exception {
                System.out.println("Node " + dataPath + " changed, new data: " + data);
            }
        });
        
        System.out.println(client.readData(path)); // read the node's data
        
        client.writeData(path, "456"); // write data
        Thread.sleep(1000);
        
        client.delete(path);
        Thread.sleep(Integer.MAX_VALUE);
    }
    
    public static void main( String[] args ) throws InterruptedException
    {
        ZClient cl = new ZClient();
//        cl.client();
        cl.listener();
//        cl.read();
        System.out.println( "Hello World!" );
    }
}
