After deploying multiple data servers, the next concern is the nameserver's single point of failure. This article describes how to implement HA for the TFS nameserver. The official recommendation is heartbeat, but configuring heartbeat is considerably more complex than keepalived, so here we use keepalived to provide HA and failover for the nameserver.
Environment:
NS server 1 ——> 192.168.1.225
NS server 2 ——> 192.168.1.226
NS VIP ——> 192.168.1.229
Data server 1 ——> 192.168.1.227
Data server 2 ——> 192.168.1.228
Nginx server ——> 192.168.1.12
Before starting, stop all TFS name servers and data servers:
```
# /usr/local/tfs/scripts/tfs stop_ns
nameserver exit SUCCESSFULLY
# /usr/local/tfs/scripts/tfs stop_ds 1-3
dataserver 1 exit SUCCESSFULLY
dataserver 2 exit SUCCESSFULLY
dataserver 3 exit SUCCESSFULLY
```
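Before reconfiguring, it is worth confirming that nothing is still running. A minimal sketch (the `tfs_all_stopped` helper is hypothetical, not part of TFS) using the same `killall -0` probe that the keepalived health check below relies on:

```shell
# Hypothetical helper: succeeds only when no nameserver/dataserver
# process is left. "killall -0" sends no signal; it merely tests
# whether a process with that exact name exists.
tfs_all_stopped() {
  for p in nameserver dataserver; do
    if killall -0 "$p" 2>/dev/null; then
      echo "$p is still running"
      return 1
    fi
  done
  echo "all TFS daemons stopped"
}

tfs_all_stopped
```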
1. Configure the TFS name server
On server 225:

```
# grep -v '^#' /usr/local/tfs/conf/ns.conf | grep -v '^$'
[public]
log_size=1073741824
log_num = 16
log_level=info
task_max_queue_size = 10240
port = 8108
work_dir=/usr/local/tfs
dev_name= eth0
thread_count = 4
ip_addr = 192.168.1.229        # 192.168.1.229 is the VIP
[nameserver]
safe_mode_time = 300
ip_addr_list = 192.168.1.225|192.168.1.226
group_mask = 255.255.255.255
block_max_size = 83886080
max_replication = 2
min_replication = 2
use_capacity_ratio = 98
block_max_use_ratio = 98
heart_interval = 2
object_dead_max_time = 3600
cluster_id = 1
replicate_ratio_ = 50
max_write_filecount = 16
heart_thread_count = 2
heart_max_queue_size = 10
repl_max_time = 60
compact_delete_ratio = 15
compact_max_load = 200
object_dead_max_time = 86400
object_clear_max_time = 300
max_wait_write_lease = 15
lease_expired_time = 3
max_lease_timeout = 3000
cleanup_lease_threshold = 102400
build_plan_interval = 10
run_plan_expire_interval = 120
build_plan_ratio = 25
dump_stat_info_interval = 60000000
build_plan_default_wait_time = 2
balance_max_diff_block_num = 5
add_primary_block_count = 3
block_chunk_num = 32
task_percent_sec_size = 200
task_max_queue_size = 10000
oplog_sync_max_slots_num = 1024
oplog_sync_thread_num = 1

# cd /usr/local/
# tar -zcvpf tfs.tgz tfs/
# scp tfs.tgz 192.168.1.226:/usr/local/
```

On server 226:

```
# cd /usr/local/
# tar -zxvpf tfs.tgz
```
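Two settings in this file matter most for HA: `ip_addr` must be the VIP, and `ip_addr_list` must contain both physical nameserver addresses. A self-contained sanity-check sketch (the scratch file path and the awk parsing are illustrative, not part of TFS):

```shell
# Write a minimal ns.conf fragment to a scratch file, then parse the
# two HA-relevant keys out of it with awk.
cat > /tmp/ns_check.conf <<'EOF'
[public]
ip_addr = 192.168.1.229
[nameserver]
ip_addr_list = 192.168.1.225|192.168.1.226
EOF

vip=$(awk -F' *= *' '$1 == "ip_addr" {print $2; exit}' /tmp/ns_check.conf)
list=$(awk -F' *= *' '$1 == "ip_addr_list" {print $2; exit}' /tmp/ns_check.conf)

[ "$vip" = "192.168.1.229" ] && echo "ip_addr points at the VIP"
[ "$list" = "192.168.1.225|192.168.1.226" ] && echo "both nameservers listed"
```

Run the same parse against the real `/usr/local/tfs/conf/ns.conf` on both 225 and 226; the file must be identical on the two nodes.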
2. Configure HA with keepalived
On server 225:

```
# cd /usr/local/src/
# wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz
# tar -zxvpf keepalived-1.2.13.tar.gz
# cd keepalived-1.2.13
# ./configure --prefix=/usr/local/keepalived
# make && make install

# cat /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
    router_id tfs_ns
}

vrrp_script chk_nameserver {
    script "killall -0 nameserver"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 23
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password_123
    }
    virtual_ipaddress {
        192.168.1.229/24 brd 192.168.255.255 dev eth0 label eth0:0
    }
    track_interface {
        eth1
    }
    track_script {
        chk_nameserver
    }
}

# cd /usr/local/
# tar -zcvpf keepalived.tgz keepalived/
# scp keepalived.tgz 192.168.1.226:/usr/local/
```

On server 226 (the configuration is identical except for `priority 100`):

```
# cd /usr/local/
# tar -zxvpf keepalived.tgz
# tar -zxvpf tfs.tgz

# cat /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
    router_id tfs_ns
}

vrrp_script chk_nameserver {
    script "killall -0 nameserver"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 23
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password_123
    }
    virtual_ipaddress {
        192.168.1.229/24 brd 192.168.255.255 dev eth0 label eth0:0
    }
    track_interface {
        eth1
    }
    track_script {
        chk_nameserver
    }
}
```
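Why priority 101 versus 100 with `weight 2`? While the tracked script succeeds, keepalived adds the weight to the node's priority: a healthy 225 advertises 103 and a healthy 226 advertises 102, so 225 holds the VIP. When the nameserver on 225 dies, 225 falls back to 101, which loses to 226's 102, and the VIP moves. A small sketch of that arithmetic (the `effective` helper is purely illustrative):

```shell
# effective BASE OK -> the VRRP priority a node advertises:
# base + 2 when chk_nameserver succeeds (OK=1), base alone when it
# fails (OK=0).
effective() {
  if [ "$2" -eq 1 ]; then echo $(($1 + 2)); else echo "$1"; fi
}

echo "225 healthy:         $(effective 101 1)"   # 103 -> holds the VIP
echo "226 healthy:         $(effective 100 1)"   # 102
echo "225 nameserver down: $(effective 101 0)"   # 101 < 102 -> VIP moves to 226
```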
3. Modify the data server configuration
Make the same change on both data servers:

```
# grep -2 'vip' /usr/local/tfs/conf/ds.conf
[dataserver]
#nameserver ip addr(vip)
ip_addr = 192.168.1.229
```
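With more than a couple of data servers this edit is worth scripting. A hedged sketch (the scratch file stands in for the real ds.conf; back up the real file before editing it in place):

```shell
# Simulate a ds.conf that still points at a physical nameserver
# address, then rewrite its ip_addr line to the VIP with sed.
cat > /tmp/ds_check.conf <<'EOF'
[dataserver]
#nameserver ip addr(vip)
ip_addr = 192.168.1.225
EOF

sed -i 's/^ip_addr = .*/ip_addr = 192.168.1.229/' /tmp/ds_check.conf
grep '^ip_addr' /tmp/ds_check.conf   # prints: ip_addr = 192.168.1.229
```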
4. Start the services
```
# /usr/local/tfs/scripts/tfs start_ds 1-3    # on 227 and 228
# /usr/local/tfs/scripts/tfs start_ns        # on 225 and 226
# /usr/local/keepalived/sbin/keepalived -f /usr/local/keepalived/etc/keepalived/keepalived.conf    # on 225 and 226
```

Server 225 becomes the master node and registers the VIP:

```
# ip a
```
```
# tail -f /var/log/messages
```
Server 226 becomes the backup node:

```
# tail -f /var/log/messages
```
5. Read/write test
```
# /usr/local/tfs/bin/ssm -s 192.168.1.229:8108 -i show machine
```
```
# /usr/local/tfs/bin/tfstool -s 192.168.1.229:8108
TFS> put /var/log/messages
```
The Nginx configuration needs a small change so that it points at the VIP:

```
# grep '8108' /usr/local/nginx/conf/nginx.conf
        server 192.168.1.229:8108;
# service nginx restart
```

Then fetch the uploaded file through Nginx:
http://192.168.1.12:7500/v1/tfs/T1bRxTByJT1RCvBVdK
6. Failover test
Manually stop the name server on 225:

```
# sh /usr/local/tfs/scripts/tfs stop_ns
nameserver exit SUCCESSFULLY
# tail -f /var/log/messages
```
Watch the log on server 226; it is automatically promoted to master:

```
# tail -f /var/log/messages
```
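Besides reading the logs, you can tell which node currently owns the VIP by probing the interface addresses directly. A minimal sketch (the `has_vip` helper is hypothetical; run it on both 225 and 226 during the failover):

```shell
# has_vip: succeeds when this host has 192.168.1.229 bound to any
# interface; "ip -4 addr show" lists all IPv4 addresses on the box.
has_vip() {
  ip -4 addr show 2>/dev/null | grep -q '192\.168\.1\.229'
}

if has_vip; then
  echo "this node holds the VIP"
else
  echo "VIP not bound here"
fi
```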
Upload and read again to verify:

```
# /usr/local/tfs/bin/tfstool -s 192.168.1.229:8108
TFS> put /etc/group
```
http://192.168.1.12:7500/v1/tfs/T1TRxTByZT1RCvBVdK
Restart the name server on 225 and watch the log output on both 225 and 226:

```
# sh /usr/local/tfs/scripts/tfs start_ns
# tail -f /var/log/messages
```
```
# tail -f /var/log/messages
```
This article was originally published by ylw6006 on the Zhanyue blog at 51CTO: http://blog.51cto.com/ylw6006/1559271. Please contact the original author before reprinting.