1. Ask the SA (system administrator) team to prepare the server environment. The machines allocated:

# TiDB Cluster Part
[tidb_servers]
192.168.78.73 deploy_dir=/data0/deploy
192.168.77.171 deploy_dir=/data0/deploy

[tikv_servers]
192.168.77.65 deploy_dir=/data0/deploy tikv_port=20171 labels="host=tidbkv1st.hy.com"
192.168.77.41 deploy_dir=/data0/deploy tikv_port=20171 labels="host=tidbkv2st.hy.com"
192.168.76.233 deploy_dir=/data0/deploy tikv_port=20171 labels="host=tidbkv3st.hy.com"

[pd_servers]
192.168.78.55
192.168.78.119
192.168.77.97

2. Begin deployment: install Ansible

How to check whether the NTP service is running properly:

$ sudo systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Output like the following also indicates the NTP service is not running properly:

$ ntpstat
Unable to talk to NTP daemon. Is it running?

Use the following commands to make the NTP service start synchronising as soon as possible; pool.ntp.org can be replaced with your own NTP server:

$ sudo systemctl stop ntpd.service
$ sudo ntpdate pool.ntp.org
$ sudo systemctl start ntpd.service
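If later steps should be gated on time synchronisation, ntpstat's exit status can be used directly (it exits 0 only when the clock is synchronised). A minimal sketch; the function name `check_ntp` is an assumption, not part of any tool:

```shell
# Minimal sketch: branch on ntpstat's exit status instead of parsing its output.
check_ntp() {
  if ntpstat >/dev/null 2>&1; then
    echo "NTP synchronised"
  else
    echo "NTP not synchronised"
    return 1
  fi
}
```

Run it on each target host (for example via ansible's script module) before bootstrapping the cluster.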


Install Ansible, its dependencies, and the other components directly from yum:

$ yum install epel-release -y 
$ yum install ansible curl -y 
$ ansible --version 
ansible 2.4.2.0

TiDB Installation Notes


Install tidb-ansible on the control machine

Download the software:
git clone -b release-1.0 https://github.com/pingcap/tidb-ansible.git

Grant ownership:

chown -R tidb.wheel /data0/software/tidb-ansible


3. Allocate machine resources and edit the inventory.ini file

The inventory.ini file path is tidb-ansible/inventory.ini.

# TiDB Cluster Part
[tidb_servers]
192.168.78.73 deploy_dir=/data0/deploy
192.168.77.171 deploy_dir=/data0/deploy

[tikv_servers]
192.168.77.65 deploy_dir=/data0/deploy tikv_port=20171 labels="host=tidbkv1st.hy.com"
192.168.77.41 deploy_dir=/data0/deploy tikv_port=20171 labels="host=tidbkv2st.hy.com"
192.168.76.233 deploy_dir=/data0/deploy tikv_port=20171 labels="host=tidbkv3st.hy.com"

[pd_servers]
192.168.78.55
192.168.78.119
192.168.77.97

[spark_master]

[spark_slaves]

# Monitoring Part
[monitoring_servers]
192.168.78.55

[grafana_servers]
192.168.78.55

[monitored_servers:children]
tidb_servers
tikv_servers
pd_servers

# Binlog Part
[pump_servers:children]
tidb_servers


[drainer_servers]

[pd_servers:vars]
# location_labels = ["zone","rack","host"]

## Global variables
[all:vars]
deploy_dir=/data0/deploy
## Connection
# ssh via root:
ansible_user = root
ansible_become = true
ansible_become_user = tidb

# ssh via normal user
#ansible_user = tidb

cluster_name = pay-cluster

tidb_version = v1.0.7

# deployment methods, [binary, docker]
deployment_method = binary

# process supervision, [systemd, supervise]
process_supervision = systemd

# timezone of deployment region
timezone = Asia/Shanghai
set_timezone = True

# misc
enable_firewalld = False
# check NTP service
enable_ntpd = True
machine_benchmark = True
set_hostname = False

# binlog trigger
enable_binlog = False
# zookeeper address of kafka cluster, example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
zookeeper_addrs = ""

# KV mode
deploy_without_tidb = False


4. Tune parameters

For multi-instance deployments, adjust the end-point-concurrency and block-cache-size parameters in conf/tikv.yml:

  • end-point-concurrency: the total across all instances just needs to stay below the CPU vcore count. On the TiKV servers, run lscpu and check the CPU(s) value; here it is 32.
  • rocksdb defaultcf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 30%. Here: 120 GB * 80% / 3 * 30% ≈ 9.6 GB, i.e. roughly 10 GB. Note that each column family (defaultcf, writecf, ...) is sized separately.
  • rocksdb writecf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 45%
  • rocksdb lockcf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 2.5% (minimum 128 MB)
  • raftdb defaultcf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 2.5% (minimum 128 MB)

If multiple TiKV instances are deployed on the same physical disk, also adjust the capacity parameter in conf/tikv.yml:

  • capacity = (DISK - log space) / number of TiKV instances, in GB
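The formulas above can be double-checked with a quick calculation. A sketch using this cluster's values (120 GB of RAM, 3 TiKV instances per host); adjust the two variables for your own machines:

```shell
# Compute the block-cache-size values from the formulas above.
MEM_GB=120      # RAM on the TiKV host
INSTANCES=3     # TiKV instances sharing that RAM
awk -v mem="$MEM_GB" -v n="$INSTANCES" 'BEGIN {
  base = mem * 0.8 / n                                # memory budget per instance
  printf "rocksdb defaultcf: %.1f GB\n", base * 0.30
  printf "rocksdb writecf:   %.1f GB\n", base * 0.45
  printf "rocksdb lockcf:    %.1f GB\n", base * 0.025  # apply the 128 MB floor
  printf "raftdb  defaultcf: %.1f GB\n", base * 0.025  # apply the 128 MB floor
}'
```

For these inputs the defaultcf line comes out at 9.6 GB, matching the "roughly 10 GB" figure above.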
The variables in inventory.ini:

  • cluster_name: cluster name, adjustable
  • tidb_version: TiDB version; preset on each TiDB-Ansible branch
  • deployment_method: deployment method, binary by default, docker optional
  • process_supervision: process supervision, systemd by default, supervise optional
  • timezone: timezone to set on the target machines, Asia/Shanghai by default; adjustable, used together with set_timezone
  • set_timezone: True by default, i.e. set the target machines' timezone; change to False to disable
  • enable_firewalld: enable the firewall; disabled by default
  • enable_ntpd: check the NTP service on the target machines, True by default; do not disable
  • set_hostname: set each target machine's hostname from its IP, False by default
  • enable_binlog: whether to deploy pump and enable binlog, False by default; depends on a Kafka cluster, see the zookeeper_addrs variable
  • zookeeper_addrs: zookeeper addresses of the binlog Kafka cluster
  • enable_slow_query_log: write the TiDB slow query log to a separate file ({{ deploy_dir }}/log/tidb_slow_query.log); False by default, i.e. slow queries go to the tidb log
  • deploy_without_tidb: KV-only mode: deploy only PD, TiKV, and monitoring, without the TiDB service; set the tidb_servers host group in inventory.ini to empty



5. Deployment tasks

ansible-playbook runs playbooks with a default concurrency (forks) of 5; when there are many target machines, add the -f flag to raise it, e.g. ansible-playbook deploy.yml -f 10.


Prepare the file all.hosts:

[tidbs]
tidb[1:2]st.hy.com
tidbkv[1:3]st.hy.com
tidbpd[1:3]st.hy.com

Create the user on the remote machines: ansible -i /tmp/all.hosts tidbs -a "useradd tidb"

Generate the key pair: ssh-keygen -t rsa

Copy the public key to the remote servers: ansible -i /tmp/all.hosts tidbs -m copy -a "src=/home/tidb/.ssh/id_rsa.pub.2 dest=/tmp/"


Copy the key-initialisation script to the remote servers: ansible -i /tmp/all.hosts tidbs -m copy -a "src=/tmp/createkey.sh dest=/tmp/"

Copy root's public key to the remote servers as well: ansible -i /tmp/all.hosts tidbs -m copy -a "src=/root/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub.3"


Create the directory: ansible -i /tmp/all.hosts tidbs -m shell -a 'mkdir -p ~tidb/.ssh' -b

Grant ownership: ansible -i /tmp/all.hosts tidbs -m shell -a 'chown tidb. ~tidb/.ssh' -b
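The createkey.sh script copied above is not shown in these notes. A plausible reconstruction of its logic (the function name and paths are assumptions): it appends the copied public key to the tidb user's authorized_keys with the permissions sshd requires.

```shell
# Hypothetical reconstruction of createkey.sh -- the original is not shown.
# install_key <home_dir> <pubkey_file> appends a public key to authorized_keys.
install_key() {
  home_dir=$1
  pubkey=$2
  mkdir -p "$home_dir/.ssh"
  cat "$pubkey" >> "$home_dir/.ssh/authorized_keys"
  chmod 700 "$home_dir/.ssh"            # sshd rejects group/world-writable dirs
  chmod 600 "$home_dir/.ssh/authorized_keys"
}

# On a target host, as root:
#   install_key ~tidb /tmp/id_rsa.pub.3 && chown -R tidb: ~tidb/.ssh
```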


Use the local_prepare.yml playbook to download the TiDB binaries to the control machine (requires internet access): ansible-playbook local_prepare.yml

Initialise system and kernel parameters: ansible-playbook bootstrap.yml

Deploy the TiDB cluster software: ansible-playbook deploy.yml

Start the TiDB cluster: ansible-playbook start.yml


Check which machine hosts TiDB's port 4000: ansible -i /tmp/all.hosts tidbs -m shell -a 'ps -eaf|grep 4000' -b
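Grepping ps output works, but checking for a listening socket is more direct. A sketch (the `port_listening` helper is an assumption, not a standard tool) that filters `ss -lnt` style output for a given port:

```shell
# Succeed (exit 0) only if the given port appears as a listening TCP socket
# in `ss -lnt` output read from stdin; column 4 is the local address:port.
port_listening() {
  awk -v p=":$1\$" '$4 ~ p { found = 1 } END { exit !found }'
}

# Usage on a TiDB host:
#   ss -lnt | port_listening 4000 && echo "tidb-server is listening on 4000"
```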

Find the web management address: run ps -eaf|grep tidb and you will see a process on port 9090 with --web.external-url=http://tidbpd1st.hy.com:9090/; that URL is the web entry point.


6. Access and operations

Log in with the mysql client:

[[email protected]] ~/dbj1st/orders$ mysql -htidb1st.hy.com -uroot -p -P 4000
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.1-TiDB-v1.0.7 MySQL Community Server (Apache License 2.0)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

root-tidb1st.hy.com:4000:(none) 17:45:14> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.01 sec)

root-tidb1st.hy.com:4000:(none) 17:45:26>





Create a user:

create user 'payment_wr'@'%' identified by 'xxxxxx';

grant all on test.* to 'payment_wr'@'%';

