08. Storage: Ceph. All notes below use a Ceph cluster with 6 OSDs and 2 MONs.
  1. Build the Ceph cluster on a devstack environment with one controller and two compute nodes. Each node has two spare volumes that will be used as OSDs.
  2. CEPH-DEPLOY SETUP (on all three nodes):
    # Add the release key
    root@controller:~# wget -q -O- 'http://mirrors.163.com/ceph/keys/release.asc' | apt-key add -
    OK
    # Add the Ceph packages to your repository.
    root@controller:~# echo deb http://mirrors.163.com/ceph/debian-luminous/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
    deb http://mirrors.163.com/ceph/debian-luminous/ xenial main
    # Update your repository and install ceph-deploy
    root@controller:~# apt update
    root@controller:~# apt install ceph-deploy
  3. CEPH NODE SETUP→1.INSTALL NTP:
    # We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift.
    ########################
    # Controller node
    root@controller:~# apt install chrony -y
    root@controller:~# vim /etc/chrony/chrony.conf
    # Line 20: comment out this line
    # pool 2.debian.pool.ntp.org offline iburst
    # and replace it with a suitable NTP server for your region (an Aliyun server is used here):
    server time4.aliyun.com offline iburst
    # Line 67: allow the other nodes in the subnet to sync time from the controller
    allow 10.110.31.0/24
    # Save and quit vim, then restart the chrony service
    root@controller:~# service chrony restart
    #########################
    # Compute nodes (repeat on compute1 and compute2)
    root@compute1:~# apt install chrony -y
    root@compute1:~# vim /etc/chrony/chrony.conf
     
    # Line 20: comment out this line
    # pool 2.debian.pool.ntp.org offline iburst
     
    # Add the controller's IP as the time source
    server 10.110.31.94 iburst
    # Save and quit vim
     
    root@compute1:~# service chrony restart
    ###########################
    # Verify
    root@controller:~# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* 203.107.6.88                  2   6    17    10   +192us[+1885us] +/-   13ms
    root@compute1:~# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* controller                    3   6    17    19    -80us[ -284us] +/-   14ms
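Once both sides report a `^*`-selected source, the residual offset can be checked mechanically. A minimal sketch, not from the original notes: the hypothetical `clock_ok` helper parses the "System time" line of `chronyc tracking` output and fails when the offset exceeds an arbitrary 0.1 s threshold; a hard-coded sample line stands in for live output here.

```shell
# clock_ok OUTPUT: succeed when the "System time" offset in `chronyc tracking`
# output is below 0.1 s. Hypothetical helper; threshold is an example, and a
# missing "System time" line is not detected (sketch only).
clock_ok() {
  printf '%s\n' "$1" | awk '/^System time/ { exit (($4 < 0.1) ? 0 : 1) }'
}

# Sample line in the format chronyc prints; live use: clock_ok "$(chronyc tracking)"
sample='System time     : 0.000192 seconds fast of NTP time'
clock_ok "$sample" && echo "clock OK"
```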
  4. CEPH NODE SETUP→2. Set up passwordless SSH between all nodes: the ceph-deploy utility must log in to each Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.
    # Only pushing the controller's public key to the compute nodes is shown; run the same commands on the compute nodes as well
    root@controller:~# ssh-keygen -t rsa
    root@controller:~# ssh-copy-id root@compute1
    root@controller:~# ssh-copy-id root@compute2
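Since every node must be able to reach every other node, the key exchange above is repeated per host. A dry-run sketch with a hypothetical `peers` helper (hostnames are the ones used in this document): it only prints the targets; the real exchange would run `ssh-copy-id` against each printed host.

```shell
# peers SELF: print every cluster host except SELF. Each printed host is a
# target for `ssh-copy-id root@<host>` when run on SELF (dry run only).
peers() {
  self="$1"
  for host in controller compute1 compute2; do
    [ "$host" = "$self" ] && continue
    echo "$host"
  done
}

peers controller   # prints compute1 and compute2
```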
  5. STARTING OVER
    # If at any point you run into trouble and you want to start over, execute the following to purge the Ceph packages, and erase all its data and configuration:
    ceph-deploy purge {ceph-node} [{ceph-node}]
    ceph-deploy purgedata {ceph-node} [{ceph-node}]
    ceph-deploy forgetkeys
    rm ceph.*
  6. CREATE A CLUSTER
    # Create a directory for maintaining the configuration files and keys that ceph-deploy generates for your cluster.
    # Ensure you are in this directory when executing ceph-deploy.
    root@controller:~# cd my-cluster/
    root@controller:~/my-cluster# 
    # Create the cluster
    root@controller:~/my-cluster# ceph-deploy new controller compute1
    root@controller:~/my-cluster# ll 
    total 24
    drwxr-xr-x 2 root root 4096 Sep  4 20:21 ./
    drwx------ 5 root root 4096 Sep  4 20:21 ../
    -rw-r--r-- 1 root root  223 Sep  4 20:21 ceph.conf
    -rw-r--r-- 1 root root 4098 Sep  4 20:21 ceph-deploy-ceph.log
    -rw------- 1 root root   73 Sep  4 20:21 ceph.mon.keyring
    # Install the Ceph packages. Normally `ceph-deploy install controller compute1 compute2`
    # would install Ceph on each node; here the packages were installed manually on every node instead:
    root@controller:~/my-cluster# apt-get install ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd libcephfs2 python-cephfs
    # Deploy the initial monitor(s) and gather the keys:
    root@controller:~/my-cluster# ceph-deploy mon create-initial
    root@controller:~/my-cluster# ll
    total 96
    drwxr-xr-x 2 root root  4096 Sep  5 08:54 ./
    drwx------ 5 root root  4096 Sep  4 20:26 ../
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-mds.keyring
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-mgr.keyring
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-osd.keyring
    -rw------- 1 root root    71 Sep  5 08:54 ceph.bootstrap-rgw.keyring
    -rw------- 1 root root    63 Sep  5 08:54 ceph.client.admin.keyring
    -rw-r--r-- 1 root root   223 Sep  4 20:24 ceph.conf
    -rw-r--r-- 1 root root 50696 Sep  5 08:54 ceph-deploy-ceph.log
    -rw------- 1 root root    73 Sep  4 20:21 ceph.mon.keyring
    -rw-r--r-- 1 root root  1645 Oct 16  2015 release.asc
    # Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
    root@controller:~/my-cluster# ceph-deploy admin controller compute1 compute2
    # A node before the command above was run
    root@compute1:~# ll /etc/ceph/
    total 12
    drwxr-xr-x   2 root root 4096 Sep  4 20:44 ./
    drwxr-xr-x 105 root root 4096 Sep  4 20:44 ../
    -rw-r--r--   1 root root   92 Apr 11 21:18 rbdmap
    # The same node after the command
    root@compute1:~# ll /etc/ceph/
    total 20
    drwxr-xr-x   2 root root 4096 Sep  5 08:58 ./
    drwxr-xr-x 105 root root 4096 Sep  4 20:44 ../
    -rw-------   1 root root   63 Sep  5 08:58 ceph.client.admin.keyring
    -rw-r--r--   1 root root  223 Sep  5 08:58 ceph.conf
    -rw-r--r--   1 root root   92 Apr 11 21:18 rbdmap
    -rw-------   1 root root    0 Sep  5 08:58 tmpiamuoy
    
    # Deploy a manager daemon (required only for luminous+ builds):
    root@controller:~/my-cluster# ceph-deploy mgr create controller compute1
    # Add six OSDs
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdc controller
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdd controller
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdc compute1
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdd compute1
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdc compute2
    root@controller:~/my-cluster# ceph-deploy osd create --data /dev/vdd compute2
    # Check cluster status
    root@controller:~/my-cluster# ceph health
    HEALTH_OK
    root@controller:~/my-cluster# ceph -s
      cluster:
        id:     6afe180b-87c5-4f51-bdc6-53a8ecf85a9a
        health: HEALTH_OK
     
      services:
        mon: 2 daemons, quorum compute1,controller
        mgr: compute1(active)
        osd: 6 osds: 6 up, 6 in
     
      data:
        pools:   0 pools, 0 pgs
        objects: 0  objects, 0 B
        usage:   6.0 GiB used, 54 GiB / 60 GiB avail
        pgs:
    1. For HA, OSDs must be spread across different nodes: with a replica count of 3, three OSD hosts are needed to hold the replicas. If the OSDs are spread over only two hosts, the cluster can still end up in the "active+undersized+degraded" state.
    2. If OSD creation reports "config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite", the config files differ between nodes; resolve it with ceph-deploy --overwrite-conf mon create controller compute1.
    3. If you are creating an OSD on an LVM volume, the argument to --data must be volume_group/lv_name, rather than the path to the volume's block device (i.e. an LVM volume can serve as an OSD).
  7. EXPANDING YOUR CLUSTER
  8. Integrating Ceph with Cinder
    1. CREATE A POOL
      root@controller:~# ceph osd pool create volumes 128
      pool 'volumes' created
      root@controller:~# ceph osd pool create images 128
      pool 'images' created
      root@controller:~# ceph osd pool create vms 128
      pool 'vms' created
      # Use the rbd tool to initialize the pools
      root@controller:~# rbd pool init volumes
      root@controller:~# rbd pool init images
      root@controller:~# rbd pool init vms
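The `128` passed to each `ceph osd pool create` is `pg_num`. A common rule of thumb (standard Ceph sizing guidance, not stated in the original notes) budgets roughly 100 × OSDs / replica-count placement groups for the whole cluster, rounded up to a power of two, shared across all pools. A sketch of that arithmetic for this 6-OSD cluster:

```shell
# Rule-of-thumb PG budget for the whole cluster: (OSDs * 100) / replica count,
# rounded up to the next power of two. Values match this document's cluster.
osds=6
replicas=3
target=$(( osds * 100 / replicas ))   # 200
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested total pg_num across pools: $pg"   # 256
```

Three pools of 128 PGs each (384 total) somewhat overshoot this budget; that is harmless on a small test cluster, but on larger clusters pg_num deserves this calculation per pool.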
    2. SETUP CEPH CLIENT AUTHENTICATION:
      # If you have cephx authentication enabled, create a new user for Cinder:
      root@controller:~# ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
      [client.cinder]
        key = AQA9k3Bdir5EMRAAqe+jAWqznUEHGSHU7qwAYA==
      # Add the keyring for client.cinder
      root@controller:~# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
      [client.cinder]
        key = AQA9k3Bdir5EMRAAqe+jAWqznUEHGSHU7qwAYA==
      # So that instances can attach Ceph volumes, configure libvirt as follows; do this on all three nodes, since instances can be scheduled on any of them
      root@controller:~# vim /etc/ceph/secret.xml
      <secret ephemeral='no' private='no'>
        <uuid>19e30f2b-0565-4194-82ea-f982982438a9</uuid>
        <usage type='ceph'>
          <name>client.cinder secret</name>
        </usage>
      </secret>
      
      root@controller:~# ll /etc/ceph/
      total 20
      drwxr-xr-x   2 root root 4096 Sep  5 12:43 ./
      drwxr-xr-x 113 root root 4096 Sep  5 10:43 ../
      -rw-------   1 root root   63 Sep  5 08:58 ceph.client.admin.keyring
      -rw-r--r--   1 root root  252 Sep  5 11:30 ceph.conf
      -rw-r--r--   1 root root   92 Jun  4 00:15 rbdmap
      -rw-------   1 root root    0 Sep  5 08:54 tmpRiOsBL
      
      root@controller:~# virsh secret-define --file /etc/ceph/secret.xml
      Secret 19e30f2b-0565-4194-82ea-f982982438a9 created
      
      root@controller:~# ceph auth get-key client.cinder | tee /etc/ceph/client.cinder.key
      root@controller:~# virsh secret-set-value --secret 19e30f2b-0565-4194-82ea-f982982438a9 --base64 $(cat /etc/ceph/client.cinder.key)
      Secret value set
      
      root@controller:~# virsh secret-list
       UUID                                  Usage
      --------------------------------------------------------------------------------
       19e30f2b-0565-4194-82ea-f982982438a9  ceph client.cinder secret
      
      root@controller:~# rm /etc/ceph/client.cinder.key && rm /etc/ceph/secret.xml
      root@controller:~# ll /etc/ceph/
      total 24
      drwxr-xr-x   2 root root 4096 Sep  5 17:19 ./
      drwxr-xr-x 113 root root 4096 Sep  5 10:43 ../
      -rw-------   1 root root   63 Sep  5 08:58 ceph.client.admin.keyring
      -rw-r--r--   1 root root   64 Sep  5 17:14 ceph.client.cinder.keyring
      -rw-r--r--   1 root root  252 Sep  5 11:30 ceph.conf
      -rw-r--r--   1 root root   92 Jun  4 00:15 rbdmap
      -rw-------   1 root root    0 Sep  5 08:54 tmpRiOsBL
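The `ceph auth get-key` step above returns only the bare key, i.e. the `key =` field of the keyring that `get-or-create` printed. A small sketch of that relationship, with the keyring text from this document inlined as sample data rather than read from /etc/ceph:

```shell
# Extract the bare secret from keyring-formatted text; this is the same value
# `ceph auth get-key client.cinder` returns. Sample data is the keyring
# printed earlier in this document.
keyring='[client.cinder]
	key = AQA9k3Bdir5EMRAAqe+jAWqznUEHGSHU7qwAYA=='
key=$(printf '%s\n' "$keyring" | awk '$1 == "key" {print $3}')
echo "$key"
```

This base64 string is exactly what `virsh secret-set-value --base64` expects.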
    3. CONFIGURE CINDER TO USE CEPH
      1. For the Cinder-side configuration, see the note 08.存储Cinder → 5.场学 → 12.Ceph Volume Provider → 1.配置 → 扩展
  9. DASHBOARD PLUGIN
    1. Make sure all three nodes run the same Ceph version; the dashboard installation procedure differs between versions.
      root@controller:~# ceph --version
      ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
      root@compute1:~# ceph --version
      ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
      root@compute2:~# ceph --version
      ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
    2. Installation
      # Within a running Ceph cluster, the dashboard manager module is enabled with
      root@controller:~# ceph mgr module enable dashboard
      root@compute1:~# ceph mgr module enable dashboard
      
      # Set the dashboard's IP and port
      root@controller:~# ceph config-key put mgr/dashboard/server_addr 10.110.31.94
      set mgr/dashboard/server_addr
      root@controller:~# ceph config-key put mgr/dashboard/server_port 7000
      set mgr/dashboard/server_port
      # Restart the mgr service on the controller node (no need to restart it on the compute nodes)
      root@controller:~# systemctl restart ceph-mgr@controller
      
      # Check the listening port (in the netstat flags, u means UDP)
      root@controller:~# netstat -tunlp|grep 7000
      tcp        0      0 10.110.31.94:7000       0.0.0.0:*               LISTEN      239170/ceph-mgr 
      # Get the dashboard URL
      root@controller:~# ceph mgr services
      {
          "dashboard": "http://10.110.31.94:7000/"
      }
      Notes: 1. To restart the dashboard, run systemctl restart ceph-mgr@controller. 2. If that restart fails, first run systemctl reset-failed ceph-mgr@controller, then systemctl start ceph-mgr@controller. 3. After the restart the dashboard should come back up.
    3. Access the web UI

(screenshot: Ceph dashboard)
