1. Ceph Object Storage (RGW): Introduction and Installation

1.1. Introduction to object storage:

  • With object storage, data is stored as objects; besides the data itself, each object also carries its own metadata
  • Objects are retrieved by object ID; they cannot be accessed directly through ordinary file-system operations, only through an API or a third-party client (which is itself a wrapper around the API)
  • Objects are not organized into a directory tree but stored in a flat namespace; Amazon S3 calls this flat namespace a bucket, while Swift calls it a container
  • Neither buckets nor containers can be nested
  • A bucket must be authorized before it can be accessed; one account can be granted access to multiple buckets, each with different permissions
  • Advantages of object storage: easy to scale, fast retrieval
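The bucket/object model above can be sketched with a small in-memory example (a hypothetical Python model for illustration, not Ceph code): each object carries its own metadata, is retrieved only by its key, and lives in a flat namespace, so a key like "logs/2021/08/28.txt" is just a name with slashes in it, not a directory path.

```python
# Minimal in-memory model of a flat object-store namespace (illustration only).
class Bucket:
    def __init__(self, name):
        self.name = name
        self.objects = {}  # flat namespace: key -> (data, metadata)

    def put(self, key, data, **metadata):
        # The key may contain slashes, but there is no directory tree:
        # it is a single opaque object ID in a flat namespace.
        self.objects[key] = (data, dict(metadata))

    def get(self, key):
        # Objects are addressed only by their key, never by path walking.
        return self.objects[key]

b = Bucket("demo")
b.put("logs/2021/08/28.txt", b"hello", content_type="text/plain", owner="user1")
data, meta = b.get("logs/2021/08/28.txt")
print(data, meta["owner"])
```

Note that buckets hold only objects, never other buckets, which mirrors the "no nesting" rule above.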

1.2. Introduction to the RADOS Gateway:

  • The RADOS Gateway, also called the Ceph Object Gateway, RADOSGW, or RGW, is a service that lets clients access a Ceph cluster through standard object-storage APIs; it supports the S3 and Swift APIs
  • RGW runs on top of librados; in effect it is an embedded web server called Civetweb that answers API requests
  • Clients talk to RGW using the standard APIs, while RGW talks to the Ceph cluster using librados
  • RGW clients authenticate to the gateway as RGW users via the S3 or Swift API; the gateway then authenticates to the Ceph cluster on the user's behalf using cephx

For more details, see the official documentation: https://docs.ceph.com/en/latest/radosgw/
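The S3-style client authentication described above can be illustrated with a short sketch of the classic AWS Signature Version 2 scheme, one of the signature schemes RGW accepts: the client builds a canonical "string to sign" and signs it with its secret key using HMAC-SHA1 (the access key, secret key, and bucket name below are made-up placeholders; amz headers are omitted for brevity).

```python
import base64
import hashlib
import hmac

def sign_s3_v2(secret_key, verb, content_md5, content_type, date, resource):
    # AWS Signature Version 2: base64(HMAC-SHA1(secret, string-to-sign)),
    # where the string-to-sign is a newline-joined canonical request.
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials for illustration only.
access_key, secret_key = "DEMOACCESSKEY", "demosecret"
signature = sign_s3_v2(secret_key, "GET", "", "", "Sat, 28 Aug 2021 08:00:00 GMT", "/mybucket/")
auth_header = f"AWS {access_key}:{signature}"
print(auth_header)
```

The resulting `Authorization: AWS <access_key>:<signature>` header is what an S3 client sends to the gateway; RGW recomputes the signature with the stored secret key to verify the request, then uses cephx internally.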

This lab is Day 3 (20210828) of the series: installing and using Ceph Object Storage (RGW), the Ceph Dashboard, and monitoring the Ceph cluster with Prometheus.

1.3. Install the Ceph Object Gateway components and verify them (this time RGW will be installed on node02 and node03); the detailed procedure is as follows:

Install RGW:

On node02:
root@node02:~# apt install radosgw
Reading package lists... Done
Building dependency tree
Reading state information... Done
radosgw is already the newest version (16.2.5-1focal).
0 upgraded, 0 newly installed, 0 to remove and 73 not upgraded.
root@node02:~#

On node03:
root@node03:~# apt install radosgw
Reading package lists... Done
Building dependency tree
Reading state information... Done
radosgw is already the newest version (16.2.5-1focal).
0 upgraded, 0 newly installed, 0 to remove and 73 not upgraded.
root@node03:~#

On the ceph-deploy node (node01):
root@node01:~# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 42s)
    mgr: node01(active, since 35s), standbys: node02, node03
    osd: 6 osds: 6 up (since 37s), 6 in (since 2w)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   33 MiB used, 240 GiB / 240 GiB avail
    pgs:     1 active+clean

root@node01:~/ceph-deploy# ceph-deploy rgw create node02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create node02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('node02', 'rgw.node02')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa24fea4d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7fa24ff101d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts node02:rgw.node02
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Ubuntu 20.04 focal
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to node02
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][DEBUG ] create path recursively if it doesn't exist
[node02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.node02 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.node02/keyring
[node02][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.node02
[node02][INFO  ] Running command: systemctl start ceph-radosgw@rgw.node02
[node02][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node02 and default port 7480
root@node01:~/ceph-deploy# ceph-deploy rgw create node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('node03', 'rgw.node03')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6dee079d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7f6dee0e51d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts node03:rgw.node03
[node03][DEBUG ] connected to host: node03
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Ubuntu 20.04 focal
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to node03
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][WARNIN] rgw keyring does not exist yet, creating one
[node03][DEBUG ] create a keyring file
[node03][DEBUG ] create path recursively if it doesn't exist
[node03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.node03 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.node03/keyring
[node03][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.node03
[node03][WARNIN] Created symlink /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.node03.service → /lib/systemd/system/ceph-radosgw@.service.
[node03][INFO  ] Running command: systemctl start ceph-radosgw@rgw.node03
[node03][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node03 and default port 7480
root@node01:~/ceph-deploy# curl http://node02:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>root@node01:~/ceph-deploy# curl http://node03:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>root@node01:~/ceph-deploy#
root@node01:~/ceph-deploy# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 12m)
    mgr: node01(active, since 12m), standbys: node02, node03
    osd: 6 osds: 6 up (since 12m), 6 in (since 2w)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    pools:   5 pools, 105 pgs
    objects: 189 objects, 4.9 KiB
    usage:   90 MiB used, 240 GiB / 240 GiB avail
    pgs:     105 active+clean

root@node01:~/ceph-deploy# ceph osd lspools
1 device_health_metrics
2 .rgw.root
3 default.rgw.log
4 default.rgw.control
5 default.rgw.meta
root@node01:~/ceph-deploy#
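The curl checks above return an anonymous S3 ListAllMyBucketsResult document, which confirms the gateways are answering on port 7480. As a sanity check, that response can be parsed with any XML library; a sketch using Python's standard library, with the body copied from the output above:

```python
import xml.etree.ElementTree as ET

# Response body returned by `curl http://node02:7480` (bytes, because
# ElementTree rejects str input that carries an encoding declaration).
body = (
    b'<?xml version="1.0" encoding="UTF-8"?>'
    b'<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    b'<Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner>'
    b'<Buckets></Buckets></ListAllMyBucketsResult>'
)

ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(body)
owner = root.find("s3:Owner/s3:ID", ns).text
buckets = root.findall("s3:Buckets/s3:Bucket", ns)
print(owner, len(buckets))
```

An unauthenticated request is treated as the anonymous user and sees no buckets, so an empty bucket list here is the expected healthy result.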
