1、HEALTH_WARN: pools have too many placement groups

[root@k8s-master ~]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_WARN
            2 pools have too many placement groups
            mons are allowing insecure global_id reclaim

[root@k8s-master ~]# ceph health detail
HEALTH_WARN 2 pools have too many placement groups; mons are allowing insecure global_id reclaim
POOL_TOO_MANY_PGS 2 pools have too many placement groups
Pool cephfs_metadata has 128 placement groups, should have 16
Pool cephfs_data has 128 placement groups, should have 32
[root@k8s-master ~]# ceph osd pool autoscale-status
POOL              SIZE TARGET SIZE RATE RAW CAPACITY  RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
cephfs_metadata  1350k              2.0       30708M 0.0001                               4.0    128         16 warn
cephfs_data     194.8k              2.0       30708M 0.0000                               1.0    128         32 warn

Reference: https://forum.proxmox.com/threads/ceph-pools-have-too-many-placement-groups.81047/
Cause: the pg_autoscaler mgr module is enabled, and both CephFS pools were created with pg_num 128, well above the autoscaler's recommendations (16 and 32), so it raises the warning.
Solution: disable the pg_autoscaler module so the warning is no longer generated:
[root@k8s-master ~]# ceph mgr module disable pg_autoscaler
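Disabling the module cluster-wide only silences the warning. A less drastic sketch, assuming you would rather act per pool (pool names taken from the `autoscale-status` output above):

```shell
# Option A: keep pg_num as-is, but stop the autoscaler from
# warning about just these two pools instead of disabling the
# whole module.
ceph osd pool set cephfs_metadata pg_autoscale_mode off
ceph osd pool set cephfs_data pg_autoscale_mode off

# Option B: accept the recommendation and shrink pg_num to the
# suggested values (PG merging is supported since Nautilus and
# causes some data movement while PGs merge).
ceph osd pool set cephfs_metadata pg_num 16
ceph osd pool set cephfs_data pg_num 32
```

Either option clears POOL_TOO_MANY_PGS without losing the autoscaler's recommendations for any pools created later.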

2、HEALTH_WARN: mons are allowing insecure global_id reclaim

[root@k8s-master ~]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

Reference: http://www.manongjc.com/detail/24-dvcrprtvjeglqcc.html
Solution: disable the insecure mode. (Note: this warning appeared with the fixes for CVE-2021-20288; once the option is set to false, old clients that still reclaim their global_id insecurely will be refused, so make sure all clients are updated first.)
[root@k8s-master ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@k8s-master ~]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_OK
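To double-check that the setting took effect, the value can be read back from the mon configuration (a sketch; `ceph config get` is available since Nautilus):

```shell
# Read back the option we just set on the mon section
ceph config get mon auth_allow_insecure_global_id_reclaim

# Confirm the warning is gone from the detailed health report
ceph health detail
```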
