After restarting Ceph, checking the cluster status on the admin (deploy) node reported an error.

Error after the restart:

health HEALTH_ERR
56 pgs are stuck inactive for more than 300 seconds
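A first triage step (a hedged sketch; the commands require a running cluster and an admin keyring, so the snippet is guarded to be a no-op on hosts without the ceph CLI) is to ask Ceph which PGs are stuck and which OSDs went down:

```shell
# Guarded triage: only runs the queries when the ceph CLI is available.
if command -v ceph >/dev/null 2>&1; then
    ceph health detail            # names the stuck PGs and the reason
    ceph pg dump_stuck inactive   # lists the inactive PGs and their acting OSDs
    ceph osd tree                 # shows which OSDs are down after the reboot
else
    echo "ceph CLI not installed"
fi
```

If `ceph osd tree` shows the OSDs on the rebooted nodes as down, the problem is on those hosts rather than in the PGs themselves.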


The cause: the UDEV rules had not been added on the node2 and node3 nodes, so after the reboot the owner and group of vdb1 and vdb2 were no longer ceph (block devices revert to root ownership by default) and the OSDs failed to start. Add the following rule on all three Ceph nodes so that ownership stays ceph:ceph across reboots, then reboot the three hosts.

Run on each of the three nodes:

[root@node ~]# vim /etc/udev/rules.d/90-cephdisk.rules

ACTION=="add", KERNEL=="vdb[12]", OWNER="ceph", GROUP="ceph"
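The `KERNEL=="vdb[12]"` match uses shell-style globbing, so it covers exactly the two OSD partitions. A quick check with a hypothetical helper function:

```shell
# Demonstrate which device names the udev glob vdb[12] matches.
matches() { case "$1" in vdb[12]) echo match ;; *) echo no-match ;; esac; }
matches vdb1   # match
matches vdb2   # match
matches vdb3   # no-match
```

To apply the rule without a full reboot, `udevadm control --reload` followed by `udevadm trigger` replays the device events, after which `ls -l /dev/vdb1 /dev/vdb2` should show ceph:ceph ownership.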


