Fixing Ceph "100.000% pgs not active"
After installing Ceph on a single-node VM, I had created only two OSDs, which left the cluster at 100.000% pgs not active. Writing data into a pool with the python rados bindings just hung and nothing could be stored.
[root@controller ceph-test]# ceph -s
  cluster:
    id:     c5544727-e047-47e3-85fc-6cc5dab8b314
    health: HEALTH_WARN
            Reduced data availability: 192 pgs inactive
            Degraded data redundancy: 192 pgs undersized
            OSD count 2 < osd_pool_default_size 3
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum controller (age 13h)
    mgr: controller(active, since 13h)
    osd: 2 osds: 2 up (since 12h), 2 in (since 12h)

  data:
    pools:   2 pools, 192 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 38 GiB / 40 GiB avail
    pgs:     100.000% pgs not active
             192 undersized+peered
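The PGs are stuck in undersized+peered because the default CRUSH rule places each replica on a different host (step chooseleaf firstn 0 type host), and this single-node cluster has only one host: each PG can get at most one replica, which is below the default min_size of 2, so the PGs never go active and I/O blocks. That is why the rados write hung. Before editing anything, you can check which rule the pools use; data2 is one of this cluster's two pools, as the rados listing at the end shows:

ceph osd pool get data2 crush_rule
ceph osd crush rule dump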
The fix is to relax the failure domain from host to OSD. First, export the CRUSH map:
ceph osd getcrushmap -o /etc/ceph/crushmap
This prints 5, the current CRUSH map version; it grows as the cluster changes (for example, as OSDs are added).
Decompile the CRUSH map into a human-readable text file:
crushtool -d /etc/ceph/crushmap -o /etc/ceph/crushmap.txt
Edit the crushmap, changing the replica placement level from host to OSD:
sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' /etc/ceph/crushmap.txt
grep 'step chooseleaf' /etc/ceph/crushmap.txt
Output: step chooseleaf firstn 0 type osd
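For reference, after the sed edit the rule section of /etc/ceph/crushmap.txt should look roughly like this (the rule name and id come from the default map and may differ on your cluster; older releases also print min_size/max_size lines):

rule replicated_rule {
        id 0
        type replicated
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}

Recompile the edited text back into a binary map: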
crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap-new
Import the modified CRUSH map into the cluster:
ceph osd setcrushmap -i /etc/ceph/crushmap-new
Output: 6 (the new CRUSH map version).
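As an aside, the same result can be achieved without hand-editing the map by creating a new replicated rule with an OSD-level failure domain and pointing the pools at it. A hedged sketch, using replicated_osd as an arbitrary rule name and data2 as the pool from this cluster:

ceph osd crush rule create-replicated replicated_osd default osd
ceph osd pool set data2 crush_rule replicated_osd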
Now ceph -s shows that the inactive-PG problem is gone:
[root@controller ceph-cluster]# ceph -s
  cluster:
    id:     c5544727-e047-47e3-85fc-6cc5dab8b314
    health: HEALTH_WARN
            Degraded data redundancy: 192 pgs undersized
            OSD count 2 < osd_pool_default_size 3
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum controller (age 13h)
    mgr: controller(active, since 13h)
    osd: 2 osds: 2 up (since 13h), 2 in (since 13h)

  data:
    pools:   2 pools, 192 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 38 GiB / 40 GiB avail
    pgs:     192 active+undersized
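The PGs are active now (two replicas per PG satisfies min_size 2, so I/O works), but they remain undersized and HEALTH_WARN persists because the pools still want three replicas while only two OSDs exist. On a throwaway test cluster you can optionally silence this by dropping the replica count to 2; the pool name is from this cluster, adjust to yours:

ceph config set global osd_pool_default_size 2
ceph osd pool set data2 size 2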
Running my test code again now succeeds and the data is stored; the object name is /root/ceph-test/123.txt. (A sketch of the script follows the output below.)
[root@controller ceph-test]# python pooltest.py
Object contents = /root/ceph-test/123.txt
[root@controller ceph-test]# rados -p data2 ls
/root/ceph-test/123.txt
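For completeness, here is a minimal sketch of what pooltest.py might look like, reconstructed from the output above using the standard rados Python bindings; the pool name data2 and the object name come from the listing, everything else is an assumption:

import rados

# Object name taken from the rados ls output above.
PATH = '/root/ceph-test/123.txt'

# Connect to the cluster using the local ceph.conf and default keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('data2')  # the pool written to above
try:
    # Store the path string as an object whose name is the path itself,
    # then read it back to verify the write completed.
    ioctx.write_full(PATH, PATH.encode())
    print("Object contents = " + ioctx.read(PATH).decode())
finally:
    ioctx.close()
    cluster.shutdown()

Before the CRUSH fix, the write_full() call would block indefinitely because no PG could go active; afterwards it returns immediately and the read-back prints the expected output.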