Ceph: Mixing SATA and SSD Within the Same Box

The use case is simple: I want to use SSD disks and SATA disks within the same machine, and eventually create pools that point either to the SSD disks or to the SATA disks. To achieve this, we need to modify the CRUSH map. In my example, each host has 2 SATA disks and 2 SSD disks, and there are 3 hosts in total.
To illustrate, refer to the diagram below.

I. CRUSH map
CRUSH is extremely flexible and topology-aware, which is exactly what we need here. We are going to create two different roots, i.e. two entry points through which the CRUSH algorithm will store our objects: one root for the SSD disks and one for the SATA disks. Looking at the CRUSH map below, you will see that we duplicated our topology, as if we were making CRUSH believe we have two distinct platforms, which is not strictly true; we are simply representing, in a logical way, what we want to achieve.
Here is the CRUSH map:
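As a starting point, you would normally grab the cluster's current CRUSH map and decompile it into an editable text file; a minimal sketch (`lamap.bin` is just an arbitrary name for the compiled copy, `lamap.txt` matches the file compiled further down):

```bash
# Fetch the compiled CRUSH map from the cluster
$ ceph osd getcrushmap -o lamap.bin
# Decompile it into a text file we can edit
$ crushtool -d lamap.bin -o lamap.txt
```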
```
##
# OSD SATA DECLARATION
##
host ceph-osd2-sata {
  id -2           # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item osd.0 weight 1.000
  item osd.3 weight 1.000
}
host ceph-osd1-sata {
  id -3           # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item osd.2 weight 1.000
  item osd.5 weight 1.000
}
host ceph-osd0-sata {
  id -4           # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item osd.1 weight 1.000
  item osd.4 weight 1.000
}

##
# OSD SSD DECLARATION
##
host ceph-osd2-ssd {
  id -22          # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item osd.6 weight 1.000
  item osd.9 weight 1.000
}
host ceph-osd1-ssd {
  id -23          # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item osd.8 weight 1.000
  item osd.11 weight 1.000
}
host ceph-osd0-ssd {
  id -24          # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item osd.7 weight 1.000
  item osd.10 weight 1.000
}
```
Now we create the two roots that contain our OSDs:
```
##
# SATA ROOT DECLARATION
##
root sata {
  id -1           # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item ceph-osd2-sata weight 2.000
  item ceph-osd1-sata weight 2.000
  item ceph-osd0-sata weight 2.000
}

##
# SSD ROOT DECLARATION
##
root ssd {
  id -21          # do not change unnecessarily
  # weight 0.000
  alg straw
  hash 0          # rjenkins1
  item ceph-osd2-ssd weight 2.000
  item ceph-osd1-ssd weight 2.000
  item ceph-osd0-ssd weight 2.000
}
```
Then I create my two new rules:
```
##
# SSD RULE DECLARATION
##
# rules
rule ssd {
  ruleset 0
  type replicated
  min_size 1
  max_size 10
  step take ssd
  step chooseleaf firstn 0 type host
  step emit
}

##
# SATA RULE DECLARATION
##
rule sata {
  ruleset 1
  type replicated
  min_size 1
  max_size 10
  step take sata
  step chooseleaf firstn 0 type host
  step emit
}
```
Compile and inject the new map:
```bash
$ crushtool -c lamap.txt -o lamap.coloc
$ sudo ceph osd setcrushmap -i lamap.coloc
```
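Before (or after) injecting, you can also sanity-check the compiled map with crushtool's test mode and confirm that each rule actually maps replicas onto hosts; a quick sketch using the ruleset numbers declared above:

```bash
# Simulate placements for the ssd rule (ruleset 0) with 2 replicas
$ crushtool -i lamap.coloc --test --rule 0 --num-rep 2 --show-statistics
# Same check for the sata rule (ruleset 1)
$ crushtool -i lamap.coloc --test --rule 1 --num-rep 2 --show-statistics
```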
Then check the result:
```bash
$ sudo ceph osd tree
# id    weight  type name               up/down reweight
-21     12      root ssd
-22     2           host ceph-osd2-ssd
6       1               osd.6           up      1
9       1               osd.9           up      1
-23     2           host ceph-osd1-ssd
8       1               osd.8           up      1
11      1               osd.11          up      1
-24     2           host ceph-osd0-ssd
7       1               osd.7           up      1
10      1               osd.10          up      1
-1      12      root sata
-2      2           host ceph-osd2-sata
0       1               osd.0           up      1
3       1               osd.3           up      1
-3      2           host ceph-osd1-sata
2       1               osd.2           up      1
5       1               osd.5           up      1
-4      2           host ceph-osd0-sata
1       1               osd.1           up      1
4       1               osd.4           up      1
```
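You can also list the rules the cluster now knows about and check their ruleset ids, since those ids are what we assign to the pools in the next section:

```bash
# Names of all CRUSH rules
$ ceph osd crush rule list
# Full rule definitions, including their ruleset ids
$ ceph osd crush rule dump
```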
II. CRUSH rules
Pool configuration
Create the pools:
```bash
root@ceph-mon0:~# ceph osd pool create ssd 128 128
pool 'ssd' created
root@ceph-mon0:~# ceph osd pool create sata 128 128
pool 'sata' created
```
Assign the rules to the pools:
```bash
root@ceph-mon0:~# ceph osd pool set ssd crush_ruleset 0
set pool 8 crush_ruleset to 0
root@ceph-mon0:~# ceph osd pool set sata crush_ruleset 1
set pool 9 crush_ruleset to 1
```
Result from `ceph osd dump`:
```bash
pool 8 'ssd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 116 flags hashpspool stripe_width 0
pool 9 'sata' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 117 flags hashpspool stripe_width 0
```
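To double-check the placement, one option is to write a dummy object into each pool and ask the cluster where it maps; a small sketch (the object name and input file are arbitrary):

```bash
# Write a test object into each pool
$ rados -p ssd put testobj /etc/hosts
$ rados -p sata put testobj /etc/hosts
# Show which PG and OSDs each object maps to; the OSD ids should
# belong to the ssd and sata branches of the tree respectively
$ ceph osd map ssd testobj
$ ceph osd map sata testobj
```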
III. OSD configuration
Yes, you can disable the automatic crushmap update on daemon start. Since the host buckets in our custom map (ceph-osd0-sata, ceph-osd0-ssd, ...) do not match the real host names, this prevents a restarting OSD from moving itself back under a bucket named after its actual host and undoing our layout:
```
[osd]
osd crush update on start = false
```
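With automatic updates disabled, a freshly added OSD has to be placed in the hierarchy by hand; a hedged sketch, assuming a hypothetical new osd.12 that should live under ceph-osd0-ssd:

```bash
# osd.12 is hypothetical: add it under the ssd root with a weight of 1
$ ceph osd crush add osd.12 1.0 root=ssd host=ceph-osd0-ssd
```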
Voilà!