Sharing disks with Ceph

loic

A group of users entrust their disks to the system administrator of a Ceph cluster. In return, each of them gets credentials giving access to a dedicated pool of a given size in the Ceph cluster.

The system administrator runs:

# ceph-authtool jean-keyring --create-keyring --name client.jean \
    --gen-key --set-uid 458 \
    --cap mon 'allow profile simple-rados-client' \
    --cap osd 'allow rwx pool=jean-pool'
creating jean-keyring

# ceph auth import --in-file jean-keyring
imported keyring
# ceph auth get client.jean
exported keyring for client.jean
[client.jean]
        key = AQCziVZT6EJoIRAA/HVxueyPmRGjvqQeOR40hQ==
        auid = 458
        caps mon = "allow profile simple-rados-client"
        caps osd = "allow rwx pool=jean-pool"

This creates the user client.jean in the Ceph cluster and restricts it to accessing the monitors (the simple-rados-client profile) and to read, write and execute access on the OSDs, limited to the (not yet existing) pool jean-pool. The pool is then created with:

# cat > create-pool-auid.py <<EOF
import rados
import sys

# connect with the admin credentials from /etc/ceph/ceph.conf and
# create the pool named in argv[1], owned by the auid given in argv[2]
cluster = rados.Rados(conffile = '/etc/ceph/ceph.conf')
cluster.connect()
cluster.create_pool(sys.argv[1], int(sys.argv[2]))
EOF
# python create-pool-auid.py jean-pool 458
# ceph osd pool set-quota jean-pool max_bytes $((1024 * 1024 * 1024))
set-quota max_bytes = 1073741824 for pool jean-pool

The python API is used to make jean the owner of the pool, via the auid 458 that was associated with the key when it was created. The pool quota is set to 1GB; once it is reached, writes fail:

# rados put --pool jean-pool GROUP /etc/group
error putting jean-pool/GROUP: No space left on device
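The same failure shows up as an exception in the python API. The following is a minimal, hypothetical sketch, assuming the admin /etc/ceph/ceph.conf used above and that python-rados maps ENOSPC to the rados.NoSpace exception (as recent versions do):

import rados

cluster = rados.Rados(conffile = '/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('jean-pool')
try:
    # same write as the rados put above: store /etc/group as object GROUP
    with open('/etc/group', 'rb') as f:
        ioctx.write_full('GROUP', f.read())
except rados.NoSpace:
    # the 1GB quota on jean-pool has been reached
    print('quota exceeded on jean-pool, write refused')
finally:
    ioctx.close()
    cluster.shutdown()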

The user is given the newly created keyring and a ceph.conf file containing the list of monitors through which the cluster can be reached:

[global]
auth_service_required = cephx
fsid = 8790ab57-f06f-4b27-8507-55c8d59e1327
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.89.0.2
auth_client_required = cephx
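With these two files the user can already reach the cluster from the python API. Here is a minimal, hypothetical sketch, assuming the keyring and ceph.conf file names handed out above (the hello object is made up for the test): it writes an object to jean-pool and reads it back.

import rados

cluster = rados.Rados(conffile = 'ceph.conf',
                      name = 'client.jean',
                      conf = dict(keyring = 'jean-keyring'))
cluster.connect()
ioctx = cluster.open_ioctx('jean-pool')
ioctx.write_full('hello', b'hello world')     # hypothetical test object
assert ioctx.read('hello') == b'hello world'  # round trip succeeds
ioctx.close()
cluster.shutdown()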

The user can then create an RBD volume with:

# rbd --name client.jean --keyring jean-keyring --pool jean-pool create --size 100 vda
# rbd --name client.jean --keyring jean-keyring --pool jean-pool info vda
rbd image 'vda':
        size 102400 kB in 25 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.10f5.74b0dc51
        format: 1
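The same volume manipulations can be scripted with the rbd python binding. Below is a hedged sketch assuming the file names above; the vdb image name is hypothetical (to avoid clashing with vda), and note that rbd.RBD().create takes a size in bytes, whereas the rbd CLI --size is in megabytes:

import rados
import rbd

cluster = rados.Rados(conffile = 'ceph.conf',
                      name = 'client.jean',
                      conf = dict(keyring = 'jean-keyring'))
cluster.connect()
ioctx = cluster.open_ioctx('jean-pool')
rbd.RBD().create(ioctx, 'vdb', 100 * 1024 * 1024)  # 100MB image
image = rbd.Image(ioctx, 'vdb')
print(image.stat())  # size, num_objs, order, block_name_prefix ...
image.close()
ioctx.close()
cluster.shutdown()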

And map it to a block device:

# rbd --name client.jean --keyring jean-keyring --pool jean-pool map vda
# dmesg
...
[  232.099642] Key type ceph registered
[  232.099695] libceph: loaded (mon/osd proto 15/24)
[  232.100879] rbd: loaded rbd (rados block device)
[  232.102434] libceph: client4399 fsid 8790ab57-f06f-4b27-8507-55c8d59e1327
[  232.102971] libceph: mon0 10.89.0.2:6789 session established
[  232.159177] rbd1: unknown partition table
[  232.159230] rbd: rbd1: added with size 0x6400000
# ls -l /dev/rbd1
brw-rw---- 1 root disk 251, 0 Apr 22 17:49 /dev/rbd1

It can then be formatted and mounted as a file system:

# mkfs /dev/rbd1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4096 blocks, Stripe width=4096 blocks
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

# mount /dev/rbd1 /mnt
# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd1        97M  1.6M   91M   2% /mnt

Note: the proposed change to set the auid automatically when the pool is created was considered too intrusive. Alternatively, ceph osd pool set auid proposes a way to set the auid from the shell command line instead of resorting to python code.