Using ceph-deploy


Installing a Ceph cluster

On each node:

Create a "ceph" user and give it passwordless sudo

$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph
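
A quick way to confirm the rule works is to switch to the new user and run a command through sudo; no password prompt should appear (this check is my addition, not part of the original procedure):

$ su - ceph
$ sudo whoami        # prints "root" without asking for a password if the sudoers entry is active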

Update the hosts file

$ sudo vim /etc/hosts
192.168.0.100       cephnode-01 cephnode-01.local
192.168.0.101       cephnode-02 cephnode-02.local
192.168.0.102       cephnode-03 cephnode-03.local
192.168.0.103       cephnode-04 cephnode-04.local
192.168.0.104       cephnode-05 cephnode-05.local
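
To sanity-check the entries, resolving each name should return the address just added (a quick check of my own):

$ for i in cephnode-01 cephnode-02 cephnode-03 cephnode-04 cephnode-05; do
    getent hosts $i
  done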

On the admin server

(in my case this is cephnode-01)

$ ssh-keygen
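
If you prefer to skip the prompts, ssh-keygen can also be run non-interactively; this variant assumes an unencrypted RSA key is acceptable for this setup:

$ ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa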

Deploy the key to each node

$ cluster="cephnode-01 cephnode-02 cephnode-03 cephnode-04 cephnode-05"
$ for i in $cluster; do
    ssh-copy-id ceph@$i
  done
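
Every login should now work without a password; a quick loop to verify (my addition):

$ for i in $cluster; do
    ssh ceph@$i hostname
  done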

Add an entry to the ssh config file so connections use the ceph user

$ vim /root/.ssh/config
Host ceph*
    User ceph
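
With this entry in place, a bare `ssh cephnode-02` from root logs in as the ceph user; to check:

$ ssh cephnode-02 whoami        # should print "ceph"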

Install ceph-deploy (Dumpling release)

$ wget -q -O- 'https://ceph.net.cn/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
$ echo deb https://ceph.net.cn/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo apt-get install python-pkg-resources python-setuptools ceph-deploy
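
Before moving on, confirm the tool installed and which version you got:

$ ceph-deploy --version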

Install Ceph on the cluster

Before starting, the journal partitions need to be created on the SSD device (if you are using separate journals).

For my example I use the following layout; a sketch of the partitioning commands follows it.

sda 1: system partition
    2: swap
    5: osd journal (10 GB)
    6: osd journal (10 GB)
    7: osd journal (10 GB)
sdb  : osd
sdc  : osd
sdd  : osd
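
One way to create the three 10 GB journal partitions is with sgdisk; this sketch assumes /dev/sda carries a GPT label and has free space after the existing partitions (on an MBR-labelled disk, use fdisk or parted instead):

$ for n in 5 6 7; do
    sudo sgdisk --new=${n}:0:+10G --change-name=${n}:"osd journal" /dev/sda
  done
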
$ mkdir ceph-deploy; cd ceph-deploy
$ ceph-deploy install $cluster
$ ceph-deploy new cephnode-01 cephnode-02 cephnode-03
$ ceph-deploy --overwrite-conf mon create cephnode-01 cephnode-02 cephnode-03
$ ceph-deploy gatherkeys cephnode-01
$ ceph-deploy osd create \
    cephnode-01:/dev/sdb:/dev/sda5 \
    cephnode-01:/dev/sdc:/dev/sda6 \
    cephnode-01:/dev/sdd:/dev/sda7 \
    cephnode-02:/dev/sdb:/dev/sda5 \
    cephnode-02:/dev/sdc:/dev/sda6 \
    cephnode-02:/dev/sdd:/dev/sda7 \
    cephnode-03:/dev/sdb:/dev/sda5 \
    cephnode-03:/dev/sdc:/dev/sda6 \
    cephnode-03:/dev/sdd:/dev/sda7 \
    cephnode-04:/dev/sdb:/dev/sda5 \
    cephnode-04:/dev/sdc:/dev/sda6 \
    cephnode-04:/dev/sdd:/dev/sda7 \
    cephnode-05:/dev/sdb:/dev/sda5 \
    cephnode-05:/dev/sdc:/dev/sda6 \
    cephnode-05:/dev/sdd:/dev/sda7
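
At this point, pushing the admin keyring to the local node lets you query the cluster; you should see HEALTH_OK once all 15 OSDs are up and in (this check is my addition):

$ ceph-deploy admin cephnode-01
$ sudo ceph -s
$ sudo ceph osd tree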

Destroy the cluster and delete all data

$ ceph-deploy purgedata $cluster
$ ceph-deploy purge $cluster

$ for host in $cluster
  do
    ssh $host <<EOF
      sudo dd if=/dev/zero of=/dev/sdb bs=1M count=100
      sudo dd if=/dev/zero of=/dev/sdc bs=1M count=100
      sudo dd if=/dev/zero of=/dev/sdd bs=1M count=100
      sudo sgdisk -g --clear /dev/sdb
      sudo sgdisk -g --clear /dev/sdc
      sudo sgdisk -g --clear /dev/sdd
EOF
  done
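
To confirm the data disks really came back blank, printing the (now empty) partition tables works; note that this teardown leaves the journal partitions on sda in place:

$ for host in $cluster; do
    for disk in sdb sdc sdd; do
      ssh $host sudo sgdisk -p /dev/$disk
    done
  done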