
01 - Ceph Service Deployment

Installing and Deploying a Ceph Cluster

Single-mon Cluster Environment

Cluster Deployment Overview

Option 1: fully manual build and install

(1) Install the required dependencies

(2) Install from the source package

(3) Write the ceph.conf file by hand

(4) Verify the cluster status

Option 2: deploy with a tool such as ceph-deploy or SaltStack; since version 0.80 the Ceph project has recommended ceph-deploy

Option 3: the ceph-deploy based deployment walkthrough below

Hardware: each node has two NICs, one on the internal network and one on the external network

Operating system: Ubuntu 14.04 64-bit, with only openssh-server selected as a preinstalled package

Node preparation:

(1) 3 nodes, with 3 OSDs and 1 mon
(2) Each node runs 2 Ceph daemons (OSD and mon)
(3) Each OSD node has 1 SSD journal disk and 1 SATA 1 TB data disk


Pre-deployment preparation (pick any node as the admin node):

Run the following on every node:

Stop unnecessary services such as iptables (see the sketch below)
Set the hostname and update the hosts file
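A minimal sketch of those two steps on CentOS 7 (the hostname shown is the one used later in this walkthrough; run the equivalent on every node):

systemctl stop firewalld && systemctl disable firewalld   # the "iptables" step on CentOS 7 usually means disabling firewalld, or opening the Ceph ports instead
hostnamectl set-hostname linux-bkce-node15                # set each node's own hostname; the hosts file entries are added in a later step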

Quick commands to configure the second NIC

\cp /etc/sysconfig/network-scripts/ifcfg-eth0  /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i "/GATEWAY/d"  /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i "/DNS/d"  /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i "/HWADDR/d"  /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i "s#eth0#eth1#g"  /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i "s#192.168.1#192.168.10#g" /etc/sysconfig/network-scripts/ifcfg-eth1
ifdown eth1 && ifup eth1
ping -c 4 192.168.10.15
echo

Preparing the Deployment Environment

1. System environment

[root@linux-bkce-node15 ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 
[root@linux-bkce-node15 ~]# uname -a
Linux linux-bkce-node15 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

2. Initialize DNS

cat >/etc/resolv.conf<<EOF
nameserver 114.114.114.114
EOF

3. Initialize the yum repositories

rm -f /etc/yum.repos.d/*.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum repolist

4. Download the initialization scripts

yum -y install git ansible
git clone https://gitee.com/chriscentos/system-init.git

5. Generate an SSH key pair

ssh-keygen

6. Configure the initialization scripts

[root@linux-bkce-node15 ~]# cd /root/system-init/
cat >hosts<<EOF
[nodes01]
192.168.1.15 
192.168.1.16 
192.168.1.17 

[all:vars]
ansible_ssh_port=22
ansible_ssh_user=root
ansible_ssh_pass="P@sswd"
EOF

cat >group_vars/all<<EOF
---
# system init size percentage
ntp_server_host: 'ntp1.aliyun.com'
dns_server_host: '114.114.114.114'

# Safety reinforcement related configuration
auth_keys_file: '/root/.ssh/authorized_keys'
password_auth: 'yes'
root_public_key: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDNH9zaCDxDsNCV3VeL0R/Z+cYLhffwOQfze/sc5dmcrWNp7a8GVskRIkJz97riaWkzbWePLthzcPsT0a4QOOQYldRFrmWSOp/Bx/cqV+O60MhRz8hQd0f9/scd87GzfJ7lTo4g3EXFqZWnjB6SDVgdS3akOhV7Vu5iq81JBS3H8HqxczKHTG8ezSdRtiimkwNESZxgsferBwqUMNbi1ZruMvwHwZ2vo4U4vQpZ91WYXfseWCgE6qhR5xbwjy+5JsGFbOezrI9vPzXD/TVVvi/liQ5aiatF/8K5WmURLA0xC4+oeEa/VWsKG1t2+k4ypxzJqgjH7G4U96sXKD13BEJ root@linux-bkce-node15'
root_passwd: 'P@sswd'
EOF

7. Check connectivity to the servers

ansible nodes01 -m ping

8. Run the initialization playbook

ansible-playbook playbooks/system_init.yml -e "nodes=nodes01"

9. Install the ceph-deploy tool (network installation method)

Note: if you are installing the Jewel (J) release of Ceph, ceph-deploy must be version 1.5.37, otherwise deployment may fail. Start by configuring a local (China) pip mirror, as shown below.

On Linux, edit ~/.pip/pip.conf (create the directory and file if they do not exist; the leading "." makes the directory hidden).

mkdir -p ~/.pip/
cat > ~/.pip/pip.conf<<EOF

[global]
index-url = https://mirrors.cloud.tencent.com/pypi/simple
trusted-host = mirrors.cloud.tencent.com
EOF

10. Install ceph-deploy

yum -y install python-pip
pip install ceph-deploy

11. Check the ceph-deploy version; it must be 2.0.1

[root@linux-bkce-node15 ~]# ceph-deploy 
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--ceph-conf CEPH_CONF]
                   COMMAND ...

Easy Ceph deployment

    -^-
   /   \
   |O o|  ceph-deploy v2.0.1
   ).-.(
  '/|||\`
  | '|` |
    '|`

12. Test passwordless SSH login

[root@linux-bkce-node15 ~]# ssh 192.168.1.16
[root@linux-bkce-node16 ~]# 

13. Configure the Ceph yum repository on all servers

cat >/etc/yum.repos.d/ceph.repo<<EOF
[ceph]
name=ceph
baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=0
EOF
yum repolist

14. Configure hosts resolution on all servers

cat >>/etc/hosts<<EOF
192.168.1.15 linux-bkce-node15 
192.168.1.16 linux-bkce-node16
192.168.1.17 linux-bkce-node17
EOF

MON Node Installation

Configure the mon node

1. On the admin node (linux-bkce-node15 here), create a Ceph cluster working directory:

mkdir -p /data/ceph/ && cd /data/ceph/

2. Initialize the cluster and create the monitor node linux-bkce-node15

ceph-deploy new linux-bkce-node15 --public-network 192.168.1.0/24 --cluster-network 192.168.10.0/24

3. After initialization completes, the following files are generated in the /data/ceph directory

[root@linux-bkce-node15 ceph]# ll
total 12
-rw-r--r-- 1 root root  273 Mar 21 13:57 ceph.conf
-rw-r--r-- 1 root root 3176 Mar 21 13:57 ceph-deploy-ceph.log
-rw------- 1 root root   73 Mar 21 13:57 ceph.mon.keyring

4. Adjust the default parameters in ceph.conf

cat >>ceph.conf<<EOF
osd_pool_default_min_size = 2
osd_pool_default_size = 3
mon_clock_drift_allowed = 5
osd_pool_default_crush_rule = 0
osd_crush_chooseleaf_type = 1
EOF

5. Install the ceph packages on every node

ssh 192.168.1.15 "yum -y install ceph"
ssh 192.168.1.16 "yum -y install ceph"
ssh 192.168.1.17 "yum -y install ceph"

Initialize the mon node

6. Push the configuration to the monitor node, overwriting any existing config

ceph-deploy --overwrite-conf config push linux-bkce-node15

7. Create the initial monitor and collect the keys

[root@linux-node-ansible ceph]# ceph-deploy mon create-initial
Note: when this completes, the following files are generated
[root@linux-node-ansible ceph]# ll *.keyring
-rw------- 1 root root 113 Mar 27 12:20 ceph.bootstrap-mds.keyring
-rw------- 1 root root 113 Mar 27 12:20 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Mar 27 12:20 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 129 Mar 27 12:20 ceph.client.admin.keyring
-rw------- 1 root root  73 Mar 27 12:14 ceph.mon.keyring

OSD Node Installation

Add OSD nodes

1. Add the OSDs on the OSD nodes

Zap the data disk and create an OSD on each of the three nodes:

ceph-deploy disk zap linux-bkce-node15 /dev/sdb
ceph-deploy osd create --data /dev/sdb linux-bkce-node15

ceph-deploy disk zap linux-bkce-node16 /dev/sdb
ceph-deploy osd create --data /dev/sdb linux-bkce-node16

ceph-deploy disk zap linux-bkce-node17 /dev/sdb
ceph-deploy osd create --data /dev/sdb linux-bkce-node17

2. Check that the OSDs are mounted successfully on all nodes

ansible nodes01 -m shell -a "lsblk|grep osd"

Admin node configuration

3. Designate the Ceph cluster admin node(s); here every node serves as an admin node

ceph-deploy admin linux-bkce-node15

4. Adjust the keyring permissions (run on the mon node)

[root@192-168-56-15 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring 
5. Check the Ceph cluster status
[root@linux-node11 ~]# ceph -s
    cluster 814cf38d-7cf0-414d-ac43-2c2b76ffe1f4
     health HEALTH_OK
     monmap e1: 1 mons at {linux-node11=192.168.56.11:6789/0}
            election epoch 3, quorum 0 linux-node11
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v30: 64 pgs, 1 pools, 0 bytes data, 0 objects
            101 MB used, 1484 GB / 1484 GB avail
                  64 active+clean
Note: HEALTH_OK means the Ceph cluster has been built successfully

View the status of the cluster's OSD nodes

[root@linux-node11 ~]# ceph osd tree
ID WEIGHT  TYPE NAME             UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 1.44955 root default                                            
-2 0.48318     host linux-node11                                   
 0 0.48318         osd.0              up  1.00000          1.00000 
-3 0.48318     host linux-node12                                   
 1 0.48318         osd.1              up  1.00000          1.00000 
-4 0.48318     host linux-node13                                   
 2 0.48318         osd.2              up  1.00000          1.00000

Cluster Node Initialization

ceph health shows "no active mgr"

[root@linux-bkce-node15 ceph]# ceph -s
  cluster:
    id:     1c08775c-58e5-4f8c-8838-7286ae7175f4
    health: HEALTH_WARN
            no active mgr
The fix is as follows:
ceph-deploy mgr create linux-bkce-node15

ceph health shows "application not enabled on 1 pool(s)"

[root@linux-bkce-node15 ceph]# ceph -s
  cluster:
    id:     1c08775c-58e5-4f8c-8838-7286ae7175f4
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

The fix is as follows:
ceph osd pool application enable rbd rbd

Per the last line of the warning, the application must be enabled on 'sata-pool' (here sata-pool is a pool we created manually):
[root@k8s-product01-ceph01 /]# ceph osd pool application enable sata-pool rbd
enabled application 'rbd' on pool 'sata-pool'
Looking at the dashboard again, the error is gone and the block devices are now detected.
Here we used rbd (block device); a pool can only be enabled for one application type. The other two types are cephfs (file system) and rgw (object storage).

Complete Block Device Workflow

Create a storage pool

There is no storage pool by default, so create one and enable it

ceph osd pool create rbd 128
ceph osd pool application enable rbd rbd
ceph osd pool ls

Create a block device

View the default rbd storage pool

[root@192-168-56-15 ~]# ceph osd pool ls
rbd

1. Create a 4 GB block device image in the default rbd pool

rbd create --size 4096 test01 --image-feature layering --pool rbd 

2. List the block device images

[root@linux-bkce-node15]# rbd ls
test01
[root@linux-bkce-node15]# rbd ls -l
NAME   SIZE PARENT FMT PROT LOCK 
test01 4GiB          2 

3. Show detailed information about the block device image

[root@linux-bkce-node15 ~]# rbd --image test01 info
rbd image 'test01':
    size 4GiB in 1024 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.10216b8b4567
    format: 2
    features: layering
    flags: 
    create_timestamp: Mon Mar 21 14:18:30 2022

Resize a block device

4. Resize the block device image to 8 GB

[root@linux-bkce-node15 ~]# rbd resize --image test01 --size 8192
Resizing image: 100% complete...done.
[root@linux-bkce-node15 ~]#  rbd --image test01 info
rbd image 'test01':
    size 8GiB in 2048 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.10216b8b4567
    format: 2
    features: layering
    flags: 
    create_timestamp: Mon Mar 21 14:18:30 2022

Map a block device

5. Map the image and mount it at a target directory

[root@linux-bkce-node15 ~]# rbd map test01 # map the block device
/dev/rbd0
[root@linux-bkce-node15 ~]# mkdir /mnt/test01 -p
[root@linux-bkce-node15 ~]# mkfs.xfs -f /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=262144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@linux-bkce-node15 ~]#  mount /dev/rbd0 /mnt/test01/
[root@linux-bkce-node15 ~]# df -h|grep test01
/dev/rbd0       8.0G   33M  8.0G   1% /mnt/test01
[root@linux-bkce-node15 ~]# echo "This is test" >/mnt/test01/test.sh
[root@linux-bkce-node15 ~]# cat /mnt/test01/test.sh
This is test

To have the block device mapped and mounted automatically at boot, configure the following.

Set up automatic map and mount of RBD block devices at boot:

1. Edit /etc/ceph/rbdmap
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring

Example, for the laowang image:

rbd/laowang id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

2. Edit /etc/fstab

/dev/rbd/rbd/laowang /mnt/ceph-laowang ext4 defaults,noatime,_netdev
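The two files above only take effect if the rbdmap service shipped with ceph-common is enabled; a minimal sketch of wiring it up for the rbd/laowang example:

systemctl enable rbdmap.service     # reads /etc/ceph/rbdmap at boot and maps the listed images
systemctl start rbdmap.service      # maps rbd/laowang to /dev/rbd/rbd/laowang right away
mkdir -p /mnt/ceph-laowang          # mount point referenced in /etc/fstab
mount /mnt/ceph-laowang             # _netdev defers the boot-time mount until the network is up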

Snapshot Management

3. Create a snapshot of the block device

[root@linux-bkce-node15 ~]# rbd --pool rbd snap create --snap test01_snap test01

4. List the snapshot we just created

[root@linux-bkce-node15 ~]# rbd --pool rbd snap ls test01
SNAPID NAME           SIZE 
     4 test01_snap 4096 MB

5. Delete the file we created earlier

[root@linux-bkce-node15 ~]# cd /mnt/test01/
[root@linux-bkce-node15 test01]# ll
total 4
-rw-r--r-- 1 root root 13 Mar 21 14:21 test.sh
[root@linux-bkce-node15 test01]# rm -f test.sh 

6. Roll back to the snapshot (always umount before rolling back)

[root@linux-bkce-node15 test01]# cd
[root@linux-bkce-node15 ~]# umount /mnt/test01
[root@linux-bkce-node15 ~]# rbd --pool rbd snap rollback --snap test01_snap test01
Rolling back to snapshot: 100% complete...done.
[root@linux-bkce-node15 ~]# mount /dev/rbd0 /mnt/test01
[root@linux-bkce-node15 ~]# ls /mnt/test01/
test.sh
[root@linux-bkce-node15 ~]# cat /mnt/test01/test.sh 
This is test
Note: as the output shows, the file has been rolled back to its original state

7. The snapshot can also be deleted

[root@linux-bkce-node15 ~]# rbd --pool rbd snap rm --snap test01_snap test01
Removing snap: 100% complete...done.
[root@linux-bkce-node15 ~]# rbd --pool rbd snap ls test01
[root@linux-bkce-node15 ~]# 

8. A snapshot can be protected to prevent deletion

root@linux-bkce-node15 ~]# rbd --pool rbd snap create --snap test01_snap test01
[root@linux-bkce-node15 ~]# rbd --pool rbd snap ls test01
SNAPID NAME        SIZE TIMESTAMP                
     6 test01_snap 8GiB Mon Mar 21 14:25:26 2022 
[root@linux-bkce-node15 ~]# rbd --pool rbd snap protect --image test01 --snap test01_snap
[root@linux-bkce-node15 ~]# rbd --pool rbd snap rm --snap test01_snap test01
Removing snap: 0% complete...failed.
rbd: snapshot 'test01_snap' is protected from removal.
2022-03-21 14:25:45.419717 7f23c2877d40 -1 librbd::Operations: snapshot is protected
[root@linux-bkce-node15 ~]#  rbd --pool rbd snap ls test01
SNAPID NAME        SIZE TIMESTAMP                
     6 test01_snap 8GiB Mon Mar 21 14:25:26 2022 

9. Protection can be removed so the snapshot can be deleted again

[root@linux-bkce-node15 ~]# rbd --pool rbd snap unprotect --image test01 --snap test01_snap
[root@linux-bkce-node15 ~]# rbd --pool rbd snap rm --snap test01_snap test01
Removing snap: 100% complete...done.
[root@linux-bkce-node15 ~]# rbd --pool rbd snap ls test01

Snapshot Cloning

1. An existing snapshot can also be cloned into a new block device image

[root@linux-bkce-node15 ~]# rbd --pool rbd snap create --snap test01_snap test01
[root@linux-bkce-node15 ~]# rbd --pool rbd snap ls test01
SNAPID NAME        SIZE TIMESTAMP                
     8 test01_snap 8GiB Mon Mar 21 14:27:05 2022 
[root@linux-bkce-node15 ~]# rbd snap protect rbd/test01@test01_snap
[root@linux-bkce-node15 ~]# rbd clone rbd/test01@test01_snap rbd/test01_snap-clone

To clean up afterwards, remove the clone and then delete the test01 snapshot:
rbd rm test01_snap-clone
rbd snap unprotect rbd/test01@test01_snap
rbd --pool rbd snap rm --snap test01_snap test01

2. View the snapshot's clones

[root@linux-bkce-node15 ~]# rbd children rbd/test01@test01_snap
rbd/test01_snap-clone
[root@linux-bkce-node15 ~]# rbd ls
test01
test01_snap-clone

Delete a Block Device

If a block device is no longer needed, it can be deleted

[root@linux-bkce-node15 ~]#  rbd rm test01_snap-clone
Removing image: 100% complete...done.
[root@192-168-56-15 ~]# rbd rm test01
Removing image: 100% complete...done.

Problem: the following error occurs when deleting a block device image

[root@linux-bkce-node15 ~]# rbd rm test01
2022-03-21 14:28:35.568076 7fcaceffd700 -1 librbd::image::RemoveRequest: 0x55b2111dbe30 check_image_snaps: image has snapshots - not removing
Removing image: 0% complete...failed.
rbd: image has snapshots - these must be deleted with 'rbd snap purge' before the image can be removed.

Fix: unmount and unmap the device, then delete the image

[root@linux-bkce-node15 ~]# umount /mnt/test01
[root@linux-bkce-node15 ~]# rbd unmap test01
[root@linux-bkce-node15 ~]# rbd rm test01
Removing image: 100% complete...done.
[root@linux-bkce-node15 ~]# rbd ls # verify the image was deleted
[root@linux-bkce-node15 ~]# 

Routine Ceph Maintenance

Service Management

mon-related services

systemctl status ceph-mon.target
systemctl status ceph.target
systemctl status ceph-mon@linux-bkce-node15.service

osd-related services

systemctl status ceph-osd.target
systemctl status ceph.target
systemctl status ceph-osd@0.service
systemctl status ceph-osd@1.service
systemctl status ceph-osd@2.service

Cluster Management

Cluster status information

1. View the Ceph cluster status

[root@linux-bkce-node15 ~]# ceph -s
  cluster:
    id:     de142f92-6016-450e-80b0-46db08d1e3f1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum linux-bkce-node15
    mgr: linux-bkce-node15(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 128 pgs
    objects: 3 objects, 19B
    usage:   3.01GiB used, 1.46TiB / 1.46TiB avail
    pgs:     128 active+clean

2. View the status of the OSD nodes as a tree

[root@linux-bkce-node15 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       1.46489 root default                                       
-3       0.48830     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
-5       0.48830     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
-7       0.48830     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 

3. View a node's disks

[root@linux-bkce-node15 ~]# cd /data/ceph/
[root@linux-bkce-node15 ceph]# ceph-deploy disk list linux-bkce-node15

4. By default, the OSD journal partition is 5 GB

[root@10-0-38-51 ~]# fdisk -l /dev/sdb
#         Start          End    Size  Type            Name
 1     10487808    104857566     45G  unknown         ceph data
 2         2048     10487807      5G  unknown         ceph journal
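For filestore OSDs the journal size is controlled by osd_journal_size (in MB) in ceph.conf and has to be set before the OSDs are created; a minimal sketch for a 10 GB journal instead of the 5 GB default:

[osd]
osd journal size = 10240    # in MB; filestore only, bluestore OSDs have no separate journal partition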

View the mon node configuration

[root@linux-bkce-node15 ~]# ceph mon_status -f json-pretty

{
    "name": "10-0-38-51",
    "rank": 0,
    "state": "leader",
    "election_epoch": 3,
    "quorum": [
        0
    ],
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "7b1bc4a7-065c-4311-9fe3-4d328640ccaf",
        "modified": "2017-05-15 10:37:30.551737",
        "created": "2017-05-15 10:37:30.551737",
        "mons": [
            {
                "rank": 0,
                "name": "10-0-38-51",
                "addr": "192.168.100.51:6789\/0"
            }
        ]
    }
}

Add OSD nodes

First, look at the OSD nodes we currently have:

[root@linux-bkce-node15 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       1.46489 root default                                       
-3       0.48830     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
-5       0.48830     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
-7       0.48830     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 

The output shows three OSDs; now add three more:

[root@linux-bkce-node15 ~]# cd /data/ceph/

ceph-deploy disk zap linux-bkce-node15 /dev/sdc
ceph-deploy osd create --data /dev/sdc linux-bkce-node15

ceph-deploy disk zap linux-bkce-node16 /dev/sdc
ceph-deploy osd create --data /dev/sdc linux-bkce-node16

ceph-deploy disk zap linux-bkce-node17 /dev/sdc
ceph-deploy osd create --data /dev/sdc linux-bkce-node17


Optional further expansion:
ceph-deploy disk zap linux-bkce-node15 /dev/sdd
ceph-deploy osd create --data /dev/sdd linux-bkce-node15

ceph-deploy disk zap linux-bkce-node16 /dev/sdd
ceph-deploy osd create --data /dev/sdd linux-bkce-node16

ceph-deploy disk zap linux-bkce-node17 /dev/sdd
ceph-deploy osd create --data /dev/sdd linux-bkce-node17

Check that the OSDs were added successfully (view from the mon node)

[root@linux-bkce-node15 ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                                       
-3       0.97659     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
 3   hdd 0.48830         osd.3                  up  1.00000 1.00000 
-5       0.97659     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
 4   hdd 0.48830         osd.4                  up  1.00000 1.00000 
-7       0.97659     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 
 5   hdd 0.48830         osd.5                  up  1.00000 1.00000 

Check that the cluster status is OK

[root@linux-bkce-node15 ceph]# ceph -s
  cluster:
    id:     de142f92-6016-450e-80b0-46db08d1e3f1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum linux-bkce-node15
    mgr: linux-bkce-node15(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 128 pgs
    objects: 3 objects, 19B
    usage:   6.03GiB used, 2.92TiB / 2.93TiB avail
    pgs:     128 active+clean

Remove an OSD node

Disk replacement procedure (see the consolidated sketch below):
1. ceph osd crush reweight osd.ID 0 — set the CRUSH reweight of the failed disk's OSD to 0 so that no new data lands on it (the videos that simply stop the OSD outright are far too crude; production clusters are never handled that way).
2. systemctl stop ceph-osd@ID — stop the corresponding OSD daemon on the host that owns it, but only after step 1 has finished and all PGs are active+clean again: lowering the reweight makes Ceph migrate the data off this OSD, and stopping it without checking the PG state, while not fatal, does impact the workloads above.
3. ceph osd out osd.ID — mark the OSD as out.
4. ceph osd crush remove osd.ID — tell the mons to remove this OSD from the CRUSH view.
5. ceph auth del osd.ID — delete its authentication key.
6. ceph osd rm osd.ID — delete the OSD.
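As a convenience, the six steps can be strung together in a small script; this is a minimal sketch that assumes the OSD id is passed as the first argument and that the node running it holds the admin keyring (the systemctl step still has to be run on the OSD's own host):

#!/usr/bin/env bash
# Sketch: drain one OSD and then remove it, following the six steps above.
# Usage: ./remove-osd.sh <osd-id>
set -euo pipefail
ID=$1

ceph osd crush reweight osd.${ID} 0          # step 1: stop new data landing on this OSD

# Step 2 precondition: wait until rebalancing finishes and the cluster is healthy again.
until ceph health | grep -q HEALTH_OK; do
    echo "waiting for data to migrate off osd.${ID} ..."
    sleep 30
done

# Step 2: stop the daemon -- run this on the host that owns osd.${ID}:
#   systemctl stop ceph-osd@${ID}

ceph osd out osd.${ID}                       # step 3: mark the OSD out
ceph osd crush remove osd.${ID}              # step 4: remove it from the CRUSH map
ceph auth del osd.${ID}                      # step 5: delete its cephx key
ceph osd rm osd.${ID}                        # step 6: remove the OSD from the cluster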

To shrink the cluster or replace hardware, an OSD can be removed at runtime. In Ceph, an OSD is usually one ceph-osd daemon on one host, backed by one disk; if a host has several data disks, you have to remove the corresponding ceph-osd daemons one by one. Before starting, check the cluster capacity to make sure removing the OSD will not push the cluster over its near-full ratio.

1. Take the OSD out of the cluster. Before removal it is normally up and in; take it out first so Ceph starts rebalancing and copies its data to the other OSDs.

Stop the OSD daemon

[root@linux-bkce-node17 ~]# systemctl stop ceph-osd@5.service
[root@linux-bkce-node17 ~]# systemctl disable ceph-osd@5.service

[root@linux-bkce-node15 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                                       
-3       0.97659     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
 3   hdd 0.48830         osd.3                  up  1.00000 1.00000 
-5       0.97659     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
 4   hdd 0.48830         osd.4                  up  1.00000 1.00000 
-7       0.97659     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 
 5   hdd 0.48830         osd.5                down  1.00000 1.00000 

Watch the data migration

Once the OSD is marked out, Ceph begins rebalancing the cluster and migrating placement groups off the OSD being removed. You can watch the process with the ceph tool.

ceph -w

You will see placement group states go from active+clean to active, some degraded objects, and finally back to active+clean once the migration finishes. (Press Ctrl-C to exit.)

Stop the OSD: after being taken out of the cluster it may still be running, i.e. its state is up and out. Stop the OSD daemon before removing it.

Mark the OSD as out

[root@linux-bkce-node15 ~]# ceph osd out osd.5
[root@linux-bkce-node15 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                                       
-3       0.97659     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
 3   hdd 0.48830         osd.3                  up  1.00000 1.00000 
-5       0.97659     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
 4   hdd 0.48830         osd.4                  up  1.00000 1.00000 
-7       0.97659     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 
 5   hdd 0.48830         osd.5                down        0 1.00000 
Note: after the OSD is stopped, its status changes to down

The following steps, in order, remove an OSD from the cluster CRUSH map, delete its authentication key, delete its OSD map entry, and delete its ceph.conf entry. If a host has multiple disks, repeat them for each disk's OSD.

1. Remove the OSD's entry from the CRUSH map so it no longer receives data. Alternatively, you can decompile the CRUSH map, delete the device entry and the corresponding host-bucket entry (or delete the host bucket itself if it is in the CRUSH map and you want to remove the whole host), then recompile and apply the map. See the documentation on removing OSDs for details.

[root@linux-bkce-node15 ~]# ceph osd crush remove osd.5
removed item id 5 name 'osd.5' from crush map
[root@linux-bkce-node15 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       2.44148 root default                                       
-3       0.97659     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
 3   hdd 0.48830         osd.3                  up  1.00000 1.00000 
-5       0.97659     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
 4   hdd 0.48830         osd.4                  up  1.00000 1.00000 
-7       0.48830     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 
 5             0 osd.5                        down        0 1.00000 
2. Delete the OSD's authentication key:
[root@linux-bkce-node15 ~]# ceph auth del osd.5
updated
3. Delete the OSD:
[root@linux-bkce-node15 ~]# ceph osd rm osd.5
removed osd.5
[root@linux-bkce-node15 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                  STATUS REWEIGHT PRI-AFF 
-1       2.44148 root default                                       
-3       0.97659     host linux-bkce-node15                         
 0   hdd 0.48830         osd.0                  up  1.00000 1.00000 
 3   hdd 0.48830         osd.3                  up  1.00000 1.00000 
-5       0.97659     host linux-bkce-node16                         
 1   hdd 0.48830         osd.1                  up  1.00000 1.00000 
 4   hdd 0.48830         osd.4                  up  1.00000 1.00000 
-7       0.48830     host linux-bkce-node17                         
 2   hdd 0.48830         osd.2                  up  1.00000 1.00000 

4. Remove the leftover device-mapper entry shown by lsblk

[root@linux-bkce-node17 ~]# lsblk 
[root@linux-bkce-node17 ~]# dmsetup remove ceph--632940fe--2c6f--4111--bc9d--ebcc65340d31-osd--block--e091cff2--8e0a--40b5--b099--25082f07b3dd

5. Reformat the disk if necessary

mkfs.xfs  -f /dev/sdc

6. Finally, verify that the OSD was removed successfully

[root@192-168-56-15 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 2.41592 root default                                             
-2 0.48318     host 192-168-56-15                                   
 0 0.48318         osd.0               up  1.00000          1.00000 
-3 0.96637     host 192-168-56-16                                   
 1 0.48318         osd.1               up  1.00000          1.00000 
 4 0.48318         osd.4               up  1.00000          1.00000 
-4 0.96637     host 192-168-56-17                                   
 2 0.48318         osd.2               up  1.00000          1.00000 
 5 0.48318         osd.5               up  1.00000          1.00000

Add mon nodes

Add linux-bkce-node16 and linux-bkce-node17 as mon nodes

cd /data/ceph/
ceph-deploy --overwrite-conf mon add linux-bkce-node16 
ceph-deploy --overwrite-conf mon add linux-bkce-node17
ceph quorum_status --format json-pretty 

Check that nodes 15, 16, and 17 are now all mon nodes

[root@linux-bkce-node15 ceph]# ceph -s
  cluster:
    id:     de142f92-6016-450e-80b0-46db08d1e3f1
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum linux-bkce-node15,linux-bkce-node16,linux-bkce-node17
    mgr: linux-bkce-node15(active)
    osd: 5 osds: 5 up, 5 in

Remove mon nodes

The mon nodes just added can also be removed

[root@linux-bkce-node15 ~]# cd /data/ceph/
[root@linux-bkce-node15 ceph]# ceph-deploy mon destroy linux-bkce-node16 linux-bkce-node17
[root@linux-bkce-node15 ceph]# ceph -s
  cluster:
    id:     de142f92-6016-450e-80b0-46db08d1e3f1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum linux-bkce-node15
    mgr: linux-bkce-node15(active)
    osd: 5 osds: 5 up, 5 in

Common Ceph Monitoring Options

Enable the monitoring module

Learning Ceph, part 3: deploying the Luminous release

https://www.cnblogs.com/linuxk/p/9419423.html

View the modules the cluster supports

[root@linux-bkce-node15 ~]# ceph mgr dump  
[root@linux-bkce-node15 ~]# ceph mgr module enable dashboard   # enable the dashboard module

Add to /etc/ceph/ceph.conf:

[mgr]
mgr modules = dashboard

Set the dashboard IP and port

[root@ceph-node1 ceph]# ceph config-key put mgr/dashboard/server_addr 192.168.1.15
set mgr/dashboard/server_addr
[root@ceph-node1 ceph]# ceph config-key put mgr/dashboard/server_port 7000
set mgr/dashboard/server_port
[root@ceph-node1 ceph]# netstat -tulnp |grep 7000
tcp6       0      0 :::7000                 :::*                    LISTEN      13353/ceph-mgr 

Browse to: http://192.168.1.15:7000/


Kraken monitoring


Calamari monitoring

A case study of successfully deploying Calamari on CentOS 7.1

http://cloud.51cto.com/art/201507/486246.htm


Ceph Tuning and Benchmarking

Adjusting the Number of PGs

http://www.tuicool.com/articles/MZFv6bU

PG stands for placement groups, Ceph's logical storage unit. When data is stored in Ceph, it is first split into a series of objects; a hash of the object name, the replication level, and the number of PGs together determine the target PG. Depending on the replication level, each PG is replicated and distributed across different OSDs. Think of a PG as a logical container holding many objects, mapped onto several concrete OSDs. PGs exist to improve the performance and scalability of the Ceph storage system.


Without PGs it would be very hard to manage and track billions of objects spread across hundreds of OSDs; for Ceph, managing PGs is far simpler than managing every object individually. Each PG consumes some CPU and memory, so the cluster's PG count should be calculated carefully. In general, increasing the number of PGs reduces the per-OSD load, but the increase should be planned; a common recommendation is 50-100 PGs per OSD. As the data grows and the cluster is expanded, the PG count needs to be adjusted as well, and CRUSH takes care of redistributing the PGs.

How many PGs each pool should get depends on the number of OSDs, the replication count, and the number of pools. The formula is:

Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count

Round the result up to the nearest power of two. For example, with 160 OSDs, a replication count of 3, and 3 pools, the formula gives 1777.7; the nearest power of two above that is 2048, so each pool gets 2048 PGs.
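The same calculation can be reproduced on the command line; a small sketch using the example numbers above:

# PG calculation for 160 OSDs, 3 replicas, 3 pools (the example values from the text)
OSDS=160; REPLICAS=3; POOLS=3
awk -v o="$OSDS" -v r="$REPLICAS" -v p="$POOLS" 'BEGIN {
    raw = o * 100 / r / p                   # Total PGs = ((OSDs * 100) / replicas) / pools
    pg = 1; while (pg < raw) pg *= 2        # round up to the next power of two
    printf "raw=%.1f  pg_num per pool=%d\n", raw, pg   # -> raw=1777.8  pg_num per pool=2048
}'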

When changing a pool's PG count, the PGP count must be changed with it. PGP is the set of placement groups used for actual placement, and it should always equal the PG count: if you raise a pool's pg_num, raise pgp_num to the same value so the cluster can rebalance correctly. The steps below show how to modify pg_num and pgp_num.

(1) Check the existing pg_num and pgp_num of the rbd pool:

$ ceph osd pool get rbd pg_num
pg_num: 128
$ ceph osd pool get rbd pgp_num
pgp_num: 128

ceph osd pool get images pg_num

(2) Check the pool's replication size with the following command:

$ ceph osd dump |grep size|grep rbd
pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 45 flags hashpspool stripe_width 0

(3) Using the formula above, compute the new PG count from the number of OSDs, the replication size, and the number of pools; assume the result is 256.

(4) Change the rbd pool's pg_num and pgp_num to 256:

$ ceph osd pool set rbd pg_num 256
$ ceph osd pool set rbd pgp_num 256

ceph osd pool set volumes pg_num 256
ceph osd pool set volumes pgp_num 256

ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num

ceph osd pool set vms pg_num 256
ceph osd pool set vms pgp_num 256

(5) If there are other pools, adjust their pg_num and pgp_num in the same way so that the load is more evenly balanced.

Integrating Ceph with OpenStack

How to integrate Ceph with OpenStack

https://xiexianbin.cn/openstack/2016/11/14/ceph-as-storage-for-openstack

We will also show how to integrate three important OpenStack use cases with Ceph: Cinder (block storage), Glance (images), and Nova (VM virtual disks).

Ceph provides unified, scale-out storage on commodity x86 hardware, with self-healing and intelligent failure prediction. It has become the de facto standard for software-defined storage. Because Ceph is open source, many vendors can offer Ceph-based software-defined storage systems; it is not limited to Red Hat, SUSE, Mirantis, Ubuntu and the like. SanDisk, Fujitsu, HP, Dell, Samsung and others now also ship integrated solutions, and there are even large community-built environments (such as CERN) providing storage for 10,000 virtual machines.

Ceph is by no means limited to OpenStack, but that is where it first gained traction. The latest OpenStack user survey shows Ceph as the clear leader for OpenStack storage: page 42 of the April 2016 report puts Ceph at 57% of OpenStack storage, followed by LVM (local storage) at 28% and NetApp at 9%. Excluding LVM, Ceph leads the other storage vendors by an incredible 48%. Why is that?

There are several reasons, but I think these three are the most important:

Ceph is a scale-out unified storage platform. OpenStack needs two things most from its storage: the ability to scale with OpenStack itself, and to do so regardless of whether the workload is block (Cinder), file (Manila), or object (Swift). Traditional storage vendors need two or three different systems to achieve this; they do not scale the same way, in most cases they only scale up in a never-ending migration cycle, and their management features were never truly integrated across the different storage use cases.

Ceph is cost-effective. Ceph runs on Linux rather than a proprietary operating system. You can choose not only whom to buy Ceph from, but also where to buy the hardware, from the same vendor or a different one. You can buy the hardware yourself or buy an integrated Ceph-plus-hardware solution from a single vendor, and hyper-converged Ceph options (running Ceph services on the compute nodes) have already appeared.

Like OpenStack, Ceph is an open source project. This allows tighter integration and cross-project development. Proprietary vendors are always playing catch-up because they have secrets to protect, and their influence in open source communities is usually limited.

Here is an architecture diagram showing all the OpenStack components that need storage. It shows how these components integrate with Ceph and how Ceph provides a single unified storage system that scales to satisfy all of these use cases.


If you are interested in more topics related to Ceph and OpenStack, this site is recommended: http://ceph.com/category/ceph-and-openstack/

Enough about why Ceph and OpenStack are so great together; let's talk about how to connect them. If you do not have a Ceph environment, you can follow this article to set one up quickly.

Integrating Ceph with Glance

Glance is the image service in OpenStack. By default, images are stored locally on the controller and copied to the compute hosts when requested. The compute hosts cache the images, but every time an image is updated it has to be copied again.

Ceph provides a backend for Glance that lets images be stored in Ceph instead of locally on the controller and compute nodes. This greatly reduces the network traffic for fetching images and improves performance, because Ceph can clone an image rather than copy it. It also makes migrations between OpenStack deployments, and concepts such as multi-site OpenStack, simpler.

Install the Ceph client packages used by Glance.

[root@192-168-56-11 ~]# yum -y install ceph python-rbd

On the Ceph admin node, create a Ceph RBD pool for Glance images.

[root@192-168-56-15 ~]# ceph osd pool create images 128
pool 'images' created

Create a keyring that allows Glance to access the pool.

[root@192-168-56-15 ~]# ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring

Note: the command above generates the following file

[root@192-168-56-15 ~]# ll /etc/ceph/ceph.client.images.keyring 
-rw-r--r-- 1 root root 64 Jun 21 15:15 /etc/ceph/ceph.client.images.keyring

Copy the keyring and the Ceph configuration file to /etc/ceph on the OpenStack controller node

[root@192-168-56-15 ~]# cd /etc/ceph/
[root@192-168-56-15 ceph]# scp ceph.conf ceph.client.images.keyring 192.168.56.11:/etc/ceph/

On the controller node, set the file permissions so that Glance can read the Ceph keyring

[root@192-168-56-11 ~]#
chgrp glance /etc/ceph/ceph.client.images.keyring
chmod 0640 /etc/ceph/ceph.client.images.keyring

On the controller node, add the keyring file to the Ceph configuration

[root@192-168-56-11 ~]# vim /etc/ceph/ceph.conf
[client.images]
keyring = /etc/ceph/ceph.client.images.keyring

On the controller node, back up the original Glance configuration

[root@192-168-56-11 ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

On the controller node, update the Glance configuration

[root@192-168-56-11 ~]# vim /etc/glance/glance-api.conf
[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf

Note: make sure these parameters are set under the [glance_store] section; do not simply put them under the [DEFAULT] section.

Restart the Glance service on the controller node

[root@192-168-56-11 ~]# systemctl restart openstack-glance-api

Upload a test image file to /root on the controller node

[root@192-168-56-11 ~]# ll cirros-0.3.4-x86_64-disk.img 
-rw-r--r-- 1 root root 13287936 Jun 20 10:57 cirros-0.3.4-x86_64-disk.img

Now upload the image to the Ceph backend

[root@192-168-56-11 ~]# source admin-openrc
[root@192-168-56-11 ~]# openstack image create "cirros-test" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
[root@192-168-56-11 ~]# glance image-list
+--------------------------------------+-------------+
| ID                                   | Name        |
+--------------------------------------+-------------+
| 79387e6d-382f-4e72-97cd-3215162fd1dd | ciross      |
| 8484f103-66ff-4c5a-92d1-5fcf2d151193 | cirros-test |
+--------------------------------------+-------------+

On the Ceph mon/admin node, verify that the image was uploaded successfully

[root@192-168-56-15 ~]# rbd ls images
8484f103-66ff-4c5a-92d1-5fcf2d151193

This confirms the upload succeeded.


The successful upload can also be seen in the dashboard.

Integrating Ceph with Cinder

*Prerequisites: the Cinder controller and compute nodes must already be configured; the Ceph-Cinder integration is built on top of that.*

Cinder is the block storage service in OpenStack. Cinder provides an abstraction over block storage and lets vendors integrate by supplying drivers. With Ceph, each storage pool can be mapped to a different Cinder backend, which makes it possible to offer storage tiers such as gold, silver, and bronze. You could decide, for example, that gold is fast SSDs replicated three times, silver is replicated twice, and bronze uses slower erasure-coded disks.
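A sketch of what such tiering could look like in cinder.conf, assuming two extra pools named ssd-gold and hdd-silver already exist and reusing the client.volumes user and libvirt secret from this walkthrough (the pool and backend names are illustrative only):

[DEFAULT]
enabled_backends = rbd-gold,rbd-silver

[rbd-gold]
volume_backend_name = gold                  # matched via a volume type's extra specs
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ssd-gold                         # hypothetical 3x-replicated SSD pool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes
rbd_secret_uuid = 82934e3a-65a1-4f89-8b4c-a413ecc039c3

[rbd-silver]
volume_backend_name = silver
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = hdd-silver                       # hypothetical 2x-replicated HDD pool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes
rbd_secret_uuid = 82934e3a-65a1-4f89-8b4c-a413ecc039c3

Each backend is then exposed through a volume type, e.g. cinder type-create gold followed by cinder type-key gold set volume_backend_name=gold.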

Install the Ceph client packages used by Cinder.

[root@192-168-56-11 ~]# yum -y install ceph python-rbd

Create a Ceph pool for Cinder volumes

[root@192-168-56-15 ~]# ceph osd pool create volumes 128
pool 'volumes' created

Create a keyring that grants Cinder access

[root@192-168-56-15 ~]# ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.volumes.keyring

Copy the keyring and the configuration file to the OpenStack compute node

[root@192-168-56-15 ~]# cd /etc/ceph/
[root@192-168-56-15 ceph]# scp ceph.client.volumes.keyring ceph.conf 192.168.56.12:/etc/ceph/

Create a file containing only the authentication key on the OpenStack compute node

[root@192-168-56-15 ~]# ceph auth get-key client.volumes |ssh 192.168.56.12 tee /etc/ceph/client.volumes.key

On the compute node, set the keyring file permissions so that Cinder can access it

[root@192-168-56-12 ~]#
chgrp cinder /etc/ceph/ceph.client.volumes.keyring
chmod 0640 /etc/ceph/ceph.client.volumes.keyring

On the compute node, add the keyring to the Ceph configuration file

[root@192-168-56-12 ~]# vim /etc/ceph/ceph.conf
[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

Allow the KVM hypervisor to access Ceph

[root@192-168-56-12 ~]# uuidgen |tee /etc/ceph/cinder.uuid.txt
82934e3a-65a1-4f89-8b4c-a413ecc039c3

Create a secret in virsh so that KVM can access the Ceph pool holding the Cinder volumes

[root@192-168-56-12 ~]# vim /etc/ceph/cinder.xml
<secret ephemeral='no' private='no'>
  <uuid>82934e3a-65a1-4f89-8b4c-a413ecc039c3</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
[root@192-168-56-12 ~]# virsh secret-define --file /etc/ceph/cinder.xml
Secret 82934e3a-65a1-4f89-8b4c-a413ecc039c3 created
[root@192-168-56-12 ~]# virsh secret-set-value --secret 82934e3a-65a1-4f89-8b4c-a413ecc039c3 --base64 $(cat /etc/ceph/client.volumes.key)
Secret value set

On the compute node, add a Ceph backend for Cinder

[root@192-168-56-12 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = rbd

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = volumes
rbd_secret_uuid = 82934e3a-65a1-4f89-8b4c-a413ecc039c3

Restart the Cinder services on the compute node

[root@192-168-56-12 ~]# systemctl restart openstack-cinder-volume.service target.service

On the controller node, check the Cinder service list

[root@192-168-56-11 ~]# source admin-openrc 
[root@192-168-56-11 ~]# cinder service-list
+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host              | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | 192-168-56-11     | nova | enabled | up    | 2017-06-21T07:45:46.000000 | -               |
| cinder-volume    | 192-168-56-12@lvm | nova | enabled | down  | 2017-06-21T07:44:33.000000 | -               |
| cinder-volume    | 192-168-56-12@rbd | nova | enabled | up    | 2017-06-21T07:45:51.000000 | -               |
+------------------+-------------------+------+---------+-------+----------------------------+-----------------+

Create a Cinder volume on the controller node

[root@192-168-56-11 ~]# source demo-openrc 
[root@192-168-56-11 ~]# cinder create --display-name="test01" 1
[root@192-168-56-11 ~]# cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| ID                                   | Status    | Name   | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+-
| a5271730-e2ef-4fad-9a5e-145d9e38ff9b | available | test01 | 1    | -           | false    |             |
+--------------------------------------+-----------+--------+------+-------------+-

List the Cinder volumes in Ceph

[root@192-168-56-15 ~]# rbd ls volumes
volume-a5271730-e2ef-4fad-9a5e-145d9e38ff9b
[root@192-168-56-15 ~]# rbd info volumes/volume-a5271730-e2ef-4fad-9a5e-145d9e38ff9b
rbd image 'volume-a5271730-e2ef-4fad-9a5e-145d9e38ff9b':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.107b7d64e86d
    format: 2
    features: layering
    flags:

Integrating Ceph with Nova

Nova is the compute service in OpenStack. By default, Nova stores the virtual disk images associated with running VMs on the hypervisor, under /var/lib/nova/instances. Using local storage on the compute nodes for virtual disk images has some drawbacks:

The images live under the root file system; large images can fill the file system and crash the compute node.

A disk failure on the compute node can lose the virtual disks, making VM recovery impossible.

Ceph is one of the storage backends that can integrate directly with Nova. In this section we will see how to configure it.

Install the Ceph client packages used by Nova.

[root@192-168-56-11 ~]# yum -y install ceph python-rbd

Create a Ceph pool for Nova

[root@192-168-56-15 ~]# ceph osd pool create vms 128
pool 'vms' created

Create an authentication keyring for Nova

[root@192-168-56-15 ~]# ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring

Copy the keyring to the OpenStack compute node

[root@192-168-56-15 ~]# cd /etc/ceph/
[root@192-168-56-15 ceph]# scp ceph.client.nova.keyring 192.168.56.12:/etc/ceph/

Create the key file on the OpenStack compute node

[root@192-168-56-15 ceph]# ceph auth get-key client.nova |ssh 192.168.56.12 tee /etc/ceph/client.nova.key

On the compute node, set the keyring file permissions so that the Nova service can access it

[root@192-168-56-12 ~]#
chgrp nova /etc/ceph/ceph.client.nova.keyring
chmod 0640 /etc/ceph/ceph.client.nova.keyring

Update the Ceph configuration on the compute node

[root@192-168-56-12 ~]# vim /etc/ceph/ceph.conf
[client.nova]
keyring = /etc/ceph/ceph.client.nova.keyring

Run the following on the compute node so that KVM can access Ceph

[root@192-168-56-12 ~]# uuidgen |tee /etc/ceph/nova.uuid.txt
7e37262c-a2cd-4a06-82a2-9e4681616c0e

Create a secret in virsh so that KVM can access the Ceph pools

[root@192-168-56-12 ~]# vim /etc/ceph/nova.xml
<secret ephemeral='no' private='no'>
  <uuid>7e37262c-a2cd-4a06-82a2-9e4681616c0e</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
[root@192-168-56-12 ~]# virsh secret-define --file /etc/ceph/nova.xml
Secret 7e37262c-a2cd-4a06-82a2-9e4681616c0e created
[root@192-168-56-12 ~]# virsh secret-set-value --secret 7e37262c-a2cd-4a06-82a2-9e4681616c0e --base64 $(cat /etc/ceph/client.nova.key)
Secret value set

Back up the Nova configuration on the compute node

[root@192-168-56-12 ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.orig

On the compute node, update the Nova configuration to use the Ceph backend

[root@192-168-56-12 ~]# vim /etc/nova/nova.conf
[DEFAULT]
force_raw_images = True
disk_cachemodes = writeback

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = 7e37262c-a2cd-4a06-82a2-9e4681616c0e

Restart the Nova service on the compute node

[root@192-168-56-12 ~]# systemctl restart openstack-nova-compute

Then create a test VM from the controller node


List the images in the Ceph vms pool; the VM disk should now be stored in Ceph

[root@192-168-56-15 ~]# rbd -p vms ls
29ec137a-f7eb-47fd-bab6-7667752c9c17_disk

Cinder Backup

Cinder Backup

http://tanglei528.blog.163.com/blog/static/43353399201512695030227/

Cinder disk backup: principles and practice

http://zhuanlan.51cto.com/art/201704/537194.htm

On one of the Ceph cluster nodes, create the backups pool

[root@192_168_56_15 ceph]# ceph osd pool create backups 128

Create an authentication keyring for cinder-backup

[root@192_168_56_15 ceph]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' -o /etc/ceph/ceph.client.cinder-backup.keyring

Copy the keyring to the host running the cinder-backup service

[root@192_168_56_15 ceph]# scp ceph.client.cinder-backup.keyring 192.168.56.12:/etc/ceph/

Change the keyring's group ownership and permissions

[root@192-168-56-12]#
chgrp cinder /etc/ceph/ceph.client.cinder-backup.keyring
chmod 0640 /etc/ceph/ceph.client.cinder-backup.keyring

Configure /etc/cinder/cinder.conf

[root@192-168-56-12 ceph]# vim /etc/cinder/cinder.conf
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

Start the openstack-cinder-backup service

[root@192-168-56-12 ceph]# systemctl start openstack-cinder-backup

Adjust the dashboard

[root@192-168-56-11 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}

Restart the dashboard services

systemctl restart httpd.service memcached.service

Check that the Cinder backup feature is now available in the dashboard


Common backup operations

Create a volume
cinder create --display-name volume1 1
Back up the volume
cinder backup-create --container volumes_backup --display-name backuptoswift volume1
Restore a backup to a volume
cinder backup-restore --volume-id cb0fe233-f9b6-4303-8a61-c31c863ef7ce volume1
Delete a backup
cinder backup-delete 1b9237a4-b384-4c8e-ad05-2e2dfd0c698c

Troubleshooting the Ceph-OpenStack Integration

Problem 1: a Glance image stored in Ceph RBD cannot be deleted.


[root@osp9 ceph(keystone_admin)]# nova image-list
+--------------------------------------+--------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------------+--------+--------+
| a55e9417-67af-43c5-a342-85d2c4c483f7 | Cirros 0.3.4 | ACTIVE | |
| 34510bb3-da95-4cb1-8a66-59f572ec0a5d | test123 | ACTIVE | |
| cf56345e-1454-4775-84f6-781912ce242b | test456 | ACTIVE | |
+--------------------------------------+--------------+--------+--------+

[root@osp9 ceph(keystone_admin)]# rbd -p images snap unprotect cf56345e-1454-4775-84f6-781912ce242b@snap
[root@osp9 ceph(keystone_admin)]# rbd -p images snap rm cf56345e-1454-4775-84f6-781912ce242b@snap
[root@osp9 ceph(keystone_admin)]# glance image-delete cf56345e-1454-4775-84f6-781912ce242b

Problem 2: after integrating Nova with Ceph, a newly created VM cannot attach the volume created for it, and the Nova log reports the following error:

2017-06-21 16:44:50.024 33210 ERROR oslo_messaging.rpc.server libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized
2017-06-21 16:44:50.024 33210 ERROR oslo_messaging.rpc.server

The investigation showed the problem was in the cephx permissions used for the Nova-Ceph integration; they should be changed as follows.

The broken permissions:

ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring

The corrected permissions:

ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring

*Error:* the requested operation on instance "linux-node4" failed and the instance is in an error state: please try again later [Error: Build of instance 04ce0ed6-5f20-43d2-85ff-2c375af2447a aborted: Block Device Mapping is Invalid.].

Summary of Common Ceph Errors

Problems caused by the firewall

The firewall was enabled but the required ports were not opened, leading to the following errors during deployment

[ceph_deploy.mon][WARNIN] mon.192-168-56-15 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[192-168-56-15][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.192-168-56-15.asok mon_status
[ceph_deploy.mon][WARNIN] mon.192-168-56-15 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[192-168-56-15][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.192-168-56-15.asok mon_status
[ceph_deploy.mon][WARNIN] mon.192-168-56-15 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[192-168-56-15][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.192-168-56-15.asok mon_status
[ceph_deploy.mon][WARNIN] mon.192-168-56-15 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[192-168-56-15][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.192-168-56-15.asok mon_status
[ceph_deploy.mon][WARNIN] mon.192-168-56-15 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying

Fix: open the following ports in the firewall

mon: port 6789

osd: ports 6800-7300
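A sketch of opening those ports with firewalld on CentOS 7 (assuming firewalld is the firewall in use; run on every Ceph node):

firewall-cmd --permanent --add-port=6789/tcp        # mon
firewall-cmd --permanent --add-port=6800-7300/tcp   # osd
firewall-cmd --reload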

Problems caused by SELinux being in Permissive state

Problem: if the SELinux state is Permissive, the following errors may appear when deploying Ceph.

[10-0-192-83][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@10-0-38-83.service to /usr/lib/systemd/system/ceph-mon@.service.
[10-0-192-83][INFO  ] Running command: systemctl start ceph-mon@10-0-38-83
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[10-0-192-83][WARNIN] monitor: mon.10-0-192-83, might not be running yet
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[10-0-192-83][WARNIN] monitor 10-0-192-83 does not exist in monmap
[ceph_deploy.mon][INFO  ] processing monitor mon.10-0-192-81
[10-0-192-81][DEBUG ] connected to host: 10-0-192-81 
[10-0-192-81][DEBUG ] detect platform information from remote host
[10-0-192-81][DEBUG ] detect machine type
[10-0-192-81][DEBUG ] find the location of an executable
[10-0-192-81][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-81.asok mon_status
[10-0-192-81][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-81 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[10-0-192-81][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-81.asok mon_status
[10-0-192-81][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-81 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[10-0-192-81][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-81.asok mon_status
[10-0-192-81][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-81 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[10-0-192-81][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-81.asok mon_status
[10-0-192-81][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-81 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[10-0-192-81][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-81.asok mon_status
[10-0-192-81][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-81 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][INFO  ] processing monitor mon.10-0-192-82
[10-0-192-82][DEBUG ] connected to host: 10-0-192-82 
[10-0-192-82][DEBUG ] detect platform information from remote host
[10-0-192-82][DEBUG ] detect machine type
[10-0-192-82][DEBUG ] find the location of an executable
[10-0-192-82][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-82.asok mon_status
[10-0-192-82][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-82 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[10-0-192-82][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-82.asok mon_status
[10-0-192-82][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-82 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[10-0-192-82][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-82.asok mon_status
[10-0-192-82][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-82 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[10-0-192-82][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-82.asok mon_status
[10-0-192-82][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-82 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[10-0-192-82][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-82.asok mon_status
[10-0-192-82][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-82 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][INFO  ] processing monitor mon.10-0-192-83
[10-0-192-83][DEBUG ] connected to host: 10-0-192-83 
[10-0-192-83][DEBUG ] detect platform information from remote host
[10-0-192-83][DEBUG ] detect machine type
[10-0-192-83][DEBUG ] find the location of an executable
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-83 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-83 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-83 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-83 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[10-0-192-83][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.10-0-192-83.asok mon_status
[10-0-192-83][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.10-0-192-83 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] 10-0-192-83
[ceph_deploy.mon][ERROR ] 10-0-192-82
[ceph_deploy.mon][ERROR ] 10-0-192-81

2017-05-18 12:41:39.067658 7f847c7be700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2017-05-18 12:41:39.067672 7f847c7be700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2017-05-18 12:41:39.067674 7f847c7be700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
Ceph mon was not started on the 10-0-192-81

rbd map error: rbd sysfs write failed

An rbd image was created:

$ rbd create --size 4096 docker_test

Then, mapping the rbd image to a local device on the Ceph client failed.

$ rbd map docker_test --name client.admin
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.

Cause:

Some of the rbd image's features are not supported by the OS kernel, so the mapping fails. Check which features the image has enabled.

$ rbd info docker_test
rbd image 'docker_test':
    size 4096 MB in 1024 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.43702ae8944a
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags: 

Looking at the features line: my OS kernel only supports layering, so the unsupported features need to be disabled.

Method 1:

Disable the unsupported features on the rbd image directly:

$ rbd feature disable docker_test exclusive-lock object-map fast-diff deep-flatten

Method 2:

Specify only the required features when creating the rbd image, e.g.:

$ rbd create --size 4096 docker_test --image-feature layering

Method 3:

For a once-and-for-all fix, edit the Ceph configuration file /etc/ceph/ceph.conf on the server where rbd images are created and add the following under the global section (a value of 1 enables only the layering feature):

rbd_default_features = 1

Then create the rbd image again.

$ rbd create --size 4096 docker_test

After applying any of the three methods above, check the rbd image information.

$ rbd info docker_test
rbd image 'docker_test':
    size 4096 MB in 1024 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.43a22ae8944a
    format: 2
    features: layering
    flags:

Mapping the rbd image to a local block device now succeeds!

$ rbd map docker_test --name client.admin
/dev/rbd0

Error when deleting a block device

Problem: the following error occurs when deleting the block device

[root@10-0-38-51 ~]# rbd rm test01
2017-05-15 11:20:30.079612 7fc22e980d80 -1 librbd: image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.

Fix: unmount and unmap the device, then delete the image

[root@10-0-38-51 ~]# umount /mnt/test01
[root@10-0-38-51 ~]# rbd unmap test01
[root@10-0-38-51 ~]# rbd rm test01
Removing image: 100% complete...done.

Ceph reports the following error

[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite [ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors

Cause: the ceph.conf in the deployment directory was modified, but the updated file was never pushed to the other nodes, so it has to be pushed out.

Fix: ceph-deploy --overwrite-conf config push node1-4

or **ceph-deploy --overwrite-conf mon create node1-4**

ceph-deploy --overwrite-conf config push linux-bkce-node15
ceph-deploy --overwrite-conf config push linux-bkce-node16
ceph-deploy --overwrite-conf config push linux-bkce-node17