Current Ceph cluster:

1. 10.125.145.211, 10.125.145.212, 10.125.145.213, 10.125.145.214; each server has 12 x 4 TB data disks
2. Cluster utilization has climbed past the warning threshold: many OSDs are in the near-full state and a few are above 90% used. OSD weights were therefore adjusted (see the sketch after this list), but during the resulting rebalance some OSDs reached the full state and refused backfill, which stalled the cluster's recovery completely.
3. To restore the cluster to normal, OSD capacity must be expanded.
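
For reference, the usage check and the weight adjustments were along these lines (a sketch; the OSD id and weight value below are illustrative, not taken from the record):

ceph health detail | grep -i full   # list the near-full / full OSDs
ceph osd df                         # per-OSD utilization and weight
ceph osd reweight 23 0.85           # lower the weight of an overfull OSD (id/value illustrative)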

Expansion preparation (this expansion adds OSD nodes only; no MON or MDS nodes are added!):

1. Server: one machine with 12 x 4 TB data disks, IP 10.125.145.217. Set its hostname:
  hostnamectl set-hostname bj02-ops-ceph05

2. 10.125.145.211 was confirmed to be the deployment admin node (the only host in the whole cluster with ceph-deploy installed), so passwordless SSH must be set up from the admin node to the new node 10.125.145.217.

There was no key under root's ~/.ssh on the admin node, so a new key pair was generated:

[root@bj02-ops-ceph01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
24:a9:04:c4:ec:3f:b5:6e:f8:97:3d:84:53:81:b8:2c root@bj02-ops-ceph01
The key's randomart image is:
+--[ RSA 2048]----+
| +o    . .       |
|  o.  ... .      |
| .  ..o..  .     |
|  ..E.+o  .      |
|   ..o .So       |
|    o . o .      |
|     +   =       |
|    . o o o      |
|     o..   .     |
+-----------------+
[root@bj02-ops-ceph01 ~]# 
[root@bj02-ops-ceph01 ~]# 
[root@bj02-ops-ceph01 ~]# ll .ssh/
authorized_keys  id_rsa           id_rsa.pub       known_hosts      
[root@bj02-ops-ceph01 ~]# cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDmNFUaGritnUo2zqIU0czYFuEpB+ADzXCqzWsuJCAutSa3gyp1DmPt9HFUMmRjU51+WLauaLekGsOtQsSuZV+nYychKkj0Msvmhjx0uSi4peBpNtr9/VWUe8kQsetdmUy+Nd5kIetKO/XkEvDH0U9/FcQ82kvmJ1NGMz06GLnuRBcgYr0s+UxnKdJZU44yOqWwGezCt0nJWdUnMvif4zFnZDMTPTJiUVIDCm5Lg9QB/tl/5/OaxDLOXlGtK8egzLuWA5N6wEUB1TbhbMyQYI4Xafu/S3eyKhuHXGuAfwfig95YBadg71CNkK2PPmyNjWCIq91useRXWO2/dWtAmX3d root@bj02-ops-ceph01
[root@bj02-ops-ceph01 ~]#

Append the contents of .ssh/id_rsa.pub to .ssh/authorized_keys on the new node (10.125.145.217), then verify the passwordless login:
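
One straightforward way to append the key (a sketch, assuming root password login to the new node is still enabled at this point):

ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.125.145.217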

[root@bj02-ops-ceph01 ~]# ssh root@10.125.145.217
The authenticity of host '10.125.145.217 (10.125.145.217)' can't be established.
ECDSA key fingerprint is 0a:e2:1f:db:21:76:94:a9:22:74:24:6e:7a:58:e9:16.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.125.145.217' (ECDSA) to the list of known hosts.
Last login: Tue Aug 15 16:17:14 2017
[root@spa-217-145-125 ~]#

3. Add the host entries to /etc/hosts:

[root@bj02-ops-ceph01 ~]# vi /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.125.145.211 bj02-ops-ceph01
10.125.145.212 bj02-ops-ceph02
10.125.145.213 bj02-ops-ceph03
10.125.145.214 bj02-ops-ceph04
10.125.145.217 bj02-ops-ceph05
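
If the other cluster nodes and the new node also need these entries, something like the following would push the file out (illustrative only; adapt to however /etc/hosts is actually managed here):

for h in 10.125.145.212 10.125.145.213 10.125.145.214 10.125.145.217; do scp /etc/hosts root@$h:/etc/hosts; done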

4. Inspecting the directory layout and disk partitioning on the existing OSD nodes shows that the original deployment did not create separate journal partitions and did not use ceph-deploy disk zap to partition the disks. To stay consistent, this installation likewise does not split journal and data onto separate partitions, so before the expansion all 12 disks on the new node must be formatted and mounted by hand.
  The existing OSD nodes all use xfs, so to match them the 12 disks on the new node are formatted as xfs with no partitioning, mounted at the corresponding /data/ceph-sdX, and written into /etc/fstab so they mount automatically at boot (a per-disk sketch follows).
  The existing nodes all have a ceph user, and the /data/ceph-sdX directories have been chowned to the ceph user and group. That is unnecessary here because ceph runs as root, so ownership was left unchanged during this expansion.
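
A per-disk sketch of that format-and-mount step (run on the new node; /dev/sdc and its mount point are shown as an example, repeat for the other disks):

mkfs.xfs -f /dev/sdc
mkdir -p /data/ceph-sdc
mount /dev/sdc /data/ceph-sdc
echo '/dev/sdc  /data/ceph-sdc  xfs  defaults  0 0' >> /etc/fstab

Mounting by UUID (from blkid) would be more robust against device renaming, but the /dev/sdX entries above mirror the mount layout shown in the df output below.
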
[root@bj02-ops-ceph05 ceph]# df -Th|grep data
/dev/sdh       xfs       3.7T   33M  3.7T   1% /data/ceph-sdh
/dev/sdb       xfs       3.7T  465G  3.2T  13% /data/ceph-sdb
/dev/sdg       xfs       3.7T   33M  3.7T   1% /data/ceph-sdg
/dev/sdf       xfs       3.7T   33M  3.7T   1% /data/ceph-sdf
/dev/sdm       xfs       3.7T   33M  3.7T   1% /data/ceph-sdm
/dev/sdc       xfs       3.7T   33M  3.7T   1% /data/ceph-sdc
/dev/sdd       xfs       3.7T   33M  3.7T   1% /data/ceph-sdd
/dev/sdj       xfs       3.7T   33M  3.7T   1% /data/ceph-sdj
/dev/sdk       xfs       3.7T   33M  3.7T   1% /data/ceph-sdk
/dev/sdi       xfs       3.7T   33M  3.7T   1% /data/ceph-sdi
/dev/sde       xfs       3.7T   33M  3.7T   1% /data/ceph-sde
/dev/sdl       xfs       3.7T   33M  3.7T   1% /data/ceph-sdl
[root@bj02-ops-ceph05 ceph]# 

5. Check the Ceph package versions installed on the existing cluster:

[root@bj02-ops-ceph01 ~]# rpm -qa|grep ceph
ceph-radosgw-0.94.7-0.el7.x86_64
libcephfs1-0.94.7-0.el7.x86_64
ceph-0.94.7-0.el7.x86_64
python-cephfs-0.94.7-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-deploy-1.5.34-0.noarch
ceph-common-0.94.7-0.el7.x86_64
[root@bj02-ops-ceph01 ~]#

6. Install the Ceph packages on the newly added server. (They can also be installed by hand on the new server; the versions must match the existing cluster exactly. One option is to install ceph-release-1-1.el7.noarch first and then install the packages listed above with yum.)
  I originally intended to install from the yum repo, but the hammer repo now carries 0.94.10-0, which is newer than what the cluster runs and therefore inconsistent with it.
  Installing with ceph-deploy install bj02-ops-ceph05 would pull in the even newer jewel release, and pinning the release with ceph-deploy install bj02-ops-ceph05 --release hammer still installs hammer 0.94.10-0, again newer than the cluster.
  So the decision was to install manually: the ceph-deploy output below exposes the full dependency chain, so exclude every ceph-related package and install the remaining dependencies with yum (base/epel).
  Then download the Ceph packages themselves (six or seven rpms, version 0.94.7-0) from http://download.ceph.com/rpm-hammer/el7/x86_64/ and install them with rpm; the install order matters, but the rpm error messages make it easy to work out. That completes the installation (a sketch of the manual install follows the transcripts below).
  Ceph yum repo package: http://download.ceph.com/rpm-hammer/el7/noarch/ceph-release-1-1.el7.noarch.rpm
  The operation records follow (both ceph-deploy install attempts failed, but their output lists every dependent package). Again: this expansion adds OSD nodes only, no MON or MDS nodes!

[root@bj02-ops-ceph01 ~]# ceph-deploy install bj02-ops-ceph05
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy install bj02-ops-ceph05
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x20b05f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x207ccf8>
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['bj02-ops-ceph05']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts bj02-ops-ceph05
[ceph_deploy.install][DEBUG ] Detecting platform for host bj02-ops-ceph05 ...
The authenticity of host 'bj02-ops-ceph05 (10.125.145.217)' can't be established.
ECDSA key fingerprint is 0a:e2:1f:db:21:76:94:a9:22:74:24:6e:7a:58:e9:16.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bj02-ops-ceph05' (ECDSA) to the list of known hosts.
[bj02-ops-ceph05][DEBUG ] connected to host: bj02-ops-ceph05 
[bj02-ops-ceph05][DEBUG ] detect platform information from remote host
[bj02-ops-ceph05][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[bj02-ops-ceph05][INFO  ] installing Ceph on bj02-ops-ceph05
[bj02-ops-ceph05][INFO  ] Running command: yum clean all
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[bj02-ops-ceph05][DEBUG ] Cleaning up everything
[bj02-ops-ceph05][DEBUG ] Cleaning up list of fastest mirrors
[bj02-ops-ceph05][INFO  ] Running command: yum -y install epel-release
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Determining fastest mirrors
[bj02-ops-ceph05][DEBUG ] Package epel-release-7-9.noarch already installed and latest version
[bj02-ops-ceph05][DEBUG ] Nothing to do
[bj02-ops-ceph05][INFO  ] Running command: yum -y install yum-plugin-priorities
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Loading mirror speeds from cached hostfile
[bj02-ops-ceph05][DEBUG ] Resolving Dependencies
[bj02-ops-ceph05][DEBUG ] --> Running transaction check
[bj02-ops-ceph05][DEBUG ] ---> Package yum-plugin-priorities.noarch 0:1.1.31-40.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Finished Dependency Resolution
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Dependencies Resolved
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] ================================================================================
[bj02-ops-ceph05][DEBUG ]  Package                     Arch         Version              Repository  Size
[bj02-ops-ceph05][DEBUG ] ================================================================================
[bj02-ops-ceph05][DEBUG ] Installing:
[bj02-ops-ceph05][DEBUG ]  yum-plugin-priorities       noarch       1.1.31-40.el7        base        27 k
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Transaction Summary
[bj02-ops-ceph05][DEBUG ] ================================================================================
[bj02-ops-ceph05][DEBUG ] Install  1 Package
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Total download size: 27 k
[bj02-ops-ceph05][DEBUG ] Installed size: 28 k
[bj02-ops-ceph05][DEBUG ] Downloading packages:
[bj02-ops-ceph05][DEBUG ] Running transaction check
[bj02-ops-ceph05][DEBUG ] Running transaction test
[bj02-ops-ceph05][DEBUG ] Transaction test succeeded
[bj02-ops-ceph05][DEBUG ] Running transaction
[bj02-ops-ceph05][WARNIN] Warning: RPMDB altered outside of yum.
[bj02-ops-ceph05][DEBUG ]   Installing : yum-plugin-priorities-1.1.31-40.el7.noarch                   1/1 
[bj02-ops-ceph05][DEBUG ]   Verifying  : yum-plugin-priorities-1.1.31-40.el7.noarch                   1/1 
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Installed:
[bj02-ops-ceph05][DEBUG ]   yum-plugin-priorities.noarch 0:1.1.31-40.el7                                  
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Complete!
[bj02-ops-ceph05][DEBUG ] Configure Yum priorities to include obsoletes
[bj02-ops-ceph05][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[bj02-ops-ceph05][INFO  ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[bj02-ops-ceph05][INFO  ] Running command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[bj02-ops-ceph05][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[bj02-ops-ceph05][DEBUG ] Preparing...                          ########################################
[bj02-ops-ceph05][WARNIN]       file /etc/yum.repos.d/ceph.repo from install of ceph-release-1-1.el7.noarch conflicts with file from package ceph-release-1-1.el7.noarch
[bj02-ops-ceph05][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm

[root@bj02-ops-ceph01 ~]# 

Retry with the release pinned to hammer; the version offered is still newer than the cluster's:

[root@bj02-ops-ceph01 ~]# ceph-deploy install bj02-ops-ceph05 --release hammer
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy install bj02-ops-ceph05 --release hammer
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1d5f5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x1d2dcf8>
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['bj02-ops-ceph05']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : hammer
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts bj02-ops-ceph05
[ceph_deploy.install][DEBUG ] Detecting platform for host bj02-ops-ceph05 ...
[bj02-ops-ceph05][DEBUG ] connected to host: bj02-ops-ceph05 
[bj02-ops-ceph05][DEBUG ] detect platform information from remote host
[bj02-ops-ceph05][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[bj02-ops-ceph05][INFO  ] installing Ceph on bj02-ops-ceph05
[bj02-ops-ceph05][INFO  ] Running command: yum clean all
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[bj02-ops-ceph05][DEBUG ] Cleaning up everything
[bj02-ops-ceph05][DEBUG ] Cleaning up list of fastest mirrors
[bj02-ops-ceph05][INFO  ] Running command: yum -y install epel-release
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Determining fastest mirrors
[bj02-ops-ceph05][DEBUG ] Package epel-release-7-9.noarch already installed and latest version
[bj02-ops-ceph05][DEBUG ] Nothing to do
[bj02-ops-ceph05][INFO  ] Running command: yum -y install yum-plugin-priorities
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Loading mirror speeds from cached hostfile
[bj02-ops-ceph05][DEBUG ] Package yum-plugin-priorities-1.1.31-40.el7.noarch already installed and latest version
[bj02-ops-ceph05][DEBUG ] Nothing to do
[bj02-ops-ceph05][DEBUG ] Configure Yum priorities to include obsoletes
[bj02-ops-ceph05][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[bj02-ops-ceph05][INFO  ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[bj02-ops-ceph05][INFO  ] Running command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-hammer/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[bj02-ops-ceph05][DEBUG ] Retrieving https://download.ceph.com/rpm-hammer/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[bj02-ops-ceph05][DEBUG ] Preparing...                          ########################################
[bj02-ops-ceph05][DEBUG ] Updating / installing...
[bj02-ops-ceph05][DEBUG ] ceph-release-1-1.el7                  ########################################
[bj02-ops-ceph05][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[bj02-ops-ceph05][WARNIN] altered ceph.repo priorities to contain: priority=1
[bj02-ops-ceph05][INFO  ] Running command: yum -y install ceph ceph-radosgw
[bj02-ops-ceph05][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[bj02-ops-ceph05][WARNIN] Repository epel-testing is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-testing-source is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-debuginfo is listed more than once in the configuration
[bj02-ops-ceph05][WARNIN] Repository epel-source is listed more than once in the configuration
[bj02-ops-ceph05][DEBUG ] Loading mirror speeds from cached hostfile
[bj02-ops-ceph05][DEBUG ] 26 packages excluded due to repository priority protections
[bj02-ops-ceph05][DEBUG ] Resolving Dependencies
[bj02-ops-ceph05][DEBUG ] --> Running transaction check
[bj02-ops-ceph05][DEBUG ] ---> Package ceph.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-rados = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: librbd1 = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: ceph-common = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-cephfs = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libcephfs1 = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: librados2 = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-rbd = 1:0.94.10-0.el7 for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-flask for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: gdisk for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: redhat-lsb-core for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: hdparm for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-requests for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libtcmalloc.so.4()(64bit) for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libleveldb.so.1()(64bit) for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libcephfs.so.1()(64bit) for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: liblttng-ust.so.0()(64bit) for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: librados.so.2()(64bit) for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libboost_program_options-mt.so.1.53.0()(64bit) for package: 1:ceph-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] ---> Package ceph-radosgw.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: mailcap for package: 1:ceph-radosgw-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libfcgi.so.0()(64bit) for package: 1:ceph-radosgw-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Running transaction check
[bj02-ops-ceph05][DEBUG ] ---> Package boost-program-options.x86_64 0:1.53.0-26.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package ceph-common.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libbabeltrace.so.1()(64bit) for package: 1:ceph-common-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libbabeltrace-ctf.so.1()(64bit) for package: 1:ceph-common-0.94.10-0.el7.x86_64
[bj02-ops-ceph05][DEBUG ] ---> Package fcgi.x86_64 0:2.4.0-25.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package gdisk.x86_64 0:0.8.6-5.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libicuuc.so.50()(64bit) for package: gdisk-0.8.6-5.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libicuio.so.50()(64bit) for package: gdisk-0.8.6-5.el7.x86_64
[bj02-ops-ceph05][DEBUG ] ---> Package gperftools-libs.x86_64 0:2.4-8.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.4-8.el7.x86_64
[bj02-ops-ceph05][DEBUG ] ---> Package hdparm.x86_64 0:9.43-5.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package leveldb.x86_64 0:1.12.0-11.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package libcephfs1.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package librados2.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package librbd1.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package lttng-ust.x86_64 0:2.4.1-4.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: liburcu-cds.so.1()(64bit) for package: lttng-ust-2.4.1-4.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: liburcu-bp.so.1()(64bit) for package: lttng-ust-2.4.1-4.el7.x86_64
[bj02-ops-ceph05][DEBUG ] ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-cephfs.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-flask.noarch 1:0.10.1-4.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-werkzeug for package: 1:python-flask-0.10.1-4.el7.noarch
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-jinja2 for package: 1:python-flask-0.10.1-4.el7.noarch
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-itsdangerous for package: 1:python-flask-0.10.1-4.el7.noarch
[bj02-ops-ceph05][DEBUG ] ---> Package python-rados.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-rbd.x86_64 1:0.94.10-0.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-requests.noarch 0:2.6.0-1.el7_1 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-urllib3 >= 1.10.2-1 for package: python-requests-2.6.0-1.el7_1.noarch
[bj02-ops-ceph05][DEBUG ] ---> Package redhat-lsb-core.x86_64 0:4.1-27.el7.centos.1 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: redhat-lsb-submod-security(x86-64) = 4.1-27.el7.centos.1 for package: redhat-lsb-core-4.1-27.el7.centos.1.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: spax for package: redhat-lsb-core-4.1-27.el7.centos.1.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: /usr/sbin/fuser for package: redhat-lsb-core-4.1-27.el7.centos.1.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: /usr/bin/lpr for package: redhat-lsb-core-4.1-27.el7.centos.1.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: /usr/bin/lp for package: redhat-lsb-core-4.1-27.el7.centos.1.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: /usr/bin/killall for package: redhat-lsb-core-4.1-27.el7.centos.1.x86_64
[bj02-ops-ceph05][DEBUG ] --> Running transaction check
[bj02-ops-ceph05][DEBUG ] ---> Package cups-client.x86_64 1:1.6.3-26.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: cups-libs(x86-64) = 1:1.6.3-26.el7 for package: 1:cups-client-1.6.3-26.el7.x86_64
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: libcups.so.2()(64bit) for package: 1:cups-client-1.6.3-26.el7.x86_64
[bj02-ops-ceph05][DEBUG ] ---> Package libbabeltrace.x86_64 0:1.2.4-3.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package libicu.x86_64 0:50.1.2-15.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package libunwind.x86_64 2:1.1-5.el7_2.2 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package psmisc.x86_64 0:22.20-11.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-itsdangerous.noarch 0:0.23-2.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-jinja2.noarch 0:2.7.2-2.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-babel >= 0.8 for package: python-jinja2-2.7.2-2.el7.noarch
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-markupsafe for package: python-jinja2-2.7.2-2.el7.noarch
[bj02-ops-ceph05][DEBUG ] ---> Package python-urllib3.noarch 0:1.10.2-2.el7_1 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-backports-ssl_match_hostname for package: python-urllib3-1.10.2-2.el7_1.noarch
[bj02-ops-ceph05][DEBUG ] ---> Package python-werkzeug.noarch 0:0.9.1-2.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package redhat-lsb-submod-security.x86_64 0:4.1-27.el7.centos.1 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package spax.x86_64 0:1.5.2-13.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package userspace-rcu.x86_64 0:0.7.16-1.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Running transaction check
[bj02-ops-ceph05][DEBUG ] ---> Package cups-libs.x86_64 1:1.6.3-26.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-babel.noarch 0:0.9.6-8.el7 will be installed
[bj02-ops-ceph05][DEBUG ] ---> Package python-backports-ssl_match_hostname.noarch 0:3.4.0.2-4.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch
[bj02-ops-ceph05][DEBUG ] ---> Package python-markupsafe.x86_64 0:0.11-10.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Running transaction check
[bj02-ops-ceph05][DEBUG ] ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
[bj02-ops-ceph05][DEBUG ] --> Finished Dependency Resolution
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Dependencies Resolved
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] ================================================================================
[bj02-ops-ceph05][DEBUG ]  Package                             Arch   Version                Repository
[bj02-ops-ceph05][DEBUG ]                                                                            Size
[bj02-ops-ceph05][DEBUG ] ================================================================================
[bj02-ops-ceph05][DEBUG ] Installing:
[bj02-ops-ceph05][DEBUG ]  ceph                                x86_64 1:0.94.10-0.el7        Ceph    20 M
[bj02-ops-ceph05][DEBUG ]  ceph-radosgw                        x86_64 1:0.94.10-0.el7        Ceph   2.3 M
[bj02-ops-ceph05][DEBUG ] Installing for dependencies:
[bj02-ops-ceph05][DEBUG ]  boost-program-options               x86_64 1.53.0-26.el7          base   156 k
[bj02-ops-ceph05][DEBUG ]  ceph-common                         x86_64 1:0.94.10-0.el7        Ceph   7.2 M
[bj02-ops-ceph05][DEBUG ]  cups-client                         x86_64 1:1.6.3-26.el7         base   149 k
[bj02-ops-ceph05][DEBUG ]  cups-libs                           x86_64 1:1.6.3-26.el7         base   356 k
[bj02-ops-ceph05][DEBUG ]  fcgi                                x86_64 2.4.0-25.el7           epel    47 k
[bj02-ops-ceph05][DEBUG ]  gdisk                               x86_64 0.8.6-5.el7            base   187 k
[bj02-ops-ceph05][DEBUG ]  gperftools-libs                     x86_64 2.4-8.el7              base   272 k
[bj02-ops-ceph05][DEBUG ]  hdparm                              x86_64 9.43-5.el7             base    83 k
[bj02-ops-ceph05][DEBUG ]  leveldb                             x86_64 1.12.0-11.el7          epel   161 k
[bj02-ops-ceph05][DEBUG ]  libbabeltrace                       x86_64 1.2.4-3.el7            epel   147 k
[bj02-ops-ceph05][DEBUG ]  libcephfs1                          x86_64 1:0.94.10-0.el7        Ceph   1.9 M
[bj02-ops-ceph05][DEBUG ]  libicu                              x86_64 50.1.2-15.el7          base   6.9 M
[bj02-ops-ceph05][DEBUG ]  librados2                           x86_64 1:0.94.10-0.el7        Ceph   1.8 M
[bj02-ops-ceph05][DEBUG ]  librbd1                             x86_64 1:0.94.10-0.el7        Ceph   1.9 M
[bj02-ops-ceph05][DEBUG ]  libunwind                           x86_64 2:1.1-5.el7_2.2        base    56 k
[bj02-ops-ceph05][DEBUG ]  lttng-ust                           x86_64 2.4.1-4.el7            epel   176 k
[bj02-ops-ceph05][DEBUG ]  mailcap                             noarch 2.1.41-2.el7           base    31 k
[bj02-ops-ceph05][DEBUG ]  psmisc                              x86_64 22.20-11.el7           base   141 k
[bj02-ops-ceph05][DEBUG ]  python-babel                        noarch 0.9.6-8.el7            base   1.4 M
[bj02-ops-ceph05][DEBUG ]  python-backports                    x86_64 1.0-8.el7              base   5.8 k
[bj02-ops-ceph05][DEBUG ]  python-backports-ssl_match_hostname noarch 3.4.0.2-4.el7          base    12 k
[bj02-ops-ceph05][DEBUG ]  python-cephfs                       x86_64 1:0.94.10-0.el7        Ceph    11 k
[bj02-ops-ceph05][DEBUG ]  python-flask                        noarch 1:0.10.1-4.el7         extras 204 k
[bj02-ops-ceph05][DEBUG ]  python-itsdangerous                 noarch 0.23-2.el7             extras  24 k
[bj02-ops-ceph05][DEBUG ]  python-jinja2                       noarch 2.7.2-2.el7            base   515 k
[bj02-ops-ceph05][DEBUG ]  python-markupsafe                   x86_64 0.11-10.el7            base    25 k
[bj02-ops-ceph05][DEBUG ]  python-rados                        x86_64 1:0.94.10-0.el7        Ceph    28 k
[bj02-ops-ceph05][DEBUG ]  python-rbd                          x86_64 1:0.94.10-0.el7        Ceph    18 k
[bj02-ops-ceph05][DEBUG ]  python-requests                     noarch 2.6.0-1.el7_1          base    94 k
[bj02-ops-ceph05][DEBUG ]  python-urllib3                      noarch 1.10.2-2.el7_1         base   100 k
[bj02-ops-ceph05][DEBUG ]  python-werkzeug                     noarch 0.9.1-2.el7            extras 562 k
[bj02-ops-ceph05][DEBUG ]  redhat-lsb-core                     x86_64 4.1-27.el7.centos.1    base    38 k
[bj02-ops-ceph05][DEBUG ]  redhat-lsb-submod-security          x86_64 4.1-27.el7.centos.1    base    15 k
[bj02-ops-ceph05][DEBUG ]  spax                                x86_64 1.5.2-13.el7           base   260 k
[bj02-ops-ceph05][DEBUG ]  userspace-rcu                       x86_64 0.7.16-1.el7           epel    73 k
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Transaction Summary
[bj02-ops-ceph05][DEBUG ] ================================================================================
[bj02-ops-ceph05][DEBUG ] Install  2 Packages (+35 Dependent packages)
[bj02-ops-ceph05][DEBUG ] 
[bj02-ops-ceph05][DEBUG ] Total download size: 47 M
[bj02-ops-ceph05][DEBUG ] Installed size: 176 M
[bj02-ops-ceph05][DEBUG ] Downloading packages:
^CKilled by signal 2.
[ceph_deploy][ERROR ] KeyboardInterrupt

[root@bj02-ops-ceph01 ~]#
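
Based on the dependency output above, the manual installation on the new node went roughly as follows (a sketch: the dependency package names are taken from the transaction list above, and the 0.94.7-0 rpm file names are assumed to still be published under rpm-hammer):

yum install -y leveldb gdisk hdparm redhat-lsb-core python-flask python-requests \
    gperftools-libs fcgi lttng-ust boost-program-options libbabeltrace mailcap
base=http://download.ceph.com/rpm-hammer/el7/x86_64
for p in librados2 librbd1 libcephfs1 python-rados python-rbd python-cephfs ceph-common ceph ceph-radosgw; do
    wget $base/${p}-0.94.7-0.el7.x86_64.rpm
done
rpm -Uvh *.rpm    # rpm works out the install order within a single transaction

Afterwards rpm -qa | grep ceph on the new node should report the same 0.94.7-0 versions as on the existing nodes.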


7. Prepare the new OSD. On the admin node run: ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
(The command must be run from inside /etc/ceph; otherwise ceph-deploy cannot find ceph.conf and aborts with a 'Cannot load config' error, as shown below.)

[root@bj02-ops-ceph01 ~]# ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('bj02-ops-ceph05', '/data/ceph-sdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2b525f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x2afb230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf'; has `ceph-deploy new` been run in this directory?

[root@bj02-ops-ceph01 ~]#

Change into /etc/ceph and run it again:

[root@bj02-ops-ceph01 ceph]# ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('bj02-ops-ceph05', '/data/ceph-sdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x201d5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1fc6230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks bj02-ops-ceph05:/data/ceph-sdb:
[ceph_deploy][ERROR ] RuntimeError: bootstrap-osd keyring not found; run 'gatherkeys'

[root@bj02-ops-ceph01 ceph]#

This time it fails because the bootstrap-osd keyring cannot be found, and indeed the current directory contains no bootstrap-osd keyring. The keys therefore have to be gathered from a monitor into this directory; since the admin node is itself one of the cluster's MON nodes, gathering the keys from the local host is enough:

[root@bj02-ops-ceph01 ceph]# ll
total 20
-rw-r--r-- 1 root root   63 Aug 10  2016 ceph.client.admin.keyring
-rw-r--r-- 1 root root  334 May 24 17:00 ceph.conf
-rw-r--r-- 1 root root 4220 Aug 16 10:09 ceph-deploy-ceph.log
-rw-r--r-- 1 root root  140 Aug 11  2016 ceph.mon.keyring
-rw------- 1 root root    0 Aug 10  2016 tmp6dZWc_
[root@bj02-ops-ceph01 ceph]#

[root@bj02-ops-ceph01 ceph]# ceph-deploy gatherkeys bj02-ops-ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy gatherkeys bj02-ops-ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x14cf5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['bj02-ops-ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x1463410>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpOwDiTZ
[bj02-ops-ceph01][DEBUG ] connected to host: bj02-ops-ceph01 
[bj02-ops-ceph01][DEBUG ] detect platform information from remote host
[bj02-ops-ceph01][DEBUG ] detect machine type
[bj02-ops-ceph01][DEBUG ] get remote short hostname
[bj02-ops-ceph01][DEBUG ] fetch remote file
[bj02-ops-ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.bj02-ops-ceph01.asok mon_status
[bj02-ops-ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-bj02-ops-ceph01/keyring auth get-or-create client.admin osd allow * mds allow mon allow *
[bj02-ops-ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-bj02-ops-ceph01/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[bj02-ops-ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-bj02-ops-ceph01/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[bj02-ops-ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-bj02-ops-ceph01/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.client.admin.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpOwDiTZ
[root@bj02-ops-ceph01 ceph]#

Listing the directory again shows the bootstrap-osd keyring is now present:

[root@bj02-ops-ceph01 ceph]# ll
total 32
-rw------- 1 root root   71 Aug 16 10:13 ceph.bootstrap-mds.keyring
-rw------- 1 root root   71 Aug 16 10:13 ceph.bootstrap-osd.keyring
-rw------- 1 root root   71 Aug 16 10:13 ceph.bootstrap-rgw.keyring
-rw-r--r-- 1 root root   63 Aug 10  2016 ceph.client.admin.keyring
-rw-r--r-- 1 root root  334 May 24 17:00 ceph.conf
-rw-r--r-- 1 root root 7806 Aug 16 10:13 ceph-deploy-ceph.log
-rw-r--r-- 1 root root  140 Aug 11  2016 ceph.mon.keyring
-rw------- 1 root root    0 Aug 10  2016 tmp6dZWc_
[root@bj02-ops-ceph01 ceph]#


Then run the prepare command again:

[root@bj02-ops-ceph01 ceph]# ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('bj02-ops-ceph05', '/data/ceph-sdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1a2a5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x19d3230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks bj02-ops-ceph05:/data/ceph-sdb:
[bj02-ops-ceph05][DEBUG ] connected to host: bj02-ops-ceph05 
[bj02-ops-ceph05][DEBUG ] detect platform information from remote host
[bj02-ops-ceph05][DEBUG ] detect machine type
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to bj02-ops-ceph05
[bj02-ops-ceph05][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[bj02-ops-ceph05][WARNIN] osd keyring does not exist yet, creating one
[bj02-ops-ceph05][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host bj02-ops-ceph05 disk /data/ceph-sdb journal None activate False
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[bj02-ops-ceph05][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /data/ceph-sdb
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /data/ceph-sdb
[bj02-ops-ceph05][INFO  ] checking OSD status...
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[bj02-ops-ceph05][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host bj02-ops-ceph05 is now ready for osd use.
[root@bj02-ops-ceph01 ceph]#

This time it completes without errors.

8. Activate the newly prepared OSD to add it to the cluster:

[root@bj02-ops-ceph01 ceph]# ceph-deploy --overwrite-conf osd activate bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy --overwrite-conf osd activate bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x248d5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x2436230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('bj02-ops-ceph05', '/data/ceph-sdb', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks bj02-ops-ceph05:/data/ceph-sdb:
[bj02-ops-ceph05][DEBUG ] connected to host: bj02-ops-ceph05 
[bj02-ops-ceph05][DEBUG ] detect platform information from remote host
[bj02-ops-ceph05][DEBUG ] detect machine type
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host bj02-ops-ceph05 disk /data/ceph-sdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[bj02-ops-ceph05][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init sysvinit --mount /data/ceph-sdb
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Cluster uuid is b1fd8147-634c-49c9-abeb-c82747a0530d
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:OSD uuid is a769b2f7-6f97-4d69-8a21-057d290e302b
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise a769b2f7-6f97-4d69-8a21-057d290e302b
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:OSD id is 0
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /data/ceph-sdb/activate.monmap
[bj02-ops-ceph05][WARNIN] got monmap epoch 1
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /data/ceph-sdb/activate.monmap --osd-data /data/ceph-sdb --osd-journal /data/ceph-sdb/journal --osd-uuid a769b2f7-6f97-4d69-8a21-057d290e302b --keyring /data/ceph-sdb/keyring
[bj02-ops-ceph05][WARNIN] 2017-08-16 10:16:46.443098 7f168d488880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[bj02-ops-ceph05][WARNIN] 2017-08-16 10:16:46.449017 7f168d488880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[bj02-ops-ceph05][WARNIN] 2017-08-16 10:16:46.452477 7f168d488880 -1 filestore(/data/ceph-sdb) could not find -1/23c2fcde/osd_superblock/0 in index: (2) No such file or directory
[bj02-ops-ceph05][WARNIN] 2017-08-16 10:16:46.475547 7f168d488880 -1 created object store /data/ceph-sdb journal /data/ceph-sdb/journal for osd.0 fsid b1fd8147-634c-49c9-abeb-c82747a0530d
[bj02-ops-ceph05][WARNIN] 2017-08-16 10:16:46.475609 7f168d488880 -1 auth: error reading file: /data/ceph-sdb/keyring: can't open /data/ceph-sdb/keyring: (2) No such file or directory
[bj02-ops-ceph05][WARNIN] 2017-08-16 10:16:46.475819 7f168d488880 -1 created new key in keyring /data/ceph-sdb/keyring
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /data/ceph-sdb/keyring osd allow * mon allow profile osd
[bj02-ops-ceph05][WARNIN] Error EINVAL: entity osd.0 exists but key does not match
[bj02-ops-ceph05][WARNIN] Traceback (most recent call last):
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 3016, in <module>
[bj02-ops-ceph05][WARNIN]     main()
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 2994, in main
[bj02-ops-ceph05][WARNIN]     args.func(args)
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 2186, in main_activate
[bj02-ops-ceph05][WARNIN]     init=args.mark_init,
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 2015, in activate_dir
[bj02-ops-ceph05][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 2153, in activate
[bj02-ops-ceph05][WARNIN]     keyring=keyring,
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 1756, in auth_key
[bj02-ops-ceph05][WARNIN]     'mon', 'allow profile osd',
[bj02-ops-ceph05][WARNIN]   File "/usr/sbin/ceph-disk", line 323, in command_check_call
[bj02-ops-ceph05][WARNIN]     return subprocess.check_call(arguments)
[bj02-ops-ceph05][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[bj02-ops-ceph05][WARNIN]     raise CalledProcessError(retcode, cmd)
[bj02-ops-ceph05][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.0', '-i', '/data/ceph-sdb/keyring', 'osd', 'allow *', 'mon', 'allow profile osd']' returned non-zero exit status 22
[bj02-ops-ceph05][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init sysvinit --mount /data/ceph-sdb

[root@bj02-ops-ceph01 ceph]#

The activation fails, and the newly added OSD shows as down:

[root@bj02-ops-ceph01 ceph]# ceph osd tree
ID WEIGHT    TYPE NAME                UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 171.07997 root default                                               
-2  40.03999     host bj02-ops-ceph01                                   
 8   3.64000         osd.8                 up  1.00000          1.00000 
12   3.64000         osd.12                up  0.79999          1.00000 
16   3.64000         osd.16                up  0.84999          1.00000 
20   3.64000         osd.20                up  1.00000          1.00000 
24   3.64000         osd.24                up  1.00000          1.00000 
28   3.64000         osd.28                up  1.00000          1.00000 
32   3.64000         osd.32                up  0.89999          1.00000 
36   3.64000         osd.36                up  0.70000          1.00000 
40   3.64000         osd.40                up  1.00000          1.00000 
44   3.64000         osd.44                up  0.79999          1.00000 
 4   3.64000         osd.4                 up  0.89999          1.00000 
-3  43.67999     host bj02-ops-ceph02                                   
 1   3.64000         osd.1                 up  0.79999          1.00000 
 5   3.64000         osd.5                 up  1.00000          1.00000 
 9   3.64000         osd.9                 up  1.00000          1.00000 
13   3.64000         osd.13                up  1.00000          1.00000 
17   3.64000         osd.17                up  0.79999          1.00000 
21   3.64000         osd.21                up  1.00000          1.00000 
25   3.64000         osd.25                up  1.00000          1.00000 
33   3.64000         osd.33                up  1.00000          1.00000 
37   3.64000         osd.37                up  0.64999          1.00000 
41   3.64000         osd.41                up  1.00000          1.00000 
45   3.64000         osd.45                up  1.00000          1.00000 
29   3.64000         osd.29                up  0.50000          1.00000 
-4  43.67999     host bj02-ops-ceph03                                   
 2   3.64000         osd.2                 up  1.00000          1.00000 
 6   3.64000         osd.6                 up  0.89999          1.00000 
10   3.64000         osd.10                up  1.00000          1.00000 
14   3.64000         osd.14                up  1.00000          1.00000 
18   3.64000         osd.18                up  1.00000          1.00000 
22   3.64000         osd.22                up  1.00000          1.00000 
26   3.64000         osd.26                up  1.00000          1.00000 
30   3.64000         osd.30                up  1.00000          1.00000 
34   3.64000         osd.34                up  1.00000          1.00000 
38   3.64000         osd.38                up  1.00000          1.00000 
42   3.64000         osd.42                up  1.00000          1.00000 
46   3.64000         osd.46                up  0.84999          1.00000 
-5  43.67999     host bj02-ops-ceph04                                   
 7   3.64000         osd.7                 up  1.00000          1.00000 
11   3.64000         osd.11                up  1.00000          1.00000 
15   3.64000         osd.15                up  1.00000          1.00000 
19   3.64000         osd.19                up  1.00000          1.00000 
23   3.64000         osd.23                up  0.45000          1.00000 
27   3.64000         osd.27                up  1.00000          1.00000 
31   3.64000         osd.31                up  0.79999          1.00000 
35   3.64000         osd.35                up  0.70000          1.00000 
39   3.64000         osd.39                up  1.00000          1.00000 
43   3.64000         osd.43                up  1.00000          1.00000 
47   3.64000         osd.47                up  1.00000          1.00000 
 3   3.64000         osd.3                 up  0.89999          1.00000 
 0         0 osd.0                       down        0          1.00000 
[root@bj02-ops-ceph01 ceph]#

As ceph -s shows, the cluster now counts one more OSD, but only 47 of the 48 are up and in.

[root@bj02-ops-ceph01 ceph]# ceph -s
    cluster b1fd8147-634c-49c9-abeb-c82747a0530d
     health HEALTH_WARN
            1 pgs backfill
            29 pgs backfill_toofull
            35 pgs stuck unclean
            recovery 43906/46234679 objects degraded (0.095%)
            recovery 3309205/46234679 objects misplaced (7.157%)
            19 near full osd(s)
     monmap e1: 4 mons at {bj02-ops-ceph01=10.125.145.211:6789/0,bj02-ops-ceph02=10.125.145.212:6789/0,bj02-ops-ceph03=10.125.145.213:6789/0,bj02-ops-ceph04=10.125.145.214:6789/0}
            election epoch 340, quorum 0,1,2,3 bj02-ops-ceph01,bj02-ops-ceph02,bj02-ops-ceph03,bj02-ops-ceph04
     mdsmap e150: 1/1/1 up {0=bj02-ops-ceph01=up:active}, 1 up:standby
     osdmap e5296: 48 osds: 47 up, 47 in; 35 remapped pgs
      pgmap v22655084: 768 pgs, 4 pools, 52891 GB data, 14534 kobjects
            136 TB used, 34767 GB / 170 TB avail
            43906/46234679 objects degraded (0.095%)
            3309205/46234679 objects misplaced (7.157%)
                 731 active+clean
                  28 active+remapped+backfill_toofull
                   6 active+remapped
                   1 active+clean+scrubbing
                   1 active+remapped+wait_backfill+backfill_toofull
                   1 active+clean+scrubbing+deep
  client io 8987 kB/s wr, 19 op/s
[root@bj02-ops-ceph01 ceph]#

Looking more closely at the error: activating the OSD hit "Error EINVAL: entity osd.0 exists but key does not match", i.e. an osd.0 entity already exists but its key is wrong.
That means the cluster had an osd.0 at some point, and the key the MONs still have on record for it differs from the new one, hence the error. The fix is to first delete the stale key for that OSD; a quick check of the stale entry is sketched below.
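Before deleting, the stale entry can be inspected with the standard auth command (a minimal sanity check, not a required step of the procedure):

ceph auth get osd.0    # prints the old key and caps the MONs still hold for osd.0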

[root@bj02-ops-ceph01 ceph]# ceph auth del osd.0
updated
[root@bj02-ops-ceph01 ceph]#

Then re-run the activate command:

[root@bj02-ops-ceph01 ceph]# ceph-deploy --overwrite-conf osd activate bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy --overwrite-conf osd activate bj02-ops-ceph05:/data/ceph-sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x111a5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x10c3230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('bj02-ops-ceph05', '/data/ceph-sdb', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks bj02-ops-ceph05:/data/ceph-sdb:
[bj02-ops-ceph05][DEBUG ] connected to host: bj02-ops-ceph05 
[bj02-ops-ceph05][DEBUG ] detect platform information from remote host
[bj02-ops-ceph05][DEBUG ] detect machine type
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host bj02-ops-ceph05 disk /data/ceph-sdb
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[bj02-ops-ceph05][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init sysvinit --mount /data/ceph-sdb
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Cluster uuid is b1fd8147-634c-49c9-abeb-c82747a0530d
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:OSD uuid is a769b2f7-6f97-4d69-8a21-057d290e302b
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:OSD id is 0
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /data/ceph-sdb/keyring osd allow * mon allow profile osd
[bj02-ops-ceph05][WARNIN] added key for osd.0
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /data/ceph-sdb
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-0 -> /data/ceph-sdb
[bj02-ops-ceph05][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[bj02-ops-ceph05][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
[bj02-ops-ceph05][DEBUG ] === osd.0 === 
[bj02-ops-ceph05][WARNIN] create-or-move updating item name 'osd.0' weight 3.64 at location {host=bj02-ops-ceph05,root=default} to crush map
[bj02-ops-ceph05][DEBUG ] Starting Ceph osd.0 on bj02-ops-ceph05...
[bj02-ops-ceph05][WARNIN] Running as unit ceph-osd.0.1502850279.113563509.service.
[bj02-ops-ceph05][INFO  ] checking OSD status...
[bj02-ops-ceph05][DEBUG ] find the location of an executable
[bj02-ops-ceph05][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[bj02-ops-ceph05][INFO  ] Running command: systemctl enable ceph
[bj02-ops-ceph05][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[bj02-ops-ceph05][WARNIN] Executing /sbin/chkconfig ceph on
[root@bj02-ops-ceph01 ceph]#

Checking again at this point, the new OSD is up and has joined the cluster; rebalancing has started and the previously stalled recovery is moving again:

[root@bj02-ops-ceph01 ceph]# ceph osd tree
ID WEIGHT    TYPE NAME                UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 174.71997 root default                                               
-2  40.03999     host bj02-ops-ceph01                                   
 8   3.64000         osd.8                 up  1.00000          1.00000 
12   3.64000         osd.12                up  0.79999          1.00000 
16   3.64000         osd.16                up  0.84999          1.00000 
20   3.64000         osd.20                up  1.00000          1.00000 
24   3.64000         osd.24                up  1.00000          1.00000 
28   3.64000         osd.28                up  1.00000          1.00000 
32   3.64000         osd.32                up  0.89999          1.00000 
36   3.64000         osd.36                up  0.70000          1.00000 
40   3.64000         osd.40                up  1.00000          1.00000 
44   3.64000         osd.44                up  0.79999          1.00000 
 4   3.64000         osd.4                 up  0.89999          1.00000 
-3  43.67999     host bj02-ops-ceph02                                   
 1   3.64000         osd.1                 up  0.79999          1.00000 
 5   3.64000         osd.5                 up  1.00000          1.00000 
 9   3.64000         osd.9                 up  1.00000          1.00000 
13   3.64000         osd.13                up  1.00000          1.00000 
17   3.64000         osd.17                up  0.79999          1.00000 
21   3.64000         osd.21                up  1.00000          1.00000 
25   3.64000         osd.25                up  1.00000          1.00000 
33   3.64000         osd.33                up  1.00000          1.00000 
37   3.64000         osd.37                up  0.64999          1.00000 
41   3.64000         osd.41                up  1.00000          1.00000 
45   3.64000         osd.45                up  1.00000          1.00000 
29   3.64000         osd.29                up  0.50000          1.00000 
-4  43.67999     host bj02-ops-ceph03                                   
 2   3.64000         osd.2                 up  1.00000          1.00000 
 6   3.64000         osd.6                 up  0.89999          1.00000 
10   3.64000         osd.10                up  1.00000          1.00000 
14   3.64000         osd.14                up  1.00000          1.00000 
18   3.64000         osd.18                up  1.00000          1.00000 
22   3.64000         osd.22                up  1.00000          1.00000 
26   3.64000         osd.26                up  1.00000          1.00000 
30   3.64000         osd.30                up  1.00000          1.00000 
34   3.64000         osd.34                up  1.00000          1.00000 
38   3.64000         osd.38                up  1.00000          1.00000 
42   3.64000         osd.42                up  1.00000          1.00000 
46   3.64000         osd.46                up  0.84999          1.00000 
-5  43.67999     host bj02-ops-ceph04                                   
 7   3.64000         osd.7                 up  1.00000          1.00000 
11   3.64000         osd.11                up  1.00000          1.00000 
15   3.64000         osd.15                up  1.00000          1.00000 
19   3.64000         osd.19                up  1.00000          1.00000 
23   3.64000         osd.23                up  0.45000          1.00000 
27   3.64000         osd.27                up  1.00000          1.00000 
31   3.64000         osd.31                up  0.79999          1.00000 
35   3.64000         osd.35                up  0.70000          1.00000 
39   3.64000         osd.39                up  1.00000          1.00000 
43   3.64000         osd.43                up  1.00000          1.00000 
47   3.64000         osd.47                up  1.00000          1.00000 
 3   3.64000         osd.3                 up  0.89999          1.00000 
-6   3.64000     host bj02-ops-ceph05                                   
 0   3.64000         osd.0                 up  1.00000          1.00000 
[root@bj02-ops-ceph01 ceph]# ceph -s
    cluster b1fd8147-634c-49c9-abeb-c82747a0530d
     health HEALTH_WARN
            31 pgs backfill
            29 pgs backfill_toofull
            10 pgs backfilling
            68 pgs stuck unclean
            recovery 37124/47940534 objects degraded (0.077%)
            recovery 6659844/47940534 objects misplaced (13.892%)
            19 near full osd(s)
     monmap e1: 4 mons at {bj02-ops-ceph01=10.125.145.211:6789/0,bj02-ops-ceph02=10.125.145.212:6789/0,bj02-ops-ceph03=10.125.145.213:6789/0,bj02-ops-ceph04=10.125.145.214:6789/0}
            election epoch 340, quorum 0,1,2,3 bj02-ops-ceph01,bj02-ops-ceph02,bj02-ops-ceph03,bj02-ops-ceph04
     mdsmap e150: 1/1/1 up {0=bj02-ops-ceph01=up:active}, 1 up:standby
     osdmap e5301: 48 osds: 48 up, 48 in; 83 remapped pgs
      pgmap v22655545: 768 pgs, 4 pools, 52892 GB data, 14534 kobjects
            136 TB used, 38480 GB / 174 TB avail
            37124/47940534 objects degraded (0.077%)
            6659844/47940534 objects misplaced (13.892%)
                 683 active+clean
                  30 active+remapped+wait_backfill
                  28 active+remapped+backfill_toofull
                  14 active+remapped
                  10 active+remapped+backfilling
                   2 active+clean+scrubbing+deep
                   1 active+remapped+wait_backfill+backfill_toofull
recovery io 85731 kB/s, 25 objects/s
  client io 6603 kB/s wr, 9 op/s
[root@bj02-ops-ceph01 ceph]#


9. The last step is to push the admin keyring and config to the new node; otherwise the cluster cannot be operated from that server (this can be skipped if you never intend to run cluster commands from the newly added node).

[root@bj02-ops-ceph01 ceph]# ceph-deploy --overwrite-conf admin bj02-ops-ceph05
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /bin/ceph-deploy --overwrite-conf admin bj02-ops-ceph05
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x139c3b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['bj02-ops-ceph05']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x12dc668>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to bj02-ops-ceph05
[bj02-ops-ceph05][DEBUG ] connected to host: bj02-ops-ceph05 
[bj02-ops-ceph05][DEBUG ] detect platform information from remote host
[bj02-ops-ceph05][DEBUG ] detect machine type
[bj02-ops-ceph05][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@bj02-ops-ceph01 ceph]#

10. Add the remaining 11 OSDs on this server.
Only the following two commands need to be run on the deploy node, changing the mount point to match each disk and adding the OSDs one at a time (a hedged loop over the remaining disks is sketched after these two commands). Be absolutely sure the hostname and the disk mount directory are correct every time!
ceph-deploy --overwrite-conf osd prepare bj02-ops-ceph05:/data/ceph-sdx
ceph-deploy --overwrite-conf osd activate bj02-ops-ceph05:/data/ceph-sdx
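
A minimal sketch of looping over the remaining disks, assuming they are already formatted and mounted as /data/ceph-sdc through /data/ceph-sdm; the exact device letters are an assumption and must be checked against df -Th on bj02-ops-ceph05 before running anything:

# assumed list of the 11 remaining, already-mounted data disks
for d in sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm; do
    ceph-deploy --overwrite-conf osd prepare  bj02-ops-ceph05:/data/ceph-$d
    ceph-deploy --overwrite-conf osd activate bj02-ops-ceph05:/data/ceph-$d
    ceph -s    # check cluster status before moving on to the next disk
done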

Problems found during this expansion:
1. The whole cluster is far too full; during rebalancing many OSDs keep reporting too full and refusing backfill. All we can do for now is expand gradually and bring utilization down; proper capacity monitoring must be put in place afterwards.
2. The PG counts, both per pool and for the cluster as a whole, are badly undersized; this is the root cause of the uneven data distribution across OSDs. Once utilization is back down, the per-pool PG counts need to be raised, which will itself trigger an ugly rebalance (a rough sizing rule is sketched after this list).
3. The most heavily used pool is CephFS (MDS) file storage, which causes many problems and was not recommended by upstream at the time; when the chance comes, switch to block storage and mount the mapped RBD devices instead.
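
For reference, a commonly cited rule of thumb (only a rough sketch, assuming 3-way replication and roughly 100 PGs per OSD) is:

# total PGs ≈ (number of OSDs x 100) / replica size, rounded up to a power of two
echo $(( 48 * 100 / 3 ))    # 1600 -> round up to 2048 in total, then split across the pools by expected data share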

Technical problems encountered and how they were handled:

1. While expanding the second server (10.125.145.218, 12 x 4T disks, only two OSDs added so far), those two OSDs would occasionally show as down for very short periods (about 1 second), which takes careful watching to even notice. At first a hardware fault was suspected, but the system logs and dmesg showed nothing related to Ceph or the disks. Watching the cluster status in real time eventually caught a momentary error saying the two OSDs on this server had not reported a heartbeat within the 20-second grace period. That prompted a check of the server clocks: this machine was about 3 seconds off from the others. The other servers all sync time from the internal NTP server; this one did not, because the machine handed over to me had never been initialized. Running the initialization script copied from another machine (with the hostname-changing part removed) brought the time back in sync and the flapping stopped (a brief sketch of the check follows).
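
A minimal sketch of the kind of check and fix involved, assuming CentOS 7 with ntpd and a reachable internal NTP server (the hostname below is a placeholder, not the real one):

ntpdate -q ntp.internal.example    # placeholder host: query the clock offset without changing anything
systemctl enable ntpd && systemctl start ntpd    # keep the clock in sync across reboots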


Explanation: osd_heartbeat_interval defaults to 6 seconds. Why would a 3-second clock skew produce the 20-second grace-period error? With a heartbeat every 6 seconds, my guess is that the skew accumulates, and once the accumulated difference exceeds 20 seconds a report is fired; that explanation fits what was observed.
Each Ceph OSD daemon checks the heartbeats of the other OSD daemons every 6 seconds. You can change this interval by adding an osd heartbeat interval setting under the [osd] section of the Ceph configuration file, or by changing the value at runtime.
If a neighbouring OSD daemon does not show a heartbeat within the 20-second grace period, an OSD daemon may consider it down and report that to a Ceph monitor, which then updates the cluster map.
You can change the grace period by adding an osd heartbeat grace setting under the [osd] section of the configuration file, or by changing the value at runtime. A sketch of both settings follows.
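
A hedged sketch of what those two settings look like, using the same injectargs mechanism shown further down for the full ratios (the values here are just the defaults, shown only as an example):

ceph tell osd.* injectargs '--osd-heartbeat-grace 20'    # runtime only, reverts when the daemons restart
# persistent, in ceph.conf:
# [osd]
# osd heartbeat interval = 6
# osd heartbeat grace = 20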

2. One OSD (osd.52) filled up completely during rebalancing (in theory this should not happen; I do not know what mon-osd-full-ratio was set to), which took the OSD down. Because nothing was done within 5 minutes, the OSD was automatically marked out of the cluster, and its PGs and data were rebalanced onto other OSDs. By the time I noticed, the OSD had already been kicked out; a manual restart failed because the disk had only about 20 KB of free space left. At that point there are two options:
One is to delete some data on that OSD (carefully, following the official documentation) to free enough space for it to start; once it is up again, the copies that were already rebalanced onto other nodes are cleaned up automatically, and the placement-group data removed by hand is backfilled from the replicas on the other OSDs.
The other is to remove the OSD from the cluster by hand and re-add it. Since this OSD had already been down for a long time when I found it, there was nothing worth recovering, so it was removed from the cluster, the disk was reformatted, and the OSD was added back in.

From the official docs: if an OSD cannot start because it is full, you may try deleting some placement-group data directories on that OSD.
      Important: if you delete a placement group from a full OSD, do NOT delete the same placement group on another OSD, or you will lose data. At least one copy of the data must remain on some OSD.

Lesson learned: lower the mon osd full ratio and mon osd nearfull ratio values so that enough headroom is always kept in reserve. A hedged example follows.
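
A sketch of what more conservative thresholds could look like in ceph.conf (the exact values are assumptions; pick them to match your own capacity plan):

# [global]
# mon osd nearfull ratio = 0.80    # warn earlier
# mon osd full ratio     = 0.90    # block writes earlier, leaving room to recover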

Explanation: mon_osd_down_out_interval defaults to 300, so an OSD that stays down for more than 300 seconds is marked out, which triggers data migration.
Recommendation: change mon_osd_down_out_subtree_limit from its default of rack to host (a hedged example follows below).
      With rack, whether a single OSD or an entire server (carrying many OSDs) goes down, after 300 seconds every downed OSD is marked out of the cluster and the data rebalances again, which can have serious consequences.
      With host, only individual OSDs that stay down for more than 300 seconds trigger migration; a host-level failure does not. This is a big win for server maintenance: if, say, a server's power supply fails, the whole machine can stay offline without its OSDs being marked out and forcing a data migration.
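
A minimal sketch of applying that recommendation. The runtime injectargs form mirrors the OSD examples below, but whether this particular option takes effect purely at runtime on this Ceph version is an assumption worth verifying, so the ceph.conf entry is the safer path:

ceph tell mon.* injectargs '--mon-osd-down-out-subtree-limit host'
# persistent, in ceph.conf:
# [mon]
# mon osd down out subtree limit = host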

3. Near the end of the rebalance, 2 PGs still had data left to migrate, but because an OSD refuses backfill once it is more than 85% full, recovery stalled again. One way out is to keep adding capacity; the other is to raise that 85% limit by adjusting the following parameters on all OSDs. The one that really matters here is --osd-backfill-full-ratio; the other two were covered above:
ceph tell osd.* injectargs '--mon-osd-nearfull-ratio 0.98'
ceph tell osd.* injectargs '--mon-osd-full-ratio 0.98'
ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.9' 
After raising it to 90% the recovery started moving again. Of course this is only a temporary workaround; the long-term fix is still to add OSD capacity.

4. The cluster has no access control at all: every client uses the admin superuser key, with no restrictions per storage type, per pool, or per capability, and the CephFS (MDS) clients all mount the root of the filesystem. These are serious security holes. A hedged example of a restricted key follows.
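
Purely as an illustration (the client name and pool below are made up), a cephx key can be scoped to a single pool instead of handing out client.admin, in the same style as the client.rbd example further down:

ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rwx pool=app1-pool'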

5. Commonly used ceph commands:

ceph -s
ceph -w
ceph osd tree
ceph df
ceph osd df tree
ceph --show-config
ceph health detail
ceph osd stat
ceph mon stat
ceph mds stat
ceph osd reweight 51 0.90000    (51 is the OSD id, followed by the new weight)

Pool-related commands:

ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num
ceph osd lspools
ceph osd pool get rbd size
ceph osd pool get rbd min_size

The following two commands change a pool's PG count (rbd here is the name of a pool). Operate with extreme caution, at your own risk:

ceph osd pool set rbd pg_num 1024
ceph osd pool set rbd pgp_num 1024


List the RBD images created in the cluster:

[root@bj02-ops-ceph06 ~]# rbd ls
system_elk
video_4T
video_original
[root@bj02-ops-ceph06 ~]#

Start and stop OSD and MDS daemons:

/etc/init.d/ceph start osd.x
/etc/init.d/ceph stop osd.x
/etc/init.d/ceph start mds    (or /etc/init.d/ceph start mds.xxx, where xxx is the hostname of the MDS node)

The correct way to remove an OSD (follow this order exactly, at your own risk):

ceph osd out osd.52
/etc/init.d/ceph stop osd.52    (osd.52 was in fact already down)
ceph osd crush remove osd.52
ceph auth del osd.52
ceph osd rm osd.52
ceph osd crush remove ceph-node4    (just an example; run this only when an entire host has left the cluster, to remove every trace of that node from the CRUSH map)

List the pools and the PG count of each (each of the 4 pools currently has 192 PGs, 768 for the whole cluster, as also visible in ceph -s; for a cluster with 50-plus OSDs this is far too small and makes data distribution across OSDs very uneven):
[root@bj02-ops-ceph06 ~]# ceph osd dump |grep pool | awk '{print $1,$3,$4,$5":"$6,$13":"$14}'
pool 'rbd' replicated size:3 pg_num:192
pool 'data' replicated size:3 pg_num:192
pool 'metadata' replicated size:3 pg_num:192
pool 'test-pool' replicated size:3 pg_num:192
[root@bj02-ops-ceph06 ~]#

This also lists the pools, but does not show each pool's PG count:
[root@bj02-ops-ceph06 ~]# ceph osd lspools
0 rbd,1 data,2 metadata,3 test-pool,
[root@bj02-ops-ceph06 ~]# 


CephFS (MDS) mount:

mount -t ceph  bj02-ops-ceph01,bj02-ops-ceph02,bj02-ops-ceph03,bj02-ops-ceph04:/  /gomeo2o/NFS   -o name=admin,secret=AQAK+apX76e7ERAA9oQw7hbiiO7tIMrWoMBRWQ==
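
As a hedged alternative, the key can be kept off the command line (and out of shell history) by using secretfile instead of secret; the path below is a placeholder for a file containing only the key:

mount -t ceph bj02-ops-ceph01,bj02-ops-ceph02,bj02-ops-ceph03,bj02-ops-ceph04:/ /gomeo2o/NFS -o name=admin,secretfile=/etc/ceph/admin.secret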

########################################### My earlier notes, for reference #####################################################
Client mount

If the kernel is >= 2.6.34, the kernel client can mount directly
mkdir -p /mnt/cephfs
Get the key
ceph auth get-key client.cephfs
Mount
mount -t ceph 10.183.93.173,10.183.93.174,10.183.93.175:/ /mnt/cephfs -o name=cephfs,secret=AQDpfttY/RjrDhAAzYXiSjTENH4TBYQC1Qvt2w==
If the kernel is < 2.6.34, mount with ceph-fuse



Block device (RBD) usage
ceph-deploy install  10-149-11-8 --repo-url=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/ --gpg-url=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
(if the install fails, import the yum repo above on every node and run yum -y install ceph ceph-radosgw)
ceph-deploy config push 10-149-11-8
ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
ceph auth get-or-create client.rbd | ssh root@10-149-11-8 tee /etc/ceph/ceph.client.rbd.keyring
ceph auth list

On the client that will map the block device

Kernel >= 2.6.34
modprobe rbd
cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring
Since we are not using the default client.admin user, the user name must be supplied to talk to the cluster
ceph -s --name client.rbd


Create a Ceph RBD image
rbd create rbd1 --size 102400 --name client.rbd
List RBD images; the images are stored in the "rbd" pool, and another pool can be specified with the rbd command's -p option
rbd ls --name client.rbd
rbd ls -p rbd --name client.rbd
rbd list --name client.rbd
Show image details
rbd --image rbd1 info --name client.rbd
Map the Ceph block device
rbd map --image rbd1 --name client.rbd
Show mapped devices
rbd showmapped --name client.rbd
Use the block device
fdisk -l /dev/rbd1
mkfs.xfs /dev/rbd1
mkdir /mnt/ceph-disk1
mount /dev/rbd1 /mnt/ceph-disk1
df -h /mnt/ceph-disk1

Map and mount the block device automatically at boot
wget https://raw.githubusercontent.com/ksingh7/ceph-cookbook/master/rbdmap -O /etc/init.d/rbdmap
chmod +x /etc/init.d/rbdmap
chkconfig --add rbdmap
chkconfig --list
Edit the rbdmap file
[root@bops-10-183-93-172 ~]# cat /etc/ceph/rbdmap 
# RbdDevice        Parameters
#poolname/imagename    id=client,keyring=/etc/ceph/ceph.client.keyring
rbd/rbd1                id=rbd,keyring=/etc/ceph/keyring
[root@bops-10-183-93-172 ~]#
Edit /etc/fstab and add:
/dev/rbd0     /mnt/ceph-rbd0          xfs     defaults,_netdev 0 0
mkdir -p /mnt/ceph-rbd0
/etc/init.d/rbdmap start
/etc/init.d/rbdmap status

Resize a Ceph RBD image
rbd resize --image rbd1 --size 204800 --name client.rbd
rbd info --image rbd1 --name client.rbd
Grow the filesystem to use the added space; XFS supports online resizing
dmesg | grep -i capacity
xfs_growfs -d /mnt/ceph-disk1

If the kernel is < 2.6.34, mount with the ceph-fuse client
rpm -Uvh http://download.ceph.com/rpm-giant/el6/noarch/ceph-release-1-0.el6.noarch.rpm
yum -y install ceph-fuse
If the install fails, the package can be installed directly:
rpm -Uvh http://download.ceph.com/rpm-giant/el6/x86_64/ceph-fuse-0.87.2-0.el6.x86_64.rpm

Create the CephFS keyring file /etc/ceph/client.cephfs.keyring with the following content:
[client.cephfs]
key = AQDpfttY/RjrDhAAzYXiSjTENH4TBYQC1Qvt2w==


Mount:
ceph-fuse --keyring /etc/ceph/client.cephfs.keyring --name client.cephfs -m 10.149.11.143:6789,10.149.11.144:6789,10.149.11.145:6789 /cephfs

########################################### My earlier notes, for reference #####################################################







References:
https://www.itzhoulin.com/2016/04/20/deal_with_ceph_full/
http://docs.ceph.org.cn/rados/troubleshooting/troubleshooting-osd/
http://docs.ceph.org.cn/cephfs/eviction/
http://int32bit.me/2016/05/19/Ceph-Pool%E6%93%8D%E4%BD%9C%E6%80%BB%E7%BB%93/
https://mritd.me/2017/05/30/ceph-note-2/