Following up on the previous article: Ceph Distributed Storage Series (1): An Overview of Ceph's Working Principles and Architecture
I had been deploying Ceph with DeepSea and had not used ceph-deploy for a long time, so this time I am revisiting it and summarizing the process.
According to the official Ceph announcement, the ceph-deploy deployment method is no longer maintained: it has not been tested on releases newer than Nautilus (v14.x) and does not support the CentOS 8 family, but it can still be used on CentOS 7 and 6.
See the official documentation for details.
Official statement: https://docs.ceph.com/en/latest/install/
ceph-deploy is a tool for quickly deploying clusters.
Important: ceph-deploy is no longer actively maintained. It is not tested on versions of Ceph newer than Nautilus. It does not support RHEL8, CentOS 8, or newer operating systems.
Ceph version used in this deployment: Nautilus (14.2.x)
Test node information:
IP address | Hostname | Additional disk (OSD) | Cluster role |
---|---|---|---|
192.168.56.125 | ceph-node1 | One 10G disk (/dev/sdb) | mon, mgr, osd0 (primary node) |
192.168.56.126 | ceph-node2 | One 10G disk (/dev/sdb) | osd1 |
192.168.56.127 | ceph-node3 | One 10G disk (/dev/sdb) | osd2 |
If your environment allows it, you can use a dedicated ceph-admin node just for components such as mon, mgr, and mds, and place the OSDs on other nodes; this makes management easier.
Server OS version
[root@ceph-node1 ~]# cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)
Disable SELinux and the firewall (on every node):
sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
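Alternatively, if you prefer to keep firewalld running instead of disabling it, you can open the Ceph ports on every node. A rough sketch, using the usual Ceph defaults (mon on 3300/6789, OSD and mgr daemons on 6800-7300, dashboard on 8443; verify the ports against your version):
firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp   # mon
firewall-cmd --permanent --add-port=6800-7300/tcp                  # osd / mgr daemons
firewall-cmd --permanent --add-port=8443/tcp                       # mgr dashboard
firewall-cmd --reload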
Make sure hostnames and IP addresses resolve correctly within the cluster (configure this on every node)
[root@ceph-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.125 ceph-node1
192.168.56.126 ceph-node2
192.168.56.127 ceph-node3
[root@ceph-node1 ~]# ping ceph-node2
PING ceph-node2 (192.168.56.126) 56(84) bytes of data.
64 bytes from ceph-node2 (192.168.56.126): icmp_seq=1 ttl=64 time=0.616 ms
…………
a. Considering the security risks of using the root user, create a regular user ceph-admin for deployment and operations (create it on every node, since ceph-deploy connects to each node as this user)
b. Since ceph-deploy installs packages on the nodes, this user also needs passwordless sudo privileges
[root@ceph-node1 ~]# useradd ceph-admin
[root@ceph-node1 ~]# echo "123456" | passwd --stdin ceph-admin
Changing password for user ceph-admin.
passwd: all authentication tokens updated successfully.
[root@ceph-node1 ~]# echo "ceph-admin ALL = NOPASSWD:ALL" | tee /etc/sudoers.d/ceph-admin
ceph-admin ALL = NOPASSWD:ALL
[root@ceph-node1 ~]# chmod 0440 /etc/sudoers.d/ceph-admin
[root@ceph-node1 ~]# ll /etc/sudoers.d/ceph-admin
-r--r-----. 1 root root 30 Oct 19 16:06 /etc/sudoers.d/ceph-admin
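Note: if your /etc/sudoers contains Defaults requiretty, ceph-deploy's remote sudo calls can fail. One optional way to relax it for this user only (a sketch; adjust to your own sudo policy):
echo 'Defaults:ceph-admin !requiretty' >> /etc/sudoers.d/ceph-admin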
Test
[root@ceph-node1 ~]# su - ceph-admin
Last login: Mon Oct 19 16:11:51 CST 2020 on pts/0
[ceph-admin@ceph-node1 ~]$ sudo su -
Last login: Mon Oct 19 16:12:04 CST 2020 on pts/0
[root@ceph-node1 ~]# exit
logout
[ceph-admin@ceph-node1 ~]$ exit
logout
Set up passwordless SSH from ceph-node1 to all nodes as the ceph-admin user:
[root@ceph-node1 ~]# su - ceph-admin
[ceph-admin@ceph-node1 ~]$ ssh-keygen    # press Enter at every prompt and leave the passphrase empty
[ceph-admin@ceph-node1 ~]$ ssh-copy-id ceph-admin@ceph-node1
[ceph-admin@ceph-node1 ~]$ ssh-copy-id ceph-admin@ceph-node2
[ceph-admin@ceph-node1 ~]$ ssh-copy-id ceph-admin@ceph-node3
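Optionally, add entries like the following to ~/.ssh/config on the deploy node so that ceph-deploy logs in to the other nodes as ceph-admin without passing --username every time (a sketch; hostnames per the table above):
Host ceph-node1
    Hostname ceph-node1
    User ceph-admin
Host ceph-node2
    Hostname ceph-node2
    User ceph-admin
Host ceph-node3
    Hostname ceph-node3
    User ceph-admin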
Purpose of time synchronization: the cluster can only run reliably when all nodes agree on the time.
Synchronization layout: node1 syncs with an NTP server on the internet; node2 and node3 sync with node1 (node1 acts as both NTP server and client).
Note: after ntpd starts, it takes a few minutes to synchronize.
yum -y install ntp    # install ntp on every node
On node1:
vim /etc/ntp.conf
Comment out the default server entries:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
Add these entries:
server ntp1.aliyun.com    # Aliyun public NTP server
server 127.127.1.0    # local clock as a fallback, so NTP keeps working and the cluster stays stable even if the external NTP server is unreachable
On node2 and node3:
vim /etc/ntp.conf
Likewise, comment out the default server entries:
Add this entry:
server 192.168.56.125    # node1 acts as the NTP server
Run on every node:
systemctl restart ntpd
systemctl enable ntpd
Check the NTP peers and synchronization status
[root@ceph-node1 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*120.25.115.20 10.137.53.7 2 u 41 128 377 30.382 -1.019 1.001
LOCAL(0) .LOCL. 5 l 806 64 0 0.000 0.000 0.000
[root@ceph-node2 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*ceph-node1 120.25.115.20 3 u 20 64 377 2.143 33.254 10.350
[root@ceph-node1 ~]# ntpstat
synchronised to NTP server (120.25.115.20) at stratum 3
time correct to within 27 ms
polling server every 128 s
Back up the system's original yum repos
[root@ceph-node1 ~]# mkdir /mnt/repo_bak
[root@ceph-node1 ~]# mv /etc/yum.repos.d/* /mnt/repo_bak
Add the new repos
[root@ceph-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph-node1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Note:
The yum repo below pins the Ceph release: the rpm-nautilus in the baseurl means the repo serves Nautilus (Ceph 14.x) packages. To install another release, replace it with the corresponding name: 12.x is rpm-luminous and 13.x is rpm-mimic. See the official Ceph repositories for details: https://download.ceph.com/
vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph
baseurl=http://download.ceph.com/rpm-nautilus/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
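If downloads from download.ceph.com are slow, the baseurl lines can be pointed at a domestic mirror instead, assuming the mirror keeps the same directory layout as the official repo, e.g.:
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64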
Update the yum cache and system packages
yum makecache
yum -y update
List the available ceph versions to verify that the repo is configured correctly
[root@ceph-node1 yum.repos.d]# yum list ceph --showduplicates |sort -r
* updates: mirrors.cn99.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.163.com
ceph.x86_64 2:14.2.9-0.el7 Ceph
ceph.x86_64 2:14.2.8-0.el7 Ceph
ceph.x86_64 2:14.2.7-0.el7 Ceph
ceph.x86_64 2:14.2.6-0.el7 Ceph
ceph.x86_64 2:14.2.5-0.el7 Ceph
ceph.x86_64 2:14.2.4-0.el7 Ceph
ceph.x86_64 2:14.2.3-0.el7 Ceph
ceph.x86_64 2:14.2.2-0.el7 Ceph
ceph.x86_64 2:14.2.11-0.el7 Ceph
ceph.x86_64 2:14.2.1-0.el7 Ceph
ceph.x86_64 2:14.2.10-0.el7 Ceph
ceph.x86_64 2:14.2.0-0.el7 Ceph
ceph.x86_64 2:14.1.1-0.el7 Ceph
ceph.x86_64 2:14.1.0-0.el7 Ceph
* base: mirrors.163.com
Available Packages
[root@ceph-node1 yum.repos.d]# yum list ceph-deploy --showduplicates |sort -r
* updates: mirrors.cn99.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.163.com
ceph-deploy.noarch 2.0.1-0 Ceph-noarch
ceph-deploy.noarch 2.0.0-0 Ceph-noarch
ceph-deploy.noarch 1.5.39-0 Ceph-noarch
ceph-deploy.noarch 1.5.38-0 Ceph-noarch
ceph-deploy.noarch 1.5.37-0 Ceph-noarch
ceph-deploy.noarch 1.5.36-0 Ceph-noarch
ceph-deploy.noarch 1.5.35-0 Ceph-noarch
ceph-deploy.noarch 1.5.34-0 Ceph-noarch
ceph-deploy.noarch 1.5.33-0 Ceph-noarch
ceph-deploy.noarch 1.5.32-0 Ceph-noarch
ceph-deploy.noarch 1.5.31-0 Ceph-noarch
ceph-deploy.noarch 1.5.30-0 Ceph-noarch
ceph-deploy.noarch 1.5.29-0 Ceph-noarch
* base: mirrors.163.com
Available Packages
[root@ceph-node1 ~]# su - ceph-admin
[ceph-admin@ceph-node1 ~]$ sudo yum -y install python-setuptools    # dependency required by ceph-deploy
[ceph-admin@ceph-node1 ~]$ sudo yum install ceph-deploy    # installs the latest 2.0.x version by default
Check the installed ceph-deploy version
[root@ceph-node1 ~]# ceph-deploy --version
2.0.1
Create a working directory for the cluster (ceph-deploy writes its output files into the current directory)
[ceph-admin@ceph-node1 ~]$ mkdir cluster
[ceph-admin@ceph-node1 ~]$ cd cluster/
Create the cluster (the arguments specify which nodes will act as mon monitors; per the plan above, that is ceph-node1)
[ceph-admin@ceph-node1 cluster]$ ceph-deploy new ceph-node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy new ceph-node1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f14c44c9de8>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f14c3c424d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph-node1']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: sudo /usr/sbin/ip link show
[ceph-node1][INFO ] Running command: sudo /usr/sbin/ip addr show
[ceph-node1][DEBUG ] IP addresses found: [u'192.168.56.125']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 192.168.56.125
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.56.125']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph-admin@ceph-node1 cluster]$ ls
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
Add the following two lines to ceph.conf in the current directory
public_network = 192.168.56.0/24
cluster_network = 192.168.56.0/24
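After this edit, ceph.conf should look roughly like the following (the fsid is generated per cluster, so yours will differ):
[global]
fsid = <generated-uuid>
mon_initial_members = ceph-node1
mon_host = 192.168.56.125
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.56.0/24
cluster_network = 192.168.56.0/24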
Install the Ceph packages on all nodes
(the --no-adjust-repos option means: use the locally configured repos and do not modify them, to avoid problems)
[ceph-admin@ceph-node1 cluster]$ ceph-deploy install --no-adjust-repos ceph-node1 ceph-node2 ceph-node3
Initialize the mon node
ceph-deploy 2.0.1 already gathers the keys during this initialization, so there is no need to run ceph-deploy gatherkeys {monitor-host} afterwards
[ceph-admin@ceph-node1 cluster]$ ceph-deploy mon create-initial
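If the initialization succeeds, the working directory should now also contain the bootstrap keyrings, roughly:
[ceph-admin@ceph-node1 cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log       ceph.mon.keyring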
If a disk already contains data, wipe it first (see ceph-deploy disk zap --help for details)
List all available disks on all nodes
[ceph-admin@ceph-node1 cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
Wipe the data
sudo ceph-deploy disk zap {osd-server-name} {disk-name}
e.g.: sudo ceph-deploy disk zap ceph-node2 /dev/sdb
If the disks are clean, you can skip the wipe step above and add the OSDs directly
(here /dev/sdb is a newly added disk)
[ceph-admin@ceph-node1 cluster]$ ceph-deploy osd create --data /dev/sdb ceph-node1
[ceph-admin@ceph-node1 cluster]$ ceph-deploy osd create --data /dev/sdb ceph-node2
[ceph-admin@ceph-node1 cluster]$ ceph-deploy osd create --data /dev/sdb ceph-node3
You can see that Ceph creates each newly added OSD as an LVM volume and adds it to the cluster
[ceph-admin@ceph-node1 cluster]$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/sdb ceph-ab1b8533-018e-4924-8520-fdbefbb7d184 lvm2 a-- <10.00g 0
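You can also check the OSD layout with ceph osd tree; with three 10G OSDs the output should look roughly like this (weights are derived from disk size, so the exact numbers may differ):
[ceph-admin@ceph-node1 cluster]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       0.02939 root default
-3       0.00980     host ceph-node1
 0   hdd 0.00980         osd.0            up  1.00000 1.00000
-5       0.00980     host ceph-node2
 1   hdd 0.00980         osd.1            up  1.00000 1.00000
-7       0.00980     host ceph-node3
 2   hdd 0.00980         osd.2            up  1.00000 1.00000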
Use ceph-deploy to copy the configuration file and admin key to every Ceph node, so the other nodes can also manage the cluster
[ceph-admin@ceph-node1 cluster]$ ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
Create the mgr daemon on node1 (since Luminous, a mgr is required for the cluster to report a healthy state):
[ceph-admin@ceph-node1 cluster]$ ceph-deploy mgr create ceph-node1
Check the cluster status
[ceph-admin@ceph-node1 cluster]$ sudo ceph health detail
HEALTH_OK
[ceph-admin@ceph-node1 cluster]$ sudo ceph -s
cluster:
id: e9290965-40d4-4c65-93ed-e534ae389b9c
health: HEALTH_OK
services:
mon: 1 daemons, quorum ceph-node1 (age 62m)
mgr: ceph-node1(active, since 5m)
osd: 3 osds: 3 up (since 12m), 3 in (since 12m)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 27 GiB / 30 GiB avail
pgs:
The key file under /etc/ceph/ is not readable by regular users, so they cannot run ceph commands directly.
To let the ceph-admin regular user manage the cluster directly, just grant read permission on the Ceph keyring file.
(If you want regular users on every node to be able to run ceph commands, change the permission on all nodes.)
[ceph-admin@ceph-node1 ~]$ ll /etc/ceph/
total 12
-rw-------. 1 root root 151 Oct 21 17:33 ceph.client.admin.keyring
-rw-r--r--. 1 root root 268 Oct 21 17:35 ceph.conf
-rw-r--r--. 1 root root 92 Oct 20 04:48 rbdmap
-rw-------. 1 root root 0 Oct 21 17:30 tmpcmU035
[ceph-admin@ceph-node1 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[ceph-admin@ceph-node1 ~]$ ll /etc/ceph/
total 12
-rw-r--r--. 1 root root 151 Oct 21 17:33 ceph.client.admin.keyring
-rw-r--r--. 1 root root 268 Oct 21 17:35 ceph.conf
-rw-r--r--. 1 root root 92 Oct 20 04:48 rbdmap
-rw-------. 1 root root 0 Oct 21 17:30 tmpcmU035
[ceph-admin@ceph-node1 ~]$ ceph -s
cluster:
id: 130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
health: HEALTH_OK
services:
mon: 1 daemons, quorum ceph-node1 (age 20h)
mgr: ceph-node1(active, since 20h)
osd: 3 osds: 3 up (since 20h), 3 in (since 20h)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 27 GiB / 30 GiB avail
pgs:
Enable the dashboard module
[ceph-admin@ceph-node1 ~]$ sudo ceph mgr module enable dashboard
If you get the following error:
Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement
it means ceph-mgr-dashboard is not installed; just install it on the mgr node
[ceph-admin@ceph-node1 ~]$ sudo yum -y install ceph-mgr-dashboard
By default, all HTTP connections to the dashboard are secured with SSL/TLS.
To get the dashboard up and running quickly, you can generate and install a self-signed certificate with the following command
[ceph-admin@ceph-node1 ~]$ sudo ceph dashboard create-self-signed-cert
Self-signed certificate created
Create a user with the administrator role:
[ceph-admin@ceph-node1 ~]$ sudo ceph dashboard set-login-credentials admin admin
******************************************************************
*** WARNING: this command is deprecated. ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
Check the ceph-mgr services:
[ceph-admin@ceph-node1 ~]$ sudo ceph mgr services
{
"dashboard": "https://ceph-node1:8443/"
}
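By default the dashboard listens on all interfaces on port 8443. If you want to bind it to a specific address or port, the mgr configuration can be adjusted roughly like this (option names per the Nautilus dashboard documentation; disable and re-enable the module for the change to take effect):
[ceph-admin@ceph-node1 ~]$ sudo ceph config set mgr mgr/dashboard/server_addr 192.168.56.125
[ceph-admin@ceph-node1 ~]$ sudo ceph config set mgr mgr/dashboard/ssl_server_port 8443
[ceph-admin@ceph-node1 ~]$ sudo ceph mgr module disable dashboard
[ceph-admin@ceph-node1 ~]$ sudo ceph mgr module enable dashboard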
Access it from a browser (note that the dashboard is served over HTTPS):
https://192.168.56.125:8443