Deploying a Ceph Cluster from RPM Packages, Step by Step

Environment preparation

1. On every node that will run Ceph daemons, create a regular user. ceph-deploy installs packages on the nodes, so this user needs passwordless sudo. If you deploy as root you can skip this step.
To grant the user the required privileges, add the following to /etc/sudoers.d/ceph:

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph

2. Configure your admin host so it can reach every node over SSH without a password.
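Step 2 can be scripted. A minimal sketch, assuming the `ceph` user from step 1 and the example hostnames used later in this article; the loop only prints the commands, so nothing runs until you pipe the output to `sh`:

```shell
# Print one ssh-copy-id invocation per node. The hostnames are the
# example nodes from this article and "ceph" is the user created in
# step 1 -- adjust both for your environment.
nodes="qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003"
for n in $nodes; do
    echo "ssh-copy-id ceph@$n"
done
```

Generate an SSH key first (`ssh-keygen`) if the admin host does not have one, then pipe the printed commands to `sh` to run them.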
3. Configure the Ceph yum repository (ceph.repo). Here we use the 163.com mirror to speed up installation:

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.163.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.163.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.163.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

Installing the Ceph packages and initializing the cluster

4. Install ceph-deploy (on the admin node only):

sudo yum update && sudo yum install ceph-deploy

Install the Ceph packages and their dependencies on the target nodes:

sudo ceph-deploy install qd01-stop-cloud001 qd01-stop-cloud002 qd01-stop-cloud003

Note: you can also install the Ceph packages directly with yum.
Run the following on every node; if you do, the ceph-deploy install step above is unnecessary:

sudo yum -y install ceph ceph-common rbd-fuse ceph-release python-ceph-compat  python-rbd librbd1-devel ceph-radosgw

5. Create the cluster

sudo ceph-deploy new qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Edit the configuration file ceph.conf:

[global]
fsid = ec7ee19a-f7c6-4ed0-9307-f48af473352c
mon_initial_members = qd01-stop-k8s-node001, qd01-stop-k8s-node002, qd01-stop-k8s-node003
mon_host = 10.26.22.105,10.26.22.80,10.26.22.85
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
cluster_network = 10.0.0.0/8
public_network = 10.0.0.0/8

filestore_xattr_use_omap = true
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 520
osd_pool_default_pgp_num = 520
osd_recovery_op_priority= 10
osd_client_op_priority = 100
osd_op_threads = 20
osd_recovery_max_active = 2
osd_max_backfills = 2
osd_scrub_load_threshold = 1

osd_deep_scrub_interval = 604800000
osd_deep_scrub_stride = 4096

[client]
rbd_cache = true
rbd_cache_size = 134217728
rbd_cache_max_dirty = 125829120

[mon]
mon_allow_pool_delete = true

Note: when you add a Mon on a host that was not named in the ceph-deploy new command, you must add public network to its ceph.conf.
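As a sanity check on the pg_num/pgp_num values above: the usual rule of thumb is (OSD count × 100) / replica size, rounded up to a power of two. A minimal sketch, assuming the 24 OSDs and replica size 3 that this article's cluster ends up with:

```shell
# Rule-of-thumb placement-group count. The 24 OSDs and replica size 3
# are taken from this article's cluster -- adjust them for yours.
osds=24
size=3
target=$(( osds * 100 / size ))          # 24*100/3 = 800
pg_num=1
while [ "$pg_num" -lt "$target" ]; do    # round up to a power of two
    pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"                           # prints 1024
```

By this rule the 520 used above is neither a power of two nor particularly close to the estimate; how much that matters depends on how many pools will share the OSDs.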

6. Create and initialize the monitors

ceph-deploy mon create-initial

7. Create the OSDs. Only the commands for one machine are listed here; repeat them with the hostname replaced for each additional OSD host.
Zap (initialize) the disks:

ceph-deploy disk zap qd01-stop-k8s-node001 /dev/sdb
ceph-deploy disk zap qd01-stop-k8s-node001 /dev/sdc
ceph-deploy disk zap qd01-stop-k8s-node001 /dev/sdd

Create and activate them:

ceph-deploy osd create --data /dev/sdb qd01-stop-k8s-node001
ceph-deploy osd create --data /dev/sdc qd01-stop-k8s-node001
ceph-deploy osd create --data /dev/sdd qd01-stop-k8s-node001
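Rather than typing the zap/create pairs one by one, the loop below prints all of them; the hostnames and device paths are the examples from this article, and nothing runs until you pipe the output to `sh`:

```shell
# Emit the ceph-deploy commands for every host/disk combination.
# Hostnames and device paths are this article's examples.
hosts="qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003"
disks="/dev/sdb /dev/sdc /dev/sdd"
for h in $hosts; do
    for d in $disks; do
        echo "ceph-deploy disk zap $h $d"
        echo "ceph-deploy osd create --data $d $h"
    done
done
```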

8. Create the manager (mgr) daemons

ceph-deploy mgr create  qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Verifying the cluster status

9. Check the cluster status

[root@qd01-stop-k8s-node001 ~]# ceph -s
  cluster:
    id:     ec7ee19a-f7c6-4ed0-9307-f48af473352c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum qd01-stop-k8s-node002,qd01-stop-k8s-node003,qd01-stop-k8s-node001
    mgr: qd01-stop-k8s-node001(active), standbys: qd01-stop-k8s-node002, qd01-stop-k8s-node003
    osd: 24 osds: 24 up, 24 in
 
  data:
    pools:   1 pools, 256 pgs
    objects: 5  objects, 325 B
    usage:   24 GiB used, 44 TiB / 44 TiB avail
    pgs:     256 active+clean

10. Enable the mgr dashboard

ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin
ceph mgr services

ceph config-key put mgr/dashboard/server_addr 0.0.0.0  # bind address
ceph config-key put mgr/dashboard/server_port 7000     # listen port
systemctl restart [email protected]
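After the restart, the dashboard should answer over HTTPS on the configured port. A quick check; the mgr hostname here is an assumption, so use whatever `ceph mgr services` reports:

```shell
# Build the dashboard URL from the values configured above.
# mgr_host is an assumed example -- check `ceph mgr services`.
mgr_host=qd01-stop-k8s-node001
port=7000
url="https://$mgr_host:$port/"
echo "$url"
# Verify reachability with, e.g.:
#   curl -k -s -o /dev/null -w '%{http_code}\n' "$url"
```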

11. Create a pool

Create the pool:
ceph osd pool create k8s 256 256
Allow rbd to use the pool:
ceph osd pool application enable k8s rbd --yes-i-really-mean-it
List pools:
ceph osd pool ls

12. Test

[root@qd01-stop-k8s-node001 ~]# rbd create docker_test --size 4096 -p k8s
[root@qd01-stop-k8s-node001 ~]# rbd info docker_test -p k8s
rbd image 'docker_test':
        size 4 GiB in 1024 objects
        order 22 (4 MiB objects)
        id: 11ed6b8b4567
        block_name_prefix: rbd_data.11ed6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Wed Nov 11 17:19:38 2020
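To actually consume the image from a client, the usual sequence is map, make a filesystem, mount. A sketch that only prints the commands; the /dev/rbd0 device and /mnt/rbd mount point are assumptions (rbd map prints the real device it assigns):

```shell
# Print the client-side steps for the test image created above.
# /dev/rbd0 and /mnt/rbd are assumed paths -- rbd map reports the
# actual device on your client.
pool=k8s
image=docker_test
cmds="rbd map $image -p $pool
mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbd && mount /dev/rbd0 /mnt/rbd"
echo "$cmds"
```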

Teardown

13. Remove a node and purge its Ceph packages

ceph-deploy purgedata  qd01-stop-k8s-node008
ceph-deploy purge qd01-stop-k8s-node008

Common Ceph cluster commands

**Health checks**
ceph -s --conf /etc/ceph/ceph.conf --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring
ceph health
ceph quorum_status --format json-pretty
ceph osd dump
ceph osd stat
ceph mon dump
ceph mon stat
ceph mds dump
ceph mds stat
ceph pg dump
ceph pg stat

**osd/pool**
ceph osd tree
ceph osd pool ls detail
ceph osd pool set rbd crush_rule rule-sata
ceph osd pool create sata-pool 256 256 replicated rule-sata
ceph osd pool create ssd-pool 256 256 replicated rule-ssd
ceph osd pool set data min_size 2

**Configuration**
ceph daemon osd.0 config show (run on the OSD's host)
ceph daemon osd.0 config set mon_allow_pool_delete true (run on the OSD's host; lost on restart)
ceph tell osd.0 config set mon_allow_pool_delete false (run on any node; lost on restart)
ceph config set osd.0 mon_allow_pool_delete true (13.x only; run on any node; survives restart, but the mon ignores it if the option is also set in the config file)

**Logs**
ceph log last 100

**map**
ceph osd map <pool> <object>
ceph pg dump
ceph pg map x.yz
ceph pg x.yz query

**Authentication**
ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-$hostname/keyring
ceph auth get osd.0
ceph auth get mon.
ceph auth ls

**CRUSH**
ceph osd crush add-bucket root-sata root
ceph osd crush add-bucket ceph-1-sata host
ceph osd crush add-bucket ceph-2-sata host
ceph osd crush move ceph-1-sata root=root-sata
ceph osd crush move ceph-2-sata root=root-sata
ceph osd crush add osd.0 2 host=ceph-1-sata
ceph osd crush add osd.1 2 host=ceph-1-sata
ceph osd crush add osd.2 2 host=ceph-2-sata
ceph osd crush add osd.3 2 host=ceph-2-sata

ceph osd crush add-bucket root-ssd root
ceph osd crush add-bucket ceph-1-ssd host
ceph osd crush add-bucket ceph-2-ssd host

ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
(edit /tmp/crush.txt as needed)
crushtool -c /tmp/crush.txt -o /tmp/crush.bin
ceph osd setcrushmap -i /tmp/crush.bin