ceph 006 RBD advanced features: RBD snapshots, image clones, RBD cache, RBD incremental backup, one-way RBD mirroring

Version

[root@clienta ~]# ceph -v
ceph version 16.2.0-117.el8cp (0e34bb74700060ebfaa22d99b7d2cdc037b28a57) pacific (stable)
[root@clienta ~]# 

RBD advanced features

Create an image

[root@clienta ~]# ceph osd pool create rbd
pool 'rbd' created
[root@clienta ~]# rbd pool init rbd
[root@clienta ~]# ceph osd pool ls detail | grep rbd
pool 6 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 226 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
[root@clienta ~]# 
[root@clienta ~]# rbd create --size 1G -p rbd image1
[root@clienta ~]# rbd -p rbd ls
image1
[root@clienta ~]# rbd info image1
rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: fab9c898fcc1
    block_name_prefix: rbd_data.fab9c898fcc1
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Sun Aug 14 05:10:43 2022
    access_timestamp: Sun Aug 14 05:10:43 2022
    modify_timestamp: Sun Aug 14 05:10:43 2022
[root@clienta ~]# 

Default features

[root@clienta ~]# ceph config ls | grep feature
enable_experimental_unrecoverable_data_corrupting_features
mon_debug_no_initial_persistent_features
rbd_default_features
[root@clienta ~]# ceph config get osd rbd_default_features
layering,exclusive-lock,object-map,fast-diff,deep-flatten
[root@clienta ~]#

layering        supports cloning
striping        supports striping (RAID-like)
exclusive-lock  exclusive lock: a distributed lock that stops multiple clients from writing to the image at the same time and corrupting data (consistency is guaranteed, but performance may drop with multi-client access)
object-map      thin provisioning: space is not allocated up front, only when data is actually written (requires exclusive-lock)
fast-diff       speeds up diff calculations between snapshots (requires object-map)
deep-flatten    flattens all snapshots of an RBD image
journaling      journal support; RBD mirroring depends on this
data-pool       stores data in a separate (erasure-coded) data pool

[root@clienta ~]# rbd feature enable rbd/image1 journaling
[root@clienta ~]# rbd info image1
rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: fab9c898fcc1
    block_name_prefix: rbd_data.fab9c898fcc1
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, journaling
    op_features: 
That is how you enable an extra feature (my version does not support the data-pool feature). Now disable journaling again:
[root@clienta ~]# rbd feature disable rbd/image1 journaling

Some errors report an unsupported feature bitmask such as [0x40]; you can work out which feature it is from the bit value.
For example, rbd_default_features=24 corresponds to 8 (object-map) + 16 (fast-diff).
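
For reference, a sketch of the feature bits and of setting the default numerically (the bit values are the standard RBD ones; targeting the `client` section is my assumption, adjust as needed):

# layering=1  striping=2  exclusive-lock=4  object-map=8
# fast-diff=16  deep-flatten=32  journaling=64  data-pool=128
# so an "unsupported features: 0x40" error points at journaling (64)
ceph config set client rbd_default_features 61   # 1+4+8+16+32, everything except striping/journaling/data-pool
ceph config get client rbd_default_features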


striping (similar to RAID striping)
stripe-unit=1M   size of each stripe chunk
stripe-count=4   number of objects striped across
so up to 4 MiB can be written in parallel across 4 objects

[root@clienta ~]# rbd create --size 1G --stripe-unit=1M --stripe-count=4 image2
[root@clienta ~]# rbd info image2
rbd image 'image2':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: fad5ece2e6b6
    block_name_prefix: rbd_data.fad5ece2e6b6
    format: 2
    features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Sun Aug 14 05:38:12 2022
    access_timestamp: Sun Aug 14 05:38:12 2022
    modify_timestamp: Sun Aug 14 05:38:12 2022
    stripe unit: 1 MiB
    stripe count: 4
[root@clienta ~]# 

By default, data goes to one object at a time, and each object holds up to 4 MiB before the next one is used.
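
If you want to see this at the object level, you can list the backing objects of image2 by the block_name_prefix shown in rbd info (a sketch, not part of the original session; a freshly created image shows few or no data objects until something is written):

rados -p rbd ls | grep rbd_data.fad5ece2e6b6   # one entry per 4 MiB object that has actually been allocated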

[root@clienta ~]# rbd map rbd/image2
/dev/rbd0

1. RBD snapshots

[root@clienta ~]# rbd showmapped 
id  pool  namespace  image   snap  device   
0   rbd              image2  -     /dev/rbd0
[root@clienta ~]# rbd unmap  rbd/image2
[root@clienta ~]# rbd rm image1
Removing image: 100% complete...done.
[root@clienta ~]# rbd rm image2
Removing image: 100% complete...done.
[root@clienta ~]# 

[root@clienta ~]# rbd create rbd/image1 --size 1G
[root@clienta ~]# rbd map image1
/dev/rbd0
[root@clienta ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
        =                       sectsz=512   attr=2, projid32bit=1
        =                       crc=1        finobt=1, sparse=1, rmapbt=0
        =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
        =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
        =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
[root@clienta ~]# mkdir /mnt/rbddev
[root@clienta ~]# mount /dev/rbd0 /mnt/rbddev/
[root@clienta ~]# 

[root@clienta ~]# cp /etc/passwd   /etc/profile  /mnt/rbddev/
[root@clienta ~]# cd /mnt/rbddev/
[root@clienta rbddev]# dd if=/dev/zero of=file1 bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0169812 s, 617 MB/s
[root@clienta rbddev]# ls
file1  passwd  profile
[root@clienta rbddev]# sync

[root@clienta rbddev]# rbd snap  create rbd/image1@snap1
Creating snap: 100% complete...done.
[root@clienta rbddev]# rbd info image1
rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 1
    id: fae5f58d3b62
    block_name_prefix: rbd_data.fae5f58d3b62
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Sun Aug 14 05:42:02 2022
    access_timestamp: Sun Aug 14 05:42:02 2022
    modify_timestamp: Sun Aug 14 05:42:02 2022

[root@clienta rbddev]# rbd snap ls image1
SNAPID  NAME   SIZE   PROTECTED  TIMESTAMP               
     4  snap1  1 GiB             Sun Aug 14 05:48:07 2022

You cannot mount a snapshot (it is read-only, so mounting is refused).
A snapshot reports the same information as the original image.

Test the snapshot:

[root@clienta rbddev]# ls
file1  passwd  profile
[root@clienta rbddev]# rm -rf passwd 
[root@clienta rbddev]# ls
file1  profile
[root@clienta ~]# umount /mnt/rbddev 
[root@clienta ~]# rbd unmap /dev/rbd0
[root@clienta ~]# rbd snap rollback rbd/image1@snap1
Rolling back to snapshot: 100% complete...done.
[root@clienta ~]# 

The image has to be unmounted and unmapped before you can roll back; if the filesystem is still mounted, unmapping fails like this:

[root@clienta ~]# rbd showmapped 
id  pool  namespace  image   snap  device   
0   rbd              image1  -     /dev/rbd0
[root@clienta ~]# rbd unmap /dev/rbd0
rbd: sysfs write failed

The rollback succeeded; after remounting, the deleted file is back:

[root@clienta ~]# mount /dev/rbd0 /mnt/rbddev/
[root@clienta ~]# cd /mnt/rbddev/
[root@clienta rbddev]# ls
file1  passwd  profile

Pool-level snapshots let you roll back individual objects if they are lost; an RBD snapshot rollback instead returns the whole image to its state at snapshot time.

How snapshots work

When taking a snapshot, it is recommended to pause access to the filesystem (stop the workload).
A snapshot is a read-only copy of the source image; copy-on-write (COW) keeps it space-efficient.
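
One way to quiesce the filesystem around the snapshot is fsfreeze; a minimal sketch, assuming the mount point used in this lab:

fsfreeze --freeze /mnt/rbddev        # block new writes and flush dirty data
rbd snap create rbd/image1@snap1
fsfreeze --unfreeze /mnt/rbddev      # resume I/O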


A snapshot takes no space at first. When the original data is modified, the affected blocks are first copied into the snapshot and only then overwritten (copy-before-write).

Some snapshot implementations are consumed after a single use;
here, the snapshot still exists after you roll back to it:

SNAPID  NAME   SIZE   PROTECTED  TIMESTAMP               
    4  snap1  1 GiB             Sun Aug 14 05:48:07 2022
[root@clienta rbddev]# 

The more snapshots you keep, the slower the image gets.
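
To see how much space the image and each snapshot actually consume, rbd du can help (a sketch; fast, exact numbers need the fast-diff feature):

rbd du rbd/image1    # provisioned vs. used size for each snapshot and for the image head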

Being lazy here, just one quick example: removing a snapshot.

[root@clienta rbddev]# rbd snap rm image1@snap1
Removing snap: 100% complete...done.
[root@clienta rbddev]# rbd snap ls image1
[root@clienta rbddev]# 

An RBD snapshot is a read-only copy of the original image.
RBD cloning builds on snapshots:
the snapshot data never changes, and the clone is created on top of it.

2. Image clones

A clone, unlike a snapshot, is readable and writable.

[root@clienta rbddev]# rbd clone rbd/image1@snap1 rbd/clone1
2022-08-14T06:30:28.220-0400 7febbdb2f700 -1 librbd::image::CloneRequest: 0x55f9906c6b00 validate_parent: parent snapshot must be protected
rbd: clone error: (22) Invalid argument
[root@clienta rbddev]# rbd snap  protect rbd/image1@snap1
[root@clienta rbddev]# rbd clone rbd/image1@snap1 rbd/clone1
[root@clienta rbddev]# 

[root@clienta rbddev]# rbd ls
clone1
image1
[root@clienta rbddev]# 

[root@clienta ~]# blkid
/dev/vda1: PARTUUID="fac7f1fb-3e8d-4137-a512-961de09a5549"
/dev/vda2: SEC_TYPE="msdos" UUID="7B77-95E7" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="68b2905b-df3e-4fb3-80fa-49d1e773aa33"
/dev/vda3: LABEL="root" UUID="d47ead13-ec24-428e-9175-46aefa764b26" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="6264d520-3fb9-423f-8ab8-7a0a8e3d3562"
/dev/rbd1: UUID="4d62ee62-03dc-4acc-b925-eeea77238c1b" BLOCK_SIZE="512" TYPE="xfs"
/dev/rbd0: UUID="4d62ee62-03dc-4acc-b925-eeea77238c1b" BLOCK_SIZE="512" TYPE="xfs"
[root@clienta ~]# 
The clone looks exactly like the original image (note the identical filesystem UUIDs above).
You can keep creating files in a clone; it is a read-write copy that uses the RBD snapshot as its base. Until you write to it, it is still just a COW mapping, and what you see is the original image's data.

A clone can be made independent of the original image.
Snapshots use COW (copy-on-write); clones additionally support COR (copy-on-read).

Enable COR, otherwise every read is served from the parent image and adds load to it; with copy-on-read, data is copied into the clone the first time it is read.
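
Copy-on-read is a client-side option; a minimal sketch, assuming you want it on for all clients:

ceph config set client rbd_clone_copy_on_read true
ceph config get client rbd_clone_copy_on_read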

List the child images of a snapshot,
and make a child (clone) independent by flattening it:

[root@clienta ~]# rbd children image1@snap1
rbd/clone1
[root@clienta ~]# rbd flatten rbd/clone1
Image flatten: 100% complete...done.
[root@clienta ~]# rbd children image1@snap1
[root@clienta ~]# 

3. RBD cache

On the server side, writes go straight to the disks; there is no client-side aggregation happening there.
With the RBD client cache, writes land in the client's memory first and are then flushed out to the cluster. The cache is enabled by default.

rbd_cache turns the cache on; with rbd_cache_writethrough_until_flush set (the default), the cache starts in write-through mode and only switches to write-back after the client issues its first flush.

You can list the related options:

[root@clienta ~]# ceph config ls | grep rbd_cache
rbd_cache
rbd_cache_policy
rbd_cache_writethrough_until_flush
rbd_cache_size
rbd_cache_max_dirty
rbd_cache_target_dirty
rbd_cache_max_dirty_age
rbd_cache_max_dirty_object
rbd_cache_block_writes_upfront
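
A sketch of inspecting and tuning a few of these client-side cache options (the values are illustrative assumptions, not recommendations):

ceph config get client rbd_cache                       # true by default
ceph config set client rbd_cache_policy writeback      # writethrough | writeback | writearound
ceph config set client rbd_cache_size 67108864         # 64 MiB per-image cache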

4. Image export and import

This works like a backup of the image.
On the first Ceph cluster:

[root@clienta ~]# rbd ls
clone1
image1
[root@clienta ~]# rbd snap ls image1
SNAPID  NAME   SIZE   PROTECTED  TIMESTAMP               
    6  snap1  1 GiB  yes        Sun Aug 14 06:29:50 2022
[root@clienta ~]# 
[root@clienta ~]# rbd export rbd/image1 image1-v1
Exporting image: 100% complete...done.
[root@clienta ~]# ls
-  image1-v1
[root@clienta ~]# rsync image1-v1  root@serverf:~
Warning: Permanently added 'serverf,172.25.250.15' (ECDSA) to the list of known hosts.
[root@clienta ~]# 

On the second Ceph cluster:

[root@serverf ~]# ceph osd pool create f-rbd
pool 'f-rbd' created
[root@serverf ~]# rbd pool init f-rbd
[root@serverf ~]# 
[root@serverf ~]# rbd import image1-v1 f-rbd/image1 
Importing image: 100% complete...done.
[root@serverf ~]# rbd info f-rbd/image1
    rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: ac64adf717b3
    block_name_prefix: rbd_data.ac64adf717b3
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Sun Aug 14 07:42:46 2022
    access_timestamp: Sun Aug 14 07:42:46 2022
    modify_timestamp: Sun Aug 14 07:42:46 2022


[root@serverf ~]# rbd map f-rbd/image1
/dev/rbd0
[root@serverf ~]# mount /dev/rbd0 /mnt/
[root@serverf ~]# cd /mnt/
[root@serverf mnt]# ls
file1  passwd  profile
[root@serverf mnt]# 

The point-in-time backup worked. Now, what if I keep modifying the image on clienta?

[root@clienta ~]# rbd map image1
/dev/rbd0
[root@clienta ~]# mount /dev/rbd0 /mnt/rbddev/
[root@clienta ~]# 
[root@clienta rbddev]# cp /etc/group .
[root@clienta rbddev]# ls
file1  group  passwd  profile

This was arguably the wrong way to do it: the export should be taken from a snapshot, so that the point in time is pinned down.

Take two:

[root@clienta ~]# rbd snap  ls image1
SNAPID  NAME   SIZE   PROTECTED  TIMESTAMP               
    6  snap1  1 GiB  yes        Sun Aug 14 06:29:50 2022
[root@clienta ~]# rbd export rbd/image1@snap1 image-snap1
Exporting image: 100% complete...done.
[root@clienta ~]# 
[root@clienta ~]# rsync image-snap1 root@serverf:~
[root@clienta ~]# 


[root@serverf ~]# rbd import image-snap1  f-rbd/image1
Importing image: 100% complete...done.
[root@serverf ~]# rbd map f-rbd/image1
/dev/rbd0
[root@serverf ~]# mount /dev/rbd0 /mnt
[root@serverf ~]# cd /mnt
[root@serverf mnt]# ls
file1  passwd  profile
To catch the changes made after snap1, export only the diff on clienta and ship it over:
[root@clienta ~]# rbd export-diff    --from-snap snap1  rbd/image1 image1-v1-v2
[root@clienta ~]# rsync  image1-v1-v2  root@serverf:~


On the backup cluster (on top of the earlier full import, first create a snapshot with the matching name, then apply the diff):
[root@serverf ~]# rbd snap create f-rbd/image1@snap1
Creating snap: 100% complete...done.
[root@serverf ~]# rbd import-diff image1-v1-v2 f-rbd/image1
Importing image diff: 100% complete...done.
[root@serverf ~]# mount /dev/rbd0 /mnt
[root@serverf ~]# cd /mnt
[root@serverf mnt]# ls
file1  group  passwd  profile

Now let's go through the whole procedure again from scratch (incremental backup).

[root@clienta ~]# rbd create test  --size 1G --pool rbd
[root@clienta ~]# rbd ls
clone1
image1
test
[root@clienta ~]# rbd map test
/dev/rbd0
[root@clienta ~]# mkfs.xfs /dev/rbd0 
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
        =                       sectsz=512   attr=2, projid32bit=1
        =                       crc=1        finobt=1, sparse=1, rmapbt=0
        =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
        =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
        =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
[root@clienta ~]# mount /dev/rbd0 /mnt/rbddev/
[root@clienta ~]# cd /mnt/rbddev/
[root@clienta rbddev]# ls
[root@clienta rbddev]# touch file{1..10}
[root@clienta rbddev]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
[root@clienta rbddev]# 
[root@clienta ~]# umount /mnt/rbddev 
[root@clienta ~]# rbd unmap rbd/test
[root@clienta ~]# rbd showmapped
[root@clienta ~]# rbd export rbd/test test-v1
Exporting image: 100% complete...done.
[root@clienta ~]# 
[root@clienta ~]# ls
-  test-v1
[root@clienta ~]# rsync test-v1 root@serverf:~


[root@serverf ~]# ls
-  ceph  test-v1
[root@serverf ~]# rbd import test-v1 f-rbd/test1
Importing image: 100% complete...done.
[root@serverf ~]# rbd map f-rbd/test1
/dev/rbd0
[root@serverf ~]# mount /dev/rbd0 /mnt/
[root@serverf ~]# cd /mnt/
[root@serverf mnt]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
[root@serverf mnt]# 
Unmount:
[root@serverf mnt]# cd
[root@serverf ~]# 
[root@serverf ~]# umount /mnt
[root@serverf ~]# rbd unmap f-rbd/test1
[root@serverf ~]# 

The first full backup is complete.

Take a snapshot on both sides at the same time.

[root@clienta ~]# rbd snap  create test@testsnap1
Creating snap: 100% complete...done.
[root@serverf ~]# rbd snap create f-rbd/test1@testsnap1     # the two snapshot names must match
Creating snap: 100% complete...done.

If the names differ, the final step fails with an error:

[root@serverf ~]# rbd import-diff snap1-snap2 f-rbd/test1
start snapshot 'testsnap1' does not exist in the image, aborting
Importing image diff: 0% complete...failed.

Now make some incremental changes on the primary node.

[root@clienta ~]# rbd map rbd/test
/dev/rbd0
[root@clienta ~]# mount /dev/rbd0 /mnt/rbddev/
[root@clienta ~]# cd /mnt/rbddev/
[root@clienta rbddev]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
[root@clienta rbddev]# touch hello
[root@clienta rbddev]# touch mqy
[root@clienta rbddev]# cp /etc/passwd .
[root@clienta rbddev]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9  hello  mqy  passwd
[root@clienta rbddev]# 
[root@clienta ~]# umount /mnt/rbddev 
[root@clienta ~]# rbd unmap rbd/test
[root@clienta ~]# 

Create the second snapshot, export the diff since the first snapshot, and transfer it:

[root@clienta ~]# rbd snap create rbd/test@testsnap2
Creating snap: 100% complete...done.
[root@clienta ~]# rbd export-diff --from-snap testsnap1 rbd/test@testsnap2  snap1-snap2
Exporting image: 100% complete...done.
[root@clienta ~]# rsync snap1-snap2 root@serverf:~
[root@clienta ~]# 

On the backup node, apply the incremental data:

[root@serverf ~]# rbd import-diff snap1-snap2 f-rbd/test1
Importing image diff: 100% complete...done.
[root@serverf ~]# rbd map f-rbd/test1
/dev/rbd0
[root@serverf ~]# mount /dev/rbd0 /mnt/
[root@serverf ~]# cd /mnt/
[root@serverf mnt]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9  hello  mqy  passwd

In this version, the image on serverf that receives import-diff must already have a snapshot matching the diff's start snapshot (older versions reportedly did not enforce this).
These exported files also double as off-cluster copies you can restore from if a cluster fails.
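
Putting the whole incremental-backup flow in one place (the commands, pool and image names are the ones used above):

# full backup: primary (clienta) -> backup (serverf)
rbd export rbd/test test-v1
rsync test-v1 root@serverf:~
rbd import test-v1 f-rbd/test1                         # on serverf

# pin a common starting point on BOTH sides, with the same snapshot name
rbd snap create rbd/test@testsnap1                     # on clienta
rbd snap create f-rbd/test1@testsnap1                  # on serverf

# later: export only the changes since testsnap1 and apply them
rbd snap create rbd/test@testsnap2                     # on clienta
rbd export-diff --from-snap testsnap1 rbd/test@testsnap2 snap1-snap2
rsync snap1-snap2 root@serverf:~
rbd import-diff snap1-snap2 f-rbd/test1                # on serverf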

5. RBD mirroring (one-way synchronization)

This is for disaster recovery: if the primary cluster dies outright and becomes unreachable, you can switch over to the standby cluster almost seamlessly.
It does not protect against human error: if someone writes bad data to the primary, the same bad data is mirrored to the standby.
Human mistakes can only be fixed with backups; if something goes wrong, restore from a backup.

If the two sites had to be strongly consistent, replication latency would be very high: the primary could only complete a write after the standby had confirmed it.

RBD-mirror
This feature requires the journaling image feature, which costs some performance because every write goes to the journal first. The standby cluster's rbd-mirror daemon pulls the journal from the primary and replays it against the local image. Because the standby actively pulls and replays the journal, synchronization happens through the journal instead of waiting for the slow replicated write to complete on both sides.

Image-level mirroring: configure one-to-one mirroring for individual images.
Pool-level mirroring: one-to-one between pools. Whether the setup is one-way or two-way depends on where rbd-mirror is installed;
for two-way mirroring, both sides run rbd-mirror.
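
For comparison, image-level mirroring would look roughly like this (a sketch; this lab only uses pool mode):

rbd mirror pool enable rbd image             # put the pool into image mode
rbd mirror image enable rbd/image1 journal   # opt a single image into journal-based mirroring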

Hands-on

[root@clienta ~]# ceph osd pool create rbd
pool 'rbd' created
[root@clienta ~]# rbd pool init rbd


[root@serverf ~]# ceph osd pool create rbd
pool 'rbd' created
[root@serverf ~]# rbd pool init rbd

The standby cluster needs to run rbd-mirror, which pulls the data from the primary cluster.

[root@serverf ~]# ceph orch apply rbd-mirror --placement=serverf.lab.example.com
Scheduled rbd-mirror update...
[root@serverf ~]# ceph -s
cluster:
    id:     0bf7c358-25e1-11ec-ae02-52540000fa0f
    health: HEALTH_OK

services:
    mon:        1 daemons, quorum serverf.lab.example.com (age 9m)
    mgr:        serverf.lab.example.com.vuoooq(active, since 7m)
    osd:        5 osds: 5 up (since 8m), 5 in (since 10M)
    rbd-mirror: 1 daemon active (1 hosts)
    rgw:        1 daemon active (1 hosts, 1 zones)

data:
    pools:   6 pools, 137 pgs
    objects: 222 objects, 4.9 KiB
    usage:   96 MiB used, 50 GiB / 50 GiB avail
    pgs:     137 active+clean
Note the additional rbd-mirror service.

Create an image on the primary cluster.

[root@clienta ~]# rbd create image1 --size 1024 --pool rbd --image-feature exclusive-lock,journaling
[root@clienta ~]# rbd info image1
rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: d3bf73566d45
    block_name_prefix: rbd_data.d3bf73566d45
    format: 2
    features: exclusive-lock, journaling
    op_features: 
    flags: 
    create_timestamp: Sun Aug 14 11:12:54 2022
    access_timestamp: Sun Aug 14 11:12:54 2022
    modify_timestamp: Sun Aug 14 11:12:54 2022
    journal: d3bf73566d45
    mirroring state: disabled
[root@clienta ~]# 

Enable pool-mode mirroring.

[root@clienta ~]# rbd mirror pool enable rbd pool
[root@clienta ~]# rbd info image1
rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: d3bf73566d45
    block_name_prefix: rbd_data.d3bf73566d45
    format: 2
    features: exclusive-lock, journaling
    op_features: 
    flags: 
    create_timestamp: Sun Aug 14 11:12:54 2022
    access_timestamp: Sun Aug 14 11:12:54 2022
    modify_timestamp: Sun Aug 14 11:12:54 2022
    journal: d3bf73566d45
    mirroring state: enabled
    mirroring mode: journal
    mirroring global id: 431aba12-fbe6-488f-b57e-320ab526d47a
    mirroring primary: true
[root@clienta ~]# 

Establish the peering relationship.
Name the primary site prod and export a bootstrap token:

[root@clienta ~]# rbd mirror pool peer bootstrap create --site-name prod rbd > /root/prod
[root@clienta ~]# cat prod 
eyJmc2lkIjoiMmFlNmQwNWEtMjI5YS0xMWVjLTkyNWUtNTI1NDAwMDBmYTBjIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFBSEV2bGlISnVwQ0JBQVZBZUk3Wnc3d215eEI3TytCZTR5V2c9PSIsIm1vbl9ob3N0IjoiW3YyOjE3Mi4yNS4yNTAuMTI6MzMwMC8wLHYxOjE3Mi4yNS4yNTAuMTI6Njc4OS8wXSJ9
[root@clienta ~]# rbd mirror pool info rbd
Mode: pool
Site Name: prod

Peer Sites: none
[root@clienta ~]# 
[root@clienta ~]# rsync prod root@serverf:~
Warning: Permanently added 'serverf,172.25.250.15' (ECDSA) to the list of known hosts.
[root@clienta ~]# 



[root@serverf ~]# rbd mirror pool peer bootstrap import --site-name bup --direction rx-only rbd prod 
    2022-08-14T11:21:20.321-0400 7f777f0392c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
    2022-08-14T11:21:20.323-0400 7f777f0392c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
    2022-08-14T11:21:20.323-0400 7f777f0392c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[root@serverf ~]# rbd mirror pool info rbd
Mode: pool
Site Name: bup

Peer Sites: 

UUID: cd5db8bb-4f03-449b-ae29-c04ac38df226
Name: prod
Direction: rx-only
Client: client.rbd-mirror-peer



[root@clienta ~]# rbd create image3 --size 1024 --pool rbd --image-feature exclusive-lock,journaling
[root@serverf ~]# rbd ls
image1
image3
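
To confirm the images are actually replicating, the mirror status commands can be checked on the standby (a sketch; a healthy non-primary image typically reports up+replaying):

rbd mirror pool status rbd --verbose     # per-image health on serverf
rbd mirror image status rbd/image1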

[root@serverf ~]# rbd map image3
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable image3 journaling".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@serverf ~]# dmesg | tail
[ 2442.874097] Key type dns_resolver registered
[ 2442.963090] Key type ceph registered
[ 2442.974459] libceph: loaded (mon/osd proto 15/24)
[ 2442.996598] rbd: loaded (major 251)
[ 2443.015744] libceph: mon0 (1)172.25.250.15:6789 session established
[ 2443.018969] libceph: mon0 (1)172.25.250.15:6789 socket closed (con state OPEN)
[ 2443.019780] libceph: mon0 (1)172.25.250.15:6789 session lost, hunting for new mon
[ 2443.024317] libceph: mon0 (1)172.25.250.15:6789 session established
[ 2443.026864] libceph: client34233 fsid 0bf7c358-25e1-11ec-ae02-52540000fa0f
[ 2443.053943] rbd: image image3: image uses unsupported features: 0x40
[root@serverf ~]# 
The 0x40 error is the journaling feature bit: with journaling enabled, the kernel client cannot map the image.

You could create the image first and enable the journaling feature afterwards (or temporarily disable it before mapping, as the error message suggests).

One scenario: you have a cluster A that is being retired.
Enable pool mode, do a one-way sync, and move everything into cluster B in one pass?
So this feature does not seem to get used all that often.

Summary (one-way synchronization)

On the primary cluster:
ceph osd pool  create rbd
rbd pool  init rbd
rbd create image1 --size 1024 --pool rbd --image-feature exclusive-lock,journaling
rbd mirror pool enable rbd pool
rbd mirror pool peer bootstrap create --site-name prod rbd > /root/prod
rsync prod root@serverf:~

On the standby cluster:
ceph osd pool  create rbd
rbd pool init rbd
ceph orch apply rbd-mirror --placement=serverf.lab.example.com
rbd mirror pool peer bootstrap import --site-name bup --direction rx-only rbd /root/prod
rbd ls

The official documentation is genuinely useful:
//access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/block_device_guide/mirroring-ceph-block-devices
scp can run into security/permission issues here, hence rsync?
As an open-source jack-of-all-trades distributed storage system, Ceph inevitably trails more specialized systems on some individual features, but the underlying principles carry over.
