RBD snapshot import, export, and related operations

Basic commands

List mapped images (RBD images currently mapped to local block devices)
rbd showmapped
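
Note that rbd showmapped only lists images mapped on the host where it runs. To list every image in a pool (the rbdpool name is reused from the examples below), rbd ls works:

rbd ls {pool-name}
rbd ls rbdpool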

Create a snapshot
rbd snap create {pool-name}/{image-name}@{snap-name}
rbd snap create rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v1

List snapshots
rbd snap ls {pool-name}/{image-name}
rbd snap ls rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82

Roll back to a snapshot
rbd snap rollback {pool-name}/{image-name}@{snap-name}
rbd snap rollback rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2

Delete a specific snapshot
rbd snap rm {pool-name}/{image-name}@{snap-name}
rbd snap rm rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2

Delete all snapshots of an image
rbd snap purge {pool-name}/{image-name}
rbd snap purge rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82

Protect a snapshot
Clones read from their parent snapshot, so if the parent snapshot is deleted, every clone breaks. To prevent data loss, protect a snapshot before cloning it.
rbd snap protect {pool-name}/{image-name}@{snap-name}
rbd snap protect rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2

Clone a snapshot
rbd clone {pool-name}/{parent-image}@{snap-name} {dest-pool-name}/{child-image-name}

List the clones (children) of a snapshot
rbd children {pool-name}/{image-name}@{snapshot-name}

Flatten a cloned image (copies the parent snapshot's data into the clone so it no longer depends on the snapshot)
rbd flatten {pool-name}/{image-name}

Unprotect a snapshot (only possible once no clones depend on it)
rbd snap unprotect {pool-name}/{image-name}@{snapshot-name}
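
Putting the clone-related commands together: the typical lifecycle is protect, clone, flatten, unprotect. A minimal sketch reusing the pool and image from the examples above; the child image name pvc-clone is a placeholder of my choosing:

# Protect the parent snapshot, then clone it into a new image
rbd snap protect rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2
rbd clone rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2 rbdpool/pvc-clone

# After flattening, the clone no longer depends on the parent snapshot,
# so the snapshot can be unprotected (and deleted if desired)
rbd flatten rbdpool/pvc-clone
rbd snap unprotect rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2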

Export the differential data from the beginning up to the v1 snapshot
rbd export-diff rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v1 test-copy-for-v1

Export the differential data between the v1 and v2 snapshots
rbd export-diff rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2 --from-snap v1 test-copy-for-v1-v2

Export the full image data from the beginning to the present (rbd export produces a full copy rather than a diff)
rbd export rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82 test-copy-full

Import a diff file
rbd import-diff {backup-file} {pool-name}/{image-name}
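
These commands combine into an incremental backup and restore workflow. A minimal sketch, assuming a fresh restore target image named pvc-restore (a hypothetical name) in the same pool:

# Take the backups: a from-the-beginning diff at v1, then an incremental diff v1 -> v2
rbd export-diff rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v1 test-copy-for-v1
rbd export-diff --from-snap v1 rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2 test-copy-for-v1-v2

# Restore: create an empty image (the diff stream records the image size, so
# a small initial size is fine), then replay the diffs in order; each
# import-diff recreates the snapshot recorded in the file
rbd create rbdpool/pvc-restore --size 1
rbd import-diff test-copy-for-v1 rbdpool/pvc-restore
rbd import-diff test-copy-for-v1-v2 rbdpool/pvc-restore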

Validation

Since our setup is deployed with Rook & Ceph on a Kubernetes cluster, validation was done with a busybox Pod that consumes Ceph block storage.

The block storage was created with the following manifests:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rbdpool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3 # number of data replicas
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbdpool-sc
provisioner: ceph.rook.io/block
parameters:
  blockPool: rbdpool
  # Specify the namespace of the rook cluster from which to create volumes.
  # If not specified, it will use `rook` as the default namespace of the cluster.
  # This is also the namespace where the cluster will be
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
  # (Optional) Specify an existing Ceph user that will be used for mounting storage with this StorageClass.
  #mountUser: user1
  # (Optional) Specify an existing Kubernetes secret name containing just one key holding the Ceph user secret.
  # The secret must exist in each namespace(s) where the storage will be consumed.
  #mountSecret: ceph-user1-secret
reclaimPolicy: Retain

The busybox Pod used for testing:

apiVersion: v1
kind: Pod
metadata:
  name: busy-box-test1
  namespace: default
spec:
  restartPolicy: OnFailure
  containers:
  - name: busy-box-test1
    image: busybox
    volumeMounts:
    - name: busy-box-test-pv1
      mountPath: /mnt/busy-box
    command: ["sleep", "60000"]
  volumes:
  - name: busy-box-test-pv1
    persistentVolumeClaim:
      claimName: busy-box-pvc
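
The Pod references a PVC named busy-box-pvc that isn't shown above. A minimal sketch of what it could look like, assuming the rbdpool-sc StorageClass defined earlier (the 1Gi request is an arbitrary choice):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busy-box-pvc
  namespace: default
spec:
  storageClassName: rbdpool-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi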

Notes on validating snapshot rollback

  • Stop the Pod before rolling back a snapshot, so that the block device is unmounted and unmapped (see the sketch after this list)
  • Once the rollback completes, restart the Pod
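
A minimal sketch of that rollback procedure against the busybox Pod above (the manifest filename busybox-pod.yaml is a placeholder):

# Stop the Pod so the RBD image gets unmounted and unmapped
kubectl delete pod busy-box-test1

# Roll the image back to the desired snapshot
rbd snap rollback rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v1

# Recreate the Pod once the rollback finishes
kubectl apply -f busybox-pod.yaml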

Notes on validating snapshot import/export

  • Likewise, the Pod must be stopped while importing a diff; start it again once the import completes
  • Before importing a diff file, delete any snapshot with the same name that already exists on the target image, because the import automatically recreates the snapshots recorded in the file (see the sketch after this list)
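
A minimal sketch of that import procedure, reusing the diff file from the export examples (the manifest filename busybox-pod.yaml is again a placeholder):

# Stop the Pod and remove the snapshot that the diff would recreate
kubectl delete pod busy-box-test1
rbd snap rm rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82@v2

# Import the diff; the v2 snapshot is recreated automatically.
# Importing test-copy-for-v1-v2 requires snapshot v1 to exist on the image
rbd import-diff test-copy-for-v1-v2 rbdpool/pvc-7a1dc5e3-0a6a-11ea-8b13-005056925f82

# Restart the Pod after the import completes
kubectl apply -f busybox-pod.yaml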
