「Ceph」- Block Storage (Study Notes)


RBD, RADOS Block Device

Step 1: Configure the Client

Check the Kernel Module

Verify that the kernel provides the rbd module: if the following command produces no output, the module has been loaded successfully.

server> modprobe rbd
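As an optional sanity check on the host that will map the image, the module can also be inspected directly (output omitted here):

// Optional: confirm that the rbd module exists and is loaded
client> modinfo rbd
client> lsmod | grep rbd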

Configure Authentication

Ceph provides authentication: a client must have credentials and the corresponding permissions before it can access the storage.

By default, Ceph creates the client.admin user, which has sufficient permissions, but here we demonstrate how to create a dedicated user:

server> ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' 
[client.rbd]
	key = AQBAO29jy4slJhAA+eUkcCRKjKo08uBb8G63IQ==
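
The capabilities just granted can be reviewed at any time (output omitted):

server> ceph auth get client.rbd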

server> ceph auth get-or-create client.rbd | ssh root@172.31.252.100 tee /etc/ceph/ceph.client.rbd.keyring

client> cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring

In later testing we found that, after a system reboot, the RBD image could no longer be mapped:

// The following errors were reported:

...
Sep  6 12:52:33 n01 kernel: [  875.165082] rbd: rbd0: blacklist of client2508785 failed: -13
Sep  6 12:52:33 n01 kernel: [  875.165082] rbd: rbd0: failed to acquire lock: -13
Sep  6 12:52:33 n01 kernel: [  875.165355] rbd: rbd0: no lock owners detected
...

// Solution
// This is most likely related to the client's auth caps: the rbd profiles grant the client
// permission to blocklist a stale lock holder, which the plain 'allow r' mon cap does not
// (hence the -13, i.e. EACCES, errors above). This is discussed further below.
ceph auth caps client.n01 mon 'profile rbd' osd 'profile rbd'
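
The same profile-based caps can also be applied to the client.rbd user created earlier; a sketch (the pool restriction is an assumption matching the rbd pool used below):

// Hypothetical variant: profile-based caps restricted to the rbd pool
server> ceph auth caps client.rbd mon 'profile rbd' osd 'profile rbd pool=rbd'
server> ceph auth get client.rbd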

// References

https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KAV6MXLIYJWP3OEK4WFXDWHYRLLLA3LF/

Copy the Configuration File to the Client

server> scp ceph.conf root@172.31.252.100:/etc/ceph

client> ceph -s --name client.rbd
  cluster:
    id:     cf79beac-61eb-11ed-a2e0-080027d3c643
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node-01,ceph-node-02,ceph-node-03 (age 12h)
    mgr: ceph-node-02.gktsek(active, since 12h), standbys: ceph-node-01.ymsncp
    osd: 3 osds: 3 up (since 12h), 3 in (since 12h)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   25 MiB used, 30 GiB / 30 GiB avail
    pgs:     1 active+clean
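
For reference, the client only needs a minimal /etc/ceph/ceph.conf to reach the monitors; a sketch, where the fsid is the one reported by ceph -s above and the monitor addresses are placeholders:

[global]
	# fsid as reported by "ceph -s" above
	fsid = cf79beac-61eb-11ed-a2e0-080027d3c643
	# placeholder monitor addresses, replace with the real ones
	mon_host = 172.31.252.101,172.31.252.102,172.31.252.103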

Step 2: Create the Storage Pool

Create an RBD Image

// Create the storage pool

server> ceph osd pool create rbd
pool 'rbd' created
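
Recent Ceph releases also recommend initializing a newly created pool for RBD use before creating images in it (a small extra step, not shown in the original transcript):

server> rbd pool init rbd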

// Create the block device (RBD image)

client> rbd create rbd1 --size 1024 --name client.rbd

View RBD Information

client> rbd ls --name client.rbd
client> rbd -p rbd ls --name client.rbd
client> rbd -p rbd list --name client.rbd
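
To include size and format in the listing, a long listing is also available (output omitted):

client> rbd ls -l --name client.rbd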

client> rbd --image rbd1 info --name client.rbd
rbd image 'rbd1':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 12273add56623
	block_name_prefix: rbd_data.12273add56623
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Sat Nov 12 07:15:58 2022
	access_timestamp: Sat Nov 12 07:15:58 2022
	modify_timestamp: Sat Nov 12 07:15:58 2022

Resize the RBD Image

# rbd -n client.rbd resize --image rbd1 --size 2048 
Resizing image: 100% complete...done.

# rbd -n client.rbd info --image rbd1 
rbd image 'rbd1':
	size 2 GiB in 512 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 12273add56623
	block_name_prefix: rbd_data.12273add56623
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Sat Nov 12 07:15:58 2022
	access_timestamp: Sat Nov 12 07:15:58 2022
	modify_timestamp: Sat Nov 12 07:15:58 2022
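
Growing the image does not grow a filesystem that may already exist on it; if the image is mapped and formatted with ext4 as in Step 3 below, the filesystem must be extended separately, and shrinking an image requires an explicit flag. A sketch, assuming the image is mapped as /dev/rbd0:

// Grow the ext4 filesystem online after enlarging the image
client> resize2fs /dev/rbd0

// Shrinking requires --allow-shrink (destructive if a filesystem is present)
client> rbd -n client.rbd resize --image rbd1 --size 1024 --allow-shrink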

Delete an RBD Image

rbd -p "<pool-name>" remove --image "<img-name>"
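
Removing an image that is still mapped will typically fail because it still has watchers, so unmap it first; alternatively the image can be moved to the RBD trash, from which it can still be restored. A sketch, using the rbd pool and rbd1 image from above:

// Unmap if currently mapped, then remove ...
client> rbd unmap /dev/rbd0
client> rbd -p rbd remove --image rbd1 --name client.rbd

// ... or move it to the trash instead (restorable until purged)
client> rbd trash mv rbd/rbd1 --name client.rbd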

Step 3: Map and Mount the Block Device

# rbd -n client.rbd map --image rbd1
/dev/rbd0

# rbd -n client.rbd showmapped 
id  pool  namespace  image  snap  device   
0   rbd              rbd1   -     /dev/rbd0
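
On recent releases the same information is also exposed through the rbd device subcommands, equivalent to showmapped above:

# rbd -n client.rbd device list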

Use the Block Device

mkfs.ext4 /dev/rbd0 
mount /dev/rbd0 /mnt/
cd /mnt/
ls

# dd if=/dev/zero of=/mnt/testfile count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.227465 s, 461 MB/s
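
When the device is no longer needed, for example before switching to the persistent setup below, it can be released again:

# umount /mnt
# rbd -n client.rbd unmap /dev/rbd0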

Persistent Mapping

# vim /etc/ceph/rbdmap 
rbd/rbd1 id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring

# systemctl enable rbdmap.service
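
The rbdmap service only maps the image at boot; mounting the filesystem still needs an fstab entry pointing at the udev-created device path. A sketch using the /mnt mount point from above (mount options may need adjusting for the distribution):

# vim /etc/fstab
/dev/rbd/rbd/rbd1	/mnt	ext4	noauto	0 0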