Version history
Name | Initial release | Latest | End of life (estimated)
Quincy | 2022-04-19 | 17.2.0 | 2024-06-01
Pacific | 2021-03-31 | 16.2.7 | 2023-06-01
Octopus | 2020-03-23 | 15.2.15 | 2022-06-01
Environment overview
Debian 11 (bullseye)
Hostname | Public network IP | Cluster network IP | CPU | RAM | Disk 1 | Disk 2
ceph-node-01 | 192.168.200.1 | 172.31.252.201 | 2 vCPU | 2 GB | 10 GB | 10 GB
ceph-node-02 | 192.168.200.2 | 172.31.252.202 | 2 vCPU | 2 GB | 10 GB | 10 GB
ceph-node-03 | 192.168.200.3 | 172.31.252.203 | 2 vCPU | 2 GB | 10 GB | 10 GB
If the nodes are under-provisioned, all sorts of failures can occur (such as daemons failing to be scheduled, OSDs never reaching the up state, and so on);
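cephadm addresses hosts by name, so each node must be able to resolve the others. A minimal sketch of the /etc/hosts entries this environment implies (an assumption; DNS would work just as well):

# assumed /etc/hosts entries on every node (public network addresses)
192.168.200.1 ceph-node-01
192.168.200.2 ceph-node-02
192.168.200.3 ceph-node-03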
Workflow overview
cephadm manages the entire lifecycle of a Ceph cluster over SSH (and can be driven from either the CLI or the GUI):
1) It starts with bootstrap: first, a small Ceph cluster is created on a single node (one Monitor, one Manager);
2) The cluster is then expanded through the orchestration interface: more nodes are added, and daemons and services are provisioned.
#1 REQUIREMENTS
Python 3
Systemd
Podman or Docker for running containers
Time synchronization (such as chrony or NTP)
LVM2 for provisioning storage devices
# install docker
# ...

apt-get install python-is-python3

systemd --version
# systemd 245 (245.4-4ubuntu3.17)

timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp true

apt-get install lvm2
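The Docker installation itself is elided above. One way to satisfy the container-runtime requirement on Debian 11 is the distribution's own package (an assumption; the author may have used Docker CE instead):

# assumption: using Debian's docker.io package rather than the upstream Docker CE repo
apt-get update
apt-get install -y docker.io
systemctl enable --now docker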
#2 INSTALL CEPHADM
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm

./cephadm add-repo --release quincy
./cephadm install

which cephadm
# /usr/sbin/cephadm

cephadm version
#3 BOOTSTRAP A NEW CLUSTER
# cephadm bootstrap --mon-ip 192.168.200.1
...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph-node-09:8443/
            User: admin
        Password: 6a786ldki2
You can access the Ceph CLI with:
        sudo /usr/sbin/cephadm shell --fsid 424ec4d6-dc22-11ec-84a2-99e7897dfb3b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
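Since each node here has a second NIC on 172.31.252.0/24, replication traffic could be split onto it at bootstrap time with --cluster-network (a sketch; the capture above shows that only --mon-ip was actually used):

# optional: also dedicate the second network to OSD replication traffic
cephadm bootstrap --mon-ip 192.168.200.1 --cluster-network 172.31.252.0/24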
#4 ENABLE CEPH CLI
# cephadm shell -k /etc/ceph/ceph.keyring
# cephadm shell
Inferring fsid 424ec4d6-dc22-11ec-84a2-99e7897dfb3b
Inferring config /var/lib/ceph/424ec4d6-dc22-11ec-84a2-99e7897dfb3b/mon.ceph-node-09/config
Using recent ceph image quay.io/ceph/ceph@sha256:f2822b57d72d07e6352962dc830d2fa93dd8558b725e2468ec0d07af7b14c95d

root@ceph-node-09:/# ceph -v
ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)

root@ceph-node-09:/# ceph status
  cluster:
    id:     424ec4d6-dc22-11ec-84a2-99e7897dfb3b
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum ceph-node-09 (age 5m)
    mgr: ceph-node-09.qbbmzj(active, since 4m)
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
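To run ceph commands directly on the host instead of inside the containerized shell, cephadm can also install the native CLI packages (optional; not shown in the capture above):

cephadm install ceph-common
ceph -v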
#5 ADDING HOSTS
ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.200.2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.200.3

ceph-shell> ceph orch host add ceph-node-02 192.168.200.2
ceph-shell> ceph orch host add ceph-node-03 192.168.200.3

// label the nodes as admin hosts to make cluster administration easier
ceph-shell> ceph orch host label add ceph-node-02 _admin
ceph-shell> ceph orch host label add ceph-node-03 _admin
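To confirm that the new hosts have been picked up (and to watch them come online; see the note in step 6), list them through the orchestrator:

ceph-shell> ceph orch host ls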
#6 ADDING ADDITIONAL MONS
By default the cluster already runs three Ceph Monitor instances (cephadm deploys Monitors automatically as hosts join, one per host here), so no additional Monitors need to be added;
After ceph orch host add, it takes a little while for a node to come online (perhaps because these nodes are so under-provisioned);
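If Monitor placement ever needs to be controlled explicitly, the orchestrator accepts a count or an explicit host list (a sketch; not needed in this walkthrough):

ceph-shell> ceph orch apply mon 3
ceph-shell> ceph orch apply mon --placement="ceph-node-01 ceph-node-02 ceph-node-03"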
#7 ADDING STORAGE
# ceph orch apply osd --all-available-devices
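Before (or after) applying this, the devices the orchestrator considers available can be inspected; a device only qualifies if it is empty, unpartitioned, and not mounted:

ceph-shell> ceph orch device ls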
#8 USING CEPH
# ceph status
  cluster:
    id:     cf79beac-61eb-11ed-a2e0-080027d3c643
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-node-01,ceph-node-02,ceph-node-03 (age 4m)
    mgr: ceph-node-02.gktsek(active, since 12m), standbys: ceph-node-01.ymsncp
    osd: 3 osds: 3 up (since 12m), 3 in (since 20m)
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   18 MiB used, 30 GiB / 30 GiB avail
    pgs:     1 active+clean

# ceph -w                                  // watch cluster health
...
# ceph quorum_status --format json-pretty  // quorum status
...
# ceph mon dump                            // dump Monitor information
# ceph pg dump                             // list PG information
# ceph df                                  // show cluster usage
# ceph mon stat
# ceph osd stat
# ceph osd lspools                         // list storage pools
# ceph osd tree                            // view the OSD CRUSH map
# ceph pg stat
# ceph auth list                           // list cluster auth keys
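As a quick end-to-end check, a pool can be created and a small object written through rados (a sketch; the pool and object names test-pool and hello are hypothetical):

# create a replicated pool, then store, list, and fetch one object
ceph osd pool create test-pool
rados -p test-pool put hello /etc/hosts
rados -p test-pool ls
rados -p test-pool get hello /tmp/hello.out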
References
How to build a Ceph Distributed Storage Cluster on CentOS 7
Deploying a new Ceph cluster — Ceph Documentation