ANF CEPH 2022, from 03 to 07/10/2022, Sébastien Geiger
Documentation: https://docs.ceph.com/en/latest/cephadm/upgrade/

# current version
[ceph: root@ceph1 /]# ceph version
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)

# check the cluster state
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     92459a10-1975-11ed-9374-fa163e5fdb7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 7d)
    mgr: ceph1.inxizw(active, since 7d), standbys: ceph2.nwehoh
    osd: 8 osds: 8 up (since 7d), 8 in (since 7d)

  data:
    pools:   3 pools, 49 pgs
    objects: 44 objects, 15 MiB
    usage:   8.1 GiB used, 372 GiB / 380 GiB avail
    pgs:     49 active+clean

# versions of the individual cluster services
[ceph: root@ceph1 /]# ceph versions
{
    "mon": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 3
    },
    "mgr": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 8
    },
    "mds": {},
    "overall": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 13
    }
}

# add the _admin label so that the ceph.admin key is distributed to these nodes
[ceph: root@ceph1 /]# ceph orch host label add ceph1 _admin
Added label _admin to host ceph1
[ceph: root@ceph1 /]# ceph orch host label add ceph2 _admin
Added label _admin to host ceph2
[ceph: root@ceph1 /]# ceph orch host ls
HOST   ADDR          LABELS  STATUS
ceph1  172.16.7.125  _admin
ceph2  172.16.7.245  _admin
ceph3  172.16.7.180
ceph4  172.16.7.67
4 hosts in cluster

# start the upgrade of the whole cluster with a single command
[ceph: root@ceph1 /]# ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.10
Initiating upgrade to quay.io/ceph/ceph:v16.2.10

# note: wait a few minutes while the container images are downloaded and the first mgr is upgraded
[ceph: root@ceph1 /]# ceph versions
{
    "mon": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 3
    },
    "mgr": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 1,
        "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 1
    },
    "osd": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 8
    },
    "mds": {},
    "overall": {
        "ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)": 12,
        "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 1
    }
}
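# Additional note (these commands come from the cephadm upgrade documentation linked above and were
# not run during this session): while an upgrade is in progress it can be monitored and controlled:
# show the target image and whether an upgrade is currently running
ceph orch upgrade status
# temporarily suspend the upgrade, then continue it later
ceph orch upgrade pause
ceph orch upgrade resume
# abort the upgrade; daemons already upgraded stay on the new version (there is no downgrade)
ceph orch upgrade stop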
# Note: the upgrade starts with the 2 mgr daemons, then the mon daemons
# Note: to follow each step, watch the cephadm log (ceph -W cephadm)
[ceph: root@ceph1 /]# ceph -W cephadm
  cluster:
    id:     92459a10-1975-11ed-9374-fa163e5fdb7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 88s)
    mgr: ceph1.inxizw(active, since 2m), standbys: ceph2.nwehoh
    osd: 8 osds: 8 up (since 7d), 8 in (since 7d)

  data:
    pools:   3 pools, 49 pgs
    objects: 44 objects, 15 MiB
    usage:   8.1 GiB used, 372 GiB / 380 GiB avail
    pgs:     49 active+clean

  progress:
    Upgrade to 16.2.10 (99s)
      [=========...................] (remaining: 3m)

2022-08-18T19:25:41.383518+0000 mgr.ceph1.inxizw [INF] Upgrade: osd.6 is safe to restart
2022-08-18T19:25:41.383788+0000 mgr.ceph1.inxizw [INF] Upgrade: osd.7 is also safe to restart
2022-08-18T19:25:42.541265+0000 mgr.ceph1.inxizw [INF] Upgrade: Updating osd.6 (1/2)
2022-08-18T19:25:42.556074+0000 mgr.ceph1.inxizw [INF] Deploying daemon osd.6 on ceph4
2022-08-18T19:25:48.302031+0000 mgr.ceph1.inxizw [INF] Upgrade: Updating osd.7 (2/2)
2022-08-18T19:25:48.316986+0000 mgr.ceph1.inxizw [INF] Deploying daemon osd.7 on ceph4
2022-08-18T19:26:00.242402+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting container_image for all osd
2022-08-18T19:26:00.312622+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting require_osd_release to 16 pacific
2022-08-18T19:26:01.325309+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting container_image for all mds
2022-08-18T19:26:01.338573+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting container_image for all rgw
2022-08-18T19:26:01.350767+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting container_image for all rbd-mirror
2022-08-18T19:26:01.365845+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting container_image for all iscsi
2022-08-18T19:26:01.376125+0000 mgr.ceph1.inxizw [INF] Upgrade: Setting container_image for all nfs
2022-08-18T19:26:02.663413+0000 mgr.ceph1.inxizw [INF] Upgrade: Updating node-exporter.ceph1
2022-08-18T19:26:02.663823+0000 mgr.ceph1.inxizw [INF] Deploying daemon node-exporter.ceph1 on ceph1
2022-08-18T19:26:18.845933+0000 mgr.ceph1.inxizw [INF] Upgrade: Updating node-exporter.ceph2
2022-08-18T19:26:18.846459+0000 mgr.ceph1.inxizw [INF] Deploying daemon node-exporter.ceph2 on ceph2
2022-08-18T19:27:12.882893+0000 mgr.ceph1.inxizw [INF] Upgrade: Updating alertmanager.ceph1
2022-08-18T19:27:12.894915+0000 mgr.ceph1.inxizw [INF] Deploying daemon alertmanager.ceph1 on ceph1
2022-08-18T19:27:30.481393+0000 mgr.ceph1.inxizw [INF] Upgrade: Updating grafana.ceph1
2022-08-18T19:27:30.564949+0000 mgr.ceph1.inxizw [INF] Deploying daemon grafana.ceph1 on ceph1
2022-08-18T19:27:55.164326+0000 mgr.ceph1.inxizw [INF] Upgrade: Finalizing container_image settings
2022-08-18T19:27:55.242184+0000 mgr.ceph1.inxizw [INF] Upgrade: Complete!

# check the versions
[ceph: root@ceph1 /]# ceph versions
{
    "mon": {
        "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 8
    },
    "mds": {},
    "overall": {
        "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 13
    }
}

# the cluster is still operational
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     92459a10-1975-11ed-9374-fa163e5fdb7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 7m)
    mgr: ceph1.inxizw(active, since 8m), standbys: ceph2.nwehoh
    osd: 8 osds: 8 up (since 4m), 8 in (since 7d)

  data:
    pools:   3 pools, 49 pgs
    objects: 44 objects, 15 MiB
    usage:   322 MiB used, 380 GiB / 380 GiB avail
    pgs:     49 active+clean
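# Additional note (optional checks, not part of the recorded session): to confirm that every daemon
# is running the new container image, list the daemons with the image and version they actually use:
ceph orch ps
# and verify that no daemon crashed during the rolling restarts
ceph crash ls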