Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Nodes | Status
---|---|---|---|---|---|---|---|---|---|---|---
2022-06-27 18:04:18 | 2022-06-28 02:31:36 | 2022-06-28 02:57:44 | 0:26:08 | 0:14:49 | 0:11:19 | smithi | main | rhel | 8.6 | 2 | fail
Description: orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
Sentry event: https://sentry.ceph.com/organizations/ceph/?query=a10cda444b234cc69ffe98c66b8a7148
Failure Reason: Command failed on smithi070 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6815a256a07bf14ab703757b4e123bd2bc9f6e8a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9f9d3b34-f68c-11ec-842c-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nHOST=$(hostname -s)\nOSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk \'"\'"\'{print $1}\'"\'"\')\necho "host $HOST, osd $OSD"\nceph orch daemon stop $OSD\nwhile ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\nceph auth export $OSD > k\nceph orch daemon rm $OSD --force\nceph orch ps --refresh\nwhile ceph orch ps | grep $OSD ; do sleep 5 ; done\nceph auth add $OSD -i k\nceph cephadm osd activate $HOST\nwhile ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\n\''
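The failure reason above embeds the test's bash script inside two layers of shell quoting, which makes it hard to read. Unescaping the `\n` and `\'"\'"\'` sequences (no commands are added or changed; the comments are interpretation, not part of the log) gives roughly this, which is the `2-ops/rmdir-reactivate` stop/remove/reactivate cycle that exited with status 22:

```sh
set -e
set -x
ceph orch ps
# Pick the first OSD daemon running on this host.
HOST=$(hostname -s)
OSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk '{print $1}')
echo "host $HOST, osd $OSD"
# Stop the OSD and wait until the orchestrator no longer reports it running.
ceph orch daemon stop $OSD
while ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
# Save its auth key, then remove the daemon entirely.
ceph auth export $OSD > k
ceph orch daemon rm $OSD --force
ceph orch ps --refresh
while ceph orch ps | grep $OSD ; do sleep 5 ; done
# Restore the auth key and ask cephadm to reactivate OSDs on the host,
# then wait for the OSD to come back up.
ceph auth add $OSD -i k
ceph cephadm osd activate $HOST
while ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
```

These commands require a live Ceph cluster with the orchestrator enabled, so they are shown here only for readability.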