Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Nodes | Status |
---|---|---|---|---|---|---|---|---|---|---|---|
2022-02-11 17:05:05 | 2022-02-11 17:08:35 | 2022-02-11 17:25:46 | 0:17:11 | 0:08:10 | 0:09:01 | smithi | master | centos | 8.stream | 2 | fail |
Description: rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
Sentry event: https://sentry.ceph.com/organizations/ceph/?query=d197b49a13d349f4b0402a9812ccea37
Failure Reason: Command failed on smithi100 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:31d9c206473bba8bc2236acfc6bcf79cb54a65c6 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c60df1c6-8b5e-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nHOST=$(hostname -s)\nOSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk \'"\'"\'{print $1}\'"\'"\')\necho "host $HOST, osd $OSD"\nceph orch daemon stop $OSD\nwhile ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\nceph auth export $OSD > k\nceph orch daemon rm $OSD --force\nceph orch ps --refresh\nwhile ceph orch ps | grep $OSD ; do sleep 5 ; done\nceph auth add $OSD -i k\nceph cephadm osd activate $HOST\nwhile ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\n\''
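For readability, the shell-escaped script embedded in the failure reason above (the `rmdir-reactivate` op run inside a `cephadm shell`) expands to roughly the following; the comments are added interpretation, not part of the original. Note that exit status 127 conventionally means "command not found", which suggests the outer `cephadm` invocation failed before this script body ever ran.

```bash
set -e
set -x
ceph orch ps
# Pick the first OSD daemon reported on this host
HOST=$(hostname -s)
OSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk '{print $1}')
echo "host $HOST, osd $OSD"
# Stop the daemon and wait until the orchestrator no longer shows it running
ceph orch daemon stop $OSD
while ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
# Save its auth key, then remove the daemon entirely
ceph auth export $OSD > k
ceph orch daemon rm $OSD --force
ceph orch ps --refresh
while ceph orch ps | grep $OSD ; do sleep 5 ; done
# Restore the key and ask cephadm to reactivate OSDs on this host,
# then wait for the daemon to come back up
ceph auth add $OSD -i k
ceph cephadm osd activate $HOST
while ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
```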