Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Nodes | Status |
---|---|---|---|---|---|---|---|---|---|---|---|
2022-02-05 22:59:31 | 2022-02-06 07:21:53 | 2022-02-06 07:45:27 | 0:23:34 | 0:16:21 | 0:07:13 | smithi | master | rhel | 8.4 | 2 | fail |
Description: rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
Sentry event: https://sentry.ceph.com/organizations/ceph/?query=1642fd7796644ec59564ecd913abbfc5
Command failed on smithi122 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6aa4fcc62bbc85390459e2e69fccdea5b9e83966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a5ba7ae0-871f-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nHOST=$(hostname -s)\nOSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk \'"\'"\'{print $1}\'"\'"\')\necho "host $HOST, osd $OSD"\nceph orch daemon stop $OSD\nwhile ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\nceph auth export $OSD > k\nceph orch daemon rm $OSD --force\nceph orch ps --refresh\nwhile ceph orch ps | grep $OSD ; do sleep 5 ; done\nceph auth add $OSD -i k\nceph cephadm osd activate $HOST\nwhile ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done\n\''
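
For readability, the escaped one-liner above unpacks to roughly the following script, which the `2-ops/rmdir-reactivate` task runs inside a `cephadm shell`. This is a reconstruction from the log line; the comments are added here and are not part of the original test:

```bash
set -e
set -x
ceph orch ps
# Pick the first OSD daemon reported on this host.
HOST=$(hostname -s)
OSD=$(ceph orch ps $HOST | grep osd | head -n 1 | awk '{print $1}')
echo "host $HOST, osd $OSD"
# Stop the daemon and wait until the orchestrator no longer reports it running.
ceph orch daemon stop $OSD
while ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
# Save the daemon's auth key, then remove the daemon entirely.
ceph auth export $OSD > k
ceph orch daemon rm $OSD --force
ceph orch ps --refresh
while ceph orch ps | grep $OSD ; do sleep 5 ; done
# Restore the key and ask cephadm to rediscover and reactivate OSDs on the host.
ceph auth add $OSD -i k
ceph cephadm osd activate $HOST
# Wait for the reactivated daemon to come back up.
while ! ceph orch ps | grep $OSD | grep running ; do sleep 5 ; done
```

Exit status 127 conventionally means the shell could not find a command to execute, which suggests the failure occurred while launching the `cephadm shell` invocation itself rather than in one of the script's steps; confirming that would require the full teuthology log for this job.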