Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7078280 2022-10-23 07:07:39 2022-10-25 11:15:18 2022-10-25 11:29:15 0:13:57 0:07:02 0:06:55 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078281 2022-10-23 07:07:41 2022-10-25 11:15:59 2022-10-25 11:32:37 0:16:38 0:05:17 0:11:21 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078282 2022-10-23 07:07:42 2022-10-25 11:16:59 2022-10-25 11:46:48 0:29:49 0:19:27 0:10:22 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

Command failed on smithi145 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8ca9bf1e-5458-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''
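The `\'"\'"\'` sequences above are teuthology's shell-escaping of nested single quotes. Unwrapped, the check that failed is the pipeline below; the JSON used here is a made-up stand-in for real `ceph versions` output, shown only to illustrate what the check asserts (that the target build sha1 appears among the cluster's running version keys):

```shell
# sha1 of the ceph-ci build the upgrade was expected to reach
sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd

# Stand-in for `ceph versions` output on a fully upgraded cluster.
# jq -e sets a nonzero exit status if the filter yields false/null,
# and grep fails if the sha1 is absent from the version keys.
echo "{\"overall\": {\"ceph version 18.0.0-1234-g${sha1} (${sha1}) dev\": 3}}" \
  | jq -e '.overall | keys' | grep "$sha1"
```

When any daemon still runs an older build, `.overall` contains an extra key without the target sha1, and on a partially upgraded cluster the grep can still pass; the test relies on `keys` listing exactly the versions present.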

fail 7078283 2022-10-23 07:07:43 2022-10-25 11:17:20 2022-10-25 11:32:32 0:15:12 0:08:16 0:06:56 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi153 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078284 2022-10-23 07:07:44 2022-10-25 11:17:40 2022-10-25 11:34:32 0:16:52 0:09:58 0:06:54 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078285 2022-10-23 07:07:45 2022-10-25 11:17:41 2022-10-25 11:35:55 0:18:14 0:09:56 0:08:18 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078286 2022-10-23 07:07:46 2022-10-25 11:17:51 2022-10-25 11:32:44 0:14:53 0:06:50 0:08:03 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078287 2022-10-23 07:07:47 2022-10-25 11:19:52 2022-10-25 11:36:14 0:16:22 0:09:44 0:06:38 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078288 2022-10-23 07:07:48 2022-10-25 11:19:52 2022-10-25 11:35:20 0:15:28 0:07:40 0:07:48 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078289 2022-10-23 07:07:49 2022-10-25 11:20:13 2022-10-25 23:31:57 12:11:44 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078290 2022-10-23 07:07:50 2022-10-25 11:20:33 2022-10-25 11:34:21 0:13:48 0:06:44 0:07:04 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078291 2022-10-23 07:07:51 2022-10-25 11:21:14 2022-10-25 11:36:43 0:15:29 0:04:52 0:10:37 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078292 2022-10-23 07:07:52 2022-10-25 11:21:25 2022-10-25 11:49:30 0:28:05 0:20:46 0:07:19 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 008ef9f8-5459-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7078293 2022-10-23 07:07:53 2022-10-25 11:21:55 2022-10-25 11:49:55 0:28:00 0:19:59 0:08:01 smithi main centos 8.stream orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools_crun} 2-node-mgr agent/off orchestrator_cli} 2
fail 7078294 2022-10-23 07:07:54 2022-10-25 11:22:05 2022-10-25 11:37:35 0:15:30 0:06:29 0:09:01 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078295 2022-10-23 07:07:55 2022-10-25 11:24:36 2022-10-25 11:40:14 0:15:38 0:06:40 0:08:58 smithi main centos 8.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078296 2022-10-23 07:07:56 2022-10-25 11:27:07 2022-10-25 11:40:10 0:13:03 0:06:30 0:06:33 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi154 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078297 2022-10-23 07:07:57 2022-10-25 11:27:08 2022-10-25 11:43:37 0:16:29 0:05:20 0:11:09 smithi main ubuntu 20.04 orch/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi079 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078298 2022-10-23 07:07:59 2022-10-25 11:28:08 2022-10-25 11:46:05 0:17:57 0:09:41 0:08:16 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi112 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078299 2022-10-23 07:08:00 2022-10-25 11:28:59 2022-10-25 11:46:22 0:17:23 0:09:42 0:07:41 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078300 2022-10-23 07:08:01 2022-10-25 11:29:19 2022-10-25 11:59:08 0:29:49 0:19:21 0:10:28 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

'wait for toolbox' reached maximum tries (100) after waiting for 500 seconds

pass 7078301 2022-10-23 07:08:02 2022-10-25 11:30:00 2022-10-25 11:50:00 0:20:00 0:11:44 0:08:16 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_adoption} 1
fail 7078302 2022-10-23 07:08:03 2022-10-25 11:30:10 2022-10-25 11:46:39 0:16:29 0:07:54 0:08:35 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi049 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078303 2022-10-23 07:08:04 2022-10-25 11:30:41 2022-10-25 12:02:48 0:32:07 0:20:42 0:11:25 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Command failed on smithi062 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b9836fc4-545a-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)"\''

fail 7078304 2022-10-23 07:08:05 2022-10-25 11:31:21 2022-10-25 11:46:59 0:15:38 0:08:03 0:07:35 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078305 2022-10-23 07:08:06 2022-10-25 11:32:02 2022-10-25 23:46:08 12:14:06 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078306 2022-10-23 07:08:07 2022-10-25 11:32:32 2022-10-25 11:47:33 0:15:01 0:09:14 0:05:47 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi153 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078307 2022-10-23 07:08:08 2022-10-25 11:32:33 2022-10-25 11:47:25 0:14:52 0:08:08 0:06:44 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078308 2022-10-23 07:08:09 2022-10-25 11:32:33 2022-10-25 11:49:39 0:17:06 0:11:07 0:05:59 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi156 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5fbb8171010367eef3729cd64d88a414f76e7d7a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7078309 2022-10-23 07:08:11 2022-10-25 11:32:33 2022-10-25 11:51:44 0:19:11 0:12:51 0:06:20 smithi main rhel 8.4 orch/cephadm/thrash/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078310 2022-10-23 07:08:12 2022-10-25 11:32:34 2022-10-25 11:51:08 0:18:34 0:13:07 0:05:27 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi018 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078311 2022-10-23 07:08:13 2022-10-25 11:32:44 2022-10-25 11:47:42 0:14:58 0:09:17 0:05:41 smithi main rhel 8.4 orch/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078312 2022-10-23 07:08:14 2022-10-25 11:32:54 2022-10-25 11:48:35 0:15:41 0:08:51 0:06:50 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi189 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078313 2022-10-23 07:08:15 2022-10-25 11:33:05 2022-10-25 23:44:55 12:11:50 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078314 2022-10-23 07:08:16 2022-10-25 11:33:15 2022-10-25 11:50:23 0:17:08 0:06:48 0:10:20 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078315 2022-10-23 07:08:17 2022-10-25 11:33:26 2022-10-25 11:58:43 0:25:17 0:18:20 0:06:57 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

Command failed on smithi036 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cc10196c-545a-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078316 2022-10-23 07:08:18 2022-10-25 11:33:26 2022-10-25 11:49:25 0:15:59 0:09:19 0:06:40 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

pass 7078317 2022-10-23 07:08:19 2022-10-25 11:34:27 2022-10-25 11:51:29 0:17:02 0:10:20 0:06:42 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/on mon_election/classic task/test_cephadm_repos} 1
fail 7078318 2022-10-23 07:08:20 2022-10-25 11:34:37 2022-10-25 12:00:36 0:25:59 0:18:11 0:07:48 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 08b4593c-545b-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078319 2022-10-23 07:08:21 2022-10-25 11:35:27 2022-10-25 11:51:50 0:16:23 0:08:00 0:08:23 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078320 2022-10-23 07:08:23 2022-10-25 11:35:58 2022-10-25 11:54:35 0:18:37 0:12:43 0:05:54 smithi main rhel 8.4 orch/cephadm/thrash/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078321 2022-10-23 07:08:24 2022-10-25 11:35:58 2022-10-25 11:55:43 0:19:45 0:12:16 0:07:29 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078322 2022-10-23 07:08:25 2022-10-25 11:36:49 2022-10-25 11:52:23 0:15:34 0:08:35 0:06:59 smithi main rhel 8.4 orch/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi167 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078323 2022-10-23 07:08:26 2022-10-25 11:37:09 2022-10-25 11:52:16 0:15:07 0:07:52 0:07:15 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078324 2022-10-23 07:08:27 2022-10-25 11:37:09 2022-10-25 23:49:32 12:12:23 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078325 2022-10-23 07:08:28 2022-10-25 11:37:40 2022-10-25 11:52:45 0:15:05 0:08:59 0:06:06 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078326 2022-10-23 07:08:29 2022-10-25 11:37:40 2022-10-25 11:53:29 0:15:49 0:08:19 0:07:30 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi103 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078327 2022-10-23 07:08:30 2022-10-25 11:38:01 2022-10-25 11:56:55 0:18:54 0:11:00 0:07:54 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/off mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi196 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078328 2022-10-23 07:08:31 2022-10-25 11:40:12 2022-10-25 11:55:47 0:15:35 0:04:48 0:10:47 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

pass 7078329 2022-10-23 07:08:32 2022-10-25 11:40:23 2022-10-25 12:07:27 0:27:04 0:21:12 0:05:52 smithi main rhel 8.4 orch/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr agent/on orchestrator_cli} 2
fail 7078330 2022-10-23 07:08:33 2022-10-25 11:40:23 2022-10-25 11:54:26 0:14:03 0:07:17 0:06:46 smithi main rhel 8.4 orch/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi191 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078331 2022-10-23 07:08:34 2022-10-25 11:41:24 2022-10-25 11:59:54 0:18:30 0:06:56 0:11:34 smithi main ubuntu 20.04 orch/cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078332 2022-10-23 07:08:35 2022-10-25 11:42:25 2022-10-25 12:01:24 0:18:59 0:07:29 0:11:30 smithi main ubuntu 20.04 orch/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi079 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078333 2022-10-23 07:08:36 2022-10-25 11:43:46 2022-10-25 12:05:30 0:21:44 0:09:36 0:12:08 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.2} 3
Failure Reason:

Command failed on smithi112 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7078334 2022-10-23 07:08:37 2022-10-25 11:46:08 2022-10-25 12:01:42 0:15:34 0:07:58 0:07:36 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078335 2022-10-23 07:08:38 2022-10-25 11:46:29 2022-10-25 12:23:16 0:36:47 0:24:57 0:11:50 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Command failed on smithi049 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 293a8468-545d-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 2\'"\'"\'\''

fail 7078336 2022-10-23 07:08:40 2022-10-25 11:46:49 2022-10-25 12:04:01 0:17:12 0:06:48 0:10:24 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi145 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078337 2022-10-23 07:08:41 2022-10-25 11:46:50 2022-10-25 23:59:00 12:12:10 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078338 2022-10-23 07:08:42 2022-10-25 11:47:00 2022-10-25 12:02:36 0:15:36 0:07:42 0:07:54 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078339 2022-10-23 07:08:43 2022-10-25 11:47:21 2022-10-25 12:04:43 0:17:22 0:07:00 0:10:22 smithi main ubuntu 20.04 orch/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078340 2022-10-23 07:08:44 2022-10-25 11:47:31 2022-10-25 12:04:28 0:16:57 0:07:55 0:09:02 smithi main ubuntu 20.04 orch/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078341 2022-10-23 07:08:45 2022-10-25 11:47:31 2022-10-25 12:14:36 0:27:05 0:20:57 0:06:08 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi153 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9f7ce31a-545c-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078342 2022-10-23 07:08:46 2022-10-25 11:47:42 2022-10-25 12:02:35 0:14:53 0:07:40 0:07:13 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078343 2022-10-23 07:08:47 2022-10-25 11:47:52 2022-10-25 12:05:46 0:17:54 0:10:57 0:06:57 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi189 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078344 2022-10-23 07:08:48 2022-10-25 11:48:43 2022-10-25 12:08:58 0:20:15 0:11:19 0:08:56 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078345 2022-10-23 07:08:49 2022-10-25 11:49:33 2022-10-25 12:04:40 0:15:07 0:08:21 0:06:46 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078346 2022-10-23 07:08:50 2022-10-25 11:49:34 2022-10-25 12:05:53 0:16:19 0:08:26 0:07:53 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi157 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078347 2022-10-23 07:08:51 2022-10-25 11:50:04 2022-10-25 12:20:00 0:29:56 0:20:28 0:09:28 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5e67468a-545d-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078348 2022-10-23 07:08:52 2022-10-25 11:50:04 2022-10-25 12:07:14 0:17:10 0:07:03 0:10:07 smithi main ubuntu 20.04 orch/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078349 2022-10-23 07:08:53 2022-10-25 11:50:25 2022-10-25 12:07:22 0:16:57 0:09:43 0:07:14 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

pass 7078350 2022-10-23 07:08:54 2022-10-25 11:50:55 2022-10-25 12:10:00 0:19:05 0:10:57 0:08:08 smithi main ubuntu 20.04 orch/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} 1
dead 7078351 2022-10-23 07:08:55 2022-10-25 11:51:15 2022-10-26 00:03:13 12:11:58 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078352 2022-10-23 07:08:56 2022-10-25 11:51:36 2022-10-25 12:10:49 0:19:13 0:11:24 0:07:49 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078353 2022-10-23 07:08:57 2022-10-25 11:51:46 2022-10-25 12:09:47 0:18:01 0:11:13 0:06:48 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078354 2022-10-23 07:08:59 2022-10-25 11:51:57 2022-10-25 12:07:16 0:15:19 0:09:12 0:06:07 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078355 2022-10-23 07:09:00 2022-10-25 11:52:17 2022-10-25 12:07:50 0:15:33 0:05:51 0:09:42 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078356 2022-10-23 07:09:01 2022-10-25 11:52:17 2022-10-25 12:09:35 0:17:18 0:09:36 0:07:42 smithi main rhel 8.4 orch/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi167 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078357 2022-10-23 07:09:02 2022-10-25 11:52:28 2022-10-25 12:07:38 0:15:10 0:08:15 0:06:55 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078358 2022-10-23 07:09:03 2022-10-25 11:52:48 2022-10-25 12:09:53 0:17:05 0:11:16 0:05:49 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi197 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5fbb8171010367eef3729cd64d88a414f76e7d7a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7078359 2022-10-23 07:09:04 2022-10-25 11:52:49 2022-10-25 12:22:46 0:29:57 0:22:09 0:07:48 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi085 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b6c416c-545d-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7078360 2022-10-23 07:09:05 2022-10-25 11:53:39 2022-10-25 12:19:13 0:25:34 0:17:59 0:07:35 smithi main centos 8.stream orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/off orchestrator_cli} 2
fail 7078361 2022-10-23 07:09:06 2022-10-25 11:53:39 2022-10-25 12:09:40 0:16:01 0:07:48 0:08:13 smithi main centos 8.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078362 2022-10-23 07:09:08 2022-10-25 11:54:40 2022-10-25 12:10:55 0:16:15 0:07:18 0:08:57 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078363 2022-10-23 07:09:09 2022-10-25 11:55:51 2022-10-25 12:08:46 0:12:55 0:06:56 0:05:59 smithi main centos 8.stream orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078364 2022-10-23 07:09:10 2022-10-25 11:55:51 2022-10-25 12:15:58 0:20:07 0:11:47 0:08:20 smithi main rhel 8.4 orch/cephadm/thrash/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078365 2022-10-23 07:09:11 2022-10-25 11:57:02 2022-10-25 12:18:12 0:21:10 0:11:38 0:09:32 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi036 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078366 2022-10-23 07:09:12 2022-10-25 11:58:53 2022-10-25 12:30:15 0:31:22 0:21:30 0:09:52 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/master} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

dead 7078367 2022-10-23 07:09:13 2022-10-26 00:12:01 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078368 2022-10-23 07:09:14 2022-10-25 11:59:16 2022-10-25 12:23:07 0:23:51 0:15:59 0:07:52 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi052 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3ddfb5cc-545e-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)"\''

fail 7078369 2022-10-23 07:09:15 2022-10-25 12:00:08 2022-10-25 12:13:48 0:13:40 0:06:38 0:07:02 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078370 2022-10-23 07:09:16 2022-10-25 12:00:39 2022-10-25 12:14:38 0:13:59 0:07:25 0:06:34 smithi main rhel 8.4 orch/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi079 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078371 2022-10-23 07:09:17 2022-10-25 12:01:30 2022-10-25 12:14:58 0:13:28 0:06:36 0:06:52 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

pass 7078372 2022-10-23 07:09:18 2022-10-25 12:01:50 2022-10-25 12:17:27 0:15:37 0:07:49 0:07:48 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_cephadm_repos} 1
fail 7078373 2022-10-23 07:09:19 2022-10-25 12:02:31 2022-10-25 12:17:50 0:15:19 0:08:08 0:07:11 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078374 2022-10-23 07:09:21 2022-10-25 12:02:42 2022-10-25 12:19:26 0:16:44 0:10:54 0:05:50 smithi main rhel 8.4 orch/cephadm/thrash/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078375 2022-10-23 07:09:22 2022-10-25 12:02:42 2022-10-25 12:19:42 0:17:00 0:11:00 0:06:00 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078376 2022-10-23 07:09:23 2022-10-25 12:02:53 2022-10-26 00:14:54 12:12:01 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078377 2022-10-23 07:09:24 2022-10-25 12:04:04 2022-10-25 12:19:35 0:15:31 0:08:01 0:07:30 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078378 2022-10-23 07:09:25 2022-10-25 12:04:34 2022-10-25 12:19:54 0:15:20 0:05:43 0:09:37 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078379 2022-10-23 07:09:26 2022-10-25 12:04:45 2022-10-25 12:20:10 0:15:25 0:05:46 0:09:39 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078380 2022-10-23 07:09:27 2022-10-25 12:04:45 2022-10-25 12:20:29 0:15:44 0:08:16 0:07:28 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi132 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078381 2022-10-23 07:09:28 2022-10-25 12:05:36 2022-10-25 12:22:30 0:16:54 0:11:26 0:05:28 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/on mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi112 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078382 2022-10-23 07:09:29 2022-10-25 12:05:36 2022-10-25 12:36:06 0:30:30 0:19:59 0:10:31 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

Command failed on smithi157 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 899749fc-545f-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078383 2022-10-23 07:09:30 2022-10-25 12:05:56 2022-10-25 12:29:01 0:23:05 0:16:42 0:06:23 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi189 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1de1a27a-545f-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078384 2022-10-23 07:09:31 2022-10-25 12:05:57 2022-10-25 12:22:11 0:16:14 0:07:13 0:09:01 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078385 2022-10-23 07:09:32 2022-10-25 12:07:17 2022-10-25 12:24:12 0:16:55 0:07:55 0:09:00 smithi main ubuntu 20.04 orch/cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078386 2022-10-23 07:09:33 2022-10-25 12:07:18 2022-10-25 12:25:02 0:17:44 0:07:34 0:10:10 smithi main ubuntu 20.04 orch/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

dead 7078387 2022-10-23 07:09:34 2022-10-25 12:07:28 2022-10-26 00:19:10 12:11:42 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078388 2022-10-23 07:09:35 2022-10-25 12:07:29 2022-10-25 12:20:32 0:13:03 0:07:13 0:05:50 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078389 2022-10-23 07:09:36 2022-10-25 12:07:39 2022-10-25 12:22:51 0:15:12 0:07:26 0:07:46 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078390 2022-10-23 07:09:37 2022-10-25 12:07:59 2022-10-25 12:24:13 0:16:14 0:08:05 0:08:09 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078391 2022-10-23 07:09:39 2022-10-25 12:09:00 2022-10-25 12:25:54 0:16:54 0:11:01 0:05:53 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/off mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078392 2022-10-23 07:09:40 2022-10-25 12:09:00 2022-10-25 12:24:39 0:15:39 0:08:33 0:07:06 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi167 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

pass 7078393 2022-10-23 07:09:41 2022-10-25 12:09:41 2022-10-25 12:34:36 0:24:55 0:18:57 0:05:58 smithi main rhel 8.4 orch/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr agent/on orchestrator_cli} 2
fail 7078394 2022-10-23 07:09:42 2022-10-25 12:09:41 2022-10-25 12:24:09 0:14:28 0:08:26 0:06:02 smithi main rhel 8.4 orch/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078395 2022-10-23 07:09:43 2022-10-25 12:09:42 2022-10-25 12:27:43 0:18:01 0:10:08 0:07:53 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078396 2022-10-23 07:09:44 2022-10-25 12:09:52 2022-10-25 12:26:46 0:16:54 0:10:09 0:06:45 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi053 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078397 2022-10-23 07:09:45 2022-10-25 12:10:03 2022-10-25 12:26:05 0:16:02 0:07:46 0:08:16 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078398 2022-10-23 07:09:46 2022-10-25 12:10:53 2022-10-25 12:24:06 0:13:13 0:07:01 0:06:12 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078399 2022-10-23 07:09:47 2022-10-25 12:11:03 2022-10-25 12:32:06 0:21:03 0:07:36 0:13:27 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/1.7.2} 3
Failure Reason:

Command failed on smithi079 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7078400 2022-10-23 07:09:48 2022-10-25 12:14:44 2022-10-25 12:42:01 0:27:17 0:18:31 0:08:46 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi083 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7b1d18b0-5460-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)"\''

dead 7078401 2022-10-23 07:09:49 2022-10-25 12:14:45 2022-10-26 00:26:27 12:11:42 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7078402 2022-10-23 07:09:50 2022-10-25 12:15:05 2022-10-25 12:29:00 0:13:55 0:07:20 0:06:35 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078403 2022-10-23 07:09:51 2022-10-25 12:16:06 2022-10-25 12:33:04 0:16:58 0:05:15 0:11:43 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

pass 7078404 2022-10-23 07:09:52 2022-10-25 12:17:37 2022-10-25 12:37:10 0:19:33 0:12:51 0:06:42 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/on mon_election/classic task/test_adoption} 1
fail 7078405 2022-10-23 07:09:53 2022-10-25 12:17:58 2022-10-25 12:45:35 0:27:37 0:20:02 0:07:35 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi036 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d7f61fd2-5460-11ed-8438-001a4aab830c -e sha1=5219abe5bdb882abcc3a550aa02e563f2cd638bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7078406 2022-10-23 07:09:54 2022-10-25 12:18:18 2022-10-25 12:32:42 0:14:24 0:06:53 0:07:31 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi103 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078407 2022-10-23 07:09:55 2022-10-25 12:19:19 2022-10-25 12:36:13 0:16:54 0:09:45 0:07:09 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078408 2022-10-23 07:09:57 2022-10-25 12:19:29 2022-10-25 12:36:31 0:17:02 0:09:39 0:07:23 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi087 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078409 2022-10-23 07:09:58 2022-10-25 12:19:30 2022-10-25 12:34:28 0:14:58 0:08:22 0:06:36 smithi main rhel 8.4 orch/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'

fail 7078410 2022-10-23 07:09:59 2022-10-25 12:19:40 2022-10-25 12:32:39 0:12:59 0:06:55 0:06:04 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5219abe5bdb882abcc3a550aa02e563f2cd638bd pull'