Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6467144 2021-10-29 14:39:07 2021-10-29 14:40:22 2021-10-29 15:12:19 0:31:57 0:23:26 0:08:31 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467145 2021-10-29 14:39:07 2021-10-29 14:41:02 2021-10-29 15:16:44 0:35:42 0:25:11 0:10:31 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/flannel rook/1.7.0} 1
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6467146 2021-10-29 14:39:08 2021-10-29 14:41:02 2021-10-29 15:19:23 0:38:21 0:23:36 0:14:45 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} 2
Failure Reason:

Test failure: test_ganesha (unittest.loader._FailedTest)

dead 6467147 2021-10-29 14:39:09 2021-10-29 14:44:23 2021-10-29 16:05:29 1:21:06 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
dead 6467148 2021-10-29 14:39:10 2021-10-29 14:44:44 2021-10-29 16:05:14 1:20:30 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
dead 6467149 2021-10-29 14:39:10 2021-10-29 14:44:54 2021-10-29 16:04:49 1:19:55 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6467150 2021-10-29 14:39:11 2021-10-29 14:45:44 2021-10-29 16:04:39 1:18:55 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6467151 2021-10-29 14:39:12 2021-10-29 14:47:45 2021-10-29 15:16:51 0:29:06 0:19:42 0:09:24 smithi master centos 8.2 rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467152 2021-10-29 14:39:13 2021-10-29 14:47:45 2021-10-29 15:17:07 0:29:22 0:19:02 0:10:20 smithi master centos 8.2 rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-45272056-38c9-11ec-8c28-001a4aab830c@mon.b'

fail 6467153 2021-10-29 14:39:14 2021-10-29 14:47:56 2021-10-29 15:09:03 0:21:07 0:09:11 0:11:56 smithi master centos 8.3 rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi080 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7e0d6cfe-38c9-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6467154 2021-10-29 14:39:14 2021-10-29 14:49:46 2021-10-29 15:23:41 0:33:55 0:22:27 0:11:28 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-f0c450be-38c9-11ec-8c28-001a4aab830c@mon.b'

fail 6467155 2021-10-29 14:39:15 2021-10-29 14:50:37 2021-10-29 15:30:19 0:39:42 0:33:22 0:06:20 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi045 with status 5: 'sudo systemctl stop ceph-3a6e7360-38cb-11ec-8c28-001a4aab830c@mon.b'

fail 6467156 2021-10-29 14:39:16 2021-10-29 14:51:17 2021-10-29 15:22:54 0:31:37 0:24:02 0:07:35 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467157 2021-10-29 14:39:17 2021-10-29 14:51:27 2021-10-29 15:23:59 0:32:32 0:20:08 0:12:24 smithi master centos 8.3 rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467158 2021-10-29 14:39:18 2021-10-29 14:52:48 2021-10-29 15:29:35 0:36:47 0:23:36 0:13:11 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467159 2021-10-29 14:39:18 2021-10-29 14:55:28 2021-10-29 15:29:53 0:34:25 0:21:38 0:12:47 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-0765e944-38cb-11ec-8c28-001a4aab830c@mon.b'

fail 6467160 2021-10-29 14:39:19 2021-10-29 14:56:19 2021-10-29 15:29:57 0:33:38 0:20:18 0:13:20 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467161 2021-10-29 14:39:20 2021-10-29 14:59:00 2021-10-29 15:40:11 0:41:11 0:22:35 0:18:36 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-1237910a-38cc-11ec-8c28-001a4aab830c@mon.b'

fail 6467162 2021-10-29 14:39:21 2021-10-29 15:05:11 2021-10-29 15:36:51 0:31:40 0:24:01 0:07:39 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467163 2021-10-29 14:39:22 2021-10-29 15:05:51 2021-10-29 15:35:18 0:29:27 0:22:20 0:07:07 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-dbb07d0e-38cb-11ec-8c28-001a4aab830c@mon.b'

dead 6467164 2021-10-29 14:39:22 2021-10-29 15:06:02 2021-10-29 16:05:41 0:59:39 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6467165 2021-10-29 14:39:23 2021-10-29 15:06:12 2021-10-29 15:37:46 0:31:34 0:20:32 0:11:02 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467166 2021-10-29 14:39:24 2021-10-29 15:06:42 2021-10-29 15:40:26 0:33:44 0:23:28 0:10:16 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467167 2021-10-29 14:39:25 2021-10-29 15:09:13 2021-10-29 15:39:46 0:30:33 0:23:04 0:07:29 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6467168 2021-10-29 14:39:25 2021-10-29 15:10:33 2021-10-29 16:04:39 0:54:06 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6467169 2021-10-29 14:39:26 2021-10-29 15:11:34 2021-10-29 15:46:22 0:34:48 0:22:04 0:12:44 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi039 with status 5: 'sudo systemctl stop ceph-4f1cd520-38cd-11ec-8c28-001a4aab830c@mon.b'

fail 6467170 2021-10-29 14:39:27 2021-10-29 15:12:14 2021-10-29 15:50:42 0:38:28 0:26:25 0:12:03 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi168 with status 5: 'sudo systemctl stop ceph-35f2ea94-38cd-11ec-8c28-001a4aab830c@mon.b'

fail 6467171 2021-10-29 14:39:28 2021-10-29 15:12:24 2021-10-29 15:47:00 0:34:36 0:22:55 0:11:41 smithi master ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467172 2021-10-29 14:39:29 2021-10-29 15:12:35 2021-10-29 15:47:52 0:35:17 0:23:36 0:11:41 smithi master ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi106 with status 5: 'sudo systemctl stop ceph-bf8c1ca4-38cc-11ec-8c28-001a4aab830c@mon.b'

fail 6467173 2021-10-29 14:39:29 2021-10-29 15:13:45 2021-10-29 15:45:54 0:32:09 0:23:48 0:08:21 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467174 2021-10-29 14:39:30 2021-10-29 15:14:26 2021-10-29 15:33:57 0:19:31 0:10:14 0:09:17 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi133 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2cfff0da-38cd-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6467175 2021-10-29 14:39:31 2021-10-29 15:14:46 2021-10-29 15:56:46 0:42:00 0:28:31 0:13:29 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6467176 2021-10-29 14:39:32 2021-10-29 15:16:57 2021-10-29 15:50:46 0:33:49 0:23:22 0:10:27 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6467177 2021-10-29 14:39:33 2021-10-29 15:17:17 2021-10-29 16:05:02 0:47:45 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
dead 6467178 2021-10-29 14:39:34 2021-10-29 15:19:28 2021-10-29 16:05:52 0:46:24 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6467179 2021-10-29 14:39:34 2021-10-29 15:22:58 2021-10-29 15:52:52 0:29:54 0:19:58 0:09:56 smithi master centos 8.2 rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6467180 2021-10-29 14:39:35 2021-10-29 15:23:49 2021-10-29 15:41:44 0:17:55 0:07:23 0:10:32 smithi master ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi074 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5ad5661f3e361e8c573b395b18740c607fdfcced shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f15b9dee-38cd-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6467181 2021-10-29 14:39:36 2021-10-29 15:23:49 2021-10-29 15:57:13 0:33:24 0:22:17 0:11:07 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-9e947ed6-38ce-11ec-8c28-001a4aab830c@mon.b'

fail 6467182 2021-10-29 14:39:37 2021-10-29 15:24:09 2021-10-29 15:58:41 0:34:32 0:19:25 0:15:07 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6467183 2021-10-29 14:39:38 2021-10-29 15:29:40 2021-10-29 16:05:20 0:35:40 smithi master centos 8.3 rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 6467184 2021-10-29 14:39:38 2021-10-29 15:30:01 2021-10-29 16:01:35 0:31:34 0:19:44 0:11:50 smithi master centos 8.3 rados/cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-4e9e0b26-38cf-11ec-8c28-001a4aab830c@mon.b'

fail 6467185 2021-10-29 14:39:39 2021-10-29 15:30:01 2021-10-29 15:53:42 0:23:41 0:12:50 0:10:51 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5ad5661f3e361e8c573b395b18740c607fdfcced TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6467186 2021-10-29 14:39:40 2021-10-29 15:30:01 2021-10-29 16:01:21 0:31:20 0:20:55 0:10:25 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6467187 2021-10-29 14:39:41 2021-10-29 15:30:22 2021-10-29 16:04:34 0:34:12 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6467188 2021-10-29 14:39:41 2021-10-29 15:32:52 2021-10-29 16:04:42 0:31:50 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
dead 6467189 2021-10-29 14:39:42 2021-10-29 15:35:23 2021-10-29 16:05:57 0:30:34 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
dead 6467190 2021-10-29 14:39:43 2021-10-29 15:36:54 2021-10-29 16:04:50 0:27:56 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} 2
dead 6467191 2021-10-29 14:39:44 2021-10-29 15:37:54 2021-10-29 16:04:57 0:27:03 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
dead 6467192 2021-10-29 14:39:44 2021-10-29 15:39:55 2021-10-29 16:05:11 0:25:16 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6467193 2021-10-29 14:39:45 2021-10-29 15:40:15 2021-10-29 16:05:39 0:25:24 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
dead 6467194 2021-10-29 14:39:46 2021-10-29 15:40:35 2021-10-29 16:04:48 0:24:13 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
dead 6467195 2021-10-29 14:39:47 2021-10-29 15:41:46 2021-10-29 16:05:30 0:23:44 smithi master rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
dead 6467196 2021-10-29 14:39:47 2021-10-29 15:44:16 2021-10-29 16:05:44 0:21:28 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
dead 6467197 2021-10-29 14:39:48 2021-10-29 15:44:27 2021-10-29 16:04:27 0:20:00 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} 1
dead 6467198 2021-10-29 14:39:49 2021-10-29 15:45:47 2021-10-29 16:05:11 0:19:24 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
dead 6467199 2021-10-29 14:39:50 2021-10-29 15:45:57 2021-10-29 16:05:01 0:19:04 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
dead 6467200 2021-10-29 14:39:51 2021-10-29 15:46:28 2021-10-29 16:05:24 0:18:56 smithi master ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
dead 6467201 2021-10-29 14:39:51 2021-10-29 15:47:08 2021-10-29 16:05:01 0:17:53 smithi master centos 8.2 rados/cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
dead 6467202 2021-10-29 14:39:52 2021-10-29 15:47:59 2021-10-29 16:05:47 0:17:48 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
dead 6467203 2021-10-29 14:39:53 2021-10-29 15:50:49 2021-10-29 16:04:33 0:13:44 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
dead 6467204 2021-10-29 14:39:54 2021-10-29 15:50:50 2021-10-29 16:04:02 0:13:12 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
dead 6467205 2021-10-29 14:39:55 2021-10-29 15:53:00 2021-10-29 16:05:53 0:12:53 smithi master centos 8.3 rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6467206 2021-10-29 14:39:55 2021-10-29 15:56:51 2021-10-29 16:04:23 0:07:32 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6467207 2021-10-29 14:39:56 2021-10-29 15:56:52 2021-10-29 16:04:24 0:07:32 smithi master centos 8.2 rados/cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
dead 6467208 2021-10-29 14:39:57 2021-10-29 15:57:22 2021-10-29 16:05:44 0:08:22 smithi master centos 8.2 rados/cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} 2
fail 6467209 2021-10-29 14:39:58 2021-10-29 15:58:43 2021-10-29 16:11:29 0:12:46 smithi master rhel 8.4 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/basic 3-final}
Failure Reason:

machine smithi013.front.sepia.ceph.com is locked by scheduled_matan@teuthology, not scheduled_nojha@teuthology

fail 6467210 2021-10-29 14:39:58 2021-10-29 16:01:23 2021-10-29 16:12:46 0:11:23 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root}
Failure Reason:

machine smithi042.front.sepia.ceph.com is locked by scheduled_matan@teuthology, not scheduled_nojha@teuthology

fail 6467211 2021-10-29 14:39:59 2021-10-29 16:01:43 2021-10-29 16:13:44 0:12:01 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
Failure Reason:

machine smithi040.front.sepia.ceph.com is locked by scheduled_matan@teuthology, not scheduled_nojha@teuthology

fail 6467212 2021-10-29 14:40:00 2021-10-29 16:03:04 2021-10-29 16:33:14 0:30:10 0:23:26 0:06:44 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds