Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6465272 2021-10-28 20:09:26 2021-10-28 20:10:13 2021-10-28 20:56:15 0:46:02 0:30:55 0:15:07 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi154 with status 5: 'sudo systemctl stop ceph-f3aab92c-382d-11ec-8c28-001a4aab830c@mon.b'

dead 6465273 2021-10-28 20:09:27 2021-10-28 20:10:13 2021-10-29 08:22:05 12:11:52 smithi master centos 8.3 rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

dead 6465274 2021-10-28 20:09:28 2021-10-28 20:10:13 2021-10-29 08:21:52 12:11:39 smithi master centos 8.2 rados:cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6465275 2021-10-28 20:09:29 2021-10-28 20:10:13 2021-10-28 20:36:53 0:26:40 0:16:10 0:10:30 smithi master centos 8.3 rados:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_container_tools_3.0} 2-node-mgr agent/off orchestrator_cli} 2
fail 6465276 2021-10-28 20:09:30 2021-10-28 20:10:14 2021-10-28 20:50:24 0:40:10 0:25:20 0:14:50 smithi master centos 8.2 rados:cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465277 2021-10-28 20:09:30 2021-10-28 20:10:14 2021-10-28 20:45:55 0:35:41 0:24:24 0:11:17 smithi master centos 8.2 rados:cephadm/smoke/{0-nvme-loop agent/on distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi186 with status 5: 'sudo systemctl stop ceph-6be7bf62-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465278 2021-10-28 20:09:31 2021-10-28 20:10:14 2021-10-28 20:30:52 0:20:38 0:13:47 0:06:51 smithi master rhel 8.4 rados:cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi194 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:434b1409634993c5a89bbdcd2c0af0f073acdf91 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1d16f8f8-382d-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6465279 2021-10-28 20:09:32 2021-10-28 20:10:15 2021-10-28 20:51:12 0:40:57 0:27:07 0:13:50 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-2148962e-382e-11ec-8c28-001a4aab830c@mon.b'

dead 6465280 2021-10-28 20:09:33 2021-10-28 20:10:15 2021-10-29 08:24:57 12:14:42 smithi master centos 8.3 rados:cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6465281 2021-10-28 20:09:34 2021-10-28 20:10:15 2021-10-28 20:50:42 0:40:27 0:26:45 0:13:42 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-2051586e-382e-11ec-8c28-001a4aab830c@mon.b'

pass 6465282 2021-10-28 20:09:35 2021-10-28 20:10:16 2021-10-28 20:39:16 0:29:00 0:14:03 0:14:57 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/on mon_election/classic task/test_adoption} 1
fail 6465283 2021-10-28 20:09:36 2021-10-28 20:10:16 2021-10-28 20:52:37 0:42:21 0:29:15 0:13:06 smithi master ubuntu 20.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465284 2021-10-28 20:09:36 2021-10-28 20:10:16 2021-10-28 20:50:14 0:39:58 0:26:54 0:13:04 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi148 with status 5: 'sudo systemctl stop ceph-4c11e9fa-382e-11ec-8c28-001a4aab830c@mon.b'

fail 6465285 2021-10-28 20:09:37 2021-10-28 20:10:16 2021-10-28 20:53:16 0:43:00 0:34:45 0:08:15 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi093 with status 5: 'sudo systemctl stop ceph-d8833e98-382e-11ec-8c28-001a4aab830c@mon.b'

fail 6465286 2021-10-28 20:09:38 2021-10-28 20:10:17 2021-10-28 20:48:33 0:38:16 0:25:05 0:13:11 smithi master centos 8.2 rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465287 2021-10-28 20:09:39 2021-10-28 20:10:17 2021-10-28 20:49:53 0:39:36 0:27:02 0:12:34 smithi master centos 8.3 rados:cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465288 2021-10-28 20:09:40 2021-10-28 20:10:17 2021-10-28 20:50:02 0:39:45 0:25:29 0:14:16 smithi master centos 8.3 rados:cephadm/smoke/{0-nvme-loop agent/off distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-cbbec098-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465289 2021-10-28 20:09:41 2021-10-28 20:10:18 2021-10-28 20:53:33 0:43:15 0:34:53 0:08:22 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-172a4d12-382f-11ec-8c28-001a4aab830c@mon.b'

fail 6465290 2021-10-28 20:09:42 2021-10-28 20:10:18 2021-10-28 20:37:34 0:27:16 0:18:34 0:08:42 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/off mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi102 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434b1409634993c5a89bbdcd2c0af0f073acdf91 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6465291 2021-10-28 20:09:42 2021-10-28 20:10:18 2021-10-28 20:56:38 0:46:20 0:33:28 0:12:52 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-bb2d1f9a-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465292 2021-10-28 20:09:43 2021-10-28 20:10:19 2021-10-28 20:49:32 0:39:13 0:25:30 0:13:43 smithi master centos 8.3 rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465293 2021-10-28 20:09:44 2021-10-28 20:10:19 2021-10-28 20:50:13 0:39:54 0:26:55 0:12:59 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi097 with status 5: 'sudo systemctl stop ceph-0cd5eb42-382e-11ec-8c28-001a4aab830c@mon.b'

dead 6465294 2021-10-28 20:09:45 2021-10-28 20:10:19 2021-10-29 08:22:42 12:12:23 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

hit max job timeout

fail 6465295 2021-10-28 20:09:46 2021-10-28 20:10:20 2021-10-28 20:49:53 0:39:33 0:27:36 0:11:57 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi134 with status 5: 'sudo systemctl stop ceph-f7b3ebd8-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465296 2021-10-28 20:09:47 2021-10-28 20:10:20 2021-10-28 20:47:06 0:36:46 0:28:35 0:08:11 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465297 2021-10-28 20:09:48 2021-10-28 20:10:20 2021-10-28 20:48:39 0:38:19 0:28:16 0:10:03 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop agent/on distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-ae807544-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465298 2021-10-28 20:09:49 2021-10-28 20:10:21 2021-10-28 20:51:43 0:41:22 0:26:49 0:14:33 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi172 with status 5: 'sudo systemctl stop ceph-47b750b6-382e-11ec-8c28-001a4aab830c@mon.b'

pass 6465299 2021-10-28 20:09:49 2021-10-28 20:10:21 2021-10-28 20:30:43 0:20:22 0:07:34 0:12:48 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/on mon_election/classic task/test_cephadm_repos} 1
fail 6465300 2021-10-28 20:09:50 2021-10-28 20:10:21 2021-10-28 20:54:11 0:43:50 0:34:24 0:09:26 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-06becf48-382f-11ec-8c28-001a4aab830c@mon.b'

fail 6465301 2021-10-28 20:09:51 2021-10-28 20:10:21 2021-10-28 20:47:43 0:37:22 0:28:25 0:08:57 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6465302 2021-10-28 20:09:52 2021-10-28 20:10:22 2021-10-29 08:23:24 12:13:02 smithi master centos 8.3 rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6465303 2021-10-28 20:09:53 2021-10-28 20:10:22 2021-10-28 20:55:14 0:44:52 0:34:56 0:09:56 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi167 with status 5: 'sudo systemctl stop ceph-2996faea-382f-11ec-8c28-001a4aab830c@mon.b'

fail 6465304 2021-10-28 20:09:54 2021-10-28 20:10:22 2021-10-28 20:49:37 0:39:15 0:29:31 0:09:44 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465305 2021-10-28 20:09:55 2021-10-28 20:10:23 2021-10-28 20:45:54 0:35:31 0:27:41 0:07:50 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop agent/off distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi123 with status 5: 'sudo systemctl stop ceph-662550d0-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465306 2021-10-28 20:09:56 2021-10-28 20:10:23 2021-10-28 20:55:45 0:45:22 0:29:44 0:15:38 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi077 with status 5: 'sudo systemctl stop ceph-f6aef5b6-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465307 2021-10-28 20:09:57 2021-10-28 20:10:23 2021-10-28 20:38:59 0:28:36 0:15:36 0:13:00 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/off mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:434b1409634993c5a89bbdcd2c0af0f073acdf91 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fb707b7e-382d-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6465308 2021-10-28 20:09:58 2021-10-28 20:10:24 2021-10-28 20:49:41 0:39:17 0:30:39 0:08:38 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6465309 2021-10-28 20:09:58 2021-10-28 20:10:24 2021-10-29 08:20:54 12:10:30 smithi master centos 8.2 rados:cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6465310 2021-10-28 20:09:59 2021-10-28 20:10:24 2021-10-28 20:46:38 0:36:14 0:26:26 0:09:48 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-6c7ce9ca-382d-11ec-8c28-001a4aab830c@mon.b'

dead 6465311 2021-10-28 20:10:00 2021-10-28 20:10:25 2021-10-29 08:24:31 12:14:06 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6465312 2021-10-28 20:10:01 2021-10-28 20:10:25 2021-10-28 20:51:16 0:40:51 0:26:40 0:14:11 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi143 with status 5: 'sudo systemctl stop ceph-432fbeb6-382e-11ec-8c28-001a4aab830c@mon.b'

fail 6465313 2021-10-28 20:10:02 2021-10-28 20:10:25 2021-10-28 20:52:15 0:41:50 0:27:30 0:14:20 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi041 with status 5: 'sudo systemctl stop ceph-56b495d8-382e-11ec-8c28-001a4aab830c@mon.b'

fail 6465314 2021-10-28 20:10:03 2021-10-28 20:10:26 2021-10-28 20:55:45 0:45:19 0:30:36 0:14:43 smithi master ubuntu 20.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465315 2021-10-28 20:10:04 2021-10-28 20:10:26 2021-10-28 20:55:32 0:45:06 0:30:10 0:14:56 smithi master ubuntu 20.04 rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465316 2021-10-28 20:10:05 2021-10-28 20:10:27 2021-10-28 20:54:36 0:44:09 0:30:09 0:14:00 smithi master ubuntu 20.04 rados:cephadm/smoke/{0-nvme-loop agent/on distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-9451b674-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465317 2021-10-28 20:10:06 2021-10-28 20:10:27 2021-10-28 20:54:21 0:43:54 0:33:44 0:10:10 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi183 with status 5: 'sudo systemctl stop ceph-093486be-382f-11ec-8c28-001a4aab830c@mon.b'

fail 6465318 2021-10-28 20:10:07 2021-10-28 20:10:27 2021-10-28 20:29:56 0:19:29 0:10:56 0:08:33 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi173 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:434b1409634993c5a89bbdcd2c0af0f073acdf91 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6caf13aa-382d-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6465319 2021-10-28 20:10:07 2021-10-28 20:10:27 2021-10-28 20:54:59 0:44:32 0:34:49 0:09:43 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi179 with status 5: 'sudo systemctl stop ceph-24bfbfca-382f-11ec-8c28-001a4aab830c@mon.b'

fail 6465320 2021-10-28 20:10:08 2021-10-28 20:10:28 2021-10-28 20:51:01 0:40:33 0:26:32 0:14:01 smithi master centos 8.2 rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465321 2021-10-28 20:10:09 2021-10-28 20:10:28 2021-10-28 20:55:46 0:45:18 0:32:35 0:12:43 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi145 with status 5: 'sudo systemctl stop ceph-bb16aa30-382d-11ec-8c28-001a4aab830c@mon.b'

dead 6465322 2021-10-28 20:10:10 2021-10-28 20:10:28 2021-10-29 08:22:21 12:11:53 smithi master centos 8.3 rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6465323 2021-10-28 20:10:11 2021-10-28 20:10:29 2021-10-28 20:38:33 0:28:04 0:16:24 0:11:40 smithi master centos 8.3 rados:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_container_tools_3.0} 2-node-mgr agent/on orchestrator_cli} 2
fail 6465324 2021-10-28 20:10:12 2021-10-28 20:10:29 2021-10-28 20:50:16 0:39:47 0:26:21 0:13:26 smithi master centos 8.2 rados:cephadm/osds/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465325 2021-10-28 20:10:13 2021-10-28 20:10:29 2021-10-28 21:41:21 1:30:52 1:16:12 0:14:40 smithi master centos 8.2 rados:cephadm/smoke/{0-nvme-loop agent/off distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi059 with status 5: 'sudo systemctl stop ceph-e18e8dcc-382d-11ec-8c28-001a4aab830c@mon.b'

fail 6465326 2021-10-28 20:10:14 2021-10-28 20:10:30 2021-10-28 20:36:01 0:25:31 0:13:09 0:12:22 smithi master centos 8.3 rados:cephadm/smoke-singlehost/{0-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:434b1409634993c5a89bbdcd2c0af0f073acdf91 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 977b5ba2-382d-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6465327 2021-10-28 20:10:15 2021-10-28 20:10:30 2021-10-28 20:51:16 0:40:46 0:27:02 0:13:44 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-488779e4-382e-11ec-8c28-001a4aab830c@mon.b'

dead 6465328 2021-10-28 20:10:16 2021-10-28 20:10:30 2021-10-29 08:23:22 12:12:52 smithi master centos 8.3 rados:cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

hit max job timeout

fail 6465329 2021-10-28 20:10:17 2021-10-28 20:10:31 2021-10-28 20:48:42 0:38:11 0:27:45 0:10:26 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi159 with status 5: 'sudo systemctl stop ceph-c3631d9a-382d-11ec-8c28-001a4aab830c@mon.b'

pass 6465330 2021-10-28 20:10:18 2021-10-28 20:10:31 2021-10-28 20:36:32 0:26:01 0:15:10 0:10:51 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/off mon_election/connectivity task/test_adoption} 1
fail 6465331 2021-10-28 20:10:19 2021-10-28 20:50:29 1582 smithi master centos 8.3 rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465332 2021-10-28 20:10:20 2021-10-28 20:10:31 2021-10-28 21:42:20 1:31:49 1:17:09 0:14:40 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi150 with status 5: 'sudo systemctl stop ceph-54dc9f8a-382e-11ec-8c28-001a4aab830c@mon.b'

fail 6465333 2021-10-28 20:10:21 2021-10-28 20:10:32 2021-10-28 20:55:19 0:44:47 0:34:02 0:10:45 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-2803c492-382f-11ec-8c28-001a4aab830c@mon.b'

fail 6465334 2021-10-28 20:10:22 2021-10-28 20:10:32 2021-10-28 20:49:27 0:38:55 0:26:08 0:12:47 smithi master centos 8.3 rados:cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465335 2021-10-28 20:10:23 2021-10-28 20:10:32 2021-10-28 21:00:13 0:49:41 0:19:05 0:30:36 smithi master centos 8.3 rados:cephadm/smoke/{0-nvme-loop agent/on distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi173 with status 5: 'sudo systemctl stop ceph-16d33eae-3830-11ec-8c28-001a4aab830c@mon.b'

fail 6465336 2021-10-28 20:10:24 2021-10-28 20:37:02 2021-10-28 21:57:00 1:19:58 1:12:28 0:07:30 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465337 2021-10-28 20:10:25 2021-10-28 20:37:42 2021-10-28 21:18:08 0:40:26 0:33:21 0:07:05 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi191 with status 5: 'sudo systemctl stop ceph-914af832-3832-11ec-8c28-001a4aab830c@mon.b'

fail 6465338 2021-10-28 20:10:26 2021-10-28 20:38:43 2021-10-28 21:01:50 0:23:07 0:14:22 0:08:45 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/on mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=434b1409634993c5a89bbdcd2c0af0f073acdf91 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6465339 2021-10-28 20:10:27 2021-10-28 21:22:17 1497 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi186 with status 5: 'sudo systemctl stop ceph-4b65eb38-3832-11ec-8c28-001a4aab830c@mon.b'

dead 6465340 2021-10-28 20:10:28 2021-10-28 20:46:04 2021-10-29 08:56:02 12:09:58 smithi master centos 8.2 rados:cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6465341 2021-10-28 20:10:29 2021-10-28 20:46:04 2021-10-28 21:19:44 0:33:40 0:22:45 0:10:55 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-99c190b6-3832-11ec-8c28-001a4aab830c@mon.b'

dead 6465342 2021-10-28 20:10:30 2021-10-28 20:46:45 2021-10-29 08:58:58 12:12:13 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6465343 2021-10-28 20:10:31 2021-10-28 21:19:01 1290 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi196 with status 5: 'sudo systemctl stop ceph-b7427d9e-3832-11ec-8c28-001a4aab830c@mon.b'

fail 6465344 2021-10-28 20:10:32 2021-10-28 20:47:45 2021-10-28 21:19:50 0:32:05 0:23:58 0:08:07 smithi master rhel 8.4 rados:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465345 2021-10-28 20:10:33 2021-10-28 20:48:36 2021-10-28 21:20:16 0:31:40 0:24:15 0:07:25 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465346 2021-10-28 20:10:34 2021-10-28 20:48:46 2021-10-28 21:18:21 0:29:35 0:23:15 0:06:20 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop agent/off distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-9dd2d624-3832-11ec-8c28-001a4aab830c@mon.b'

fail 6465347 2021-10-28 20:10:35 2021-10-28 20:48:47 2021-10-28 21:22:53 0:34:06 0:22:12 0:11:54 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-1c8afa5a-3833-11ec-8c28-001a4aab830c@mon.b'

pass 6465348 2021-10-28 20:10:35 2021-10-28 20:49:37 2021-10-28 21:05:12 0:15:35 0:07:07 0:08:28 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/off mon_election/connectivity task/test_cephadm_repos} 1
fail 6465349 2021-10-28 20:10:36 2021-10-28 20:49:37 2021-10-28 21:29:21 0:39:44 0:33:09 0:06:35 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi104 with status 5: 'sudo systemctl stop ceph-367e4d58-3834-11ec-8c28-001a4aab830c@mon.b'

fail 6465350 2021-10-28 20:10:37 2021-10-28 20:49:38 2021-10-28 21:21:34 0:31:56 0:21:42 0:10:14 smithi master ubuntu 20.04 rados:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6465351 2021-10-28 20:10:38 2021-10-28 20:49:48 2021-10-29 09:00:04 12:10:16 smithi master centos 8.3 rados:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6465352 2021-10-28 20:10:39 2021-10-28 20:49:48 2021-10-28 21:31:04 0:41:16 0:33:55 0:07:21 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-32b6c3da-3834-11ec-8c28-001a4aab830c@mon.b'

fail 6465353 2021-10-28 20:10:40 2021-10-28 20:49:58 2021-10-28 21:21:04 0:31:06 0:24:24 0:06:42 smithi master rhel 8.4 rados:cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465354 2021-10-28 20:10:41 2021-10-28 20:49:59 2021-10-28 21:21:28 0:31:29 0:23:30 0:07:59 smithi master rhel 8.4 rados:cephadm/smoke/{0-nvme-loop agent/on distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-ce62d4f6-3832-11ec-8c28-001a4aab830c@mon.b'

fail 6465355 2021-10-28 20:10:42 2021-10-28 20:50:09 2021-10-28 21:26:23 0:36:14 0:24:53 0:11:21 smithi master ubuntu 20.04 rados:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi084 with status 5: 'sudo systemctl stop ceph-d9a1f2c0-3832-11ec-8c28-001a4aab830c@mon.b'

fail 6465356 2021-10-28 20:10:43 2021-10-28 20:50:19 2021-10-28 21:09:31 0:19:12 0:10:47 0:08:25 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/on mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi148 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:434b1409634993c5a89bbdcd2c0af0f073acdf91 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ed0787b2-3832-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6465357 2021-10-28 20:10:44 2021-10-28 20:50:20 2021-10-28 21:19:40 0:29:20 0:19:32 0:09:48 smithi master centos 8.2 rados:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465358 2021-10-28 20:10:45 2021-10-28 20:50:20 2021-10-28 21:22:02 0:31:42 0:21:33 0:10:09 smithi master centos 8.2 rados:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi204 with status 5: 'sudo systemctl stop ceph-1f63c4a0-3833-11ec-8c28-001a4aab830c@mon.b'

dead 6465359 2021-10-28 20:10:45 2021-10-28 20:50:30 2021-10-29 09:02:31 12:12:01 smithi master ubuntu 20.04 rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

hit max job timeout

fail 6465360 2021-10-28 20:10:46 2021-10-28 20:50:31 2021-10-28 21:21:56 0:31:25 0:22:01 0:09:24 smithi master centos 8.2 rados:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-201394ca-3833-11ec-8c28-001a4aab830c@mon.b'

fail 6465361 2021-10-28 20:10:47 2021-10-28 20:50:51 2021-10-28 21:24:10 0:33:19 0:22:48 0:10:31 smithi master centos 8.3 rados:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi098 with status 5: 'sudo systemctl stop ceph-511c043a-3833-11ec-8c28-001a4aab830c@mon.b'

fail 6465362 2021-10-28 20:10:48 2021-10-28 20:51:11 2021-10-28 21:23:27 0:32:16 0:20:58 0:11:18 smithi master centos 8.3 rados:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465363 2021-10-28 20:10:49 2021-10-28 20:51:22 2021-10-28 22:17:26 1:26:04 1:14:23 0:11:41 smithi master ubuntu 20.04 rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6465364 2021-10-28 20:10:50 2021-10-28 20:51:22 2021-10-28 21:27:52 0:36:30 0:23:57 0:12:33 smithi master ubuntu 20.04 rados:cephadm/smoke/{0-nvme-loop agent/off distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-d2c786ea-3832-11ec-8c28-001a4aab830c@mon.b'

fail 6465365 2021-10-28 20:10:51 2021-10-28 20:51:22 2021-10-28 21:31:07 0:39:45 0:32:49 0:06:56 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi172 with status 5: 'sudo systemctl stop ceph-5bcb2018-3834-11ec-8c28-001a4aab830c@mon.b'

fail 6465366 2021-10-28 20:10:52 2021-10-28 20:51:53 2021-10-28 21:11:07 0:19:14 0:10:41 0:08:33 smithi master centos 8.2 rados:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi074 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:434b1409634993c5a89bbdcd2c0af0f073acdf91 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 28f9d6bc-3833-11ec-8c28-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6465367 2021-10-28 20:10:53 2021-10-28 20:51:53 2021-10-28 21:33:22 0:41:29 0:34:51 0:06:38 smithi master rhel 8.4 rados:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi041 with status 5: 'sudo systemctl stop ceph-8fb44a4e-3834-11ec-8c28-001a4aab830c@mon.b'