Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 7474263 2023-12-01 16:55:53 2023-12-01 20:59:10 2023-12-02 09:11:20 12:12:10 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

pass 7474264 2023-12-01 16:55:54 2023-12-01 20:59:10 2023-12-01 21:38:18 0:39:08 0:30:13 0:08:55 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474265 2023-12-01 16:55:55 2023-12-01 20:59:10 2023-12-01 21:41:38 0:42:28 0:32:01 0:10:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7474266 2023-12-01 16:55:56 2023-12-01 21:00:31 2023-12-01 21:30:41 0:30:10 0:19:14 0:10:56 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
Failure Reason:

"2023-12-01T21:27:47.898429+0000 mon.a (mon.0) 470 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7474267 2023-12-01 16:55:57 2023-12-01 21:01:11 2023-12-01 21:42:59 0:41:48 0:31:28 0:10:20 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474268 2023-12-01 16:55:58 2023-12-01 21:01:22 2023-12-01 21:41:49 0:40:27 0:28:54 0:11:33 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7474269 2023-12-01 16:55:58 2023-12-01 21:02:42 2023-12-01 21:30:12 0:27:30 0:16:33 0:10:57 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7474270 2023-12-01 16:55:59 2023-12-01 21:02:43 2023-12-01 21:40:14 0:37:31 0:26:52 0:10:39 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7474271 2023-12-01 16:56:00 2023-12-01 21:03:23 2023-12-01 21:26:16 0:22:53 0:13:43 0:09:10 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7474272 2023-12-01 16:56:01 2023-12-01 21:03:23 2023-12-01 21:38:00 0:34:37 0:18:31 0:16:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7474273 2023-12-01 16:56:02 2023-12-01 21:08:54 2023-12-01 21:50:59 0:42:05 0:29:55 0:12:10 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7474274 2023-12-01 16:56:03 2023-12-01 21:09:35 2023-12-01 21:29:44 0:20:09 0:08:58 0:11:11 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7474275 2023-12-01 16:56:03 2023-12-01 21:09:35 2023-12-01 21:37:17 0:27:42 0:16:51 0:10:51 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7474276 2023-12-01 16:56:04 2023-12-01 21:11:26 2023-12-01 21:50:24 0:38:58 0:29:37 0:09:21 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474277 2023-12-01 16:56:05 2023-12-01 21:11:26 2023-12-01 21:44:04 0:32:38 0:21:45 0:10:53 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7474278 2023-12-01 16:56:06 2023-12-01 21:12:37 2023-12-01 21:53:10 0:40:33 0:28:50 0:11:43 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7474279 2023-12-01 16:56:07 2023-12-01 21:13:47 2023-12-01 21:37:04 0:23:17 0:14:19 0:08:58 smithi main rhel 8.4 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi162 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7474280 2023-12-01 16:56:08 2023-12-01 21:13:48 2023-12-01 21:47:16 0:33:28 0:20:18 0:13:10 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-12-01T21:42:58.353990+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7474281 2023-12-01 16:56:09 2023-12-01 21:15:58 2023-12-01 21:56:50 0:40:52 0:28:28 0:12:24 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7474282 2023-12-01 16:56:09 2023-12-01 21:17:39 2023-12-01 21:38:24 0:20:45 0:07:55 0:12:50 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9

pass 7474283 2023-12-01 16:56:10 2023-12-01 21:17:39 2023-12-01 22:01:39 0:44:00 0:30:17 0:13:43 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7474284 2023-12-01 16:56:11 2023-12-01 21:19:10 2023-12-01 21:47:55 0:28:45 0:17:17 0:11:28 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi029 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 78772b64-9091-11ee-95a2-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
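
For readability, the script embedded in the quoted "bash -c" above, with the shell escaping removed (a reconstruction from the failure text for readability, not the teuthology source file; comments added here), is:

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # look up the device id, host, and device path backing osd.1
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # remove osd.1, wait for the removal to finish, zap the device, re-add it
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    # wait for the re-added osd.1 to come back up
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done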

pass 7474285 2023-12-01 16:56:12 2023-12-01 21:19:40 2023-12-01 22:01:33 0:41:53 0:31:58 0:09:55 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474286 2023-12-01 16:56:13 2023-12-01 21:20:41 2023-12-01 22:07:45 0:47:04 0:35:56 0:11:08 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
fail 7474287 2023-12-01 16:56:14 2023-12-01 21:21:21 2023-12-01 21:52:48 0:31:27 0:19:15 0:12:12 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 481de77c-9092-11ee-95a2-87774f69a715 -e sha1=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
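
De-escaped, the check that failed above amounts to the following (a readability rendering of the quoted command):

    ceph versions | jq -e '.overall | length == 1'

jq -e exits non-zero while ".overall" still lists more than one Ceph version, i.e. while not all daemons report the same version after the upgrade.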

pass 7474288 2023-12-01 16:56:15 2023-12-01 21:23:42 2023-12-01 21:51:17 0:27:35 0:15:58 0:11:37 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7474289 2023-12-01 16:56:15 2023-12-01 21:24:02 2023-12-01 21:45:21 0:21:19 0:12:34 0:08:45 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi154 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5e5680b2-9092-11ee-95a2-87774f69a715 -- ceph mon dump -f json'

pass 7474290 2023-12-01 16:56:16 2023-12-01 21:24:03 2023-12-01 22:05:23 0:41:20 0:31:04 0:10:16 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474291 2023-12-01 16:56:17 2023-12-01 21:24:13 2023-12-01 21:51:25 0:27:12 0:19:35 0:07:37 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7474292 2023-12-01 16:56:18 2023-12-01 21:24:24 2023-12-01 21:56:27 0:32:03 0:20:20 0:11:43 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7474293 2023-12-01 16:56:19 2023-12-01 21:25:04 2023-12-01 22:09:43 0:44:39 0:31:46 0:12:53 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474294 2023-12-01 16:56:20 2023-12-01 21:26:25 2023-12-01 22:09:05 0:42:40 0:30:34 0:12:06 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7474295 2023-12-01 16:56:20 2023-12-01 21:28:05 2023-12-01 22:14:31 0:46:26 0:30:55 0:15:31 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7474296 2023-12-01 16:56:21 2023-12-01 21:29:36 2023-12-01 22:14:14 0:44:38 0:32:46 0:11:52 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7474297 2023-12-01 16:56:22 2023-12-01 21:30:06 2023-12-01 21:53:09 0:23:03 0:13:15 0:09:48 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7474298 2023-12-01 16:56:23 2023-12-01 21:30:06 2023-12-01 21:59:41 0:29:35 0:20:58 0:08:37 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7474299 2023-12-01 16:56:24 2023-12-01 21:30:17 2023-12-01 21:57:39 0:27:22 0:17:59 0:09:23 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7474300 2023-12-01 16:56:25 2023-12-01 21:30:47 2023-12-01 22:10:24 0:39:37 0:28:46 0:10:51 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7474301 2023-12-01 16:56:25 2023-12-01 21:30:58 2023-12-01 21:49:00 0:18:02 0:08:43 0:09:19 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7474302 2023-12-01 16:56:26 2023-12-01 21:30:58 2023-12-01 22:29:12 0:58:14 0:44:01 0:14:13 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
fail 7474303 2023-12-01 16:56:27 2023-12-01 21:35:39 2023-12-01 22:22:57 0:47:18 0:34:43 0:12:35 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

timeout expired in wait_until_healthy

pass 7474304 2023-12-01 16:56:28 2023-12-01 21:37:20 2023-12-01 22:09:26 0:32:06 0:21:29 0:10:37 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7474305 2023-12-01 16:56:29 2023-12-01 21:38:00 2023-12-01 22:15:29 0:37:29 0:26:36 0:10:53 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7474306 2023-12-01 16:56:30 2023-12-01 21:38:20 2023-12-01 22:08:15 0:29:55 0:20:12 0:09:43 smithi main rhel 8.4 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

pass 7474307 2023-12-01 16:56:30 2023-12-01 21:38:31 2023-12-01 22:29:20 0:50:49 0:38:12 0:12:37 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7474308 2023-12-01 16:56:31 2023-12-01 21:40:23 2023-12-02 01:06:17 3:25:54 3:16:18 0:09:36 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} 1
fail 7474309 2023-12-01 16:56:32 2023-12-01 21:41:04 2023-12-01 22:16:44 0:35:40 0:24:36 0:11:04 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi070 with status 5: 'sudo systemctl stop ceph-72972d76-9095-11ee-95a2-87774f69a715@mon.smithi070'

pass 7474310 2023-12-01 16:56:33 2023-12-01 21:41:44 2023-12-01 22:29:07 0:47:23 0:35:14 0:12:09 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
fail 7474311 2023-12-01 16:56:34 2023-12-01 21:43:05 2023-12-01 22:28:29 0:45:24 0:37:20 0:08:04 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7474312 2023-12-01 16:56:35 2023-12-01 21:43:25 2023-12-01 22:01:43 0:18:18 0:07:56 0:10:22 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=0dc074ce0b0b3b4f393f56dd32ef36884ebf45a9

fail 7474313 2023-12-01 16:56:35 2023-12-01 21:44:06 2023-12-01 22:18:44 0:34:38 0:23:10 0:11:28 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi154 with status 5: 'sudo systemctl stop ceph-d5e22b9c-9095-11ee-95a2-87774f69a715@mon.smithi154'

fail 7474314 2023-12-01 16:56:36 2023-12-01 21:45:26 2023-12-01 22:18:33 0:33:07 0:21:30 0:11:37 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-12-01T22:15:50.943674+0000 mon.a (mon.0) 471 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7474315 2023-12-01 16:56:37 2023-12-01 21:47:17 2023-12-01 22:23:15 0:35:58 0:24:24 0:11:34 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi073 with status 5: 'sudo systemctl stop ceph-5558ab6c-9096-11ee-95a2-87774f69a715@mon.smithi073'

fail 7474316 2023-12-01 16:56:38 2023-12-01 21:47:57 2023-12-01 22:36:10 0:48:13 0:38:01 0:10:12 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

timeout expired in wait_until_healthy