Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7501336 2023-12-26 16:06:31 2023-12-26 16:07:20 2023-12-26 22:34:41 6:27:21 5:32:35 0:54:46 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
pass 7501337 2023-12-26 16:06:32 2023-12-26 16:07:21 2023-12-26 16:51:36 0:44:15 0:29:45 0:14:30 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 7501338 2023-12-26 16:06:33 2023-12-26 16:07:21 2023-12-26 16:49:32 0:42:11 0:27:31 0:14:40 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 2
Failure Reason:

"2023-12-26T16:43:54.769285+0000 mon.a (mon.0) 482 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7501339 2023-12-26 16:06:33 2023-12-26 16:07:21 2023-12-26 16:53:26 0:46:05 0:32:18 0:13:47 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7501340 2023-12-26 16:06:34 2023-12-26 16:07:21 2023-12-26 16:59:56 0:52:35 0:37:46 0:14:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501341 2023-12-26 16:06:35 2023-12-26 16:07:22 2023-12-26 16:59:34 0:52:12 0:38:27 0:13:45 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7501342 2023-12-26 16:06:36 2023-12-26 16:07:22 2023-12-26 16:54:26 0:47:04 0:36:02 0:11:02 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 7501343 2023-12-26 16:06:36 2023-12-26 16:07:23 2023-12-26 16:39:20 0:31:57 0:21:32 0:10:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7501344 2023-12-26 16:06:37 2023-12-26 16:07:23 2023-12-26 17:06:46 0:59:23 0:43:51 0:15:32 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7501345 2023-12-26 16:06:38 2023-12-26 16:07:23 2023-12-26 16:58:23 0:51:00 0:37:14 0:13:46 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501346 2023-12-26 16:06:38 2023-12-26 16:07:24 2023-12-26 16:29:40 0:22:16 0:12:55 0:09:21 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 7501347 2023-12-26 16:06:39 2023-12-26 16:07:24 2023-12-26 16:40:14 0:32:50 0:17:48 0:15:02 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
fail 7501348 2023-12-26 16:06:40 2023-12-26 16:07:24 2023-12-26 16:43:13 0:35:49 0:21:36 0:14:13 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi160 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7501349 2023-12-26 16:06:41 2023-12-26 16:07:24 2023-12-26 16:59:11 0:51:47 0:37:24 0:14:23 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7501350 2023-12-26 16:06:41 2023-12-26 16:07:25 2023-12-26 17:00:20 0:52:55 0:38:57 0:13:58 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501351 2023-12-26 16:06:42 2023-12-26 16:07:25 2023-12-26 16:56:27 0:49:02 0:38:24 0:10:38 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7501352 2023-12-26 16:06:43 2023-12-26 16:07:25 2023-12-26 16:55:16 0:47:51 0:34:28 0:13:23 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7501353 2023-12-26 16:06:44 2023-12-26 16:07:26 2023-12-26 16:43:59 0:36:33 0:24:41 0:11:52 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-12-26T16:40:15.004264+0000 mon.a (mon.0) 472 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7501354 2023-12-26 16:06:44 2023-12-26 16:07:26 2023-12-26 16:57:17 0:49:51 0:36:25 0:13:26 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7501355 2023-12-26 16:06:45 2023-12-26 16:07:26 2023-12-26 16:55:41 0:48:15 0:33:37 0:14:38 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi151 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7501356 2023-12-26 16:06:46 2023-12-26 16:07:27 2023-12-26 16:57:39 0:50:12 0:36:40 0:13:32 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501357 2023-12-26 16:06:47 2023-12-26 16:07:27 2023-12-26 16:49:46 0:42:19 0:19:11 0:23:08 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7501358 2023-12-26 16:06:48 2023-12-26 16:07:28 2023-12-26 16:41:30 0:34:02 0:21:14 0:12:48 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f358c38c-a40b-11ee-95a5-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

fail 7501359 2023-12-26 16:06:48 2023-12-26 16:07:28 2023-12-26 16:50:28 0:43:00 0:26:40 0:16:20 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi076 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7501360 2023-12-26 16:06:49 2023-12-26 16:07:28 2023-12-26 16:57:13 0:49:45 0:38:11 0:11:34 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7501361 2023-12-26 16:06:50 2023-12-26 16:07:29 2023-12-26 16:45:00 0:37:31 0:25:17 0:12:14 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 44e2c6bc-a40c-11ee-95a5-87774f69a715 -e sha1=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7501362 2023-12-26 16:06:51 2023-12-26 16:07:29 2023-12-26 16:52:15 0:44:46 0:29:51 0:14:55 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
pass 7501363 2023-12-26 16:06:51 2023-12-26 16:07:29 2023-12-26 16:50:54 0:43:25 0:29:33 0:13:52 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7501364 2023-12-26 16:06:52 2023-12-26 16:07:30 2023-12-26 16:50:33 0:43:03 0:31:09 0:11:54 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7501365 2023-12-26 16:06:53 2023-12-26 16:07:30 2023-12-26 17:00:53 0:53:23 0:38:27 0:14:56 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501366 2023-12-26 16:06:54 2023-12-26 16:07:30 2023-12-26 16:59:50 0:52:20 0:36:44 0:15:36 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7501367 2023-12-26 16:06:54 2023-12-26 16:07:31 2023-12-26 16:42:46 0:35:15 0:22:32 0:12:43 smithi main ubuntu 18.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/basic 3-final} 1
pass 7501368 2023-12-26 16:06:55 2023-12-26 16:07:31 2023-12-26 17:04:16 0:56:45 0:40:56 0:15:49 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7501369 2023-12-26 16:06:56 2023-12-26 16:07:31 2023-12-26 16:36:55 0:29:24 0:19:27 0:09:57 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7501370 2023-12-26 16:06:57 2023-12-26 16:07:31 2023-12-26 16:40:07 0:32:36 0:21:17 0:11:19 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7501371 2023-12-26 16:06:57 2023-12-26 16:07:32 2023-12-26 16:58:36 0:51:04 0:39:42 0:11:22 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501372 2023-12-26 16:06:58 2023-12-26 16:07:32 2023-12-26 16:36:33 0:29:01 0:16:18 0:12:43 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
fail 7501373 2023-12-26 16:06:59 2023-12-26 16:07:32 2023-12-26 16:36:12 0:28:40 0:15:22 0:13:18 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7501374 2023-12-26 16:07:00 2023-12-26 16:07:33 2023-12-26 17:00:04 0:52:31 0:37:10 0:15:21 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501375 2023-12-26 16:07:00 2023-12-26 16:07:33 2023-12-26 16:57:18 0:49:45 0:35:17 0:14:28 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7501376 2023-12-26 16:07:01 2023-12-26 16:07:33 2023-12-26 17:32:29 1:24:56 1:09:58 0:14:58 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
pass 7501377 2023-12-26 16:07:02 2023-12-26 16:07:34 2023-12-26 17:05:52 0:58:18 0:43:49 0:14:29 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits} 2
pass 7501378 2023-12-26 16:07:03 2023-12-26 16:07:34 2023-12-26 17:00:11 0:52:37 0:37:48 0:14:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501379 2023-12-26 16:07:03 2023-12-26 16:07:34 2023-12-26 16:31:56 0:24:22 0:13:43 0:10:39 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7501380 2023-12-26 16:07:04 2023-12-26 16:07:35 2023-12-26 16:55:13 0:47:38 0:33:18 0:14:20 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7501381 2023-12-26 16:07:05 2023-12-26 16:07:35 2023-12-26 16:59:03 0:51:28 0:36:43 0:14:45 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7501382 2023-12-26 16:07:06 2023-12-26 16:07:36 2023-12-26 17:16:04 1:08:28 0:55:40 0:12:48 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 7501383 2023-12-26 16:07:07 2023-12-26 16:07:36 2023-12-26 16:39:58 0:32:22 0:20:57 0:11:25 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
fail 7501384 2023-12-26 16:07:07 2023-12-26 16:07:36 2023-12-26 16:49:31 0:41:55 0:28:15 0:13:40 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-12-26T16:45:22.156152+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7501385 2023-12-26 16:07:08 2023-12-26 16:07:37 2023-12-26 16:46:25 0:38:48 0:25:55 0:12:53 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7501386 2023-12-26 16:07:09 2023-12-26 16:07:37 2023-12-26 17:28:48 1:21:11 1:08:50 0:12:21 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
pass 7501387 2023-12-26 16:07:09 2023-12-26 16:07:37 2023-12-26 16:47:54 0:40:17 0:26:24 0:13:53 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
fail 7501388 2023-12-26 16:07:10 2023-12-26 16:07:38 2023-12-26 17:14:41 1:07:03 0:51:39 0:15:24 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7501389 2023-12-26 16:07:11 2023-12-26 16:07:38 2023-12-26 17:01:50 0:54:12 0:38:48 0:15:24 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7501390 2023-12-26 16:07:12 2023-12-26 16:07:38 2023-12-26 16:47:40 0:40:02 0:24:52 0:15:10 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e690af74-a40c-11ee-95a5-87774f69a715 -e sha1=b14a0a106b8cffae0144ff9fb83d9ce1af1a4bd2 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7501391 2023-12-26 16:07:13 2023-12-26 16:07:39 2023-12-26 16:44:06 0:36:27 0:23:16 0:13:11 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2