Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 7354823 2023-07-27 22:39:02 2023-07-27 22:43:22 2023-07-27 23:22:26 0:39:04 0:28:39 0:10:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7354824 2023-07-27 22:39:03 2023-07-27 22:43:23 2023-07-27 23:28:51 0:45:28 0:33:45 0:11:43 smithi main rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7354825 2023-07-27 22:39:04 2023-07-27 22:44:33 2023-07-27 23:25:29 0:40:56 0:27:39 0:13:17 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect} 2
pass 7354826 2023-07-27 22:39:05 2023-07-27 22:48:04 2023-07-27 23:43:54 0:55:50 0:44:59 0:10:51 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
pass 7354827 2023-07-27 22:39:06 2023-07-27 22:48:05 2023-07-27 23:12:13 0:24:08 0:14:14 0:09:54 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7354828 2023-07-27 22:39:06 2023-07-27 22:48:55 2023-07-27 23:26:21 0:37:26 0:29:08 0:08:18 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
Failure Reason:

"2023-07-27T23:22:52.578494+0000 mon.a (mon.0) 522 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7354829 2023-07-27 22:39:07 2023-07-27 22:49:06 2023-07-27 23:30:12 0:41:06 0:20:06 0:21:00 smithi main centos 8.stream rados/rest/{mgr-restful supported-random-distro$/{centos_8}} 1
fail 7354830 2023-07-27 22:39:08 2023-07-27 22:53:07 2023-07-27 23:36:40 0:43:33 0:31:36 0:11:57 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2dfc9d0e12cf19dcb739a527dae47ac94e230bb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7354831 2023-07-27 22:39:09 2023-07-27 22:53:27 2023-07-27 23:47:42 0:54:15 0:35:06 0:19:09 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} 2
pass 7354832 2023-07-27 22:39:10 2023-07-27 22:53:38 2023-07-27 23:23:54 0:30:16 0:20:01 0:10:15 smithi main rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7354833 2023-07-27 22:39:11 2023-07-27 22:54:48 2023-07-27 23:44:16 0:49:28 0:38:37 0:10:51 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 7354834 2023-07-27 22:39:11 2023-07-27 22:55:39 2023-07-27 23:29:28 0:33:49 0:19:57 0:13:52 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
pass 7354835 2023-07-27 22:39:12 2023-07-27 22:59:19 2023-07-27 23:45:24 0:46:05 0:33:13 0:12:52 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-localized} 2
pass 7354836 2023-07-27 22:39:13 2023-07-27 23:00:20 2023-07-27 23:26:12 0:25:52 0:16:42 0:09:10 smithi main rhel 8.4 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7354837 2023-07-27 22:39:14 2023-07-27 23:00:20 2023-07-27 23:21:49 0:21:29 0:13:09 0:08:20 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi116 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2dfc9d0e12cf19dcb739a527dae47ac94e230bb4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b00c3050-2cd3-11ee-9b35-001a4aab830c -- ceph osd stat -f json'

pass 7354838 2023-07-27 22:39:15 2023-07-27 23:00:31 2023-07-27 23:36:42 0:36:11 0:20:53 0:15:18 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 2
pass 7354839 2023-07-27 22:39:16 2023-07-27 23:03:32 2023-07-27 23:42:49 0:39:17 0:31:04 0:08:13 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/small-objects} 2
pass 7354840 2023-07-27 22:39:16 2023-07-27 23:03:42 2023-07-27 23:52:59 0:49:17 0:35:13 0:14:04 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7354841 2023-07-27 22:39:17 2023-07-27 23:07:03 2023-07-28 00:44:36 1:37:33 1:27:15 0:10:18 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
pass 7354842 2023-07-27 22:39:18 2023-07-27 23:07:14 2023-07-27 23:50:13 0:42:59 0:30:39 0:12:20 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7354843 2023-07-27 22:39:19 2023-07-27 23:07:44 2023-07-27 23:45:29 0:37:45 0:25:38 0:12:07 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7354844 2023-07-27 22:39:20 2023-07-27 23:08:15 2023-07-27 23:59:25 0:51:10 0:37:48 0:13:22 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 2
pass 7354845 2023-07-27 22:39:21 2023-07-27 23:08:15 2023-07-27 23:42:21 0:34:06 0:21:51 0:12:15 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7354846 2023-07-27 22:39:21 2023-07-27 23:10:36 2023-07-27 23:33:44 0:23:08 0:13:15 0:09:53 smithi main rhel 8.4 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 2
pass 7354847 2023-07-27 22:39:22 2023-07-27 23:12:16 2023-07-27 23:37:30 0:25:14 0:15:55 0:09:19 smithi main rhel 8.4 rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7354848 2023-07-27 22:39:23 2023-07-27 23:12:17 2023-07-27 23:49:41 0:37:24 0:22:27 0:14:57 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-07-27T23:45:22.452671+0000 mon.a (mon.0) 473 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7354849 2023-07-27 22:39:24 2023-07-27 23:16:38 2023-07-28 00:05:19 0:48:41 0:29:53 0:18:48 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7354850 2023-07-27 22:39:25 2023-07-27 23:21:59 2023-07-27 23:47:31 0:25:32 0:16:57 0:08:35 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7354851 2023-07-27 22:39:25 2023-07-27 23:22:29 2023-07-27 23:57:28 0:34:59 0:23:45 0:11:14 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 7354852 2023-07-27 22:39:26 2023-07-27 23:23:10 2023-07-27 23:56:00 0:32:50 0:21:44 0:11:06 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7354853 2023-07-27 22:39:27 2023-07-27 23:24:01 2023-07-27 23:45:58 0:21:57 0:12:29 0:09:28 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
pass 7354854 2023-07-27 22:39:28 2023-07-27 23:24:01 2023-07-27 23:50:55 0:26:54 0:16:03 0:10:51 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7354855 2023-07-27 22:39:28 2023-07-27 23:25:31 2023-07-28 00:22:32 0:57:01 0:47:19 0:09:42 smithi main rhel 8.4 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 2
pass 7354856 2023-07-27 22:39:29 2023-07-27 23:26:32 2023-07-28 00:30:43 1:04:11 0:51:02 0:13:09 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
fail 7354857 2023-07-27 22:39:30 2023-07-27 23:28:53 2023-07-27 23:47:12 0:18:19 0:08:00 0:10:19 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=2dfc9d0e12cf19dcb739a527dae47ac94e230bb4

pass 7354858 2023-07-27 22:39:32 2023-07-27 23:29:33 2023-07-28 00:52:52 1:23:19 1:11:16 0:12:03 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-agent-big} 2
fail 7354859 2023-07-27 22:39:33 2023-07-27 23:30:14 2023-07-27 23:59:45 0:29:31 0:16:21 0:13:10 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2dfc9d0e12cf19dcb739a527dae47ac94e230bb4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e0188a4c-2cd7-11ee-9b35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

pass 7354860 2023-07-27 22:39:33 2023-07-27 23:31:44 2023-07-28 00:03:17 0:31:33 0:19:30 0:12:03 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 7354861 2023-07-27 22:39:34 2023-07-27 23:33:35 2023-07-27 23:58:53 0:25:18 0:17:32 0:07:46 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
pass 7354862 2023-07-27 22:39:35 2023-07-27 23:33:45 2023-07-28 00:10:30 0:36:45 0:19:37 0:17:08 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
fail 7354863 2023-07-27 22:39:36 2023-07-27 23:36:46 2023-07-28 00:16:36 0:39:50 0:29:52 0:09:58 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7354864 2023-07-27 22:39:37 2023-07-27 23:36:47 2023-07-28 00:04:31 0:27:44 0:18:35 0:09:09 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7354865 2023-07-27 22:39:38 2023-07-27 23:36:47 2023-07-28 00:29:53 0:53:06 0:36:19 0:16:47 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests} 2
fail 7354866 2023-07-27 22:39:39 2023-07-27 23:37:27 2023-07-28 00:56:18 1:18:51 1:05:19 0:13:32 smithi main centos 8.stream rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} 2
Failure Reason:

"2023-07-28T00:08:46.911343+0000 mon.a (mon.0) 853 : cluster [WRN] Health check failed: 2 client(s) laggy due to laggy OSDs (MDS_CLIENTS_LAGGY)" in cluster log

pass 7354867 2023-07-27 22:39:39 2023-07-27 23:41:08 2023-07-28 00:27:17 0:46:09 0:37:35 0:08:34 smithi main rhel 8.4 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7354868 2023-07-27 22:39:40 2023-07-27 23:41:09 2023-07-28 03:05:24 3:24:15 3:02:02 0:22:13 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
pass 7354869 2023-07-27 22:39:41 2023-07-27 23:42:50 2023-07-28 00:13:28 0:30:38 0:20:16 0:10:22 smithi main rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7354870 2023-07-27 22:39:42 2023-07-27 23:45:30 2023-07-28 00:32:10 0:46:40 0:31:40 0:15:00 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi154 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2dfc9d0e12cf19dcb739a527dae47ac94e230bb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7354871 2023-07-27 22:39:43 2023-07-27 23:45:31 2023-07-28 00:10:53 0:25:22 0:16:18 0:09:04 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/readwrite} 2
pass 7354872 2023-07-27 22:39:44 2023-07-27 23:47:21 2023-07-28 00:13:42 0:26:21 0:15:45 0:10:36 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
pass 7354873 2023-07-27 22:39:44 2023-07-27 23:47:22 2023-07-28 00:19:37 0:32:15 0:24:08 0:08:07 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} 2
pass 7354874 2023-07-27 22:39:45 2023-07-27 23:47:32 2023-07-28 00:14:25 0:26:53 0:16:11 0:10:42 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7354875 2023-07-27 22:39:46 2023-07-27 23:47:53 2023-07-28 00:15:37 0:27:44 0:16:24 0:11:20 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
fail 7354876 2023-07-27 22:39:47 2023-07-27 23:49:43 2023-07-28 00:32:46 0:43:03 0:30:48 0:12:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7354877 2023-07-27 22:39:48 2023-07-27 23:50:24 2023-07-28 00:34:15 0:43:51 0:22:15 0:21:36 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7354878 2023-07-27 22:39:48 2023-07-27 23:51:04 2023-07-28 00:29:23 0:38:19 0:27:06 0:11:13 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} 2
pass 7354879 2023-07-27 22:39:49 2023-07-27 23:53:05 2023-07-28 00:35:25 0:42:20 0:33:28 0:08:52 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7354880 2023-07-27 22:39:50 2023-07-27 23:53:05 2023-07-28 00:31:24 0:38:19 0:24:13 0:14:06 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 7354881 2023-07-27 22:39:51 2023-07-27 23:56:06 2023-07-28 00:19:12 0:23:06 0:08:32 0:14:34 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=2dfc9d0e12cf19dcb739a527dae47ac94e230bb4

pass 7354882 2023-07-27 22:39:52 2023-07-27 23:57:37 2023-07-28 00:29:53 0:32:16 0:23:11 0:09:05 smithi main rhel 8.4 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
fail 7354883 2023-07-27 22:39:52 2023-07-27 23:57:37 2023-07-28 00:43:51 0:46:14 0:25:49 0:20:25 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-07-28T00:38:32.149557+0000 mon.a (mon.0) 525 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7354884 2023-07-27 22:39:53 2023-07-27 23:58:58 2023-07-28 00:34:15 0:35:17 0:21:54 0:13:23 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/crash} 2
pass 7354885 2023-07-27 22:39:54 2023-07-27 23:59:18 2023-07-28 01:11:59 1:12:41 0:51:49 0:20:52 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
pass 7354886 2023-07-27 22:39:55 2023-07-27 23:59:49 2023-07-28 00:34:41 0:34:52 0:20:04 0:14:48 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 7354887 2023-07-27 22:39:56 2023-07-28 00:05:49 2023-07-28 00:43:30 0:37:41 0:27:16 0:10:25 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed} 2
pass 7354888 2023-07-27 22:39:57 2023-07-28 00:06:24 2023-07-28 00:54:54 0:48:30 0:35:01 0:13:29 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
pass 7354889 2023-07-27 22:39:58 2023-07-28 00:08:54 2023-07-28 02:05:16 1:56:22 1:46:17 0:10:05 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
fail 7354890 2023-07-27 22:39:58 2023-07-28 00:08:54 2023-07-28 06:05:40 5:56:46 5:00:58 0:55:48 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed on smithi160 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

pass 7354891 2023-07-27 22:39:59 2023-07-28 00:10:50 2023-07-28 00:38:31 0:27:41 0:17:48 0:09:53 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
pass 7354892 2023-07-27 22:40:00 2023-07-28 00:10:50 2023-07-28 00:57:43 0:46:53 0:35:17 0:11:36 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7354893 2023-07-27 22:40:01 2023-07-28 00:11:16 2023-07-28 01:06:39 0:55:23 0:43:29 0:11:54 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/mon_recovery validater/valgrind} 2
pass 7354894 2023-07-27 22:40:02 2023-07-28 00:13:57 2023-07-28 00:42:36 0:28:39 0:18:31 0:10:08 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 7354895 2023-07-27 22:40:03 2023-07-28 00:13:57 2023-07-28 00:43:30 0:29:33 0:14:19 0:15:14 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} tasks/failover} 2
fail 7354896 2023-07-27 22:40:03 2023-07-28 00:14:33 2023-07-28 00:48:23 0:33:50 0:21:12 0:12:38 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
Failure Reason:

"2023-07-28T00:43:18.622170+0000 mon.a (mon.0) 472 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7354897 2023-07-27 22:40:04 2023-07-28 00:16:06 2023-07-28 00:55:32 0:39:26 0:26:16 0:13:10 smithi main rhel 8.4 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/sync workloads/pool-create-delete} 2
fail 7354898 2023-07-27 22:40:05 2023-07-28 00:17:17 2023-07-28 00:58:24 0:41:07 0:26:24 0:14:43 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)