Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7276796 2023-05-17 19:48:43 2023-05-17 19:49:35 2023-05-17 20:17:59 0:28:24 0:18:33 0:09:51 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 7276797 2023-05-17 19:48:44 2023-05-17 19:49:35 2023-05-17 20:07:18 0:17:43 0:06:57 0:10:46 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi132 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

dead 7276798 2023-05-17 19:48:45 2023-05-17 19:49:35 2023-05-18 08:01:08 12:11:33 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

hit max job timeout

dead 7276799 2023-05-17 19:48:46 2023-05-17 19:49:36 2023-05-18 08:00:33 12:10:57 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

fail 7276800 2023-05-17 19:48:47 2023-05-17 19:49:36 2023-05-17 20:15:26 0:25:50 0:15:22 0:10:28 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi019 with status 95: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:3c36284dac0d5cb692d3962260f6ebccd7a3cc3e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d45d9088-f4ee-11ed-9b02-001a4aab830c -- ceph rgw realm bootstrap -i -'

fail 7276801 2023-05-17 19:48:47 2023-05-17 19:49:37 2023-05-17 20:24:46 0:35:09 0:26:27 0:08:42 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c36284dac0d5cb692d3962260f6ebccd7a3cc3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7276802 2023-05-17 19:48:48 2023-05-17 19:49:37 2023-05-17 20:09:00 0:19:23 0:07:15 0:12:08 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi050 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7276803 2023-05-17 19:48:49 2023-05-17 19:49:37 2023-05-17 20:12:47 0:23:10 0:11:27 0:11:43 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7276804 2023-05-17 19:48:50 2023-05-17 19:49:38 2023-05-17 22:45:35 2:55:57 2:42:23 0:13:34 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
pass 7276805 2023-05-17 19:48:50 2023-05-17 19:49:38 2023-05-17 20:19:31 0:29:53 0:09:00 0:20:53 smithi main ubuntu 20.04 rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_20.04}} 1
pass 7276806 2023-05-17 19:48:51 2023-05-17 20:05:11 2023-05-17 20:27:18 0:22:07 0:11:46 0:10:21 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7276807 2023-05-17 19:48:52 2023-05-17 20:05:22 2023-05-17 20:36:56 0:31:34 0:24:10 0:07:24 smithi main rhel 8.6 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{rhel_8}} 2
pass 7276808 2023-05-17 19:48:53 2023-05-17 20:05:42 2023-05-17 20:46:19 0:40:37 0:33:53 0:06:44 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_nfs} 1
pass 7276809 2023-05-17 19:48:54 2023-05-17 20:05:42 2023-05-17 20:45:42 0:40:00 0:29:12 0:10:48 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_api_tests} 2
pass 7276810 2023-05-17 19:48:54 2023-05-17 20:06:13 2023-05-17 20:45:10 0:38:57 0:26:55 0:12:02 smithi main ubuntu 20.04 rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_20.04}} 2
pass 7276811 2023-05-17 19:48:55 2023-05-17 20:07:33 2023-05-17 20:32:26 0:24:53 0:14:16 0:10:37 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_20.04} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} 2
pass 7276812 2023-05-17 19:48:56 2023-05-17 20:08:44 2023-05-17 20:38:01 0:29:17 0:22:54 0:06:23 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/per_module_finisher_stats} 2
pass 7276813 2023-05-17 19:48:57 2023-05-17 20:09:05 2023-05-17 20:42:32 0:33:27 0:23:57 0:09:30 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7276814 2023-05-17 19:48:58 2023-05-17 20:09:05 2023-05-17 20:29:50 0:20:45 0:10:23 0:10:22 smithi main ubuntu 22.04 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
fail 7276815 2023-05-17 19:48:58 2023-05-17 20:10:05 2023-05-17 20:47:46 0:37:41 0:26:58 0:10:43 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi179 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_cls_rbd --gtest_filter=-TestClsRbd.get_features:TestClsRbd.parents:TestClsRbd.mirror'"

pass 7276816 2023-05-17 19:48:59 2023-05-17 20:10:36 2023-05-17 20:38:33 0:27:57 0:15:44 0:12:13 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7276817 2023-05-17 19:49:00 2023-05-17 20:13:47 2023-05-17 20:37:23 0:23:36 0:14:41 0:08:55 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
fail 7276818 2023-05-17 19:49:01 2023-05-17 20:13:57 2023-05-17 20:42:55 0:28:58 0:21:54 0:07:04 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

pass 7276819 2023-05-17 19:49:01 2023-05-17 20:13:58 2023-05-17 20:34:44 0:20:46 0:12:02 0:08:44 smithi main centos 8.stream rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
pass 7276820 2023-05-17 19:49:02 2023-05-17 20:13:58 2023-05-17 20:47:02 0:33:04 0:25:19 0:07:45 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7276821 2023-05-17 19:49:03 2023-05-17 20:15:29 2023-05-17 21:23:39 1:08:10 0:56:51 0:11:19 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_osdmap_prune} 2
pass 7276822 2023-05-17 19:49:04 2023-05-17 20:18:09 2023-05-17 20:53:05 0:34:56 0:23:38 0:11:18 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} 2
pass 7276823 2023-05-17 19:49:05 2023-05-17 20:18:10 2023-05-17 20:55:23 0:37:13 0:27:48 0:09:25 smithi main rhel 8.6 rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
fail 7276824 2023-05-17 19:49:05 2023-05-17 20:19:40 2023-05-17 20:35:50 0:16:10 0:06:12 0:09:58 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi124 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

dead 7276825 2023-05-17 19:49:06 2023-05-17 20:19:51 2023-05-18 08:29:39 12:09:48 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

hit max job timeout

dead 7276826 2023-05-17 19:49:07 2023-05-17 20:19:51 2023-05-18 08:32:16 12:12:25 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

fail 7276827 2023-05-17 19:49:08 2023-05-17 20:23:22 2023-05-17 20:50:54 0:27:32 0:16:46 0:10:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi047 with status 95: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:3c36284dac0d5cb692d3962260f6ebccd7a3cc3e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8267591c-f4f3-11ed-9b02-001a4aab830c -- ceph rgw realm bootstrap -i -'

pass 7276828 2023-05-17 19:49:08 2023-05-17 20:23:43 2023-05-17 21:06:59 0:43:16 0:34:32 0:08:44 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
fail 7276829 2023-05-17 19:49:09 2023-05-17 20:23:53 2023-05-17 20:45:18 0:21:25 0:14:07 0:07:18 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi081 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c36284dac0d5cb692d3962260f6ebccd7a3cc3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7276830 2023-05-17 19:49:10 2023-05-17 20:24:13 2023-05-17 20:47:25 0:23:12 0:13:17 0:09:55 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

"1684356197.8184178 mon.a (mon.0) 196 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7276831 2023-05-17 19:49:11 2023-05-17 20:24:14 2023-05-17 22:50:07 2:25:53 2:10:58 0:14:55 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
pass 7276832 2023-05-17 19:49:11 2023-05-17 20:24:54 2023-05-17 20:51:40 0:26:46 0:18:08 0:08:38 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/prometheus} 2
pass 7276833 2023-05-17 19:49:12 2023-05-17 20:25:35 2023-05-17 21:13:13 0:47:38 0:41:25 0:06:13 smithi main rhel 8.6 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 2
pass 7276834 2023-05-17 19:49:13 2023-05-17 20:26:15 2023-05-17 20:51:34 0:25:19 0:18:55 0:06:24 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7276835 2023-05-17 19:49:14 2023-05-17 20:26:26 2023-05-17 23:04:05 2:37:39 2:11:33 0:26:06 smithi main centos 8.stream rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
pass 7276836 2023-05-17 19:49:15 2023-05-17 20:26:56 2023-05-17 20:56:25 0:29:29 0:20:44 0:08:45 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-snaps} 2
pass 7276837 2023-05-17 19:49:15 2023-05-17 20:27:27 2023-05-17 21:04:43 0:37:16 0:24:08 0:13:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
pass 7276838 2023-05-17 19:49:16 2023-05-17 20:28:27 2023-05-17 21:09:53 0:41:26 0:29:22 0:12:04 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_20.04} thrashers/one workloads/rados_mon_workunits} 2
pass 7276839 2023-05-17 19:49:17 2023-05-17 20:29:58 2023-05-17 20:50:28 0:20:30 0:10:43 0:09:47 smithi main ubuntu 22.04 rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
pass 7276840 2023-05-17 19:49:18 2023-05-17 20:30:38 2023-05-17 20:57:47 0:27:09 0:20:05 0:07:04 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_adoption} 1
pass 7276841 2023-05-17 19:49:18 2023-05-17 20:30:39 2023-05-17 20:50:23 0:19:44 0:10:03 0:09:41 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7276842 2023-05-17 19:49:19 2023-05-17 20:30:49 2023-05-17 21:03:12 0:32:23 0:21:51 0:10:32 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7276843 2023-05-17 19:49:20 2023-05-17 20:31:10 2023-05-17 21:04:09 0:32:59 0:25:47 0:07:12 smithi main rhel 8.6 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 2
pass 7276844 2023-05-17 19:49:21 2023-05-17 20:31:10 2023-05-17 20:55:36 0:24:26 0:14:06 0:10:20 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7276845 2023-05-17 19:49:22 2023-05-17 20:32:30 2023-05-17 21:00:39 0:28:09 0:18:21 0:09:48 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
pass 7276846 2023-05-17 19:49:23 2023-05-17 20:33:01 2023-05-17 21:13:27 0:40:26 0:28:07 0:12:19 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7276847 2023-05-17 19:49:23 2023-05-17 20:35:52 2023-05-17 21:01:50 0:25:58 0:19:14 0:06:44 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7276848 2023-05-17 19:49:24 2023-05-17 20:36:52 2023-05-17 21:04:20 0:27:28 0:19:59 0:07:29 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/repair_test} 2
pass 7276849 2023-05-17 19:49:25 2023-05-17 20:37:03 2023-05-17 21:16:40 0:39:37 0:27:41 0:11:56 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7276850 2023-05-17 19:49:26 2023-05-17 20:37:53 2023-05-17 20:59:43 0:21:50 0:12:22 0:09:28 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 7276851 2023-05-17 19:49:26 2023-05-17 20:37:54 2023-05-17 20:59:02 0:21:08 0:15:17 0:05:51 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-stupid} supported-random-distro$/{rhel_8} tasks/workunits} 2
pass 7276852 2023-05-17 19:49:27 2023-05-17 20:38:04 2023-05-17 21:09:40 0:31:36 0:23:51 0:07:45 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7276853 2023-05-17 19:49:28 2023-05-17 20:38:35 2023-05-17 21:00:40 0:22:05 0:10:39 0:11:26 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7276854 2023-05-17 19:49:29 2023-05-17 20:41:05 2023-05-17 21:10:09 0:29:04 0:18:54 0:10:10 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
fail 7276855 2023-05-17 19:49:30 2023-05-17 20:41:16 2023-05-17 21:06:21 0:25:05 0:18:20 0:06:45 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c36284dac0d5cb692d3962260f6ebccd7a3cc3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7276856 2023-05-17 19:49:31 2023-05-17 20:41:16 2023-05-17 21:05:29 0:24:13 0:17:29 0:06:44 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c36284dac0d5cb692d3962260f6ebccd7a3cc3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7276857 2023-05-17 19:49:31 2023-05-17 20:42:37 2023-05-17 21:14:22 0:31:45 0:20:14 0:11:31 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
pass 7276858 2023-05-17 19:49:32 2023-05-17 20:45:18 2023-05-17 21:24:58 0:39:40 0:33:10 0:06:30 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7276859 2023-05-17 19:49:33 2023-05-17 20:45:48 2023-05-17 21:22:30 0:36:42 0:24:59 0:11:43 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7276860 2023-05-17 19:49:34 2023-05-17 20:46:29 2023-05-17 21:30:40 0:44:11 0:32:39 0:11:32 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_20.04}} 1
fail 7276861 2023-05-17 19:49:35 2023-05-17 20:46:39 2023-05-17 21:16:26 0:29:47 0:22:54 0:06:53 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/pool-create-delete} 2
Failure Reason:

Command failed on smithi079 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_rados_delete_pools_parallel'

pass 7276862 2023-05-17 19:49:35 2023-05-17 20:47:10 2023-05-17 21:12:41 0:25:31 0:18:13 0:07:18 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7276863 2023-05-17 19:49:36 2023-05-17 20:47:30 2023-05-17 21:06:02 0:18:32 0:07:32 0:11:00 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_20.04}} 1
pass 7276864 2023-05-17 19:49:37 2023-05-17 21:14:43 0:10:16 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
pass 7276865 2023-05-17 19:49:38 2023-05-17 21:25:12 0:27:46 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
pass 7276866 2023-05-17 19:49:39 2023-05-17 20:47:52 2023-05-17 21:21:02 0:33:10 0:23:56 0:09:14 smithi main rhel 8.6 rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
pass 7276867 2023-05-17 19:49:40 2023-05-17 20:50:22 2023-05-17 21:10:04 0:19:42 0:10:18 0:09:24 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_20.04}} 1
pass 7276868 2023-05-17 19:49:40 2023-05-17 20:50:23 2023-05-17 21:12:16 0:21:53 0:12:20 0:09:33 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
pass 7276869 2023-05-17 19:49:41 2023-05-17 20:50:33 2023-05-17 21:12:54 0:22:21 0:12:13 0:10:08 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_20.04} tasks/scrub_test} 2
pass 7276870 2023-05-17 19:49:42 2023-05-17 21:25:25 0:23:24 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_20.04} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} 2
fail 7276871 2023-05-17 19:49:43 2023-05-17 20:51:04 2023-05-17 21:29:50 0:38:46 0:25:59 0:12:47 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3c36284dac0d5cb692d3962260f6ebccd7a3cc3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7276872 2023-05-17 19:49:44 2023-05-17 20:51:04 2023-05-17 21:10:51 0:19:47 0:06:35 0:13:12 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi039 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7276873 2023-05-17 19:49:45 2023-05-17 20:51:45 2023-05-17 21:26:35 0:34:50 0:23:47 0:11:03 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2