User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-01-24 16:05:13 | 2023-01-24 18:10:54 | 2023-01-25 01:04:03 | 6:53:09 | rados | wip-yuri7-testing-2023-01-23-1532-quincy | smithi | 49f8fb0 | 171 | 145 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
pass | 7135579 | 2023-01-24 16:06:20 | 2023-01-24 18:10:54 | 2023-01-24 18:29:36 | 0:18:42 | 0:11:56 | 0:06:46 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135581 | 2023-01-24 16:06:22 | 2023-01-24 18:11:04 | 2023-01-24 21:49:07 | 3:38:03 | 3:26:26 | 0:11:37 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} | 1 | |
fail | 7135583 | 2023-01-24 16:06:23 | 2023-01-24 18:13:25 | 2023-01-24 18:44:30 | 0:31:05 | 0:21:45 | 0:09:20 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
Failure Reason: "2023-01-24T18:40:20.262152+0000 mgr.x (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135585 | 2023-01-24 16:06:24 | 2023-01-24 18:13:26 | 2023-01-24 18:27:54 | 0:14:28 | 0:09:25 | 0:05:03 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7135587 | 2023-01-24 16:06:25 | 2023-01-24 18:13:36 | 2023-01-24 18:54:44 | 0:41:08 | 0:29:22 | 0:11:46 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7135589 | 2023-01-24 16:06:26 | 2023-01-24 18:15:27 | 2023-01-24 18:49:12 | 0:33:45 | 0:27:22 | 0:06:23 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
Failure Reason: "2023-01-24T18:38:31.306324+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
pass | 7135591 | 2023-01-24 16:06:27 | 2023-01-24 18:17:48 | 2023-01-24 18:46:11 | 0:28:23 | 0:17:59 | 0:10:24 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{filestore-xfs} supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
pass | 7135594 | 2023-01-24 16:06:28 | 2023-01-24 18:19:19 | 2023-01-24 18:47:27 | 0:28:08 | 0:16:09 | 0:11:59 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
pass | 7135596 | 2023-01-24 16:06:30 | 2023-01-24 18:26:22 | 2023-01-24 18:59:31 | 0:33:09 | 0:22:58 | 0:10:11 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7135598 | 2023-01-24 16:06:31 | 2023-01-24 18:28:03 | 2023-01-24 18:49:30 | 0:21:27 | 0:09:57 | 0:11:30 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 7135599 | 2023-01-24 16:06:32 | 2023-01-24 18:29:33 | 2023-01-24 18:54:39 | 0:25:06 | 0:19:09 | 0:05:57 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
pass | 7135601 | 2023-01-24 16:06:33 | 2023-01-24 18:30:04 | 2023-01-24 18:54:20 | 0:24:16 | 0:15:52 | 0:08:24 | smithi | main | rhel | 8.4 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135603 | 2023-01-24 16:06:34 | 2023-01-24 18:30:45 | 2023-01-24 18:49:17 | 0:18:32 | 0:10:47 | 0:07:45 | smithi | main | centos | 8.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 2 | |
pass | 7135605 | 2023-01-24 16:06:35 | 2023-01-24 18:30:55 | 2023-01-24 18:52:42 | 0:21:47 | 0:11:30 | 0:10:17 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135607 | 2023-01-24 16:06:37 | 2023-01-24 18:33:36 | 2023-01-24 19:10:51 | 0:37:15 | 0:29:04 | 0:08:11 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} | 2 | |
Failure Reason: "2023-01-24T18:52:17.887500+0000 mgr.y (mgr.4112) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135609 | 2023-01-24 16:06:38 | 2023-01-24 18:35:27 | 2023-01-24 18:55:44 | 0:20:17 | 0:09:22 | 0:10:55 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_5925} | 2 | |
fail | 7135611 | 2023-01-24 16:06:39 | 2023-01-24 18:37:58 | 2023-01-24 19:09:46 | 0:31:48 | 0:21:12 | 0:10:36 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
fail | 7135614 | 2023-01-24 16:06:40 | 2023-01-24 18:39:09 | 2023-01-24 19:52:12 | 1:13:03 | 1:02:56 | 0:10:07 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: "2023-01-24T19:11:10.332591+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135615 | 2023-01-24 16:06:41 | 2023-01-24 18:39:29 | 2023-01-24 19:05:46 | 0:26:17 | 0:15:23 | 0:10:54 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7135617 | 2023-01-24 16:06:42 | 2023-01-24 18:41:40 | 2023-01-24 19:02:23 | 0:20:43 | 0:13:48 | 0:06:55 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135619 | 2023-01-24 16:06:44 | 2023-01-24 18:42:01 | 2023-01-24 19:15:16 | 0:33:15 | 0:21:20 | 0:11:55 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 7135621 | 2023-01-24 16:06:45 | 2023-01-24 18:43:12 | 2023-01-24 21:22:39 | 2:39:27 | 2:17:40 | 0:21:47 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} | 1 | |
fail | 7135623 | 2023-01-24 16:06:46 | 2023-01-24 18:44:32 | 2023-01-24 19:09:29 | 0:24:57 | 0:14:46 | 0:10:11 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason: "2023-01-24T19:04:53.706735+0000 mgr.x (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135625 | 2023-01-24 16:06:47 | 2023-01-24 18:47:33 | 2023-01-24 19:27:12 | 0:39:39 | 0:28:03 | 0:11:36 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7135627 | 2023-01-24 16:06:48 | 2023-01-24 18:49:24 | 2023-01-24 19:26:48 | 0:37:24 | 0:30:12 | 0:07:12 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/on mon_election/connectivity task/test_nfs} | 1 | |
fail | 7135629 | 2023-01-24 16:06:49 | 2023-01-24 18:52:25 | 2023-01-24 19:29:52 | 0:37:27 | 0:28:09 | 0:09:18 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi182 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=49f8fb05584886826e8eade75f7105fba754560c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7135631 | 2023-01-24 16:06:50 | 2023-01-24 18:53:57 | 2023-01-24 19:15:21 | 0:21:24 | 0:10:37 | 0:10:47 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135633 | 2023-01-24 16:06:52 | 2023-01-24 18:54:47 | 2023-01-24 19:27:56 | 0:33:09 | 0:24:11 | 0:08:58 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
fail | 7135635 | 2023-01-24 16:06:53 | 2023-01-24 18:54:58 | 2023-01-24 19:11:38 | 0:16:40 | 0:05:34 | 0:11:06 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/master} | 1 | |
Failure Reason: Command failed on smithi173 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7135637 | 2023-01-24 16:06:54 | 2023-01-24 18:55:49 | 2023-01-24 19:15:12 | 0:19:23 | 0:13:12 | 0:06:11 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135639 | 2023-01-24 16:06:55 | 2023-01-24 18:56:09 | 2023-01-24 19:27:55 | 0:31:46 | 0:21:25 | 0:10:21 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-snaps} | 2 | |
Failure Reason: "2023-01-24T19:16:03.240713+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135641 | 2023-01-24 16:06:56 | 2023-01-24 18:56:10 | 2023-01-24 19:34:05 | 0:37:55 | 0:26:31 | 0:11:24 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
Failure Reason: "2023-01-24T19:16:00.621475+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135643 | 2023-01-24 16:06:57 | 2023-01-24 18:58:31 | 2023-01-24 19:19:28 | 0:20:57 | 0:10:15 | 0:10:42 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
fail | 7135645 | 2023-01-24 16:06:58 | 2023-01-24 18:59:42 | 2023-01-24 19:27:49 | 0:28:07 | 0:18:36 | 0:09:31 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
pass | 7135647 | 2023-01-24 16:07:00 | 2023-01-24 19:03:23 | 2023-01-24 19:25:14 | 0:21:51 | 0:11:07 | 0:10:44 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test} | 2 | |
pass | 7135649 | 2023-01-24 16:07:01 | 2023-01-24 19:04:04 | 2023-01-24 19:25:54 | 0:21:50 | 0:10:08 | 0:11:42 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135651 | 2023-01-24 16:07:02 | 2023-01-24 19:07:55 | 2023-01-24 19:44:34 | 0:36:39 | 0:24:30 | 0:12:09 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7135653 | 2023-01-24 16:07:03 | 2023-01-24 19:09:36 | 2023-01-24 19:48:09 | 0:38:33 | 0:30:02 | 0:08:31 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
Failure Reason: "2023-01-24T19:28:10.770272+0000 mgr.y (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135655 | 2023-01-24 16:07:04 | 2023-01-24 19:09:56 | 2023-01-24 19:34:02 | 0:24:06 | 0:15:35 | 0:08:31 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/cache} | 2 | |
Failure Reason: "2023-01-24T19:27:54.127733+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135657 | 2023-01-24 16:07:05 | 2023-01-24 19:10:57 | 2023-01-24 19:30:39 | 0:19:42 | 0:13:05 | 0:06:37 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135659 | 2023-01-24 16:07:06 | 2023-01-24 19:12:38 | 2023-01-24 22:14:35 | 3:01:57 | 2:52:46 | 0:09:11 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} | 1 | |
fail | 7135661 | 2023-01-24 16:07:08 | 2023-01-24 19:13:19 | 2023-01-24 19:41:35 | 0:28:16 | 0:18:03 | 0:10:13 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: timeout expired in wait_until_healthy
pass | 7135663 | 2023-01-24 16:07:09 | 2023-01-24 19:14:40 | 2023-01-24 19:37:34 | 0:22:54 | 0:11:50 | 0:11:04 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
pass | 7135665 | 2023-01-24 16:07:10 | 2023-01-24 19:16:41 | 2023-01-24 19:41:36 | 0:24:55 | 0:16:49 | 0:08:06 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135667 | 2023-01-24 16:07:11 | 2023-01-24 19:21:02 | 2023-01-24 19:43:43 | 0:22:41 | 0:10:18 | 0:12:23 | smithi | main | centos | 8.stream | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
Failure Reason: "2023-01-24T19:42:49.337618+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135669 | 2023-01-24 16:07:12 | 2023-01-24 19:25:23 | 2023-01-24 19:49:14 | 0:23:51 | 0:14:13 | 0:09:38 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/dedup-io-mixed} | 2 | |
fail | 7135671 | 2023-01-24 16:07:13 | 2023-01-24 19:25:24 | 2023-01-24 19:52:44 | 0:27:20 | 0:21:12 | 0:06:08 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
pass | 7135673 | 2023-01-24 16:07:14 | 2023-01-24 19:27:22 | 2023-01-24 19:50:49 | 0:23:27 | 0:14:31 | 0:08:56 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135675 | 2023-01-24 16:07:16 | 2023-01-24 19:28:03 | 2023-01-24 20:07:26 | 0:39:23 | 0:29:25 | 0:09:58 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/rados_api_tests} | 2 | |
Failure Reason: "2023-01-24T19:54:08.017068+0000 mon.f (mon.1) 391 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
fail | 7135677 | 2023-01-24 16:07:17 | 2023-01-24 19:30:14 | 2023-01-24 19:59:54 | 0:29:40 | 0:18:52 | 0:10:48 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/lockdep} | 2 | |
Failure Reason: "2023-01-24T19:56:16.425060+0000 mon.a (mon.0) 518 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
pass | 7135679 | 2023-01-24 16:07:18 | 2023-01-24 19:32:55 | 2023-01-24 19:56:56 | 0:24:01 | 0:12:44 | 0:11:17 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7135681 | 2023-01-24 16:07:19 | 2023-01-24 19:34:06 | 2023-01-24 22:12:30 | 2:38:24 | 2:14:11 | 0:24:13 | smithi | main | rhel | 8.4 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135683 | 2023-01-24 16:07:20 | 2023-01-24 19:36:17 | 2023-01-24 19:55:45 | 0:19:28 | 0:09:49 | 0:09:39 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 7135685 | 2023-01-24 16:07:21 | 2023-01-24 19:37:07 | 2023-01-24 19:59:40 | 0:22:33 | 0:11:26 | 0:11:07 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7135687 | 2023-01-24 16:07:23 | 2023-01-24 19:37:58 | 2023-01-24 20:22:47 | 0:44:49 | 0:34:55 | 0:09:54 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 7135689 | 2023-01-24 16:07:24 | 2023-01-24 19:39:49 | 2023-01-24 20:09:31 | 0:29:42 | 0:17:53 | 0:11:49 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli} | 1 | |
pass | 7135691 | 2023-01-24 16:07:25 | 2023-01-24 19:41:40 | 2023-01-24 20:09:56 | 0:28:16 | 0:14:28 | 0:13:48 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 7135693 | 2023-01-24 16:07:26 | 2023-01-24 19:43:51 | 2023-01-24 20:05:46 | 0:21:55 | 0:14:48 | 0:07:07 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7135695 | 2023-01-24 16:07:27 | 2023-01-24 19:46:42 | 2023-01-24 20:12:10 | 0:25:28 | 0:16:54 | 0:08:34 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135697 | 2023-01-24 16:07:28 | 2023-01-24 19:48:13 | 2023-01-24 20:16:33 | 0:28:20 | 0:17:52 | 0:10:28 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/libcephsqlite} | 2 | |
Failure Reason: "2023-01-24T20:07:04.062261+0000 mgr.x (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135699 | 2023-01-24 16:07:30 | 2023-01-24 19:48:43 | 2023-01-24 20:34:55 | 0:46:12 | 0:35:23 | 0:10:49 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "2023-01-24T20:15:32.352968+0000 mon.a (mon.0) 228 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
fail | 7135701 | 2023-01-24 16:07:31 | 2023-01-24 19:50:24 | 2023-01-24 20:07:29 | 0:17:05 | 0:09:53 | 0:07:12 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: Command failed on smithi174 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:49f8fb05584886826e8eade75f7105fba754560c shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 08f9044c-9c22-11ed-9e56-001a4aab830c -- ceph mon dump -f json'
pass | 7135703 | 2023-01-24 16:07:32 | 2023-01-24 19:50:55 | 2023-01-24 20:24:45 | 0:33:50 | 0:21:49 | 0:12:01 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} | 2 | |
pass | 7135705 | 2023-01-24 16:07:33 | 2023-01-24 19:52:16 | 2023-01-24 20:28:49 | 0:36:33 | 0:25:02 | 0:11:31 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 7135707 | 2023-01-24 16:07:34 | 2023-01-24 19:55:27 | 2023-01-24 20:29:37 | 0:34:10 | 0:23:16 | 0:10:54 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason: "2023-01-24T20:21:32.998312+0000 mon.a (mon.0) 313 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
fail | 7135709 | 2023-01-24 16:07:35 | 2023-01-24 19:56:58 | 2023-01-24 20:50:04 | 0:53:06 | 0:47:12 | 0:05:54 | smithi | main | rhel | 8.4 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: "2023-01-24T20:19:13.447165+0000 mon.a (mon.0) 223 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
fail | 7135711 | 2023-01-24 16:07:36 | 2023-01-24 19:57:38 | 2023-01-24 20:37:43 | 0:40:05 | 0:29:16 | 0:10:49 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason: "2023-01-24T20:17:22.480099+0000 mgr.x (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135713 | 2023-01-24 16:07:38 | 2023-01-24 19:59:49 | 2023-01-24 20:28:47 | 0:28:58 | 0:21:35 | 0:07:23 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: timeout expired in wait_until_healthy
pass | 7135715 | 2023-01-24 16:07:39 | 2023-01-24 20:00:00 | 2023-01-24 20:40:16 | 0:40:16 | 0:27:35 | 0:12:41 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} | 2 | |
fail | 7135717 | 2023-01-24 16:07:40 | 2023-01-24 20:03:11 | 2023-01-24 20:36:38 | 0:33:27 | 0:20:49 | 0:12:38 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "2023-01-24T20:30:50.284054+0000 mon.a (mon.0) 282 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
pass | 7135719 | 2023-01-24 16:07:41 | 2023-01-24 20:05:42 | 2023-01-24 22:45:53 | 2:40:11 | 2:31:17 | 0:08:54 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
pass | 7135721 | 2023-01-24 16:07:42 | 2023-01-24 20:06:13 | 2023-01-24 20:33:55 | 0:27:42 | 0:17:14 | 0:10:28 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 3 | |
fail | 7135723 | 2023-01-24 16:07:43 | 2023-01-24 20:07:14 | 2023-01-24 20:28:51 | 0:21:37 | 0:11:52 | 0:09:45 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-bitmap} supported-random-distro$/{centos_8} tasks/workunits} | 2 | |
Failure Reason:
"2023-01-24T20:25:52.260278+0000 mgr.y (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135724 | 2023-01-24 16:07:44 | 2023-01-24 20:07:34 | 2023-01-24 20:27:11 | 0:19:37 | 0:10:09 | 0:09:28 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7135725 | 2023-01-24 16:07:45 | 2023-01-24 20:07:34 | 2023-01-24 20:35:33 | 0:27:59 | 0:17:53 | 0:10:06 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135726 | 2023-01-24 16:07:47 | 2023-01-24 20:07:35 | 2023-01-24 20:48:38 | 0:41:03 | 0:27:02 | 0:14:01 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
fail | 7135727 | 2023-01-24 16:07:48 | 2023-01-24 20:10:05 | 2023-01-24 20:46:42 | 0:36:37 | 0:25:44 | 0:10:53 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason:
"2023-01-24T20:30:15.114166+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135728 | 2023-01-24 16:07:49 | 2023-01-24 20:11:36 | 2023-01-24 20:52:43 | 0:41:07 | 0:31:27 | 0:09:40 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8}} | 1 | |
fail | 7135729 | 2023-01-24 16:07:50 | 2023-01-24 20:12:16 | 2023-01-24 21:04:33 | 0:52:17 | 0:44:25 | 0:07:52 | smithi | main | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
Failure Reason:
"2023-01-24T20:36:51.528142+0000 mon.a (mon.0) 1239 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
pass | 7135730 | 2023-01-24 16:07:51 | 2023-01-24 20:13:17 | 2023-01-24 20:53:52 | 0:40:35 | 0:27:20 | 0:13:15 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} | 2 | |
fail | 7135731 | 2023-01-24 16:07:52 | 2023-01-24 20:14:47 | 2023-01-24 20:56:35 | 0:41:48 | 0:31:17 | 0:10:31 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
"2023-01-24T20:44:26.698278+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135732 | 2023-01-24 16:07:54 | 2023-01-24 20:15:58 | 2023-01-24 20:51:41 | 0:35:43 | 0:28:31 | 0:07:12 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
Failure Reason:
"2023-01-24T20:33:07.635056+0000 mgr.x (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135733 | 2023-01-24 16:07:55 | 2023-01-24 20:16:38 | 2023-01-24 20:35:42 | 0:19:04 | 0:12:51 | 0:06:13 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135734 | 2023-01-24 16:07:56 | 2023-01-24 20:16:38 | 2023-01-24 20:48:14 | 0:31:36 | 0:19:37 | 0:11:59 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135735 | 2023-01-24 16:07:57 | 2023-01-24 20:17:49 | 2023-01-24 20:39:31 | 0:21:42 | 0:11:06 | 0:10:36 | smithi | main | centos | 8.stream | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135736 | 2023-01-24 16:07:58 | 2023-01-24 20:17:49 | 2023-01-24 20:43:06 | 0:25:17 | 0:14:16 | 0:11:01 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason:
"2023-01-24T20:39:51.547246+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135737 | 2023-01-24 16:07:59 | 2023-01-24 20:20:10 | 2023-01-24 21:41:03 | 1:20:53 | 1:09:35 | 0:11:18 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench} | 2 | |
fail | 7135738 | 2023-01-24 16:08:01 | 2023-01-24 20:21:00 | 2023-01-24 20:59:51 | 0:38:51 | 0:30:27 | 0:08:24 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason:
"2023-01-24T20:45:20.501035+0000 mon.a (mon.0) 1314 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
pass | 7135739 | 2023-01-24 16:08:02 | 2023-01-24 20:22:51 | 2023-01-24 20:46:47 | 0:23:56 | 0:13:23 | 0:10:33 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7135740 | 2023-01-24 16:08:03 | 2023-01-24 20:22:51 | 2023-01-24 21:08:15 | 0:45:24 | 0:38:15 | 0:07:09 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135741 | 2023-01-24 16:08:04 | 2023-01-24 20:23:22 | 2023-01-24 20:47:51 | 0:24:29 | 0:14:11 | 0:10:18 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7135742 | 2023-01-24 16:08:05 | 2023-01-24 20:24:02 | 2023-01-24 21:03:13 | 0:39:11 | 0:32:37 | 0:06:34 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
Failure Reason:
"2023-01-24T20:46:33.414701+0000 mon.a (mon.0) 1161 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
pass | 7135743 | 2023-01-24 16:08:06 | 2023-01-24 20:50:43 | | 764 | | | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/redirect} | 2 | |
pass | 7135744 | 2023-01-24 16:08:08 | 2023-01-24 20:24:53 | 2023-01-24 20:44:31 | 0:19:38 | 0:10:15 | 0:09:23 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 7135745 | 2023-01-24 16:08:09 | 2023-01-24 20:24:53 | 2023-01-24 21:03:21 | 0:38:28 | 0:27:41 | 0:10:47 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135746 | 2023-01-24 16:08:10 | 2023-01-24 20:25:03 | 2023-01-24 21:00:54 | 0:35:51 | 0:22:38 | 0:13:13 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7135747 | 2023-01-24 16:08:11 | 2023-01-24 20:27:14 | 2023-01-24 20:53:42 | 0:26:28 | 0:16:12 | 0:10:16 | smithi | main | centos | 8.stream | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7135748 | 2023-01-24 16:08:12 | 2023-01-24 20:28:54 | 2023-01-24 21:00:41 | 0:31:47 | 0:21:48 | 0:09:59 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7135749 | 2023-01-24 16:08:13 | 2023-01-24 20:28:55 | 2023-01-24 20:50:08 | 0:21:13 | 0:14:47 | 0:06:26 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135750 | 2023-01-24 16:08:14 | 2023-01-24 20:28:55 | 2023-01-24 21:10:35 | 0:41:40 | 0:30:59 | 0:10:41 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135751 | 2023-01-24 16:08:16 | 2023-01-24 20:28:55 | 2023-01-24 20:52:18 | 0:23:23 | 0:16:25 | 0:06:58 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/redirect_promote_tests} | 2 | |
Failure Reason:
"2023-01-24T20:46:02.139498+0000 mgr.x (mgr.4099) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135752 | 2023-01-24 16:08:17 | 2023-01-24 20:29:25 | 2023-01-24 20:57:58 | 0:28:33 | 0:19:03 | 0:09:30 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135753 | 2023-01-24 16:08:18 | 2023-01-24 20:29:26 | 2023-01-24 20:59:42 | 0:30:16 | 0:18:48 | 0:11:28 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} | 2 | |
pass | 7135754 | 2023-01-24 16:08:19 | 2023-01-24 20:29:46 | 2023-01-24 21:04:18 | 0:34:32 | 0:23:45 | 0:10:47 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 7135755 | 2023-01-24 16:08:20 | 2023-01-24 20:30:17 | 2023-01-24 20:50:06 | 0:19:49 | 0:12:19 | 0:07:30 | smithi | main | rhel | 8.4 | rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135756 | 2023-01-24 16:08:21 | 2023-01-24 20:30:17 | 2023-01-24 21:01:15 | 0:30:58 | 0:20:14 | 0:10:44 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_token_ttl (tasks.mgr.dashboard.test_auth.AuthTest) |
||||||||||||||
pass | 7135757 | 2023-01-24 16:08:23 | 2023-01-24 20:32:07 | 2023-01-24 20:54:10 | 0:22:03 | 0:11:27 | 0:10:36 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-stupid} supported-random-distro$/{ubuntu_latest} tasks/crash} | 2 | |
pass | 7135758 | 2023-01-24 16:08:24 | 2023-01-24 20:32:18 | 2023-01-24 20:49:58 | 0:17:40 | 0:08:34 | 0:09:06 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135759 | 2023-01-24 16:08:25 | 2023-01-24 20:32:18 | 2023-01-24 20:56:03 | 0:23:45 | 0:12:03 | 0:11:42 | smithi | main | ubuntu | 20.04 | rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135760 | 2023-01-24 16:08:26 | 2023-01-24 20:33:58 | 2023-01-24 20:49:39 | 0:15:41 | 0:05:30 | 0:10:11 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 | |
Failure Reason:
Command failed on smithi016 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
||||||||||||||
fail | 7135761 | 2023-01-24 16:08:27 | 2023-01-24 20:33:59 | 2023-01-24 21:02:40 | 0:28:41 | 0:20:21 | 0:08:20 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
"2023-01-24T20:58:38.129584+0000 mon.a (mon.0) 205 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
pass | 7135762 | 2023-01-24 16:08:28 | 2023-01-24 20:33:59 | 2023-01-24 20:57:34 | 0:23:35 | 0:12:41 | 0:10:54 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 | |
fail | 7135763 | 2023-01-24 16:08:30 | 2023-01-24 20:34:59 | 2023-01-24 21:23:25 | 0:48:26 | 0:36:51 | 0:11:35 | smithi | main | centos | 8.stream | rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi062 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
||||||||||||||
fail | 7135764 | 2023-01-24 16:08:31 | 2023-01-24 20:35:50 | 2023-01-24 21:07:32 | 0:31:42 | 0:20:46 | 0:10:56 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
Failure Reason:
"2023-01-24T21:03:29.923370+0000 mgr.x (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135765 | 2023-01-24 16:08:32 | 2023-01-24 20:36:40 | 2023-01-24 21:07:26 | 0:30:46 | 0:19:57 | 0:10:49 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect_set_object} | 2 | |
Failure Reason:
"2023-01-24T20:57:01.627667+0000 mgr.x (mgr.4120) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135766 | 2023-01-24 16:08:33 | 2023-01-24 20:37:51 | 2023-01-24 20:59:08 | 0:21:17 | 0:12:30 | 0:08:47 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi088 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=49f8fb05584886826e8eade75f7105fba754560c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
||||||||||||||
fail | 7135767 | 2023-01-24 16:08:34 | 2023-01-24 20:39:31 | 2023-01-24 21:21:27 | 0:41:56 | 0:31:29 | 0:10:27 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
Failure Reason:
"2023-01-24T21:05:24.596613+0000 mon.d (mon.6) 1500 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
fail | 7135768 | 2023-01-24 16:08:35 | 2023-01-24 20:40:21 | 2023-01-24 21:16:11 | 0:35:50 | 0:26:04 | 0:09:46 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason:
"2023-01-24T20:59:15.317840+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135769 | 2023-01-24 16:08:36 | 2023-01-24 20:41:02 | 2023-01-24 21:03:09 | 0:22:07 | 0:10:38 | 0:11:29 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7135770 | 2023-01-24 16:08:38 | 2023-01-24 20:43:13 | 2023-01-24 21:16:24 | 0:33:11 | 0:21:09 | 0:12:02 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135771 | 2023-01-24 16:08:39 | 2023-01-24 20:44:33 | 2023-01-24 21:05:35 | 0:21:02 | 0:13:14 | 0:07:48 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135772 | 2023-01-24 16:08:40 | 2023-01-24 20:46:44 | 2023-01-24 21:10:51 | 0:24:07 | 0:11:19 | 0:12:48 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 7135773 | 2023-01-24 16:08:41 | 2023-01-24 20:47:54 | 2023-01-24 21:19:53 | 0:31:59 | 0:21:46 | 0:10:13 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135774 | 2023-01-24 16:08:42 | 2023-01-24 20:48:24 | 2023-01-24 21:06:15 | 0:17:51 | 0:08:18 | 0:09:33 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 7135775 | 2023-01-24 16:08:44 | 2023-01-24 20:48:25 | 2023-01-24 21:12:40 | 0:24:15 | 0:14:36 | 0:09:39 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
pass | 7135776 | 2023-01-24 16:08:45 | 2023-01-24 20:48:45 | 2023-01-24 21:18:37 | 0:29:52 | 0:19:35 | 0:10:17 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7135777 | 2023-01-24 16:08:46 | 2023-01-24 20:48:45 | 2023-01-24 21:26:07 | 0:37:22 | 0:28:55 | 0:08:27 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
Failure Reason:
"2023-01-24T21:08:00.044802+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135778 | 2023-01-24 16:08:47 | 2023-01-24 20:49:46 | 2023-01-24 21:16:58 | 0:27:12 | 0:18:05 | 0:09:07 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135779 | 2023-01-24 16:08:48 | 2023-01-24 20:50:06 | 2023-01-24 21:30:38 | 0:40:32 | 0:28:46 | 0:11:46 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 7135780 | 2023-01-24 16:08:50 | 2023-01-24 20:50:16 | 2023-01-24 21:14:58 | 0:24:42 | 0:15:15 | 0:09:27 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135781 | 2023-01-24 16:08:51 | 2023-01-24 20:50:17 | 2023-01-24 21:22:34 | 0:32:17 | 0:21:44 | 0:10:33 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects-balanced} | 2 | |
Failure Reason:
"2023-01-24T21:16:56.004506+0000 mon.a (mon.0) 608 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
fail | 7135782 | 2023-01-24 16:08:52 | 2023-01-24 20:50:47 | 2023-01-24 21:14:49 | 0:24:02 | 0:16:16 | 0:07:46 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_python} | 2 | |
Failure Reason:
"2023-01-24T21:06:06.611314+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135783 | 2023-01-24 16:08:53 | 2023-01-24 20:51:47 | 2023-01-24 21:26:06 | 0:34:19 | 0:24:05 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7135784 | 2023-01-24 16:08:54 | 2023-01-24 20:52:28 | 2023-01-24 21:18:33 | 0:26:05 | 0:14:54 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135785 | 2023-01-24 16:08:56 | 2023-01-24 21:24:09 | | 1121 | | | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135786 | 2023-01-24 16:08:57 | 2023-01-24 20:53:58 | 2023-01-24 21:13:43 | 0:19:45 | 0:09:01 | 0:10:44 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135787 | 2023-01-24 16:08:58 | 2023-01-24 20:53:59 | 2023-01-24 21:25:24 | 0:31:25 | 0:22:27 | 0:08:58 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/small-objects-localized} | 2 | |
Failure Reason:
"2023-01-24T21:12:40.906163+0000 mgr.y (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135788 | 2023-01-24 16:08:59 | 2023-01-24 20:54:19 | 2023-01-24 21:20:50 | 0:26:31 | 0:15:10 | 0:11:21 | smithi | main | centos | 8.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8}} | 1 | |
pass | 7135789 | 2023-01-24 16:09:00 | 2023-01-24 20:56:10 | 2023-01-24 21:16:12 | 0:20:02 | 0:11:48 | 0:08:14 | smithi | main | rhel | 8.4 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} | 2 | |
fail | 7135790 | 2023-01-24 16:09:01 | 2023-01-24 20:56:40 | 2023-01-24 22:13:32 | 1:16:52 | 1:07:53 | 0:08:59 | smithi | main | rhel | 8.4 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135791 | 2023-01-24 16:09:03 | 2023-01-24 20:57:40 | 2023-01-24 21:12:28 | 0:14:48 | 0:09:04 | 0:05:44 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7135792 | 2023-01-24 16:09:04 | 2023-01-24 20:58:01 | 2023-01-24 21:17:29 | 0:19:28 | 0:08:23 | 0:11:05 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
pass | 7135793 | 2023-01-24 16:09:05 | 2023-01-24 20:58:01 | 2023-01-24 21:23:38 | 0:25:37 | 0:13:11 | 0:12:26 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/failover} | 2 | |
fail | 7135794 | 2023-01-24 16:09:06 | 2023-01-24 20:59:52 | 2023-01-24 21:38:54 | 0:39:02 | 0:33:27 | 0:05:35 | smithi | main | rhel | 8.4 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
Failure Reason:
"2023-01-24T21:22:12.900152+0000 mon.a (mon.0) 306 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
pass | 7135795 | 2023-01-24 16:09:07 | 2023-01-24 21:00:02 | 2023-01-24 21:17:35 | 0:17:33 | 0:07:29 | 0:10:04 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135796 | 2023-01-24 16:09:09 | 2023-01-24 21:00:02 | 2023-01-24 21:33:29 | 0:33:27 | 0:23:00 | 0:10:27 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
fail | 7135797 | 2023-01-24 16:09:10 | 2023-01-24 21:00:02 | 2023-01-24 21:33:48 | 0:33:46 | 0:23:49 | 0:09:57 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects} | 2 | |
Failure Reason:
"2023-01-24T21:19:16.048225+0000 mgr.x (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135798 | 2023-01-24 16:09:11 | 2023-01-24 21:00:43 | 2023-01-24 21:46:53 | 0:46:10 | 0:35:07 | 0:11:03 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
fail | 7135799 | 2023-01-24 16:09:12 | 2023-01-24 21:01:03 | 2023-01-24 22:06:28 | 1:05:25 | 0:55:47 | 0:09:38 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
"2023-01-24T21:30:05.145354+0000 mgr.y (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135800 | 2023-01-24 16:09:13 | 2023-01-24 21:01:24 | 2023-01-24 21:25:24 | 0:24:00 | 0:11:59 | 0:12:01 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7135801 | 2023-01-24 16:09:15 | 2023-01-24 21:03:14 | 2023-01-24 21:34:08 | 0:30:54 | 0:21:37 | 0:09:17 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 7135802 | 2023-01-24 16:09:16 | 2023-01-24 21:03:14 | 2023-01-24 21:23:40 | 0:20:26 | 0:08:59 | 0:11:27 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7135803 | 2023-01-24 16:09:17 | 2023-01-24 21:03:25 | 2023-01-24 21:43:56 | 0:40:31 | 0:32:29 | 0:08:02 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason:
"2023-01-24T21:27:23.445107+0000 mon.a (mon.0) 1198 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
fail | 7135804 | 2023-01-24 16:09:18 | 2023-01-24 21:04:25 | 2023-01-24 21:46:19 | 0:41:54 | 0:31:30 | 0:10:24 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
Failure Reason:
"2023-01-24T21:23:52.124431+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135805 | 2023-01-24 16:09:19 | 2023-01-24 21:04:36 | 2023-01-24 21:23:00 | 0:18:24 | 0:13:27 | 0:04:57 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135806 | 2023-01-24 16:09:20 | 2023-01-24 21:04:36 | 2023-01-24 21:34:12 | 0:29:36 | 0:18:21 | 0:11:15 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} | 2 | |
fail | 7135807 | 2023-01-24 16:09:22 | 2023-01-24 21:06:16 | 2023-01-24 21:33:01 | 0:26:45 | 0:19:17 | 0:07:28 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 7135808 | 2023-01-24 16:09:23 | 2023-01-24 21:07:27 | 2023-01-24 21:36:29 | 0:29:02 | 0:19:40 | 0:09:22 | smithi | main | centos | 8.stream | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_crash.sh) on smithi182 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=49f8fb05584886826e8eade75f7105fba754560c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_crash.sh' |
pass | 7135809 | 2023-01-24 16:09:24 | 2023-01-24 21:07:37 | 2023-01-24 21:46:28 | 0:38:51 | 0:25:50 | 0:13:01 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 7135810 | 2023-01-24 16:09:25 | 2023-01-24 21:10:38 | 2023-01-24 21:50:03 | 0:39:25 | 0:28:52 | 0:10:33 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
Failure Reason:
"2023-01-24T21:29:48.000575+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135811 | 2023-01-24 16:09:26 | 2023-01-24 21:10:58 | 2023-01-24 21:32:45 | 0:21:47 | 0:11:15 | 0:10:32 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7135812 | 2023-01-24 16:09:28 | 2023-01-24 21:10:58 | 2023-01-24 21:58:11 | 0:47:13 | 0:36:25 | 0:10:48 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_nfs} | 1 | |
pass | 7135813 | 2023-01-24 16:09:29 | 2023-01-24 21:12:29 | 2023-01-24 21:30:49 | 0:18:20 | 0:08:35 | 0:09:45 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 7135814 | 2023-01-24 16:09:30 | 2023-01-24 21:12:49 | 2023-01-24 21:49:40 | 0:36:51 | 0:26:05 | 0:10:46 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7135815 | 2023-01-24 16:09:31 | 2023-01-24 21:13:49 | 2023-01-24 21:33:46 | 0:19:57 | 0:12:22 | 0:07:35 | smithi | main | rhel | 8.4 | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135816 | 2023-01-24 16:09:32 | 2023-01-24 21:14:50 | 2023-01-24 21:35:45 | 0:20:55 | 0:10:45 | 0:10:10 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8}} | 1 | |
pass | 7135817 | 2023-01-24 16:09:33 | 2023-01-24 21:14:50 | 2023-01-24 21:52:19 | 0:37:29 | 0:25:50 | 0:11:39 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
fail | 7135818 | 2023-01-24 16:09:35 | 2023-01-24 21:16:21 | 2023-01-24 21:35:06 | 0:18:45 | 0:10:45 | 0:08:00 | smithi | main | centos | 8.stream | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 2 | |
Failure Reason:
"2023-01-24T21:34:09.778010+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135819 | 2023-01-24 16:09:36 | 2023-01-24 21:16:21 | 2023-01-24 21:45:30 | 0:29:09 | 0:22:00 | 0:07:09 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 7135820 | 2023-01-24 16:09:37 | 2023-01-24 21:16:31 | 2023-01-24 21:35:58 | 0:19:27 | 0:12:14 | 0:07:13 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135821 | 2023-01-24 16:09:38 | 2023-01-24 21:17:02 | 2023-01-24 21:49:37 | 0:32:35 | 0:22:21 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 7135822 | 2023-01-24 16:09:39 | 2023-01-24 21:17:42 | 2023-01-24 21:44:28 | 0:26:46 | 0:15:58 | 0:10:48 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135823 | 2023-01-24 16:09:40 | 2023-01-24 21:18:42 | 2023-01-24 21:47:38 | 0:28:56 | 0:19:06 | 0:09:50 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
Failure Reason:
"2023-01-24T21:36:49.405373+0000 mgr.y (mgr.4101) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135824 | 2023-01-24 16:09:41 | 2023-01-24 21:18:43 | 2023-01-24 22:05:31 | 0:46:48 | 0:38:05 | 0:08:43 | smithi | main | rhel | 8.4 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/one workloads/snaps-few-objects} | 2 | |
Failure Reason:
"2023-01-24T21:43:05.244221+0000 mon.b (mon.2) 898 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
fail | 7135825 | 2023-01-24 16:09:43 | 2023-01-24 21:20:03 | 2023-01-24 21:46:20 | 0:26:17 | 0:19:07 | 0:07:10 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 7135826 | 2023-01-24 16:09:44 | 2023-01-24 21:21:34 | 2023-01-24 21:46:07 | 0:24:33 | 0:16:41 | 0:07:52 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
Failure Reason:
"2023-01-24T21:39:35.626582+0000 mgr.y (mgr.4116) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135827 | 2023-01-24 16:09:45 | 2023-01-24 21:22:44 | 2023-01-24 21:50:20 | 0:27:36 | 0:16:44 | 0:10:52 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-bitmap} supported-random-distro$/{centos_8} tasks/insights} | 2 | |
Failure Reason:
"2023-01-24T21:42:14.373170+0000 mgr.y (mgr.4114) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135828 | 2023-01-24 16:09:46 | 2023-01-24 21:22:44 | 2023-01-24 21:45:15 | 0:22:31 | 0:15:21 | 0:07:10 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason:
"2023-01-24T21:41:50.907501+0000 mgr.x (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135829 | 2023-01-24 16:09:47 | 2023-01-24 21:23:45 | 2023-01-24 21:40:50 | 0:17:05 | 0:10:38 | 0:06:27 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_striper} | 2 | |
Failure Reason:
"2023-01-24T21:38:26.444406+0000 mgr.y (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135830 | 2023-01-24 16:09:48 | 2023-01-24 21:23:45 | 2023-01-24 21:58:27 | 0:34:42 | 0:29:19 | 0:05:23 | smithi | main | rhel | 8.4 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=49f8fb05584886826e8eade75f7105fba754560c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 7135831 | 2023-01-24 16:09:50 | 2023-01-24 21:23:45 | 2023-01-24 21:45:03 | 0:21:18 | 0:12:19 | 0:08:59 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7135832 | 2023-01-24 16:09:51 | 2023-01-24 21:24:16 | 2023-01-24 23:00:33 | 1:36:17 | 1:29:41 | 0:06:36 | smithi | main | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/erasure-code} | 1 | |
pass | 7135833 | 2023-01-24 16:09:52 | 2023-01-24 21:24:16 | 2023-01-24 22:51:25 | 1:27:09 | 1:16:01 | 0:11:08 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 2 | |
dead | 7135834 | 2023-01-24 16:09:53 | 2023-01-24 21:25:26 | 2023-01-24 21:41:22 | 0:15:56 | 0:04:45 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
Failure Reason:
{'smithi080.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_unauthenticated': False, 'autoclean': False, 'autoremove': False, 'cache_valid_time': 0, 'deb': None, 'default_release': None, 'dpkg_options': 'force-confdef,force-confold', 'force': True, 'force_apt_get': False, 'install_recommends': None, 'name': ['ceph', 'ceph-common', 'libcephfs1', 'radosgw', 'python-ceph', 'python-rados', 'python-cephfs', 'python-rbd', 'librbd1', 'librados2', 'ceph-fs-common-dbg', 'ceph-fs-common', 'openmpi-common'], 'only_upgrade': False, 'package': ['ceph', 'ceph-common', 'libcephfs1', 'radosgw', 'python-ceph', 'python-rados', 'python-cephfs', 'python-rbd', 'librbd1', 'librados2', 'ceph-fs-common-dbg', 'ceph-fs-common', 'openmpi-common'], 'policy_rc_d': None, 'purge': False, 'state': 'absent', 'update_cache': None, 'update_cache_retries': 5, 'update_cache_retry_max_delay': 12, 'upgrade': None}}, 'msg': "'apt-get remove 'librbd1' 'librados2'' failed: W: --force-yes is deprecated, use one of the options starting with --allow instead.\nE: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 14185 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n", 'rc': 100, 'stderr': 'W: --force-yes is deprecated, use one of the options starting with --allow instead.\nE: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 14185 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'stderr_lines': ['W: --force-yes is deprecated, use one of the options starting with --allow instead.', 'E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 14185 (apt-get)', 'E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?'], 'stdout': '', 'stdout_lines': []}} |
pass | 7135835 | 2023-01-24 16:09:54 | 2023-01-24 21:25:27 | 2023-01-24 21:52:15 | 0:26:48 | 0:17:17 | 0:09:31 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli} | 1 | |
fail | 7135836 | 2023-01-24 16:09:56 | 2023-01-24 21:25:27 | 2023-01-24 22:21:06 | 0:55:39 | 0:45:43 | 0:09:56 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
Failure Reason:
"2023-01-24T21:44:19.011033+0000 mgr.x (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135837 | 2023-01-24 16:09:57 | 2023-01-24 21:26:08 | 2023-01-24 21:45:35 | 0:19:27 | 0:08:21 | 0:11:06 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
pass | 7135838 | 2023-01-24 16:09:58 | 2023-01-24 21:26:18 | 2023-01-24 22:03:53 | 0:37:35 | 0:24:03 | 0:13:32 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
fail | 7135839 | 2023-01-24 16:09:59 | 2023-01-24 21:30:39 | 2023-01-24 21:51:52 | 0:21:13 | 0:05:44 | 0:15:29 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason:
Command failed on smithi016 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
pass | 7135840 | 2023-01-24 16:10:00 | 2023-01-24 21:32:49 | 2023-01-24 21:54:50 | 0:22:01 | 0:11:52 | 0:10:09 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135841 | 2023-01-24 16:10:01 | 2023-01-24 21:33:10 | 2023-01-24 23:15:14 | 1:42:04 | 1:35:28 | 0:06:36 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
Failure Reason:
"2023-01-24T21:50:17.449437+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135842 | 2023-01-24 16:10:02 | 2023-01-24 21:33:30 | 2023-01-24 22:22:47 | 0:49:17 | 0:42:32 | 0:06:45 | smithi | main | rhel | 8.4 | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 2 | |
Failure Reason:
"2023-01-24T21:56:06.135126+0000 mon.a (mon.0) 797 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
pass | 7135843 | 2023-01-24 16:10:04 | 2023-01-24 21:33:51 | 2023-01-24 23:17:38 | 1:43:47 | 1:35:09 | 0:08:38 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
fail | 7135844 | 2023-01-24 16:10:05 | 2023-01-24 21:34:11 | 2023-01-24 22:01:40 | 0:27:29 | 0:21:32 | 0:05:57 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
fail | 7135845 | 2023-01-24 16:10:06 | 2023-01-24 21:34:21 | 2023-01-24 22:10:04 | 0:35:43 | 0:25:10 | 0:10:33 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-agent-big} | 2 | |
Failure Reason:
"2023-01-24T21:54:02.565148+0000 mgr.x (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135846 | 2023-01-24 16:10:07 | 2023-01-24 21:35:12 | 2023-01-24 22:15:12 | 0:40:00 | 0:28:34 | 0:11:26 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason:
"2023-01-24T21:55:05.812247+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135847 | 2023-01-24 16:10:08 | 2023-01-24 21:36:02 | 2023-01-25 00:16:39 | 2:40:37 | 2:29:32 | 0:11:05 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135848 | 2023-01-24 16:10:09 | 2023-01-24 21:36:32 | 2023-01-24 22:06:50 | 0:30:18 | 0:19:28 | 0:10:50 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7135849 | 2023-01-24 16:10:11 | 2023-01-24 21:36:53 | 2023-01-24 21:57:53 | 0:21:00 | 0:13:35 | 0:07:25 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135850 | 2023-01-24 16:10:12 | 2023-01-24 21:39:03 | 2023-01-24 22:08:43 | 0:29:40 | 0:19:26 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 7135851 | 2023-01-24 16:10:13 | 2023-01-24 21:39:04 | 2023-01-24 22:05:06 | 0:26:02 | 0:14:22 | 0:11:40 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 7135852 | 2023-01-24 16:10:14 | 2023-01-24 21:41:04 | 2023-01-24 22:05:08 | 0:24:04 | 0:13:56 | 0:10:08 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-agent-small} | 2 | |
fail | 7135853 | 2023-01-24 16:10:15 | 2023-01-24 21:41:25 | 2023-01-24 22:22:11 | 0:40:46 | 0:33:49 | 0:06:57 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big} | 2 | |
Failure Reason:
"2023-01-24T21:55:40.261872+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
fail | 7135854 | 2023-01-24 16:10:16 | 2023-01-24 21:41:25 | 2023-01-24 22:16:59 | 0:35:34 | 0:23:23 | 0:12:11 | smithi | main | centos | 8.stream | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 2 | |
Failure Reason:
"2023-01-24T22:09:03.400342+0000 mon.a (mon.0) 209 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
pass | 7135855 | 2023-01-24 16:10:18 | 2023-01-24 21:44:06 | 2023-01-24 22:22:29 | 0:38:23 | 0:27:13 | 0:11:10 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7135856 | 2023-01-24 16:10:19 | 2023-01-24 21:45:16 | 2023-01-24 22:08:09 | 0:22:53 | 0:14:34 | 0:08:19 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135857 | 2023-01-24 16:10:20 | 2023-01-24 21:45:16 | 2023-01-24 22:30:08 | 0:44:52 | 0:31:50 | 0:13:02 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
"2023-01-24T22:18:27.748946+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135858 | 2023-01-24 16:10:21 | 2023-01-24 21:45:37 | 2023-01-24 22:05:15 | 0:19:38 | 0:08:44 | 0:10:54 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
fail | 7135859 | 2023-01-24 16:10:22 | 2023-01-24 21:45:37 | 2023-01-24 22:17:44 | 0:32:07 | 0:20:25 | 0:11:42 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
Failure Reason:
"2023-01-24T22:13:13.454322+0000 mon.a (mon.0) 14 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
fail | 7135860 | 2023-01-24 16:10:23 | 2023-01-24 21:46:17 | 2023-01-24 22:09:30 | 0:23:13 | 0:15:45 | 0:07:28 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason:
"2023-01-24T22:04:19.475653+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
pass | 7135861 | 2023-01-24 16:10:25 | 2023-01-24 21:46:28 | 2023-01-24 22:24:48 | 0:38:20 | 0:27:05 | 0:11:15 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
fail | 7135862 | 2023-01-24 16:10:26 | 2023-01-24 21:46:38 | 2023-01-24 22:16:10 | 0:29:32 | 0:20:41 | 0:08:51 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/module_selftest} | 2 | |
Failure Reason:
Test failure: test_prometheus (tasks.mgr.test_module_selftest.TestModuleSelftest) |
||||||||||||||
fail | 7135863 | 2023-01-24 16:10:27 | 2023-01-24 21:46:59 | 2023-01-24 22:12:52 | 0:25:53 | 0:18:57 | 0:06:56 | smithi | main | rhel | 8.4 | rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135864 | 2023-01-24 16:10:28 | 2023-01-24 21:46:59 | 2023-01-24 22:06:21 | 0:19:22 | 0:11:10 | 0:08:12 | smithi | main | centos | 8.stream | rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135865 | 2023-01-24 16:10:29 | 2023-01-24 21:47:39 | 2023-01-24 22:28:51 | 0:41:12 | 0:33:21 | 0:07:51 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason:
"2023-01-24T22:12:43.692085+0000 mon.a (mon.0) 1240 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
pass | 7135866 | 2023-01-24 16:10:30 | 2023-01-24 21:49:40 | 2023-01-24 22:14:45 | 0:25:05 | 0:18:02 | 0:07:03 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135867 | 2023-01-24 16:10:32 | 2023-01-24 21:49:40 | 2023-01-24 22:16:35 | 0:26:55 | 0:17:36 | 0:09:19 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mgr} | 1 | |
fail | 7135868 | 2023-01-24 16:10:33 | 2023-01-24 21:49:40 | 2023-01-24 22:21:58 | 0:32:18 | 0:22:56 | 0:09:22 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason:
"2023-01-24T22:17:45.248035+0000 mgr.y (mgr.4100) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135869 | 2023-01-24 16:10:34 | 2023-01-24 21:49:41 | 2023-01-24 22:33:05 | 0:43:24 | 0:33:31 | 0:09:53 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
Failure Reason:
"2023-01-24T22:08:50.181975+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135870 | 2023-01-24 16:10:35 | 2023-01-24 21:50:11 | 2023-01-24 22:23:57 | 0:33:46 | 0:25:37 | 0:08:09 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
Failure Reason:
"2023-01-24T22:08:18.967039+0000 mgr.y (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135871 | 2023-01-24 16:10:36 | 2023-01-24 21:50:21 | 2023-01-24 22:30:46 | 0:40:25 | 0:26:56 | 0:13:29 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
fail | 7135872 | 2023-01-24 16:10:37 | 2023-01-24 21:52:02 | 2023-01-24 22:25:40 | 0:33:38 | 0:23:03 | 0:10:35 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
Failure Reason:
"2023-01-24T22:11:18.015749+0000 mgr.x (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135873 | 2023-01-24 16:10:39 | 2023-01-24 21:52:22 | 2023-01-24 22:11:21 | 0:18:59 | 0:12:05 | 0:06:54 | smithi | main | rhel | 8.4 | rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7135874 | 2023-01-24 16:10:40 | 2023-01-24 21:52:22 | 2023-01-24 22:31:41 | 0:39:19 | 0:25:05 | 0:14:14 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7135875 | 2023-01-24 16:10:41 | 2023-01-24 21:54:53 | 2023-01-25 00:39:54 | 2:45:01 | 2:31:36 | 0:13:25 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135876 | 2023-01-24 16:10:42 | 2023-01-24 21:57:54 | 2023-01-24 22:29:02 | 0:31:08 | 0:23:56 | 0:07:12 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-snaps-balanced} | 2 | |
Failure Reason:
"2023-01-24T22:15:24.421825+0000 mgr.x (mgr.4120) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135877 | 2023-01-24 16:10:43 | 2023-01-24 21:58:34 | 2023-01-24 22:40:42 | 0:42:08 | 0:33:21 | 0:08:47 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason:
"2023-01-24T22:18:22.533010+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135878 | 2023-01-24 16:10:44 | 2023-01-24 22:01:45 | 2023-01-24 22:33:14 | 0:31:29 | 0:21:46 | 0:09:43 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
Failure Reason:
"2023-01-24T22:18:45.567578+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135879 | 2023-01-24 16:10:46 | 2023-01-24 22:03:56 | 2023-01-24 22:22:57 | 0:19:01 | 0:13:21 | 0:05:40 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/off mon_election/classic task/test_adoption} | 1 | |
pass | 7135880 | 2023-01-24 16:10:47 | 2023-01-24 22:03:56 | 2023-01-24 22:39:07 | 0:35:11 | 0:23:32 | 0:11:39 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 7135881 | 2023-01-24 16:10:48 | 2023-01-24 22:05:16 | 2023-01-24 22:37:26 | 0:32:10 | 0:22:24 | 0:09:46 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135882 | 2023-01-24 16:10:49 | 2023-01-24 22:05:17 | 2023-01-24 22:33:22 | 0:28:05 | 0:18:12 | 0:09:53 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135883 | 2023-01-24 16:10:50 | 2023-01-24 22:05:17 | 2023-01-24 22:26:36 | 0:21:19 | 0:10:43 | 0:10:36 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
pass | 7135884 | 2023-01-24 16:10:51 | 2023-01-24 22:05:17 | 2023-01-24 22:37:55 | 0:32:38 | 0:20:30 | 0:12:08 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-snaps} | 2 | |
fail | 7135885 | 2023-01-24 16:10:53 | 2023-01-24 22:05:38 | 2023-01-24 22:36:31 | 0:30:53 | 0:20:09 | 0:10:44 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135886 | 2023-01-24 16:10:54 | 2023-01-24 22:06:38 | 2023-01-24 22:40:14 | 0:33:36 | 0:23:36 | 0:10:00 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7135887 | 2023-01-24 16:10:55 | 2023-01-24 22:06:58 | 2023-01-24 22:28:10 | 0:21:12 | 0:09:28 | 0:11:44 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135888 | 2023-01-24 16:10:56 | 2023-01-24 22:08:19 | 2023-01-24 22:44:36 | 0:36:17 | 0:26:51 | 0:09:26 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 | |
Failure Reason:
"2023-01-24T22:27:49.278369+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135889 | 2023-01-24 16:10:57 | 2023-01-24 22:09:39 | 2023-01-24 22:31:01 | 0:21:22 | 0:15:37 | 0:05:45 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7135890 | 2023-01-24 16:10:58 | 2023-01-24 22:10:10 | 2023-01-24 22:33:30 | 0:23:20 | 0:10:45 | 0:12:35 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} | 2 | |
pass | 7135891 | 2023-01-24 16:11:00 | 2023-01-24 22:11:30 | 2023-01-24 22:30:25 | 0:18:55 | 0:08:23 | 0:10:32 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135892 | 2023-01-24 16:11:01 | 2023-01-24 22:12:41 | 2023-01-24 22:44:33 | 0:31:52 | 0:21:45 | 0:10:07 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
fail | 7135893 | 2023-01-24 16:11:02 | 2023-01-24 22:13:01 | 2023-01-24 22:42:21 | 0:29:20 | 0:17:34 | 0:11:46 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache} | 2 | |
Failure Reason:
"2023-01-24T22:34:13.964462+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135894 | 2023-01-24 16:11:03 | 2023-01-24 22:14:42 | 2023-01-24 22:52:29 | 0:37:47 | 0:29:44 | 0:08:03 | smithi | main | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
Failure Reason:
"2023-01-24T22:31:59.424548+0000 mgr.z (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135895 | 2023-01-24 16:11:04 | 2023-01-24 22:15:22 | 2023-01-24 22:37:12 | 0:21:50 | 0:14:29 | 0:07:21 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7135896 | 2023-01-24 16:11:05 | 2023-01-24 22:16:13 | 2023-01-24 22:37:33 | 0:21:20 | 0:11:51 | 0:09:29 | smithi | main | centos | 8.stream | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135897 | 2023-01-24 16:11:06 | 2023-01-24 22:16:43 | 2023-01-24 22:35:43 | 0:19:00 | 0:12:22 | 0:06:38 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi195 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=49f8fb05584886826e8eade75f7105fba754560c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
||||||||||||||
fail | 7135898 | 2023-01-24 16:11:08 | 2023-01-24 22:17:03 | 2023-01-24 23:03:50 | 0:46:47 | 0:36:58 | 0:09:49 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135899 | 2023-01-24 16:11:09 | 2023-01-24 22:17:04 | 2023-01-24 23:33:05 | 1:16:01 | 1:08:02 | 0:07:59 | smithi | main | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} | 1 | |
pass | 7135900 | 2023-01-24 16:11:10 | 2023-01-24 22:17:54 | 2023-01-24 22:42:52 | 0:24:58 | 0:11:29 | 0:13:29 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-mixed} | 2 | |
fail | 7135901 | 2023-01-24 16:11:11 | 2023-01-24 22:21:15 | 2023-01-24 22:59:48 | 0:38:33 | 0:27:44 | 0:10:49 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
Failure Reason:
"2023-01-24T22:40:16.203457+0000 mgr.y (mgr.4099) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135902 | 2023-01-24 16:11:12 | 2023-01-24 22:22:05 | 2023-01-24 22:41:50 | 0:19:45 | 0:09:57 | 0:09:48 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
fail | 7135903 | 2023-01-24 16:11:13 | 2023-01-24 22:22:16 | 2023-01-24 22:55:56 | 0:33:40 | 0:24:19 | 0:09:21 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
Failure Reason:
"2023-01-24T22:41:36.098358+0000 mgr.x (mgr.4109) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135904 | 2023-01-24 16:11:14 | 2023-01-24 22:22:36 | 2023-01-24 22:40:39 | 0:18:03 | 0:07:33 | 0:10:30 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135905 | 2023-01-24 16:11:15 | 2023-01-24 22:22:36 | 2023-01-24 22:41:33 | 0:18:57 | 0:13:10 | 0:05:47 | smithi | main | rhel | 8.4 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135906 | 2023-01-24 16:11:17 | 2023-01-24 22:22:37 | 2023-01-24 22:57:36 | 0:34:59 | 0:28:18 | 0:06:41 | smithi | main | rhel | 8.4 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=49f8fb05584886826e8eade75f7105fba754560c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
||||||||||||||
fail | 7135907 | 2023-01-24 16:11:18 | 2023-01-24 22:22:37 | 2023-01-24 22:52:40 | 0:30:03 | 0:19:02 | 0:11:01 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135908 | 2023-01-24 16:11:19 | 2023-01-24 22:22:57 | 2023-01-24 22:57:04 | 0:34:07 | 0:24:08 | 0:09:59 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7135909 | 2023-01-24 16:11:20 | 2023-01-24 22:22:58 | 2023-01-24 22:51:31 | 0:28:33 | 0:14:58 | 0:13:35 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 7135910 | 2023-01-24 16:11:21 | 2023-01-24 22:23:58 | 2023-01-24 22:47:53 | 0:23:55 | 0:13:31 | 0:10:24 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135911 | 2023-01-24 16:11:22 | 2023-01-24 22:24:48 | 2023-01-24 23:02:57 | 0:38:09 | 0:28:13 | 0:09:56 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
Failure Reason:
"2023-01-24T22:44:38.408869+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135912 | 2023-01-24 16:11:23 | 2023-01-24 22:25:49 | 2023-01-24 22:46:50 | 0:21:01 | 0:13:48 | 0:07:13 | smithi | main | rhel | 8.4 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135913 | 2023-01-24 16:11:24 | 2023-01-24 22:25:49 | 2023-01-24 22:57:54 | 0:32:05 | 0:21:20 | 0:10:45 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 7135914 | 2023-01-24 16:11:26 | 2023-01-24 22:26:40 | 2023-01-24 22:46:17 | 0:19:37 | 0:07:38 | 0:11:59 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 2 | |
pass | 7135915 | 2023-01-24 16:11:27 | 2023-01-24 22:28:20 | 2023-01-24 23:06:58 | 0:38:38 | 0:27:18 | 0:11:20 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 7135916 | 2023-01-24 16:11:28 | 2023-01-24 22:29:00 | 2023-01-24 22:57:44 | 0:28:44 | 0:20:14 | 0:08:30 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_remove_from_blocklist (tasks.mgr.dashboard.test_auth.AuthTest) |
||||||||||||||
fail | 7135917 | 2023-01-24 16:11:29 | 2023-01-24 22:29:11 | 2023-01-24 22:44:39 | 0:15:28 | 0:05:36 | 0:09:52 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | |
Failure Reason:
Command failed on smithi026 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
||||||||||||||
fail | 7135918 | 2023-01-24 16:11:30 | 2023-01-24 22:29:11 | 2023-01-24 23:02:30 | 0:33:19 | 0:21:31 | 0:11:48 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
"2023-01-24T22:57:27.629931+0000 mon.a (mon.0) 206 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
fail | 7135919 | 2023-01-24 16:11:31 | 2023-01-24 22:30:12 | 2023-01-24 23:20:28 | 0:50:16 | 0:39:32 | 0:10:44 | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
||||||||||||||
fail | 7135920 | 2023-01-24 16:11:32 | 2023-01-24 22:30:32 | 2023-01-24 23:42:01 | 1:11:29 | 1:04:02 | 0:07:27 | smithi | main | rhel | 8.4 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
"2023-01-24T22:53:16.179749+0000 mon.a (mon.0) 416 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
fail | 7135921 | 2023-01-24 16:11:34 | 2023-01-24 22:30:52 | 2023-01-25 01:04:03 | 2:33:11 | 2:23:39 | 0:09:32 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
"2023-01-24T23:00:24.449184+0000 mgr.y (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135922 | 2023-01-24 16:11:35 | 2023-01-24 22:31:03 | 2023-01-24 22:53:37 | 0:22:34 | 0:15:35 | 0:06:59 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason:
"2023-01-24T22:50:17.450068+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135923 | 2023-01-24 16:11:36 | 2023-01-24 22:31:43 | 2023-01-24 23:11:08 | 0:39:25 | 0:28:26 | 0:10:59 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_api_tests} | 2 | |
pass | 7135924 | 2023-01-24 16:11:37 | 2023-01-24 22:33:14 | 2023-01-24 22:49:41 | 0:16:27 | 0:06:17 | 0:10:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7135925 | 2023-01-24 16:11:38 | 2023-01-24 22:33:14 | 2023-01-24 22:51:16 | 0:18:02 | 0:08:32 | 0:09:30 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
fail | 7135926 | 2023-01-24 16:11:39 | 2023-01-24 22:33:24 | 2023-01-24 23:16:28 | 0:43:04 | 0:37:19 | 0:05:45 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason:
"2023-01-24T22:50:15.773071+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
fail | 7135927 | 2023-01-24 16:11:40 | 2023-01-24 22:33:25 | 2023-01-24 22:56:25 | 0:23:00 | 0:14:09 | 0:08:51 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/readwrite} | 2 | |
Failure Reason:
"2023-01-24T22:51:07.260641+0000 mgr.y (mgr.4106) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log |
||||||||||||||
pass | 7135928 | 2023-01-24 16:11:41 | 2023-01-24 22:33:35 | 2023-01-24 22:56:29 | 0:22:54 | 0:12:33 | 0:10:21 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135929 | 2023-01-24 16:11:42 | 2023-01-24 22:35:45 | 2023-01-24 23:14:15 | 0:38:30 | 0:30:16 | 0:08:14 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason:
"2023-01-24T23:00:00.742841+0000 mon.a (mon.0) 1155 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
fail | 7135930 | 2023-01-24 16:11:43 | 2023-01-24 22:37:16 | 2023-01-24 23:56:18 | 1:19:02 | 1:09:50 | 0:09:12 | smithi | main | centos | 8.stream | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
"2023-01-24T23:02:49.484523+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log |
||||||||||||||
fail | 7135931 | 2023-01-24 16:11:44 | 2023-01-24 22:37:16 | 2023-01-24 23:04:13 | 0:26:57 | 0:18:04 | 0:08:53 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
timeout expired in wait_until_healthy
pass | 7135932 | 2023-01-24 16:11:45 | 2023-01-24 22:37:17 | 2023-01-24 22:59:42 | 0:22:25 | 0:12:00 | 0:10:25 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-stupid} supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
pass | 7135933 | 2023-01-24 16:11:47 | 2023-01-24 22:37:37 | 2023-01-24 23:26:58 | 0:49:21 | 0:37:21 | 0:12:00 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 7135934 | 2023-01-24 16:11:48 | 2023-01-24 22:39:18 | 2023-01-24 23:06:18 | 0:27:00 | 0:14:55 | 0:12:05 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7135935 | 2023-01-24 16:11:49 | 2023-01-24 22:40:18 | 2023-01-24 23:23:15 | 0:42:57 | 0:36:17 | 0:06:40 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
Failure Reason:
"2023-01-24T22:56:58.075346+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135936 | 2023-01-24 16:11:50 | 2023-01-24 22:40:18 | 2023-01-24 23:13:42 | 0:33:24 | 0:27:00 | 0:06:24 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
Failure Reason:
"2023-01-24T22:57:20.476910+0000 mgr.x (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135937 | 2023-01-24 16:11:51 | 2023-01-24 22:40:49 | 2023-01-24 23:08:39 | 0:27:50 | 0:18:43 | 0:09:07 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy
pass | 7135938 | 2023-01-24 16:11:52 | 2023-01-24 22:41:39 | 2023-01-24 23:05:45 | 0:24:06 | 0:13:35 | 0:10:31 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135939 | 2023-01-24 16:11:53 | 2023-01-24 22:41:59 | 2023-01-24 23:55:36 | 1:13:37 | 1:07:46 | 0:05:51 | smithi | main | rhel | 8.4 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} | 1 | |
pass | 7135940 | 2023-01-24 16:11:54 | 2023-01-24 22:42:30 | 2023-01-24 23:00:08 | 0:17:38 | 0:11:45 | 0:05:53 | smithi | main | rhel | 8.4 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7135941 | 2023-01-24 16:11:55 | 2023-01-24 22:42:30 | 2023-01-24 23:23:58 | 0:41:28 | 0:31:18 | 0:10:10 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
Failure Reason:
"2023-01-24T23:01:03.822073+0000 mgr.y (mgr.4115) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135942 | 2023-01-24 16:11:56 | 2023-01-24 22:43:01 | 2023-01-25 00:13:38 | 1:30:37 | 1:22:06 | 0:08:31 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/radosbench} | 2 | |
Failure Reason:
"2023-01-24T23:01:31.929176+0000 mgr.y (mgr.4113) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135943 | 2023-01-24 16:11:57 | 2023-01-24 22:44:41 | 2023-01-24 23:14:22 | 0:29:41 | 0:22:11 | 0:07:30 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy
fail | 7135944 | 2023-01-24 16:11:58 | 2023-01-24 22:44:41 | 2023-01-25 00:15:43 | 1:31:02 | 1:23:49 | 0:07:13 | smithi | main | rhel | 8.4 | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
"2023-01-24T23:06:59.915957+0000 mon.a (mon.0) 290 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
pass | 7135945 | 2023-01-24 16:11:59 | 2023-01-24 22:44:42 | 2023-01-24 23:03:46 | 0:19:04 | 0:08:33 | 0:10:31 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7135946 | 2023-01-24 16:12:00 | 2023-01-24 22:46:02 | 2023-01-24 23:06:35 | 0:20:33 | 0:10:08 | 0:10:25 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 7135947 | 2023-01-24 16:12:01 | 2023-01-24 22:46:22 | 2023-01-24 23:12:02 | 0:25:40 | 0:15:08 | 0:10:32 | smithi | main | centos | 8.stream | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 2 | |
Failure Reason:
"2023-01-24T23:04:41.518891+0000 mgr.y (mgr.4111) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135948 | 2023-01-24 16:12:03 | 2023-01-24 22:46:53 | 2023-01-24 23:18:21 | 0:31:28 | 0:16:15 | 0:15:13 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
fail | 7135949 | 2023-01-24 16:12:04 | 2023-01-24 22:49:43 | 2023-01-24 23:17:22 | 0:27:39 | 0:16:02 | 0:11:37 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect} | 2 | |
Failure Reason:
"2023-01-24T23:11:21.139232+0000 mgr.y (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135950 | 2023-01-24 16:12:05 | 2023-01-24 22:51:34 | 2023-01-24 23:30:44 | 0:39:10 | 0:29:42 | 0:09:28 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} | 1 | |
fail | 7135951 | 2023-01-24 16:12:06 | 2023-01-24 22:51:34 | 2023-01-24 23:13:40 | 0:22:06 | 0:15:17 | 0:06:49 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason:
"2023-01-24T23:10:06.398570+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135952 | 2023-01-24 16:12:07 | 2023-01-24 22:52:35 | 2023-01-24 23:15:47 | 0:23:12 | 0:13:30 | 0:09:42 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
Failure Reason:
"2023-01-24T23:10:52.723208+0000 mgr.x (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135953 | 2023-01-24 16:12:08 | 2023-01-24 22:52:45 | 2023-01-25 00:17:28 | 1:24:43 | 1:17:16 | 0:07:27 | smithi | main | rhel | 8.4 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
"2023-01-24T23:16:38.508116+0000 mon.a (mon.0) 324 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
fail | 7135954 | 2023-01-24 16:12:09 | 2023-01-24 22:53:46 | 2023-01-25 00:02:40 | 1:08:54 | 1:01:02 | 0:07:52 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |
Failure Reason:
"2023-01-24T23:18:54.474959+0000 mon.h (mon.5) 1159 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
pass | 7135955 | 2023-01-24 16:12:10 | 2023-01-24 22:53:46 | 2023-01-24 23:11:59 | 0:18:13 | 0:10:30 | 0:07:43 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135956 | 2023-01-24 16:12:11 | 2023-01-24 22:53:46 | 2023-01-24 23:31:22 | 0:37:36 | 0:26:34 | 0:11:02 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
Failure Reason:
"2023-01-24T23:14:50.551849+0000 mgr.x (mgr.4105) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135957 | 2023-01-24 16:12:13 | 2023-01-24 22:56:07 | 2023-01-24 23:25:24 | 0:29:17 | 0:18:46 | 0:10:31 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy
fail | 7135958 | 2023-01-24 16:12:14 | 2023-01-24 22:56:27 | 2023-01-24 23:24:11 | 0:27:44 | 0:20:06 | 0:07:38 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |
Failure Reason:
"2023-01-24T23:14:05.264604+0000 mgr.y (mgr.4108) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135959 | 2023-01-24 16:12:15 | 2023-01-24 22:57:07 | 2023-01-24 23:20:57 | 0:23:50 | 0:13:36 | 0:10:14 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7135960 | 2023-01-24 16:12:16 | 2023-01-24 22:57:48 | 2023-01-24 23:18:43 | 0:20:55 | 0:11:05 | 0:09:50 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135961 | 2023-01-24 16:12:17 | 2023-01-24 22:57:48 | 2023-01-24 23:33:05 | 0:35:17 | 0:26:25 | 0:08:52 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
"2023-01-24T23:23:04.283514+0000 mon.a (mon.0) 95 : cluster [WRN] Health check failed: 1 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log
pass | 7135962 | 2023-01-24 16:12:18 | 2023-01-24 22:57:58 | 2023-01-24 23:18:50 | 0:20:52 | 0:11:28 | 0:09:24 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7135963 | 2023-01-24 16:12:19 | 2023-01-24 22:57:59 | 2023-01-24 23:19:14 | 0:21:15 | 0:12:59 | 0:08:16 | smithi | main | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/workunits} | 2 | |
Failure Reason:
"2023-01-24T23:16:48.616471+0000 mgr.x (mgr.4103) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
fail | 7135964 | 2023-01-24 16:12:20 | 2023-01-24 22:59:49 | 2023-01-24 23:26:58 | 0:27:09 | 0:21:13 | 0:05:56 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy
pass | 7135965 | 2023-01-24 16:12:21 | 2023-01-24 22:59:49 | 2023-01-24 23:33:52 | 0:34:03 | 0:20:40 | 0:13:23 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects} | 2 | |
fail | 7135966 | 2023-01-24 16:12:22 | 2023-01-24 23:00:40 | 2023-01-24 23:28:01 | 0:27:21 | 0:15:22 | 0:11:59 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} | 2 | |
Failure Reason:
"2023-01-24T23:21:34.943948+0000 mgr.x (mgr.4104) 1 : cluster [ERR] Failed to load ceph-mgr modules: prometheus" in cluster log
pass | 7135967 | 2023-01-24 16:12:24 | 2023-01-24 23:03:00 | 2023-01-24 23:24:12 | 0:21:12 | 0:09:57 | 0:11:15 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |