User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2023-01-24 21:45:32 | 2023-01-29 09:15:49 | 2023-01-30 01:53:18 | 16:37:29 | rados | main | smithi | 510284b | 263 | 55 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7136232 | 2023-01-24 21:46:48 | 2023-01-29 09:15:49 | 2023-01-29 09:38:02 | 0:22:13 | 0:12:48 | 0:09:25 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/workunits} | 2 | |
pass | 7136233 | 2023-01-24 21:46:50 | 2023-01-29 09:36:28 | 598 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 |
pass | 7136234 | 2023-01-24 21:46:51 | 2023-01-29 09:16:50 | 2023-01-29 09:48:25 | 0:31:35 | 0:24:21 | 0:07:14 | smithi | main | rhel | 8.6 | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136235 | 2023-01-24 21:46:52 | 2023-01-29 09:18:00 | 2023-01-29 09:56:39 | 0:38:39 | 0:26:48 | 0:11:51 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: "/var/log/ceph/f1b6b488-9fb8-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T09:43:48.413+0000 7f7e0f84f700 0 log_channel(cluster) log [WRN] : Health check failed: 2/5 mons down, quorum a,e,c (MON_DOWN)" in cluster log
pass | 7136236 | 2023-01-24 21:46:53 | 2023-01-29 09:18:11 | 2023-01-29 09:53:01 | 0:34:50 | 0:22:53 | 0:11:57 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7136237 | 2023-01-24 21:46:54 | 2023-01-29 09:18:11 | 2023-01-29 09:58:30 | 0:40:19 | 0:32:33 | 0:07:46 | smithi | main | centos | 8.stream | rados/objectstore/{backends/objectstore-filestore-memstore supported-random-distro$/{centos_8}} | 1 | |
pass | 7136238 | 2023-01-24 21:46:56 | 2023-01-29 09:18:11 | 2023-01-29 10:11:25 | 0:53:14 | 0:45:56 | 0:07:18 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
pass | 7136239 | 2023-01-24 21:46:57 | 2023-01-29 09:18:12 | 2023-01-29 09:56:05 | 0:37:53 | 0:28:35 | 0:09:18 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_api_tests} | 2 | |
pass | 7136240 | 2023-01-24 21:46:58 | 2023-01-29 09:18:22 | 2023-01-29 10:00:05 | 0:41:43 | 0:31:15 | 0:10:28 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7136241 | 2023-01-24 21:46:59 | 2023-01-29 09:18:22 | 2023-01-29 09:56:19 | 0:37:57 | 0:26:46 | 0:11:11 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 7136242 | 2023-01-24 21:47:00 | 2023-01-29 09:18:42 | 2023-01-29 09:42:18 | 0:23:36 | 0:11:36 | 0:12:00 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7136243 | 2023-01-24 21:47:01 | 2023-01-29 09:20:43 | 2023-01-29 09:49:11 | 0:28:28 | 0:17:38 | 0:10:50 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "/var/log/ceph/41248028-9fb8-11ed-9e56-001a4aab830c/ceph-mon.smithi153.log:2023-01-29T09:45:07.671+0000 7f088a0c7700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136244 | 2023-01-24 21:47:03 | 2023-01-29 09:21:13 | 2023-01-29 09:40:49 | 0:19:36 | 0:08:30 | 0:11:06 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136245 | 2023-01-24 21:47:04 | 2023-01-29 09:21:14 | 2023-01-29 09:48:08 | 0:26:54 | 0:16:08 | 0:10:46 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136246 | 2023-01-24 21:47:05 | 2023-01-29 09:24:54 | 2023-01-29 10:26:24 | 1:01:30 | 0:52:28 | 0:09:02 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/radosbench} | 2 | |
pass | 7136247 | 2023-01-24 21:47:06 | 2023-01-29 09:25:15 | 2023-01-29 10:07:00 | 0:41:45 | 0:33:57 | 0:07:48 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7136248 | 2023-01-24 21:47:08 | 2023-01-29 09:26:05 | 2023-01-29 09:46:59 | 0:20:54 | 0:13:58 | 0:06:56 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_adoption} | 1 | |
pass | 7136249 | 2023-01-24 21:47:09 | 2023-01-29 09:26:36 | 2023-01-29 10:09:23 | 0:42:47 | 0:31:13 | 0:11:34 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136250 | 2023-01-24 21:47:10 | 2023-01-29 09:28:16 | 2023-01-29 09:53:49 | 0:25:33 | 0:18:51 | 0:06:42 | smithi | main | rhel | 8.6 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7136251 | 2023-01-24 21:47:11 | 2023-01-29 09:28:17 | 2023-01-29 10:08:42 | 0:40:25 | 0:29:38 | 0:10:47 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
pass | 7136252 | 2023-01-24 21:47:12 | 2023-01-29 09:28:27 | 2023-01-29 09:52:14 | 0:23:47 | 0:14:57 | 0:08:50 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/redirect} | 2 | |
fail | 7136253 | 2023-01-24 21:47:13 | 2023-01-29 09:29:27 | 2023-01-29 10:06:40 | 0:37:13 | 0:23:25 | 0:13:48 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "/var/log/ceph/da1c341e-9fb9-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T09:58:12.882+0000 7f1ad3b58700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7136254 | 2023-01-24 21:47:15 | 2023-01-29 09:32:38 | 2023-01-29 09:53:53 | 0:21:15 | 0:10:10 | 0:11:05 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 7136255 | 2023-01-24 21:47:16 | 2023-01-29 09:34:09 | 2023-01-29 10:08:47 | 0:34:38 | 0:23:22 | 0:11:16 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: "/var/log/ceph/e6f5b074-9fba-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T09:57:08.298+0000 7f7aabfec700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7136256 | 2023-01-24 21:47:17 | 2023-01-29 09:35:49 | 2023-01-29 10:00:46 | 0:24:57 | 0:16:15 | 0:08:42 | smithi | main | centos | 8.stream | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136257 | 2023-01-24 21:47:18 | 2023-01-29 09:35:59 | 2023-01-29 10:07:45 | 0:31:46 | 0:21:26 | 0:10:20 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136258 | 2023-01-24 21:47:19 | 2023-01-29 09:36:10 | 2023-01-29 09:58:21 | 0:22:11 | 0:11:16 | 0:10:55 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136259 | 2023-01-24 21:47:20 | 2023-01-29 09:36:30 | 2023-01-29 10:18:16 | 0:41:46 | 0:31:33 | 0:10:13 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136260 | 2023-01-24 21:47:22 | 2023-01-29 09:36:30 | 2023-01-29 10:00:45 | 0:24:15 | 0:13:18 | 0:10:57 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 7136261 | 2023-01-24 21:47:23 | 2023-01-29 09:36:41 | 2023-01-29 10:01:47 | 0:25:06 | 0:15:31 | 0:09:35 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "/var/log/ceph/ba783224-9fba-11ed-9e56-001a4aab830c/ceph-mon.smithi040.log:2023-01-29T09:58:37.888+0000 7f5d8c040700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136262 | 2023-01-24 21:47:24 | 2023-01-29 09:36:51 | 2023-01-29 10:05:17 | 0:28:26 | 0:17:50 | 0:10:36 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} | 2 | |
pass | 7136263 | 2023-01-24 21:47:25 | 2023-01-29 09:38:11 | 2023-01-29 10:14:12 | 0:36:01 | 0:27:44 | 0:08:17 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 3 | |
pass | 7136264 | 2023-01-24 21:47:27 | 2023-01-29 09:40:22 | 2023-01-29 09:59:33 | 0:19:11 | 0:12:57 | 0:06:14 | smithi | main | rhel | 8.6 | rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136265 | 2023-01-24 21:47:28 | 2023-01-29 09:40:22 | 2023-01-29 10:20:33 | 0:40:11 | 0:28:28 | 0:11:43 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/dashboard} | 2 | |
Failure Reason: Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)
pass | 7136266 | 2023-01-24 21:47:29 | 2023-01-29 09:40:53 | 2023-01-29 10:04:57 | 0:24:04 | 0:11:03 | 0:13:01 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{ubuntu_latest} tasks/crash} | 2 | |
pass | 7136267 | 2023-01-24 21:47:30 | 2023-01-29 09:42:33 | 2023-01-29 10:01:51 | 0:19:18 | 0:12:55 | 0:06:23 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136268 | 2023-01-24 21:47:31 | 2023-01-29 09:42:33 | 2023-01-29 10:07:57 | 0:25:24 | 0:14:49 | 0:10:35 | smithi | main | centos | 8.stream | rados/rest/{mgr-restful supported-random-distro$/{centos_8}} | 1 | |
fail | 7136269 | 2023-01-24 21:47:33 | 2023-01-29 09:42:44 | 2023-01-29 09:58:06 | 0:15:22 | 0:05:41 | 0:09:41 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 | |
Failure Reason: Command failed on smithi033 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7136270 | 2023-01-24 21:47:34 | 2023-01-29 09:42:44 | 2023-01-29 10:11:17 | 0:28:33 | 0:21:42 | 0:06:51 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136271 | 2023-01-24 21:47:35 | 2023-01-29 09:44:34 | 2023-01-29 10:08:36 | 0:24:02 | 0:12:45 | 0:11:17 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} | 1 | |
fail | 7136272 | 2023-01-24 21:47:36 | 2023-01-29 09:44:35 | 2023-01-29 10:28:58 | 0:44:23 | 0:33:20 | 0:11:03 | smithi | main | centos | 8.stream | rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools_crun} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed on smithi159 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone --depth 1 --branch quincy https://github.com/chrisphoffman/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0'
pass | 7136273 | 2023-01-24 21:47:37 | 2023-01-29 09:45:35 | 2023-01-29 10:16:32 | 0:30:57 | 0:21:12 | 0:09:45 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
pass | 7136274 | 2023-01-24 21:47:38 | 2023-01-29 09:45:35 | 2023-01-29 10:13:38 | 0:28:03 | 0:20:55 | 0:07:08 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} | 1 | |
pass | 7136275 | 2023-01-24 21:47:40 | 2023-01-29 09:46:46 | 2023-01-29 10:16:46 | 0:30:00 | 0:23:21 | 0:06:39 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 7136276 | 2023-01-24 21:47:41 | 2023-01-29 09:47:06 | 2023-01-29 10:25:42 | 0:38:36 | 0:26:37 | 0:11:59 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
pass | 7136277 | 2023-01-24 21:47:42 | 2023-01-29 09:47:06 | 2023-01-29 10:23:36 | 0:36:30 | 0:26:18 | 0:10:12 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 7136278 | 2023-01-24 21:47:43 | 2023-01-29 09:48:17 | 2023-01-29 10:10:59 | 0:22:42 | 0:10:49 | 0:11:53 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136279 | 2023-01-24 21:47:44 | 2023-01-29 09:48:27 | 2023-01-29 10:21:35 | 0:33:08 | 0:26:23 | 0:06:45 | smithi | main | rhel | 8.6 | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136280 | 2023-01-24 21:47:46 | 2023-01-29 09:49:18 | 2023-01-29 10:17:27 | 0:28:09 | 0:17:01 | 0:11:08 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "/var/log/ceph/a94f5804-9fbc-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T10:09:17.214+0000 7fd12b123700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7136281 | 2023-01-24 21:47:47 | 2023-01-29 09:49:48 | 2023-01-29 10:10:55 | 0:21:07 | 0:12:16 | 0:08:51 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136282 | 2023-01-24 21:47:48 | 2023-01-29 09:49:48 | 2023-01-29 10:15:04 | 0:25:16 | 0:15:32 | 0:09:44 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7136283 | 2023-01-24 21:47:49 | 2023-01-29 09:53:09 | 2023-01-29 10:11:04 | 0:17:55 | 0:08:19 | 0:09:36 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
pass | 7136284 | 2023-01-24 21:47:50 | 2023-01-29 09:53:09 | 2023-01-29 10:21:14 | 0:28:05 | 0:14:57 | 0:13:08 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
dead | 7136285 | 2023-01-24 21:47:52 | 2023-01-29 09:53:50 | 2023-01-29 09:58:22 | 0:04:32 | smithi | main | rhel | 8.6 | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 2 |
Failure Reason: Error reimaging machines: 'NoneType' object has no attribute '_fields'
pass | 7136286 | 2023-01-24 21:47:53 | 2023-01-29 09:54:00 | 2023-01-29 10:17:15 | 0:23:15 | 0:13:59 | 0:09:16 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
pass | 7136287 | 2023-01-24 21:47:54 | 2023-01-29 09:56:11 | 2023-01-29 10:35:41 | 0:39:30 | 0:28:44 | 0:10:46 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 7136288 | 2023-01-24 21:47:55 | 2023-01-29 09:56:21 | 2023-01-29 10:29:38 | 0:33:17 | 0:23:52 | 0:09:25 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 7136289 | 2023-01-24 21:47:56 | 2023-01-29 09:56:41 | 2023-01-29 10:19:28 | 0:22:47 | 0:13:29 | 0:09:18 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136290 | 2023-01-24 21:47:58 | 2023-01-29 09:56:42 | 2023-01-29 10:28:47 | 0:32:05 | 0:21:53 | 0:10:12 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7136291 | 2023-01-24 21:47:59 | 2023-01-29 09:56:42 | 2023-01-29 10:23:11 | 0:26:29 | 0:15:04 | 0:11:25 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} | 2 | |
fail | 7136292 | 2023-01-24 21:48:00 | 2023-01-29 09:57:32 | 2023-01-29 10:23:02 | 0:25:30 | 0:14:54 | 0:10:36 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "/var/log/ceph/aeaa794a-9fbd-11ed-9e56-001a4aab830c/ceph-mon.smithi136.log:2023-01-29T10:19:40.137+0000 7f06e543d700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136293 | 2023-01-24 21:48:01 | 2023-01-29 09:57:33 | 2023-01-29 10:33:20 | 0:35:47 | 0:26:02 | 0:09:45 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136294 | 2023-01-24 21:48:02 | 2023-01-29 09:58:23 | 2023-01-29 10:20:45 | 0:22:22 | 0:17:40 | 0:04:42 | smithi | main | rhel | 8.6 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136295 | 2023-01-24 21:48:03 | 2023-01-29 09:58:33 | 2023-01-29 10:18:51 | 0:20:18 | 0:11:46 | 0:08:32 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136296 | 2023-01-24 21:48:05 | 2023-01-29 09:58:34 | 2023-01-29 10:31:01 | 0:32:27 | 0:22:52 | 0:09:35 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7136297 | 2023-01-24 21:48:06 | 2023-01-29 09:59:34 | 2023-01-29 10:25:14 | 0:25:40 | 0:15:34 | 0:10:06 | smithi | main | centos | 8.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8}} | 1 | |
pass | 7136298 | 2023-01-24 21:48:07 | 2023-01-29 10:00:14 | 2023-01-29 10:16:18 | 0:16:04 | 0:06:03 | 0:10:01 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7136299 | 2023-01-24 21:48:08 | 2023-01-29 10:00:15 | 2023-01-29 10:20:09 | 0:19:54 | 0:10:35 | 0:09:19 | smithi | main | centos | 8.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 2 | |
pass | 7136300 | 2023-01-24 21:48:09 | 2023-01-29 10:00:55 | 2023-01-29 10:58:48 | 0:57:53 | 0:48:50 | 0:09:03 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136301 | 2023-01-24 21:48:10 | 2023-01-29 10:00:55 | 2023-01-29 10:22:56 | 0:22:01 | 0:08:21 | 0:13:40 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
pass | 7136302 | 2023-01-24 21:48:12 | 2023-01-29 10:01:56 | 2023-01-29 10:26:15 | 0:24:19 | 0:17:52 | 0:06:27 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/failover} | 2 | |
pass | 7136303 | 2023-01-24 21:48:13 | 2023-01-29 10:01:56 | 2023-01-29 10:44:46 | 0:42:50 | 0:31:48 | 0:11:02 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
pass | 7136304 | 2023-01-24 21:48:14 | 2023-01-29 10:05:07 | 2023-01-29 10:23:16 | 0:18:09 | 0:07:23 | 0:10:46 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136305 | 2023-01-24 21:48:15 | 2023-01-29 10:05:17 | 2023-01-29 10:35:48 | 0:30:31 | 0:24:13 | 0:06:18 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} | 1 | |
pass | 7136306 | 2023-01-24 21:48:16 | 2023-01-29 10:05:17 | 2023-01-29 10:39:54 | 0:34:37 | 0:23:13 | 0:11:24 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects} | 2 | |
fail | 7136307 | 2023-01-24 21:48:17 | 2023-01-29 10:06:48 | 2023-01-29 10:48:15 | 0:41:27 | 0:32:31 | 0:08:56 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "/var/log/ceph/45215cb2-9fbf-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T10:39:59.999+0000 7fb61211a700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
pass | 7136308 | 2023-01-24 21:48:18 | 2023-01-29 10:07:08 | 2023-01-29 11:10:48 | 1:03:40 | 0:54:35 | 0:09:05 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
fail | 7136309 | 2023-01-24 21:48:20 | 2023-01-29 10:07:49 | 2023-01-29 10:36:07 | 0:28:18 | 0:17:04 | 0:11:14 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "/var/log/ceph/3968de72-9fbf-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T10:28:14.762+0000 7ff2d02d5700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7136310 | 2023-01-24 21:48:21 | 2023-01-29 10:08:39 | 2023-01-29 10:30:52 | 0:22:13 | 0:11:35 | 0:10:38 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136311 | 2023-01-24 21:48:22 | 2023-01-29 10:08:50 | 2023-01-29 10:30:50 | 0:22:00 | 0:12:02 | 0:09:58 | smithi | main | centos | 8.stream | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 2 | |
pass | 7136312 | 2023-01-24 21:48:23 | 2023-01-29 10:09:30 | 2023-01-29 10:50:39 | 0:41:09 | 0:29:17 | 0:11:52 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7136313 | 2023-01-24 21:48:24 | 2023-01-29 10:11:00 | 2023-01-29 10:37:41 | 0:26:41 | 0:17:08 | 0:09:33 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
pass | 7136314 | 2023-01-24 21:48:25 | 2023-01-29 10:11:01 | 2023-01-29 10:50:26 | 0:39:25 | 0:29:12 | 0:10:13 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 7136315 | 2023-01-24 21:48:26 | 2023-01-29 10:11:11 | 2023-01-29 10:32:29 | 0:21:18 | 0:11:44 | 0:09:34 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136316 | 2023-01-24 21:48:27 | 2023-01-29 10:11:21 | 2023-01-29 10:39:41 | 0:28:20 | 0:17:51 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} | 2 | |
fail | 7136317 | 2023-01-24 21:48:29 | 2023-01-29 10:11:32 | 2023-01-29 10:41:39 | 0:30:07 | 0:16:47 | 0:13:20 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_crash.sh) on smithi062 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_crash.sh' |
pass | 7136318 | 2023-01-24 21:48:30 | 2023-01-29 10:13:42 | 2023-01-29 10:54:29 | 0:40:47 | 0:33:31 | 0:07:16 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 7136319 | 2023-01-24 21:48:31 | 2023-01-29 10:14:23 | 2023-01-29 10:39:00 | 0:24:37 | 0:16:05 | 0:08:32 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
"/var/log/ceph/b74f2b34-9fbf-11ed-9e56-001a4aab830c/ceph-mon.smithi110.log:2023-01-29T10:34:48.787+0000 7f0f90a77700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7136320 | 2023-01-24 21:48:32 | 2023-01-29 10:15:13 | 2023-01-29 10:51:11 | 0:35:58 | 0:25:57 | 0:10:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 7136321 | 2023-01-24 21:48:33 | 2023-01-29 10:15:13 | 2023-01-29 10:34:20 | 0:19:07 | 0:08:03 | 0:11:04 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136322 | 2023-01-24 21:48:34 | 2023-01-29 10:16:24 | 2023-01-29 10:34:24 | 0:18:00 | 0:08:31 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 7136323 | 2023-01-24 21:48:36 | 2023-01-29 10:16:34 | 2023-01-29 10:54:14 | 0:37:40 | 0:30:19 | 0:07:21 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136324 | 2023-01-24 21:48:37 | 2023-01-29 10:16:54 | 2023-01-29 10:35:48 | 0:18:54 | 0:12:49 | 0:06:05 | smithi | main | rhel | 8.6 | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136325 | 2023-01-24 21:48:38 | 2023-01-29 10:17:25 | 2023-01-29 10:40:48 | 0:23:23 | 0:17:06 | 0:06:17 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/fb85a9f4-9fbf-11ed-9e56-001a4aab830c/ceph-mon.c.log:2023-01-29T10:35:44.996+0000 7f04c6067700 7 mon.c@2(peon).log v168 update_from_paxos applying incremental log 168 2023-01-29T10:35:44.367461+0000 mon.a (mon.0) 521 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7136326 | 2023-01-24 21:48:39 | 2023-01-29 10:17:35 | 2023-01-29 10:37:15 | 0:19:40 | 0:11:05 | 0:08:35 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8}} | 1 | |
pass | 7136327 | 2023-01-24 21:48:40 | 2023-01-29 10:18:25 | 2023-01-29 10:55:36 | 0:37:11 | 0:26:26 | 0:10:45 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 7136328 | 2023-01-24 21:48:41 | 2023-01-29 10:19:36 | 2023-01-29 10:38:17 | 0:18:41 | 0:07:37 | 0:11:04 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 2 | |
pass | 7136329 | 2023-01-24 21:48:43 | 2023-01-29 10:20:16 | 2023-01-29 10:42:19 | 0:22:03 | 0:08:58 | 0:13:05 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136330 | 2023-01-24 21:48:44 | 2023-01-29 10:20:36 | 2023-01-29 10:53:37 | 0:33:01 | 0:23:22 | 0:09:39 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
fail | 7136331 | 2023-01-24 21:48:45 | 2023-01-29 10:20:47 | 2023-01-29 11:01:39 | 0:40:52 | 0:30:57 | 0:09:55 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
"/var/log/ceph/4b15a2ac-9fc1-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T10:43:11.013+0000 7f67cc564700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7136332 | 2023-01-24 21:48:46 | 2023-01-29 10:21:17 | 2023-01-29 10:41:35 | 0:20:18 | 0:12:46 | 0:07:32 | smithi | main | rhel | 8.6 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
pass | 7136333 | 2023-01-24 21:48:47 | 2023-01-29 10:21:17 | 2023-01-29 10:52:52 | 0:31:35 | 0:18:56 | 0:12:39 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 7136334 | 2023-01-24 21:48:49 | 2023-01-29 10:22:58 | 2023-01-29 11:12:27 | 0:49:29 | 0:38:53 | 0:10:36 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/one workloads/snaps-few-objects} | 2 | |
pass | 7136335 | 2023-01-24 21:48:50 | 2023-01-29 10:23:08 | 2023-01-29 10:48:24 | 0:25:16 | 0:16:01 | 0:09:15 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 7136336 | 2023-01-24 21:48:51 | 2023-01-29 10:23:19 | 2023-01-29 10:47:53 | 0:24:34 | 0:13:08 | 0:11:26 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/insights} | 2 | |
pass | 7136337 | 2023-01-24 21:48:52 | 2023-01-29 10:23:39 | 2023-01-29 10:48:40 | 0:25:01 | 0:14:35 | 0:10:26 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136338 | 2023-01-24 21:48:53 | 2023-01-29 10:25:20 | 2023-01-29 10:45:25 | 0:20:05 | 0:10:43 | 0:09:22 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_striper} | 2 | |
pass | 7136339 | 2023-01-24 21:48:54 | 2023-01-29 10:25:50 | 2023-01-29 11:11:31 | 0:45:41 | 0:34:49 | 0:10:52 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136340 | 2023-01-24 21:48:56 | 2023-01-29 10:26:20 | 2023-01-29 10:47:12 | 0:20:52 | 0:14:44 | 0:06:08 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7136341 | 2023-01-24 21:48:57 | 2023-01-29 10:26:31 | 2023-01-29 10:46:39 | 0:20:08 | 0:14:03 | 0:06:05 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136342 | 2023-01-24 21:48:58 | 2023-01-29 10:26:31 | 2023-01-29 12:04:42 | 1:38:11 | 1:25:31 | 0:12:40 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} | 1 | |
pass | 7136343 | 2023-01-24 21:48:59 | 2023-01-29 10:28:51 | 2023-01-29 11:49:43 | 1:20:52 | 1:15:33 | 0:05:19 | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7136344 | 2023-01-24 21:49:00 | 2023-01-29 10:29:02 | 2023-01-29 10:53:09 | 0:24:07 | 0:10:55 | 0:13:12 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7136345 | 2023-01-24 21:49:01 | 2023-01-29 10:29:42 | 2023-01-29 11:03:43 | 0:34:01 | 0:25:57 | 0:08:04 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7136346 | 2023-01-24 21:49:03 | 2023-01-29 10:30:53 | 2023-01-29 10:48:39 | 0:17:46 | 0:08:19 | 0:09:27 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
fail | 7136347 | 2023-01-24 21:49:04 | 2023-01-29 10:30:53 | 2023-01-29 10:53:35 | 0:22:42 | 0:17:28 | 0:05:14 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/d26e3304-9fc1-11ed-9e56-001a4aab830c/ceph-mon.c.log:2023-01-29T10:47:01.534+0000 7f474bf4b700 7 mon.c@2(peon).log v99 update_from_paxos applying incremental log 99 2023-01-29T10:47:01.496949+0000 mon.a (mon.0) 345 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7136348 | 2023-01-24 21:49:05 | 2023-01-29 10:30:53 | 2023-01-29 11:06:17 | 0:35:24 | 0:25:07 | 0:10:17 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason:
"/var/log/ceph/ce04ed66-9fc2-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T11:00:46.679+0000 7f21ec716700 0 log_channel(cluster) log [WRN] : Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log |
fail | 7136349 | 2023-01-24 21:49:06 | 2023-01-29 10:31:04 | 2023-01-29 10:49:25 | 0:18:21 | 0:06:01 | 0:12:20 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason:
Command failed on smithi033 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull' |
pass | 7136350 | 2023-01-24 21:49:07 | 2023-01-29 10:33:24 | 2023-01-29 10:53:52 | 0:20:28 | 0:11:43 | 0:08:45 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136351 | 2023-01-24 21:49:08 | 2023-01-29 10:33:25 | 2023-01-29 11:44:38 | 1:11:13 | 0:59:29 | 0:11:44 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
pass | 7136352 | 2023-01-24 21:49:10 | 2023-01-29 10:34:25 | 2023-01-29 11:25:51 | 0:51:26 | 0:39:47 | 0:11:39 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7136353 | 2023-01-24 21:49:11 | 2023-01-29 10:35:46 | 2023-01-29 12:37:29 | 2:01:43 | 1:50:47 | 0:10:56 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
"/var/log/ceph/57bc1110-9fc3-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T11:09:59.999+0000 7f7b8497e700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set; Degraded data redundancy: 15532/9 objects degraded (172577.778%), 6 pgs degraded, 6 pgs undersized" in cluster log |
fail | 7136354 | 2023-01-24 21:49:12 | 2023-01-29 10:36:16 | 2023-01-29 11:00:46 | 0:24:30 | 0:17:35 | 0:06:55 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
"/var/log/ceph/02b90434-9fc3-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T10:57:14.590+0000 7f11acaf9700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7136355 | 2023-01-24 21:49:13 | 2023-01-29 10:36:16 | 2023-01-29 11:08:39 | 0:32:23 | 0:21:19 | 0:11:04 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 7136356 | 2023-01-24 21:49:15 | 2023-01-29 10:37:47 | 2023-01-29 11:15:42 | 0:37:55 | 0:27:09 | 0:10:46 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136357 | 2023-01-24 21:49:16 | 2023-01-29 10:38:27 | 2023-01-29 13:13:26 | 2:34:59 | 2:24:43 | 0:10:16 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}} | 1 | |
pass | 7136358 | 2023-01-24 21:49:17 | 2023-01-29 10:39:07 | 2023-01-29 11:08:42 | 0:29:35 | 0:23:39 | 0:05:56 | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7136359 | 2023-01-24 21:49:18 | 2023-01-29 10:39:48 | 2023-01-29 10:57:59 | 0:18:11 | 0:13:29 | 0:04:42 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136360 | 2023-01-24 21:49:19 | 2023-01-29 10:39:48 | 2023-01-29 11:07:57 | 0:28:09 | 0:18:37 | 0:09:32 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
"/var/log/ceph/30c16448-9fc3-11ed-9e56-001a4aab830c/ceph-mon.smithi142.log:2023-01-29T10:58:28.957+0000 7f8e23de0700 0 log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 31 sec, mon.smithi142 has slow ops (SLOW_OPS)" in cluster log |
pass | 7136361 | 2023-01-24 21:49:20 | 2023-01-29 10:39:58 | 2023-01-29 11:10:44 | 0:30:46 | 0:19:11 | 0:11:35 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 3 | |
pass | 7136362 | 2023-01-24 21:49:22 | 2023-01-29 10:40:59 | 2023-01-29 11:09:56 | 0:28:57 | 0:17:21 | 0:11:36 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-agent-small} | 2 | |
pass | 7136363 | 2023-01-24 21:49:23 | 2023-01-29 10:41:49 | 2023-01-29 11:21:56 | 0:40:07 | 0:31:31 | 0:08:36 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big} | 2 | |
pass | 7136364 | 2023-01-24 21:49:24 | 2023-01-29 11:12:31 | 2023-01-29 11:45:38 | 0:33:07 | 0:23:08 | 0:09:59 | smithi | main | centos | 8.stream | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 2 | |
fail | 7136365 | 2023-01-24 21:49:25 | 2023-01-29 11:12:32 | 2023-01-29 11:36:35 | 0:24:03 | 0:12:05 | 0:11:58 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_librados_build.sh) on smithi191 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh' |
fail | 7136366 | 2023-01-24 21:49:26 | 2023-01-29 11:14:52 | 2023-01-29 11:35:51 | 0:20:59 | 0:14:36 | 0:06:23 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"/var/log/ceph/f824ca9e-9fc7-11ed-9e56-001a4aab830c/ceph-mon.smithi114.log:2023-01-29T11:32:58.882+0000 7f0164b22700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7136367 | 2023-01-24 21:49:27 | 2023-01-29 11:15:03 | 2023-01-29 11:54:58 | 0:39:55 | 0:30:17 | 0:09:38 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7136368 | 2023-01-24 21:49:29 | 2023-01-29 11:15:43 | 2023-01-29 11:33:29 | 0:17:46 | 0:08:19 | 0:09:27 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 7136369 | 2023-01-24 21:49:30 | 2023-01-29 11:15:43 | 2023-01-29 11:40:16 | 0:24:33 | 0:14:45 | 0:09:48 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} | 2 | |
pass | 7136370 | 2023-01-24 21:49:31 | 2023-01-29 11:16:34 | 2023-01-29 11:41:30 | 0:24:56 | 0:14:13 | 0:10:43 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136371 | 2023-01-24 21:49:32 | 2023-01-29 11:18:24 | 2023-01-29 11:57:40 | 0:39:16 | 0:30:13 | 0:09:03 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
pass | 7136372 | 2023-01-24 21:49:33 | 2023-01-29 11:18:25 | 2023-01-29 11:58:08 | 0:39:43 | 0:33:25 | 0:06:18 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/module_selftest} | 2 | |
pass | 7136373 | 2023-01-24 21:49:34 | 2023-01-29 11:18:35 | 2023-01-29 11:37:50 | 0:19:15 | 0:12:18 | 0:06:57 | smithi | main | rhel | 8.6 | rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136374 | 2023-01-24 21:49:36 | 2023-01-29 11:19:15 | 2023-01-29 11:56:28 | 0:37:13 | 0:29:02 | 0:08:11 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason:
"/var/log/ceph/97dcaf60-9fc9-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T11:42:52.919+0000 7f9459f09700 0 log_channel(cluster) log [WRN] : Health check failed: 1/5 mons down, quorum a,e,c,d (MON_DOWN)" in cluster log |
pass | 7136375 | 2023-01-24 21:49:37 | 2023-01-29 11:20:56 | 2023-01-29 12:05:15 | 0:44:19 | 0:33:02 | 0:11:17 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7136376 | 2023-01-24 21:49:38 | 2023-01-29 11:22:06 | 2023-01-29 11:50:32 | 0:28:26 | 0:17:40 | 0:10:46 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136377 | 2023-01-24 21:49:39 | 2023-01-29 11:22:07 | 2023-01-29 11:49:50 | 0:27:43 | 0:14:29 | 0:13:14 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 | |
pass | 7136378 | 2023-01-24 21:49:41 | 2023-01-29 11:25:47 | 2023-01-29 11:54:47 | 0:29:00 | 0:20:40 | 0:08:20 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
pass | 7136379 | 2023-01-24 21:49:42 | 2023-01-29 11:25:48 | 2023-01-29 12:11:12 | 0:45:24 | 0:36:14 | 0:09:10 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 7136380 | 2023-01-24 21:49:43 | 2023-01-29 11:25:58 | 2023-01-29 12:00:27 | 0:34:29 | 0:24:07 | 0:10:22 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 7136381 | 2023-01-24 21:49:44 | 2023-01-29 11:27:18 | 2023-01-29 12:04:58 | 0:37:40 | 0:24:18 | 0:13:22 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/3716fa72-9fca-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T11:54:46.038+0000 7f9d6d00f700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7136382 | 2023-01-24 21:49:45 | 2023-01-29 11:27:29 | 2023-01-29 11:59:06 | 0:31:37 | 0:20:03 | 0:11:34 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 7136383 | 2023-01-24 21:49:47 | 2023-01-29 11:29:09 | 2023-01-29 11:49:07 | 0:19:58 | 0:12:26 | 0:07:32 | smithi | main | rhel | 8.6 | rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136384 | 2023-01-24 21:49:48 | 2023-01-29 11:29:09 | 2023-01-29 12:19:31 | 0:50:22 | 0:42:53 | 0:07:29 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7136385 | 2023-01-24 21:49:49 | 2023-01-29 11:30:10 | 2023-01-29 14:37:57 | 3:07:47 | 2:57:44 | 0:10:03 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136386 | 2023-01-24 21:49:50 | 2023-01-29 11:30:10 | 2023-01-29 12:01:56 | 0:31:46 | 0:22:41 | 0:09:05 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-snaps-balanced} | 2 | |
pass | 7136387 | 2023-01-24 21:49:51 | 2023-01-29 12:14:11 | 2023-01-29 12:56:14 | 0:42:03 | 0:35:11 | 0:06:52 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136388 | 2023-01-24 21:49:52 | 2023-01-29 12:14:42 | 2023-01-29 12:44:43 | 0:30:01 | 0:22:10 | 0:07:51 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} | 2 | |
pass | 7136389 | 2023-01-24 21:49:53 | 2023-01-29 12:15:12 | 2023-01-29 12:35:08 | 0:19:56 | 0:09:14 | 0:10:42 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} | 1 | |
pass | 7136390 | 2023-01-24 21:49:55 | 2023-01-29 12:15:12 | 2023-01-29 12:50:29 | 0:35:17 | 0:23:03 | 0:12:14 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 7136391 | 2023-01-24 21:49:56 | 2023-01-29 12:15:33 | 2023-01-29 12:48:16 | 0:32:43 | 0:23:59 | 0:08:44 | smithi | main | centos | 8.stream | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136392 | 2023-01-24 21:49:57 | 2023-01-29 12:15:33 | 2023-01-29 12:46:42 | 0:31:09 | 0:18:05 | 0:13:04 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136393 | 2023-01-24 21:49:58 | 2023-01-29 12:19:04 | 2023-01-29 12:38:27 | 0:19:23 | 0:08:12 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
fail | 7136394 | 2023-01-24 21:49:59 | 2023-01-29 12:19:34 | 2023-01-29 12:48:58 | 0:29:24 | 0:17:27 | 0:11:57 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"/var/log/ceph/6a4c1a24-9fd1-11ed-9e56-001a4aab830c/ceph-mon.smithi158.log:2023-01-29T12:45:14.812+0000 7fb952f01700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136395 | 2023-01-24 21:50:01 | 2023-01-29 12:20:04 | 2023-01-29 12:53:12 | 0:33:08 | 0:22:41 | 0:10:27 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-snaps} | 2 | |
pass | 7136396 | 2023-01-24 21:50:02 | 2023-01-29 12:22:15 | 2023-01-29 12:56:11 | 0:33:56 | 0:23:18 | 0:10:38 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
fail | 7136397 | 2023-01-24 21:50:03 | 2023-01-29 12:22:55 | 2023-01-29 12:43:04 | 0:20:09 | 0:09:26 | 0:10:43 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_dedup_tool.sh) on smithi116 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh'
pass | 7136398 | 2023-01-24 21:50:04 | 2023-01-29 12:22:56 | 2023-01-29 13:00:20 | 0:37:24 | 0:26:44 | 0:10:40 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 7136399 | 2023-01-24 21:50:05 | 2023-01-29 12:25:26 | 2023-01-29 12:49:28 | 0:24:02 | 0:11:19 | 0:12:43 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136400 | 2023-01-24 21:50:06 | 2023-01-29 12:27:27 | 2023-01-29 12:49:16 | 0:21:49 | 0:14:45 | 0:07:04 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_5925} | 2 | |
fail | 7136401 | 2023-01-24 21:50:08 | 2023-01-29 12:27:27 | 2023-01-29 12:57:46 | 0:30:19 | 0:18:07 | 0:12:12 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/1470b040-9fd3-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T12:50:02.487+0000 7fb8ec84e700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7136402 | 2023-01-24 21:50:09 | 2023-01-29 12:29:28 | 2023-01-29 12:51:05 | 0:21:37 | 0:11:07 | 0:10:30 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136403 | 2023-01-24 21:50:10 | 2023-01-29 12:30:18 | 2023-01-29 12:59:55 | 0:29:37 | 0:19:03 | 0:10:34 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache} | 2 | |
pass | 7136404 | 2023-01-24 21:50:11 | 2023-01-29 12:30:18 | 2023-01-29 13:05:35 | 0:35:17 | 0:27:06 | 0:08:11 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
pass | 7136405 | 2023-01-24 21:50:12 | 2023-01-29 12:31:59 | 2023-01-29 12:55:54 | 0:23:55 | 0:10:48 | 0:13:07 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7136406 | 2023-01-24 21:50:14 | 2023-01-29 12:33:30 | 2023-01-29 12:51:55 | 0:18:25 | 0:13:18 | 0:05:07 | smithi | main | rhel | 8.6 | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136407 | 2023-01-24 21:50:15 | 2023-01-29 12:33:40 | 2023-01-29 13:00:27 | 0:26:47 | 0:18:10 | 0:08:37 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7136408 | 2023-01-24 21:50:16 | 2023-01-29 12:33:40 | 2023-01-29 13:15:20 | 0:41:40 | 0:30:47 | 0:10:53 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7136409 | 2023-01-24 21:50:17 | 2023-01-29 12:34:10 | 2023-01-29 13:06:50 | 0:32:40 | 0:25:56 | 0:06:44 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} | 1 | |
Failure Reason:
Command failed (workunit test misc/test-ceph-helpers.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/test-ceph-helpers.sh'
pass | 7136410 | 2023-01-24 21:50:18 | 2023-01-29 12:34:11 | 2023-01-29 12:58:18 | 0:24:07 | 0:11:21 | 0:12:46 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-mixed} | 2 | |
pass | 7136411 | 2023-01-24 21:50:19 | 2023-01-29 12:36:01 | 2023-01-29 13:09:04 | 0:33:03 | 0:27:43 | 0:05:20 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 7136412 | 2023-01-24 21:50:20 | 2023-01-29 12:36:01 | 2023-01-29 12:55:49 | 0:19:48 | 0:09:57 | 0:09:51 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
pass | 7136413 | 2023-01-24 21:50:22 | 2023-01-29 12:36:02 | 2023-01-29 13:06:42 | 0:30:40 | 0:20:38 | 0:10:02 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
pass | 7136414 | 2023-01-24 21:50:23 | 2023-01-29 12:36:42 | 2023-01-29 12:55:53 | 0:19:11 | 0:12:03 | 0:07:08 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136415 | 2023-01-24 21:50:24 | 2023-01-29 12:37:02 | 2023-01-29 12:56:56 | 0:19:54 | 0:14:00 | 0:05:54 | smithi | main | rhel | 8.6 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136416 | 2023-01-24 21:50:25 | 2023-01-29 12:37:03 | 2023-01-29 13:02:49 | 0:25:46 | 0:15:22 | 0:10:24 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
"/var/log/ceph/04476550-9fd4-11ed-9e56-001a4aab830c/ceph-mon.smithi150.log:2023-01-29T12:59:43.670+0000 7fcda8bef700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136417 | 2023-01-24 21:50:26 | 2023-01-29 12:37:33 | 2023-01-29 13:18:31 | 0:40:58 | 0:35:44 | 0:05:14 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136418 | 2023-01-24 21:50:28 | 2023-01-29 12:37:33 | 2023-01-29 13:13:04 | 0:35:31 | 0:27:56 | 0:07:35 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136419 | 2023-01-24 21:50:29 | 2023-01-29 12:37:34 | 2023-01-29 13:02:39 | 0:25:05 | 0:13:04 | 0:12:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/dedup-io-snaps} | 2 | |
pass | 7136420 | 2023-01-24 21:50:30 | 2023-01-29 12:38:34 | 2023-01-29 13:01:08 | 0:22:34 | 0:10:39 | 0:11:55 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136421 | 2023-01-24 21:50:31 | 2023-01-29 12:38:34 | 2023-01-29 13:12:50 | 0:34:16 | 0:21:18 | 0:12:58 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 7136422 | 2023-01-24 21:50:32 | 2023-01-29 12:39:25 | 2023-01-29 13:09:11 | 0:29:46 | 0:17:10 | 0:12:36 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/92e5fe02-9fd4-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T13:00:21.712+0000 7f481527b700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7136423 | 2023-01-24 21:50:34 | 2023-01-29 12:42:15 | 2023-01-29 13:05:57 | 0:23:42 | 0:12:45 | 0:10:57 | smithi | main | centos | 8.stream | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136424 | 2023-01-24 21:50:35 | 2023-01-29 12:43:06 | 2023-01-29 13:00:30 | 0:17:24 | 0:11:42 | 0:05:42 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
pass | 7136425 | 2023-01-24 21:50:36 | 2023-01-29 12:43:16 | 2023-01-29 13:22:19 | 0:39:03 | 0:31:47 | 0:07:16 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 7136426 | 2023-01-24 21:50:37 | 2023-01-29 12:43:27 | 2023-01-29 13:20:11 | 0:36:44 | 0:28:29 | 0:08:15 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} | 2 | |
Failure Reason:
Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)
fail | 7136427 | 2023-01-24 21:50:38 | 2023-01-29 12:43:27 | 2023-01-29 13:00:38 | 0:17:11 | 0:05:31 | 0:11:40 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | |
Failure Reason:
Command failed on smithi099 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7136428 | 2023-01-24 21:50:39 | 2023-01-29 12:44:47 | 2023-01-29 13:14:00 | 0:29:13 | 0:21:48 | 0:07:25 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136429 | 2023-01-24 21:50:41 | 2023-01-29 12:44:48 | 2023-01-29 13:26:35 | 0:41:47 | 0:34:13 | 0:07:34 | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
Command failed on smithi121 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone --depth 1 --branch quincy https://github.com/chrisphoffman/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0'
pass | 7136430 | 2023-01-24 21:50:42 | 2023-01-29 12:45:38 | 2023-01-29 13:05:48 | 0:20:10 | 0:09:16 | 0:10:54 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7136431 | 2023-01-24 21:50:43 | 2023-01-29 12:46:48 | 2023-01-29 13:53:33 | 1:06:45 | 1:01:40 | 0:05:05 | smithi | main | rhel | 8.6 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136432 | 2023-01-24 21:50:44 | 2023-01-29 12:47:09 | 2023-01-29 15:21:34 | 2:34:25 | 2:25:13 | 0:09:12 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 7136433 | 2023-01-24 21:50:45 | 2023-01-29 12:48:19 | 2023-01-29 13:11:48 | 0:23:29 | 0:11:37 | 0:11:52 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136434 | 2023-01-24 21:50:47 | 2023-01-29 12:49:10 | 2023-01-29 13:26:38 | 0:37:28 | 0:25:52 | 0:11:36 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_api_tests} | 2 | |
pass | 7136435 | 2023-01-24 21:50:48 | 2023-01-29 12:49:20 | 2023-01-29 13:07:17 | 0:17:57 | 0:08:28 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
pass | 7136436 | 2023-01-24 21:50:49 | 2023-01-29 12:49:30 | 2023-01-29 13:35:01 | 0:45:31 | 0:35:42 | 0:09:49 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 7136437 | 2023-01-24 21:50:50 | 2023-01-29 12:49:30 | 2023-01-29 13:09:55 | 0:20:25 | 0:13:55 | 0:06:30 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} | 1 | |
pass | 7136438 | 2023-01-24 21:50:51 | 2023-01-29 12:49:31 | 2023-01-29 13:15:05 | 0:25:34 | 0:13:59 | 0:11:35 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/readwrite} | 2 | |
pass | 7136439 | 2023-01-24 21:50:53 | 2023-01-29 12:51:11 | 2023-01-29 13:12:51 | 0:21:40 | 0:12:31 | 0:09:09 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136440 | 2023-01-24 21:50:54 | 2023-01-29 12:52:02 | 2023-01-29 13:33:59 | 0:41:57 | 0:28:13 | 0:13:44 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7136441 | 2023-01-24 21:50:55 | 2023-01-29 12:55:52 | 2023-01-29 14:01:10 | 1:05:18 | 0:56:09 | 0:09:09 | smithi | main | centos | 8.stream | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136442 | 2023-01-24 21:50:56 | 2023-01-29 12:56:03 | 2023-01-29 13:21:38 | 0:25:35 | 0:14:42 | 0:10:53 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
fail | 7136443 | 2023-01-24 21:50:57 | 2023-01-29 12:56:03 | 2023-01-29 13:45:24 | 0:49:21 | 0:39:10 | 0:10:11 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
"/var/log/ceph/f9422246-9fd6-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T13:17:50.116+0000 7f529606c700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
fail | 7136444 | 2023-01-24 21:50:59 | 2023-01-29 12:56:13 | 2023-01-29 13:22:28 | 0:26:15 | 0:15:39 | 0:10:36 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
"/var/log/ceph/bb35f19e-9fd6-11ed-9e56-001a4aab830c/ceph-mon.smithi114.log:2023-01-29T13:18:52.242+0000 7fc9984c0700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136445 | 2023-01-24 21:51:00 | 2023-01-29 12:56:24 | 2023-01-29 13:19:38 | 0:23:14 | 0:16:33 | 0:06:41 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136446 | 2023-01-24 21:51:01 | 2023-01-29 12:56:24 | 2023-01-29 13:39:07 | 0:42:43 | 0:31:38 | 0:11:05 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7136447 | 2023-01-24 21:51:02 | 2023-01-29 12:56:44 | 2023-01-29 13:31:41 | 0:34:57 | 0:26:43 | 0:08:14 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
pass | 7136448 | 2023-01-24 21:51:03 | 2023-01-29 12:57:55 | 2023-01-29 13:19:12 | 0:21:17 | 0:12:17 | 0:09:00 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7136449 | 2023-01-24 21:51:04 | 2023-01-29 12:57:55 | 2023-01-29 13:19:24 | 0:21:29 | 0:12:11 | 0:09:18 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} | 1 | |
Failure Reason:
Command failed (workunit test mon/health-mute.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/health-mute.sh'
pass | 7136450 | 2023-01-24 21:51:06 | 2023-01-29 12:58:25 | 2023-01-29 13:17:30 | 0:19:05 | 0:11:52 | 0:07:13 | smithi | main | rhel | 8.6 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136451 | 2023-01-24 21:51:07 | 2023-01-29 12:58:26 | 2023-01-29 13:22:21 | 0:23:55 | 0:17:06 | 0:06:49 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/8b0486c0-9fd6-11ed-9e56-001a4aab830c/ceph-mon.c.log:2023-01-29T13:14:40.238+0000 7fb1dcbfe700 7 mon.c@2(synchronizing).log v58 update_from_paxos applying incremental log 57 2023-01-29T13:14:38.251241+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7136452 | 2023-01-24 21:51:08 | 2023-01-29 12:59:56 | 2023-01-29 13:35:23 | 0:35:27 | 0:26:46 | 0:08:41 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136453 | 2023-01-24 21:51:09 | 2023-01-29 13:00:27 | 2023-01-29 13:57:58 | 0:57:31 | 0:51:33 | 0:05:58 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/radosbench} | 2 | |
pass | 7136454 | 2023-01-24 21:51:10 | 2023-01-29 13:00:37 | 2023-01-29 14:25:51 | 1:25:14 | 1:15:19 | 0:09:55 | smithi | main | centos | 8.stream | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136455 | 2023-01-24 21:51:12 | 2023-01-29 13:00:37 | 2023-01-29 13:21:40 | 0:21:03 | 0:11:47 | 0:09:16 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136456 | 2023-01-24 21:51:13 | 2023-01-29 13:00:48 | 2023-01-29 13:21:03 | 0:20:15 | 0:10:10 | 0:10:05 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
fail | 7136457 | 2023-01-24 21:51:14 | 2023-01-29 13:01:18 | 2023-01-29 13:17:46 | 0:16:28 | 0:06:49 | 0:09:39 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason:
Command failed on smithi035 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 7136458 | 2023-01-24 21:51:15 | 2023-01-29 13:01:18 | 2023-01-29 13:24:49 | 0:23:31 | 0:12:26 | 0:11:05 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 2 | |
pass | 7136459 | 2023-01-24 21:51:16 | 2023-01-29 13:02:49 | 2023-01-29 13:31:00 | 0:28:11 | 0:17:11 | 0:11:00 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
pass | 7136460 | 2023-01-24 21:51:17 | 2023-01-29 13:02:59 | 2023-01-29 13:31:23 | 0:28:24 | 0:18:15 | 0:10:09 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/redirect} | 2 | |
pass | 7136461 | 2023-01-24 21:51:19 | 2023-01-29 13:05:40 | 2023-01-29 13:29:55 | 0:24:15 | 0:14:30 | 0:09:45 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136462 | 2023-01-24 21:51:20 | 2023-01-29 13:06:50 | 2023-01-29 13:30:44 | 0:23:54 | 0:13:54 | 0:10:00 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 7136463 | 2023-01-24 21:51:21 | 2023-01-29 13:07:00 | 2023-01-29 14:07:56 | 1:00:56 | 0:55:16 | 0:05:40 | smithi | main | rhel | 8.6 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136464 | 2023-01-24 21:51:22 | 2023-01-29 13:07:21 | 2023-01-29 14:14:25 | 1:07:04 | 0:58:17 | 0:08:47 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |
fail | 7136465 | 2023-01-24 21:51:23 | 2023-01-29 13:09:11 | 2023-01-29 13:45:45 | 0:36:34 | 0:30:12 | 0:06:22 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
"/var/log/ceph/6eba4d7c-9fd8-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T13:30:41.448+0000 7f7427b74700 0 log_channel(cluster) log [WRN] : Replacing daemon mds.a.smithi084.qhhukv as rank 0 with standby daemon mds.user_test_fs.smithi084.dolfwg" in cluster log
pass | 7136466 | 2023-01-24 21:51:25 | 2023-01-29 13:09:22 | 2023-01-29 13:27:15 | 0:17:53 | 0:11:58 | 0:05:55 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136467 | 2023-01-24 21:51:26 | 2023-01-29 13:09:22 | 2023-01-29 13:42:43 | 0:33:21 | 0:24:45 | 0:08:36 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 7136468 | 2023-01-24 21:51:27 | 2023-01-29 13:11:53 | 2023-01-29 13:40:37 | 0:28:44 | 0:17:42 | 0:11:02 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |
pass | 7136469 | 2023-01-24 21:51:28 | 2023-01-29 13:11:53 | 2023-01-29 13:36:18 | 0:24:25 | 0:11:26 | 0:12:59 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7136470 | 2023-01-24 21:51:29 | 2023-01-29 13:12:53 | 2023-01-29 13:44:43 | 0:31:50 | 0:21:58 | 0:09:52 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7136471 | 2023-01-24 21:51:31 | 2023-01-29 13:13:14 | 2023-01-29 13:31:03 | 0:17:49 | 0:08:18 | 0:09:31 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136472 | 2023-01-24 21:51:32 | 2023-01-29 13:13:14 | 2023-01-29 13:56:58 | 0:43:44 | 0:37:44 | 0:06:00 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136473 | 2023-01-24 21:51:33 | 2023-01-29 13:13:34 | 2023-01-29 13:34:46 | 0:21:12 | 0:09:00 | 0:12:12 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136474 | 2023-01-24 21:51:34 | 2023-01-29 13:14:05 | 2023-01-29 13:36:02 | 0:21:57 | 0:08:37 | 0:13:20 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{filestore-xfs} supported-random-distro$/{ubuntu_latest} tasks/workunits} | 2 | |
pass | 7136475 | 2023-01-24 21:51:35 | 2023-01-29 13:15:05 | 2023-01-29 13:55:25 | 0:40:20 | 0:32:19 | 0:08:01 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-small-objects} | 2 | |
pass | 7136476 | 2023-01-24 21:51:36 | 2023-01-29 13:16:26 | 2023-01-29 13:39:50 | 0:23:24 | 0:12:42 | 0:10:42 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 7136477 | 2023-01-24 21:51:38 | 2023-01-29 13:17:36 | 2023-01-29 13:47:16 | 0:29:40 | 0:23:05 | 0:06:35 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7136478 | 2023-01-24 21:51:39 | 2023-01-29 13:38:57 | 584 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | ||||
pass | 7136479 | 2023-01-24 21:51:40 | 2023-01-29 13:19:17 | 2023-01-29 13:39:04 | 0:19:47 | 0:11:13 | 0:08:34 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136480 | 2023-01-24 21:51:41 | 2023-01-29 13:19:27 | 2023-01-29 13:40:51 | 0:21:24 | 0:13:00 | 0:08:24 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136481 | 2023-01-24 21:51:42 | 2023-01-29 13:19:47 | 2023-01-29 16:55:12 | 3:35:25 | 3:25:51 | 0:09:34 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} | 1 | |
pass | 7136482 | 2023-01-24 21:51:43 | 2023-01-29 13:20:18 | 2023-01-29 13:52:51 | 0:32:33 | 0:23:37 | 0:08:56 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
pass | 7136483 | 2023-01-24 21:51:45 | 2023-01-29 13:20:18 | 2023-01-29 13:55:44 | 0:35:26 | 0:23:47 | 0:11:39 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7136484 | 2023-01-24 21:51:46 | 2023-01-29 13:21:48 | 2023-01-29 13:52:27 | 0:30:39 | 0:18:28 | 0:12:11 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
"/var/log/ceph/3da0b2a6-9fda-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T13:42:57.380+0000 7ff1f6c93700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7136485 | 2023-01-24 21:51:47 | 2023-01-29 13:21:49 | 2023-01-29 13:45:36 | 0:23:47 | 0:12:54 | 0:10:53 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/scrub_test} | 2 | |
pass | 7136486 | 2023-01-24 21:51:48 | 2023-01-29 13:22:29 | 2023-01-29 13:48:48 | 0:26:19 | 0:19:50 | 0:06:29 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/set-chunks-read} | 2 | |
fail | 7136487 | 2023-01-24 21:51:49 | 2023-01-29 13:22:29 | 2023-01-29 13:51:35 | 0:29:06 | 0:20:27 | 0:08:39 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason:
"/var/log/ceph/9de7d41e-9fda-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T13:43:49.708+0000 7f819c808700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7136488 | 2023-01-24 21:51:50 | 2023-01-29 13:22:30 | 2023-01-29 13:50:04 | 0:27:34 | 0:14:56 | 0:12:38 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136489 | 2023-01-24 21:51:52 | 2023-01-29 13:24:50 | 2023-01-29 13:46:11 | 0:21:21 | 0:08:11 | 0:13:10 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
pass | 7136490 | 2023-01-24 21:51:53 | 2023-01-29 13:26:41 | 2023-01-29 13:44:20 | 0:17:39 | 0:08:43 | 0:08:56 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7136491 | 2023-01-24 21:51:54 | 2023-01-29 13:26:41 | 2023-01-29 14:03:04 | 0:36:23 | 0:24:36 | 0:11:47 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/98e3c658-9fda-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T13:46:06.458+0000 7fc087ae5700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7136492 | 2023-01-24 21:51:55 | 2023-01-29 13:27:22 | 2023-01-29 13:53:06 | 0:25:44 | 0:14:02 | 0:11:42 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136493 | 2023-01-24 21:51:56 | 2023-01-29 13:30:02 | 2023-01-29 14:11:41 | 0:41:39 | 0:31:40 | 0:09:59 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136494 | 2023-01-24 21:51:58 | 2023-01-29 13:30:03 | 2023-01-29 14:03:29 | 0:33:26 | 0:20:28 | 0:12:58 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7136495 | 2023-01-24 21:51:59 | 2023-01-29 13:30:53 | 2023-01-29 14:36:26 | 1:05:33 | 0:54:50 | 0:10:43 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7136496 | 2023-01-24 21:52:00 | 2023-01-29 13:31:03 | 2023-01-29 14:10:52 | 0:39:49 | 0:29:31 | 0:10:18 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_mon_workunits} | 2 | |
pass | 7136497 | 2023-01-24 21:52:01 | 2023-01-29 13:31:14 | 2023-01-29 13:52:39 | 0:21:25 | 0:15:20 | 0:06:05 | smithi | main | rhel | 8.6 | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136498 | 2023-01-24 21:52:02 | 2023-01-29 13:31:24 | 2023-01-29 13:55:38 | 0:24:14 | 0:15:58 | 0:08:16 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"/var/log/ceph/3a698882-9fdb-11ed-9e56-001a4aab830c/ceph-mon.smithi037.log:2023-01-29T13:51:58.828+0000 7fb9d2583700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7136499 | 2023-01-24 21:52:04 | 2023-01-29 13:31:44 | 2023-01-29 13:52:25 | 0:20:41 | 0:12:40 | 0:08:01 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136500 | 2023-01-24 21:52:05 | 2023-01-29 13:31:45 | 2023-01-29 13:54:19 | 0:22:34 | 0:09:56 | 0:12:38 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
fail | 7136501 | 2023-01-24 21:52:06 | 2023-01-29 13:34:05 | 2023-01-29 16:18:14 | 2:44:09 | 2:22:10 | 0:21:59 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed on smithi112 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''
pass | 7136502 | 2023-01-24 21:52:07 | 2023-01-29 13:34:05 | 2023-01-29 14:18:40 | 0:44:35 | 0:32:26 | 0:12:09 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7136503 | 2023-01-24 21:52:08 | 2023-01-29 13:35:06 | 2023-01-29 14:05:50 | 0:30:44 | 0:24:27 | 0:06:17 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7136504 | 2023-01-24 21:52:09 | 2023-01-29 13:35:26 | 2023-01-29 13:55:51 | 0:20:25 | 0:14:22 | 0:06:03 | smithi | main | rhel | 8.6 | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136505 | 2023-01-24 21:52:11 | 2023-01-29 13:35:26 | 2023-01-29 14:14:42 | 0:39:16 | 0:27:21 | 0:11:55 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
"/var/log/ceph/0500ab1a-9fdd-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T14:05:52.347+0000 7f247c174700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log
dead | 7136506 | 2023-01-24 21:52:12 | 2023-01-29 13:36:27 | 2023-01-30 01:53:18 | 12:16:51 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |||
Failure Reason:
hit max job timeout
fail | 7136507 | 2023-01-24 21:52:13 | 2023-01-29 13:39:08 | 2023-01-29 14:12:14 | 0:33:06 | 0:24:24 | 0:08:42 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
Failure Reason:
"/var/log/ceph/f6b2703e-9fdc-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T14:07:43.165+0000 7ff0d859a700 0 log_channel(cluster) log [WRN] : Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log
pass | 7136508 | 2023-01-24 21:52:14 | 2023-01-29 13:39:08 | 2023-01-29 14:01:07 | 0:21:59 | 0:15:31 | 0:06:28 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{rhel_8} tasks/crash} | 2 | |
fail | 7136509 | 2023-01-24 21:52:16 | 2023-01-29 13:39:58 | 2023-01-29 13:57:52 | 0:17:54 | 0:05:39 | 0:12:15 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
Command failed on smithi064 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7136510 | 2023-01-24 21:52:17 | 2023-01-29 13:40:59 | 2023-01-29 14:03:54 | 0:22:55 | 0:11:25 | 0:11:30 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136511 | 2023-01-24 21:52:18 | 2023-01-29 13:42:49 | 2023-01-29 14:05:00 | 0:22:11 | 0:15:44 | 0:06:27 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/libcephsqlite} | 2 | |
pass | 7136512 | 2023-01-24 21:52:19 | 2023-01-29 13:44:30 | 2023-01-29 14:16:14 | 0:31:44 | 0:24:33 | 0:07:11 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects} | 2 | |
pass | 7136513 | 2023-01-24 21:52:20 | 2023-01-29 13:44:50 | 2023-01-29 14:20:34 | 0:35:44 | 0:28:13 | 0:07:31 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
fail | 7136514 | 2023-01-24 21:52:21 | 2023-01-29 13:45:31 | 2023-01-29 14:13:04 | 0:27:33 | 0:17:06 | 0:10:27 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/83e334c0-9fdd-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T14:04:24.962+0000 7f9a59bfb700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7136515 | 2023-01-24 21:52:23 | 2023-01-29 13:45:41 | 2023-01-29 14:09:35 | 0:23:54 | 0:18:53 | 0:05:01 | smithi | main | rhel | 8.6 | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136516 | 2023-01-24 21:52:24 | 2023-01-29 13:45:41 | 2023-01-29 14:21:36 | 0:35:55 | 0:28:53 | 0:07:02 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136517 | 2023-01-24 21:52:25 | 2023-01-29 13:46:21 | 2023-01-29 14:08:20 | 0:21:59 | 0:14:10 | 0:07:49 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136518 | 2023-01-24 21:52:26 | 2023-01-29 13:46:22 | 2023-01-29 16:50:12 | 3:03:50 | 2:54:38 | 0:09:12 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
pass | 7136519 | 2023-01-24 21:52:27 | 2023-01-29 13:46:22 | 2023-01-29 14:22:30 | 0:36:08 | 0:27:49 | 0:08:19 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 7136520 | 2023-01-24 21:52:29 | 2023-01-29 13:47:22 | 2023-01-29 14:13:42 | 0:26:20 | 0:15:04 | 0:11:16 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7136521 | 2023-01-24 21:52:30 | 2023-01-29 13:48:53 | 2023-01-29 14:07:23 | 0:18:30 | 0:13:25 | 0:05:05 | smithi | main | rhel | 8.6 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136522 | 2023-01-24 21:52:31 | 2023-01-29 13:48:53 | 2023-01-29 14:10:41 | 0:21:48 | 0:10:42 | 0:11:06 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} | 3 | |
pass | 7136523 | 2023-01-24 21:52:32 | 2023-01-29 13:51:44 | 2023-01-29 14:11:54 | 0:20:10 | 0:10:08 | 0:10:02 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7136524 | 2023-01-24 21:52:34 | 2023-01-29 13:51:44 | 2023-01-29 14:16:43 | 0:24:59 | 0:12:00 | 0:12:59 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7136525 | 2023-01-24 21:52:35 | 2023-01-29 13:52:55 | 2023-01-29 14:20:42 | 0:27:47 | 0:18:48 | 0:08:59 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} | 2 | |
fail | 7136526 | 2023-01-24 21:52:36 | 2023-01-29 13:53:15 | 2023-01-29 14:14:13 | 0:20:58 | 0:14:51 | 0:06:07 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"/var/log/ceph/19e4bf5c-9fde-11ed-9e56-001a4aab830c/ceph-mon.smithi093.log:2023-01-29T14:11:32.399+0000 7f4aec120700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7136527 | 2023-01-24 21:52:37 | 2023-01-29 13:53:15 | 2023-01-29 14:16:28 | 0:23:13 | 0:14:05 | 0:09:08 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_librados_build.sh) on smithi191 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8965ca3bc5c900c1b534ee8ca638a8aa0e2c61db TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'
pass | 7136528 | 2023-01-24 21:52:38 | 2023-01-29 13:53:36 | 2023-01-29 14:34:48 | 0:41:12 | 0:30:42 | 0:10:30 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 7136529 | 2023-01-24 21:52:39 | 2023-01-29 13:55:26 | 2023-01-29 14:33:49 | 0:38:23 | 0:27:50 | 0:10:33 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
pass | 7136530 | 2023-01-24 21:52:41 | 2023-01-29 13:55:47 | 2023-01-29 14:39:33 | 0:43:46 | 0:37:35 | 0:06:11 | smithi | main | rhel | 8.6 | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7136531 | 2023-01-24 21:52:42 | 2023-01-29 13:55:47 | 2023-01-29 16:31:40 | 2:35:53 | 2:11:44 | 0:24:09 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{rhel_8}} | 1 | |
fail | 7136532 | 2023-01-24 21:52:43 | 2023-01-29 13:55:47 | 2023-01-29 14:40:56 | 0:45:09 | 0:33:23 | 0:11:46 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
"/var/log/ceph/91a37474-9fdf-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T14:19:33.769+0000 7fb721596700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7136533 | 2023-01-24 21:52:44 | 2023-01-29 13:57:08 | 2023-01-29 14:19:15 | 0:22:07 | 0:12:58 | 0:09:09 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
pass | 7136534 | 2023-01-24 21:52:45 | 2023-01-29 13:57:58 | 2023-01-29 14:21:39 | 0:23:41 | 0:12:53 | 0:10:48 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7136535 | 2023-01-24 21:52:47 | 2023-01-29 13:57:58 | 2023-01-29 14:36:24 | 0:38:26 | 0:28:54 | 0:09:32 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
pass | 7136536 | 2023-01-24 21:52:48 | 2023-01-29 14:38:20 | 1611 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/snaps-few-objects} | 2 | ||||
pass | 7136537 | 2023-01-24 21:52:49 | 2023-01-29 14:01:20 | 2023-01-29 14:29:24 | 0:28:04 | 0:18:01 | 0:10:03 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136538 | 2023-01-24 21:52:50 | 2023-01-29 14:03:10 | 2023-01-29 15:18:41 | 1:15:31 | 1:05:57 | 0:09:34 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7136539 | 2023-01-24 21:52:51 | 2023-01-29 14:03:10 | 2023-01-29 14:31:12 | 0:28:02 | 0:17:08 | 0:10:54 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/18a00d0c-9fe0-11ed-9e56-001a4aab830c/ceph-mon.a.log:2023-01-29T14:23:01.349+0000 7f7252f9c700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7136540 | 2023-01-24 21:52:52 | 2023-01-29 14:03:31 | 2023-01-29 14:41:57 | 0:38:26 | 0:28:17 | 0:10:09 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 7136541 | 2023-01-24 21:52:54 | 2023-01-29 14:05:01 | 2023-01-29 14:29:38 | 0:24:37 | 0:13:46 | 0:10:51 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 7136542 | 2023-01-24 21:52:55 | 2023-01-29 14:05:52 | 2023-01-29 14:32:37 | 0:26:45 | 0:15:43 | 0:11:02 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{centos_8} tasks/failover} | 2 | |
pass | 7136543 | 2023-01-24 21:52:56 | 2023-01-29 14:07:32 | 2023-01-29 14:44:24 | 0:36:52 | 0:28:54 | 0:07:58 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7136544 | 2023-01-24 21:52:57 | 2023-01-29 14:08:22 | 2023-01-29 14:38:05 | 0:29:43 | 0:20:02 | 0:09:41 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7136545 | 2023-01-24 21:52:58 | 2023-01-29 14:09:43 | 2023-01-29 14:30:46 | 0:21:03 | 0:09:52 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
pass | 7136546 | 2023-01-24 21:53:00 | 2023-01-29 14:10:43 | 2023-01-29 14:45:40 | 0:34:57 | 0:25:26 | 0:09:31 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7136547 | 2023-01-24 21:53:01 | 2023-01-29 14:10:44 | 2023-01-29 14:37:22 | 0:26:38 | 0:19:51 | 0:06:47 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
pass | 7136548 | 2023-01-24 21:53:02 | 2023-01-29 14:10:54 | 2023-01-29 14:43:30 | 0:32:36 | 0:21:51 | 0:10:45 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7136549 | 2023-01-24 21:53:03 | 2023-01-29 14:11:44 | 2023-01-29 14:34:57 | 0:23:13 | 0:13:16 | 0:09:57 | smithi | main | centos | 8.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/mon_recovery} | 2 | |
pass | 7136550 | 2023-01-24 21:53:04 | 2023-01-29 14:12:15 | 2023-01-29 14:30:16 | 0:18:01 | 0:08:09 | 0:09:52 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/peer mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7136551 | 2023-01-24 21:53:06 | 2023-01-29 14:12:15 | 2023-01-29 14:44:26 | 0:32:11 | 0:20:52 | 0:11:19 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 |