User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-02-08 23:15:40 | 2024-02-09 02:22:16 | 2024-02-09 14:34:26 | 12:12:10 | rados | wip-yuri10-testing-2024-02-08-0854-pacific | smithi | 0e714d9 | 287 | 136 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7552593 | 2024-02-08 23:17:48 | 2024-02-09 02:21:43 | 2024-02-09 03:09:27 | 0:47:44 | 0:37:27 | 0:10:17 | smithi | main | centos | 8.stream | rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: "2024-02-09T02:43:39.629103+0000 mon.a (mon.0) 160 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7552594 | 2024-02-08 23:17:48 | 2024-02-09 02:21:43 | 2024-02-09 02:51:22 | 0:29:39 | 0:18:51 | 0:10:48 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552595 | 2024-02-08 23:17:49 | 2024-02-09 02:21:44 | 2024-02-09 02:42:02 | 0:20:18 | | | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} | 3 |
Failure Reason: Failed to reconnect to smithi120
fail | 7552596 | 2024-02-08 23:17:50 | 2024-02-09 02:22:04 | 2024-02-09 02:47:17 | 0:25:13 | 0:18:56 | 0:06:17 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-09T02:44:24.885221+0000 mon.a (mon.0) 712 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
dead | 7552597 | 2024-02-08 23:17:51 | 2024-02-09 02:22:15 | 2024-02-09 14:34:26 | 12:12:11 | | | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
pass | 7552598 | 2024-02-08 23:17:52 | 2024-02-09 02:22:15 | 2024-02-09 02:48:26 | 0:26:11 | 0:15:05 | 0:11:06 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 7552599 | 2024-02-08 23:17:53 | 2024-02-09 02:22:15 | 2024-02-09 02:47:30 | 0:25:15 | 0:15:01 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7552600 | 2024-02-08 23:17:54 | 2024-02-09 02:22:16 | 2024-02-09 02:47:20 | 0:25:04 | 0:15:21 | 0:09:43 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
pass | 7552601 | 2024-02-08 23:17:55 | 2024-02-09 02:22:16 | 2024-02-09 02:57:17 | 0:35:01 | 0:26:10 | 0:08:51 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552602 | 2024-02-08 23:17:55 | 2024-02-09 02:22:16 | 2024-02-09 02:52:58 | 0:30:42 | 0:20:13 | 0:10:29 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7552603 | 2024-02-08 23:17:56 | 2024-02-09 02:25:17 | 2024-02-09 03:43:24 | 1:18:07 | 1:07:59 | 0:10:08 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/radosbench} | 2 | |
fail | 7552604 | 2024-02-08 23:17:57 | 2024-02-09 02:25:28 | 2024-02-09 03:01:46 | 0:36:18 | 0:26:11 | 0:10:07 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-09944264-c6f5-11ee-95b6-87774f69a715@mon.smithi138'
fail | 7552605 | 2024-02-08 23:17:58 | 2024-02-09 02:26:38 | 2024-02-09 03:02:10 | 0:35:32 | 0:23:45 | 0:11:47 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi184 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e39d6e1e-c6f4-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
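The upgrade check that failed above runs `ceph versions | jq -e '.overall | length == 1'`: it succeeds only once every daemon in the cluster reports the same Ceph version, i.e. the `overall` map in the `ceph versions` JSON has exactly one key. A minimal sketch of the same uniformity test in Python (the sample JSON is illustrative, not taken from this run):

```python
import json

def all_daemons_on_one_version(ceph_versions_json: str) -> bool:
    """Mirror `jq -e '.overall | length == 1'` on `ceph versions` output:
    the upgrade is complete only when "overall" contains a single version."""
    overall = json.loads(ceph_versions_json)["overall"]
    return len(overall) == 1

# Illustrative mid-upgrade output: two versions still present, so the check fails.
sample = json.dumps({
    "overall": {
        "ceph version 15.2.0 octopus (stable)": 3,
        "ceph version 16.2.4 pacific (stable)": 5,
    }
})
print(all_daemons_on_one_version(sample))  # False while the upgrade is in flight
```

This is why the job's wait loop keeps re-running the command until it exits 0 (or the retry budget is exhausted, as in the `reached maximum tries` failures elsewhere in this run).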
pass | 7552606 | 2024-02-08 23:17:59 | 2024-02-09 02:28:09 | 2024-02-09 02:48:46 | 0:20:37 | 0:09:25 | 0:11:12 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552607 | 2024-02-08 23:17:59 | 2024-02-09 02:28:09 | 2024-02-09 03:20:12 | 0:52:03 | 0:39:57 | 0:12:06 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_mon_osdmap_prune} | 2 | |
pass | 7552608 | 2024-02-08 23:18:00 | 2024-02-09 02:28:30 | 2024-02-09 03:07:43 | 0:39:13 | 0:31:36 | 0:07:37 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} | 2 | |
pass | 7552609 | 2024-02-08 23:18:01 | 2024-02-09 02:30:00 | 2024-02-09 03:13:11 | 0:43:11 | 0:34:06 | 0:09:05 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7552610 | 2024-02-08 23:18:02 | 2024-02-09 02:30:01 | 2024-02-09 03:07:11 | 0:37:10 | 0:27:33 | 0:09:37 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 7552611 | 2024-02-08 23:18:03 | 2024-02-09 02:30:11 | 2024-02-09 03:00:08 | 0:29:57 | 0:18:47 | 0:11:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "2024-02-09T02:55:21.880558+0000 mon.smithi100 (mon.0) 637 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552612 | 2024-02-08 23:18:04 | 2024-02-09 02:30:11 | 2024-02-09 02:50:22 | 0:20:11 | 0:09:01 | 0:11:10 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552613 | 2024-02-08 23:18:04 | 2024-02-09 02:30:12 | 2024-02-09 03:14:49 | 0:44:37 | 0:35:15 | 0:09:22 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7552614 | 2024-02-08 23:18:05 | 2024-02-09 02:31:22 | 2024-02-09 02:59:46 | 0:28:24 | 0:17:48 | 0:10:36 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/progress} | 2 | |
pass | 7552615 | 2024-02-08 23:18:06 | 2024-02-09 02:31:33 | 2024-02-09 02:54:38 | 0:23:05 | 0:12:15 | 0:10:50 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7552616 | 2024-02-08 23:18:07 | 2024-02-09 02:32:13 | 2024-02-09 03:07:23 | 0:35:10 | 0:26:14 | 0:08:56 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: "2024-02-09T02:54:19.539145+0000 mon.a (mon.0) 499 : cluster [WRN] Replacing daemon mds.a.smithi142.shtlel as rank 0 with standby daemon mds.user_test_fs.smithi142.ijudtb" in cluster log
pass | 7552617 | 2024-02-08 23:18:08 | 2024-02-09 02:32:14 | 2024-02-09 03:15:42 | 0:43:28 | 0:28:56 | 0:14:32 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7552618 | 2024-02-08 23:18:09 | 2024-02-09 02:34:34 | 2024-02-09 03:03:47 | 0:29:13 | 0:20:46 | 0:08:27 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect} | 2 | |
fail | 7552619 | 2024-02-08 23:18:10 | 2024-02-09 02:34:55 | 2024-02-09 03:07:04 | 0:32:09 | 0:21:47 | 0:10:22 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-09T02:52:51.323009+0000 mon.a (mon.0) 162 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7552620 | 2024-02-08 23:18:10 | 2024-02-09 02:35:15 | 2024-02-09 02:58:54 | 0:23:39 | 0:16:12 | 0:07:27 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7552621 | 2024-02-08 23:18:11 | 2024-02-09 02:35:46 | 2024-02-09 03:22:36 | 0:46:50 | 0:38:03 | 0:08:47 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552622 | 2024-02-08 23:18:12 | 2024-02-09 02:37:26 | 2024-02-09 03:10:46 | 0:33:20 | 0:27:24 | 0:05:56 | smithi | main | rhel | 8.6 | rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 7552623 | 2024-02-08 23:18:13 | 2024-02-09 02:37:37 | 2024-02-09 03:27:21 | 0:49:44 | 0:39:00 | 0:10:44 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: "2024-02-09T03:04:48.174436+0000 mon.a (mon.0) 711 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 7552624 | 2024-02-08 23:18:14 | 2024-02-09 02:37:37 | 2024-02-09 03:18:38 | 0:41:01 | 0:35:16 | 0:05:45 | smithi | main | rhel | 8.6 | rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: "2024-02-09T03:05:17.160021+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7552625 | 2024-02-08 23:18:15 | 2024-02-09 02:37:37 | 2024-02-09 03:13:06 | 0:35:29 | 0:26:31 | 0:08:58 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
fail | 7552626 | 2024-02-08 23:18:15 | 2024-02-09 02:37:38 | 2024-02-09 03:02:34 | 0:24:56 | 0:15:41 | 0:09:15 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: "2024-02-09T02:59:49.887783+0000 mon.a (mon.0) 420 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552627 | 2024-02-08 23:18:16 | 2024-02-09 02:37:38 | 2024-02-09 03:01:37 | 0:23:59 | 0:13:58 | 0:10:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 7552628 | 2024-02-08 23:18:17 | 2024-02-09 02:37:48 | 2024-02-09 02:58:04 | 0:20:16 | | | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 |
Failure Reason: Failed to reconnect to smithi062
pass | 7552629 | 2024-02-08 23:18:18 | 2024-02-09 02:37:49 | 2024-02-09 03:00:09 | 0:22:20 | 0:11:51 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 7552630 | 2024-02-08 23:18:19 | 2024-02-09 02:37:49 | 2024-02-09 03:12:32 | 0:34:43 | 0:26:12 | 0:08:31 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-985d51ce-c6f6-11ee-95b6-87774f69a715@mon.smithi174'
pass | 7552631 | 2024-02-08 23:18:19 | 2024-02-09 02:37:50 | 2024-02-09 03:06:55 | 0:29:05 | 0:22:53 | 0:06:12 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552632 | 2024-02-08 23:18:20 | 2024-02-09 02:37:50 | 2024-02-09 03:04:51 | 0:27:01 | 0:17:59 | 0:09:02 | smithi | main | centos | 8.stream | rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552633 | 2024-02-08 23:18:21 | 2024-02-09 02:37:51 | 2024-02-09 03:16:53 | 0:39:02 | 0:32:24 | 0:06:38 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7552634 | 2024-02-08 23:18:22 | 2024-02-09 02:37:51 | 2024-02-09 03:46:52 | 1:09:01 | 0:53:50 | 0:15:11 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: "2024-02-09T03:20:00.000168+0000 mon.a (mon.0) 1187 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
pass | 7552635 | 2024-02-08 23:18:23 | 2024-02-09 02:42:22 | 2024-02-09 03:24:27 | 0:42:05 | 0:31:58 | 0:10:07 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552636 | 2024-02-08 23:18:23 | 2024-02-09 02:42:32 | 2024-02-09 03:18:04 | 0:35:32 | 0:23:10 | 0:12:22 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-09T03:06:56.698212+0000 mon.a (mon.0) 438 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 7552637 | 2024-02-08 23:18:24 | 2024-02-09 02:44:13 | 2024-02-09 03:11:54 | 0:27:41 | 0:14:44 | 0:12:57 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-02-09T03:08:06.265019+0000 mon.smithi063 (mon.0) 626 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552638 | 2024-02-08 23:18:25 | 2024-02-09 02:46:44 | 2024-02-09 03:18:10 | 0:31:26 | 0:24:48 | 0:06:38 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_set_object} | 2 | |
fail | 7552639 | 2024-02-08 23:18:26 | 2024-02-09 02:47:04 | 2024-02-09 03:14:22 | 0:27:18 | 0:20:17 | 0:07:01 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} | 2 | |
Failure Reason: "2024-02-09T03:10:42.361288+0000 mon.a (mon.0) 529 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7552640 | 2024-02-08 23:18:27 | 2024-02-09 02:47:05 | 2024-02-09 03:25:03 | 0:37:58 | 0:27:46 | 0:10:12 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: "2024-02-09T03:20:09.742041+0000 mon.a (mon.0) 979 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log
pass | 7552641 | 2024-02-08 23:18:28 | 2024-02-09 02:47:35 | 2024-02-09 03:22:44 | 0:35:09 | 0:23:13 | 0:11:56 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
pass | 7552642 | 2024-02-08 23:18:28 | 2024-02-09 02:48:26 | 2024-02-09 03:06:29 | 0:18:03 | 0:08:30 | 0:09:33 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552643 | 2024-02-08 23:18:29 | 2024-02-09 02:48:26 | 2024-02-09 03:44:38 | 0:56:12 | 0:44:07 | 0:12:05 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
pass | 7552644 | 2024-02-08 23:18:30 | 2024-02-09 02:48:26 | 2024-02-09 03:54:38 | 1:06:12 | 0:55:20 | 0:10:52 | smithi | main | centos | 8.stream | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} | 2 | |
pass | 7552645 | 2024-02-08 23:18:31 | 2024-02-09 02:48:47 | 2024-02-09 03:15:36 | 0:26:49 | 0:19:13 | 0:07:36 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552646 | 2024-02-08 23:18:32 | 2024-02-09 02:50:27 | 2024-02-09 03:15:25 | 0:24:58 | 0:13:28 | 0:11:30 | smithi | main | ubuntu | 20.04 | rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552647 | 2024-02-08 23:18:32 | 2024-02-09 02:51:28 | 2024-02-09 03:20:48 | 0:29:20 | 0:18:37 | 0:10:43 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552648 | 2024-02-08 23:18:33 | 2024-02-09 02:52:48 | 2024-02-09 03:18:14 | 0:25:26 | 0:14:38 | 0:10:48 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/crush} | 1 | |
fail | 7552649 | 2024-02-08 23:18:34 | 2024-02-09 02:52:49 | 2024-02-09 03:12:59 | 0:20:10 | | | smithi | main | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} | 4 |
Failure Reason: Failed to reconnect to smithi022
pass | 7552650 | 2024-02-08 23:18:35 | 2024-02-09 02:53:09 | 2024-02-09 03:26:51 | 0:33:42 | 0:23:48 | 0:09:54 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
fail | 7552651 | 2024-02-08 23:18:36 | 2024-02-09 02:53:09 | 2024-02-09 03:28:47 | 0:35:38 | 0:24:32 | 0:11:06 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: "2024-02-09T03:23:28.625879+0000 mon.a (mon.0) 371 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log
fail | 7552652 | 2024-02-08 23:18:36 | 2024-02-09 02:53:10 | 2024-02-09 03:30:17 | 0:37:07 | 0:26:34 | 0:10:33 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi171 with status 5: 'sudo systemctl stop ceph-f16835de-c6f8-11ee-95b6-87774f69a715@mon.smithi171'
pass | 7552653 | 2024-02-08 23:18:37 | 2024-02-09 02:53:10 | 2024-02-09 03:30:50 | 0:37:40 | 0:25:42 | 0:11:58 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits} | 2 | |
pass | 7552654 | 2024-02-08 23:18:38 | 2024-02-09 02:54:41 | 2024-02-09 03:32:00 | 0:37:19 | 0:24:53 | 0:12:26 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} | 2 | |
fail | 7552655 | 2024-02-08 23:18:39 | 2024-02-09 02:54:41 | 2024-02-09 03:29:52 | 0:35:11 | 0:23:16 | 0:11:55 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-f6c6ad6c-c6f8-11ee-95b6-87774f69a715@mon.smithi190'
pass | 7552656 | 2024-02-08 23:18:40 | 2024-02-09 02:56:22 | 2024-02-09 03:22:00 | 0:25:38 | 0:16:43 | 0:08:55 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/set-chunks-read} | 2 | |
pass | 7552657 | 2024-02-08 23:18:40 | 2024-02-09 02:56:52 | 2024-02-09 03:20:44 | 0:23:52 | 0:14:19 | 0:09:33 | smithi | main | centos | 8.stream | rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr orchestrator_cli} | 2 | |
pass | 7552658 | 2024-02-08 23:18:41 | 2024-02-09 02:57:03 | 2024-02-09 03:24:56 | 0:27:53 | 0:19:46 | 0:08:07 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/filestore-xfs supported-random-distro$/{rhel_8} tasks/prometheus} | 2 | |
pass | 7552659 | 2024-02-08 23:18:42 | 2024-02-09 02:57:23 | 2024-02-09 03:27:12 | 0:29:49 | 0:21:12 | 0:08:37 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7552660 | 2024-02-08 23:18:43 | 2024-02-09 02:59:54 | 2024-02-09 03:39:48 | 0:39:54 | 0:31:49 | 0:08:05 | smithi | main | rhel | 8.6 | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552661 | 2024-02-08 23:18:44 | 2024-02-09 03:00:14 | 2024-02-09 03:22:52 | 0:22:38 | 0:12:16 | 0:10:22 | smithi | main | centos | 8.stream | rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} | 3 | |
Failure Reason: 'package_manager_version'
pass | 7552662 | 2024-02-08 23:18:44 | 2024-02-09 03:01:45 | 2024-02-09 03:22:38 | 0:20:53 | 0:11:40 | 0:09:13 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552663 | 2024-02-08 23:18:45 | 2024-02-09 03:01:45 | 2024-02-09 03:29:24 | 0:27:39 | 0:21:21 | 0:06:18 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 7552664 | 2024-02-08 23:18:46 | 2024-02-09 03:02:06 | 2024-02-09 03:30:50 | 0:28:44 | 0:17:05 | 0:11:39 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-09T03:24:26.876069+0000 mon.a (mon.0) 523 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7552665 | 2024-02-08 23:18:47 | 2024-02-09 03:03:56 | 2024-02-09 03:28:40 | 0:24:44 | 0:13:05 | 0:11:39 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
fail | 7552666 | 2024-02-08 23:18:48 | 2024-02-09 03:04:47 | 2024-02-09 03:57:12 | 0:52:25 | 0:42:46 | 0:09:39 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
pass | 7552667 | 2024-02-08 23:18:48 | 2024-02-09 03:04:57 | 2024-02-09 03:27:54 | 0:22:57 | 0:14:43 | 0:08:14 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} | 1 | |
pass | 7552668 | 2024-02-08 23:18:49 | 2024-02-09 03:06:38 | 2024-02-09 03:46:13 | 0:39:35 | 0:31:40 | 0:07:55 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7552669 | 2024-02-08 23:18:50 | 2024-02-09 03:07:18 | 2024-02-09 03:41:37 | 0:34:19 | 0:23:49 | 0:10:30 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7552670 | 2024-02-08 23:18:51 | 2024-02-09 03:07:19 | 2024-02-09 03:55:33 | 0:48:14 | 0:37:47 | 0:10:27 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: "2024-02-09T03:35:23.670998+0000 mon.a (mon.0) 718 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
pass | 7552671 | 2024-02-08 23:18:52 | 2024-02-09 03:07:49 | 2024-02-09 03:43:50 | 0:36:01 | 0:26:16 | 0:09:45 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
fail | 7552672 | 2024-02-08 23:18:53 | 2024-02-09 03:08:40 | 2024-02-09 03:57:36 | 0:48:56 | 0:39:14 | 0:09:42 | smithi | main | centos | 8.stream | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9c640ed0-c6fa-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 7552673 | 2024-02-08 23:18:53 | 2024-02-09 03:08:40 | 2024-02-09 03:38:35 | 0:29:55 | 0:20:39 | 0:09:16 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
pass | 7552674 | 2024-02-08 23:18:54 | 2024-02-09 03:08:40 | 2024-02-09 03:34:14 | 0:25:34 | 0:15:31 | 0:10:03 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7552675 | 2024-02-08 23:18:55 | 2024-02-09 03:08:51 | 2024-02-09 03:57:54 | 0:49:03 | 0:42:06 | 0:06:57 | smithi | main | rhel | 8.6 | rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: "2024-02-09T03:39:22.281272+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7552676 | 2024-02-08 23:18:56 | 2024-02-09 03:08:51 | 2024-02-09 03:33:47 | 0:24:56 | 0:15:59 | 0:08:57 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/rados_python} | 2 | |
pass | 7552677 | 2024-02-08 23:18:57 | 2024-02-09 03:08:51 | 2024-02-09 03:29:52 | 0:21:01 | 0:11:22 | 0:09:39 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} | 1 | |
pass | 7552678 | 2024-02-08 23:18:57 | 2024-02-09 03:08:52 | 2024-02-09 03:43:00 | 0:34:08 | 0:23:41 | 0:10:27 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7552679 | 2024-02-08 23:18:58 | 2024-02-09 03:08:52 | 2024-02-09 03:52:34 | 0:43:42 | 0:33:29 | 0:10:13 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: "2024-02-09T03:28:11.514710+0000 mon.a (mon.0) 178 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7552680 | 2024-02-08 23:18:59 | 2024-02-09 03:08:53 | 2024-02-09 03:45:01 | 0:36:08 | 0:23:01 | 0:13:07 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7552681 | 2024-02-08 23:19:00 | 2024-02-09 03:10:53 | 2024-02-09 03:35:56 | 0:25:03 | 0:13:07 | 0:11:56 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552682 | 2024-02-08 23:19:01 | 2024-02-09 03:10:54 | 2024-02-09 04:01:12 | 0:50:18 | 0:41:51 | 0:08:27 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
fail | 7552683 | 2024-02-08 23:19:02 | 2024-02-09 03:10:54 | 2024-02-09 03:39:44 | 0:28:50 | 0:18:11 | 0:10:39 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi037 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 7552684 | 2024-02-08 23:19:02 | 2024-02-09 03:13:15 | 2024-02-09 03:32:54 | 0:19:39 | 0:09:10 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552685 | 2024-02-08 23:19:03 | 2024-02-09 03:13:15 | 2024-02-09 03:38:20 | 0:25:05 | 0:19:25 | 0:05:40 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "2024-02-09T03:36:27.959937+0000 mon.smithi043 (mon.0) 607 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552686 | 2024-02-08 23:19:04 | 2024-02-09 03:13:15 | 2024-02-09 03:33:01 | 0:19:46 | 0:10:14 | 0:09:32 | smithi | main | centos | 8.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 2 | |
fail | 7552687 | 2024-02-08 23:19:05 | 2024-02-09 03:13:56 | 2024-02-09 03:42:09 | 0:28:13 | 0:20:24 | 0:07:49 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-09T03:38:30.735199+0000 mon.a (mon.0) 658 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
pass | 7552688 | 2024-02-08 23:19:06 | 2024-02-09 03:14:56 | 2024-02-09 04:21:53 | 1:06:57 | 0:59:09 | 0:07:48 | smithi | main | centos | 8.stream | rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552689 | 2024-02-08 23:19:07 | 2024-02-09 03:14:57 | 2024-02-09 03:52:47 | 0:37:50 | 0:30:46 | 0:07:04 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/small-objects} | 2 | |
fail | 7552690 | 2024-02-08 23:19:08 | 2024-02-09 03:15:37 | 2024-02-09 04:02:22 | 0:46:45 | 0:35:31 | 0:11:14 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "2024-02-09T03:36:47.379496+0000 mon.a (mon.0) 181 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7552691 | 2024-02-08 23:19:08 | 2024-02-09 03:15:48 | 2024-02-09 03:38:57 | 0:23:09 | 0:11:18 | 0:11:51 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
fail | 7552692 | 2024-02-08 23:19:09 | 2024-02-09 03:16:58 | 2024-02-09 03:53:56 | 0:36:58 | 0:26:02 | 0:10:56 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi114 with status 5: 'sudo systemctl stop ceph-630c6ce8-c6fc-11ee-95b6-87774f69a715@mon.smithi114'
pass | 7552693 | 2024-02-08 23:19:10 | 2024-02-09 03:18:19 | 2024-02-09 04:05:39 | 0:47:20 | 0:41:20 | 0:06:00 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} | 2 | |
pass | 7552694 | 2024-02-08 23:19:11 | 2024-02-09 03:18:29 | 2024-02-09 03:44:04 | 0:25:35 | 0:16:41 | 0:08:54 | smithi | main | centos | 8.stream | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8}} | 1 | |
pass | 7552695 | 2024-02-08 23:19:12 | 2024-02-09 03:18:50 | 2024-02-09 04:38:08 | 1:19:18 | 1:09:59 | 0:09:19 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7552696 | 2024-02-08 23:19:12 | 2024-02-09 03:18:50 | 2024-02-09 03:44:04 | 0:25:14 | 0:17:01 | 0:08:13 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 7552697 | 2024-02-08 23:19:13 | 2024-02-09 03:20:20 | 2024-02-09 03:38:02 | 0:17:42 | 0:07:26 | 0:10:16 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7552698 | 2024-02-08 23:19:14 | 2024-02-09 03:20:51 | 2024-02-09 03:42:29 | 0:21:38 | 0:12:08 | 0:09:30 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/workunits} | 2 | |
pass | 7552699 | 2024-02-08 23:19:15 | 2024-02-09 03:20:51 | 2024-02-09 03:51:45 | 0:30:54 | 0:22:31 | 0:08:23 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7552700 | 2024-02-08 23:19:16 | 2024-02-09 03:22:42 | 2024-02-09 03:48:19 | 0:25:37 | 0:17:54 | 0:07:43 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552701 | 2024-02-08 23:19:17 | 2024-02-09 03:22:52 | 2024-02-09 04:25:56 | 1:03:04 | 0:52:57 | 0:10:07 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: "2024-02-09T03:47:07.102580+0000 mon.a (mon.0) 530 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7552702 | 2024-02-08 23:19:17 | 2024-02-09 03:22:53 | 2024-02-09 03:49:04 | 0:26:11 | 0:18:36 | 0:07:35 | smithi | main | rhel | 8.6 | rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7552703 | 2024-02-08 23:19:18 | 2024-02-09 03:24:03 | 2024-02-09 04:17:55 | 0:53:52 | 0:43:38 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} | 1 | |
pass | 7552704 | 2024-02-08 23:19:19 | 2024-02-09 03:24:04 | 2024-02-09 04:05:00 | 0:40:56 | 0:34:57 | 0:05:59 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 7552705 | 2024-02-08 23:19:20 | 2024-02-09 03:24:04 | 2024-02-09 03:44:02 | 0:19:58 | smithi | main | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |||
Failure Reason: Failed to reconnect to smithi120
pass | 7552706 | 2024-02-08 23:19:21 | 2024-02-09 03:24:05 | 2024-02-09 04:02:54 | 0:38:49 | 0:29:01 | 0:09:48 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 7552707 | 2024-02-08 23:19:22 | 2024-02-09 03:24:05 | 2024-02-09 03:49:09 | 0:25:04 | 0:17:48 | 0:07:16 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-09T03:45:40.192489+0000 mon.a (mon.0) 704 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log
fail | 7552708 | 2024-02-08 23:19:22 | 2024-02-09 03:24:05 | 2024-02-09 03:43:23 | 0:19:18 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |||
Failure Reason: Failed to reconnect to smithi059
pass | 7552709 | 2024-02-08 23:19:23 | 2024-02-09 03:24:06 | 2024-02-09 03:48:53 | 0:24:47 | 0:16:35 | 0:08:12 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} | 2 | |
pass | 7552710 | 2024-02-08 23:19:24 | 2024-02-09 03:24:06 | 2024-02-09 03:48:38 | 0:24:32 | 0:16:05 | 0:08:27 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
pass | 7552711 | 2024-02-08 23:19:25 | 2024-02-09 03:24:06 | 2024-02-09 03:42:12 | 0:18:06 | 0:08:30 | 0:09:36 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552712 | 2024-02-08 23:19:26 | 2024-02-09 03:24:07 | 2024-02-09 03:46:01 | 0:21:54 | 0:12:23 | 0:09:31 | smithi | main | centos | 8.stream | rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552713 | 2024-02-08 23:19:27 | 2024-02-09 03:24:07 | 2024-02-09 03:59:09 | 0:35:02 | 0:26:27 | 0:08:35 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 7552714 | 2024-02-08 23:19:28 | 2024-02-09 03:24:07 | 2024-02-09 04:53:12 | 1:29:05 | 1:16:36 | 0:12:29 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-02-09T04:10:00.000161+0000 mon.a (mon.0) 1492 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
pass | 7552715 | 2024-02-08 23:19:28 | 2024-02-09 03:27:18 | 2024-02-09 04:02:08 | 0:34:50 | 0:24:28 | 0:10:22 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
fail | 7552716 | 2024-02-08 23:19:29 | 2024-02-09 03:27:19 | 2024-02-09 03:51:07 | 0:23:48 | 0:17:11 | 0:06:37 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-02-09T03:48:19.812348+0000 mon.smithi129 (mon.0) 569 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7552717 | 2024-02-08 23:19:30 | 2024-02-09 03:27:59 | 2024-02-09 04:06:24 | 0:38:25 | 0:26:22 | 0:12:03 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi163 with status 5: 'sudo systemctl stop ceph-f4007216-c6fd-11ee-95b6-87774f69a715@mon.smithi163'
fail | 7552718 | 2024-02-08 23:19:31 | 2024-02-09 03:28:50 | 2024-02-09 03:59:39 | 0:30:49 | 0:19:46 | 0:11:03 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
Failure Reason: "2024-02-09T03:47:46.841912+0000 mon.a (mon.0) 249 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
pass | 7552719 | 2024-02-08 23:19:32 | 2024-02-09 03:29:30 | 2024-02-09 03:51:39 | 0:22:09 | 0:12:47 | 0:09:22 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
pass | 7552720 | 2024-02-08 23:19:33 | 2024-02-09 03:29:30 | 2024-02-09 04:08:00 | 0:38:30 | 0:27:21 | 0:11:09 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7552721 | 2024-02-08 23:19:33 | 2024-02-09 03:30:51 | 2024-02-09 04:26:14 | 0:55:23 | 0:44:02 | 0:11:21 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
pass | 7552722 | 2024-02-08 23:19:34 | 2024-02-09 03:32:02 | 2024-02-09 03:57:28 | 0:25:26 | 0:18:48 | 0:06:38 | smithi | main | rhel | 8.6 | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
dead | 7552723 | 2024-02-08 23:19:35 | 2024-02-09 03:32:02 | 2024-02-09 03:53:22 | 0:21:20 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |||
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
fail | 7552724 | 2024-02-08 23:19:36 | 2024-02-09 03:33:02 | 2024-02-09 04:10:09 | 0:37:07 | 0:27:08 | 0:09:59 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} | 1 | |
Failure Reason: "2024-02-09T03:56:00.293078+0000 mon.a (mon.0) 498 : cluster [WRN] Replacing daemon mds.a.smithi136.lxcebu as rank 0 with standby daemon mds.user_test_fs.smithi136.guyzpf" in cluster log
pass | 7552725 | 2024-02-08 23:19:37 | 2024-02-09 03:33:03 | 2024-02-09 04:15:04 | 0:42:01 | 0:35:40 | 0:06:21 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 2 | |
fail | 7552726 | 2024-02-08 23:19:38 | 2024-02-09 03:33:53 | 2024-02-09 04:05:50 | 0:31:57 | 0:21:42 | 0:10:15 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-09T03:56:48.018723+0000 mon.a (mon.0) 481 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7552727 | 2024-02-08 23:19:38 | 2024-02-09 03:34:04 | 2024-02-09 03:57:53 | 0:23:49 | 0:17:42 | 0:06:07 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 2 | |
pass | 7552728 | 2024-02-08 23:19:39 | 2024-02-09 03:34:24 | 2024-02-09 04:21:39 | 0:47:15 | 0:35:00 | 0:12:15 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7552729 | 2024-02-08 23:19:40 | 2024-02-09 03:36:05 | 2024-02-09 04:09:54 | 0:33:49 | 0:23:52 | 0:09:57 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
fail | 7552730 | 2024-02-08 23:19:41 | 2024-02-09 03:36:05 | 2024-02-09 04:11:46 | 0:35:41 | 0:23:04 | 0:12:37 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi070 with status 5: 'sudo systemctl stop ceph-bcfb0104-c6fe-11ee-95b6-87774f69a715@mon.smithi070'
pass | 7552731 | 2024-02-08 23:19:42 | 2024-02-09 03:38:36 | 2024-02-09 04:11:41 | 0:33:05 | 0:26:54 | 0:06:11 | smithi | main | rhel | 8.6 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552732 | 2024-02-08 23:19:42 | 2024-02-09 03:38:37 | 2024-02-09 04:06:48 | 0:28:11 | 0:17:36 | 0:10:35 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7552733 | 2024-02-08 23:19:43 | 2024-02-09 03:39:47 | 2024-02-09 03:59:28 | 0:19:41 | 0:10:17 | 0:09:24 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552734 | 2024-02-08 23:19:44 | 2024-02-09 03:39:47 | 2024-02-09 04:25:33 | 0:45:46 | 0:34:30 | 0:11:16 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: "2024-02-09T04:05:09.654440+0000 mon.a (mon.0) 524 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 7552735 | 2024-02-08 23:19:45 | 2024-02-09 03:39:48 | 2024-02-09 04:08:36 | 0:28:48 | 0:18:24 | 0:10:24 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
Failure Reason: "2024-02-09T04:04:11.374180+0000 mon.a (mon.0) 524 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
pass | 7552736 | 2024-02-08 23:19:46 | 2024-02-09 03:39:48 | 2024-02-09 04:32:57 | 0:53:09 | 0:46:54 | 0:06:15 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/one workloads/snaps-few-objects} | 2 | |
pass | 7552737 | 2024-02-08 23:19:47 | 2024-02-09 03:39:49 | 2024-02-09 04:10:31 | 0:30:42 | 0:24:49 | 0:05:53 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
pass | 7552738 | 2024-02-08 23:19:48 | 2024-02-09 03:39:49 | 2024-02-09 04:03:11 | 0:23:22 | 0:13:05 | 0:10:17 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/crash} | 2 | |
pass | 7552739 | 2024-02-08 23:19:49 | 2024-02-09 03:39:49 | 2024-02-09 04:02:49 | 0:23:00 | 0:14:30 | 0:08:30 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7552740 | 2024-02-08 23:19:49 | 2024-02-09 03:39:50 | 2024-02-09 04:31:40 | 0:51:50 | 0:41:40 | 0:10:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: "2024-02-09T04:12:12.826919+0000 mon.a (mon.0) 696 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7552741 | 2024-02-08 23:19:50 | 2024-02-09 03:39:50 | 2024-02-09 04:01:05 | 0:21:15 | 0:09:50 | 0:11:25 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_striper} | 2 | |
pass | 7552742 | 2024-02-08 23:19:51 | 2024-02-09 03:41:41 | 2024-02-09 04:23:44 | 0:42:03 | 0:32:13 | 0:09:50 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552743 | 2024-02-08 23:19:52 | 2024-02-09 03:41:41 | 2024-02-09 04:07:44 | 0:26:03 | 0:15:29 | 0:10:34 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: "2024-02-09T04:05:51.312567+0000 mon.a (mon.0) 403 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552744 | 2024-02-08 23:19:53 | 2024-02-09 03:42:22 | 2024-02-09 04:00:25 | 0:18:03 | 0:08:50 | 0:09:13 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552745 | 2024-02-08 23:19:53 | 2024-02-09 03:42:32 | 2024-02-09 05:02:50 | 1:20:18 | 1:11:55 | 0:08:23 | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 2 | |
fail | 7552746 | 2024-02-08 23:19:54 | 2024-02-09 03:43:03 | 2024-02-09 04:20:37 | 0:37:34 | 0:28:19 | 0:09:15 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi137 with status 5: 'sudo systemctl stop ceph-1bdf0cf0-c700-11ee-95b6-87774f69a715@mon.smithi137' |
||||||||||||||
pass | 7552747 | 2024-02-08 23:19:55 | 2024-02-09 03:43:33 | 2024-02-09 04:06:17 | 0:22:44 | 0:11:18 | 0:11:26 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7552748 | 2024-02-08 23:19:56 | 2024-02-09 03:43:54 | 2024-02-09 04:07:54 | 0:24:00 | 0:16:03 | 0:07:57 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7552749 | 2024-02-08 23:19:57 | 2024-02-09 03:44:14 | 2024-02-09 04:11:18 | 0:27:04 | 0:20:17 | 0:06:47 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552750 | 2024-02-08 23:19:58 | 2024-02-09 03:44:15 | 2024-02-09 04:19:24 | 0:35:09 | 0:22:45 | 0:12:24 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-02-09T04:09:53.018329+0000 mon.a (mon.0) 521 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log |
||||||||||||||
pass | 7552751 | 2024-02-08 23:19:58 | 2024-02-09 03:45:05 | 2024-02-09 04:19:32 | 0:34:27 | 0:27:40 | 0:06:47 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7552752 | 2024-02-08 23:19:59 | 2024-02-09 03:46:16 | 2024-02-09 04:08:41 | 0:22:25 | 0:12:31 | 0:09:54 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
pass | 7552753 | 2024-02-08 23:20:00 | 2024-02-09 03:46:17 | 2024-02-09 04:16:06 | 0:29:49 | 0:19:37 | 0:10:12 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
fail | 7552754 | 2024-02-08 23:20:01 | 2024-02-09 03:48:47 | 2024-02-09 04:31:00 | 0:42:13 | 0:30:52 | 0:11:21 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
"2024-02-09T04:24:47.971011+0000 mon.a (mon.0) 969 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
||||||||||||||
pass | 7552755 | 2024-02-08 23:20:02 | 2024-02-09 03:49:48 | 2024-02-09 04:20:58 | 0:31:10 | 0:23:36 | 0:07:34 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} | 1 | |
pass | 7552756 | 2024-02-08 23:20:03 | 2024-02-09 03:49:49 | 2024-02-09 04:56:09 | 1:06:20 | 0:56:28 | 0:09:52 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
pass | 7552757 | 2024-02-08 23:20:03 | 2024-02-09 03:49:59 | 2024-02-09 04:37:56 | 0:47:57 | 0:37:27 | 0:10:30 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7552758 | 2024-02-08 23:20:04 | 2024-02-09 03:50:00 | 2024-02-09 04:53:22 | 1:03:22 | 0:52:17 | 0:11:05 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
"2024-02-09T04:30:00.000119+0000 mon.a (mon.0) 1418 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log |
||||||||||||||
pass | 7552759 | 2024-02-08 23:20:05 | 2024-02-09 03:51:50 | 2024-02-09 04:14:45 | 0:22:55 | 0:16:18 | 0:06:37 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7552760 | 2024-02-08 23:20:06 | 2024-02-09 03:51:51 | 2024-02-09 04:12:44 | 0:20:53 | 0:10:13 | 0:10:40 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552761 | 2024-02-08 23:20:07 | 2024-02-09 03:52:52 | 2024-02-09 04:26:53 | 0:34:01 | 0:23:40 | 0:10:21 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |
Failure Reason:
"2024-02-09T04:24:08.455862+0000 mon.a (mon.0) 499 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
||||||||||||||
pass | 7552762 | 2024-02-08 23:20:08 | 2024-02-09 03:53:42 | 2024-02-09 05:18:32 | 1:24:50 | 1:17:34 | 0:07:16 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-agent-big} | 2 | |
pass | 7552763 | 2024-02-08 23:20:08 | 2024-02-09 03:54:43 | 2024-02-09 04:31:53 | 0:37:10 | 0:27:55 | 0:09:15 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7552764 | 2024-02-08 23:20:09 | 2024-02-09 03:55:03 | 2024-02-09 04:35:29 | 0:40:26 | 0:28:47 | 0:11:39 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-065f7bc4-c702-11ee-95b6-87774f69a715@mon.smithi027' |
||||||||||||||
pass | 7552765 | 2024-02-08 23:20:10 | 2024-02-09 03:55:24 | 2024-02-09 04:28:06 | 0:32:42 | 0:21:02 | 0:11:40 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7552766 | 2024-02-08 23:20:11 | 2024-02-09 03:55:24 | 2024-02-09 04:22:35 | 0:27:11 | 0:17:05 | 0:10:06 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"2024-02-09T04:12:33.906287+0000 mon.a (mon.0) 160 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
||||||||||||||
pass | 7552767 | 2024-02-08 23:20:12 | 2024-02-09 03:55:25 | 2024-02-09 04:18:14 | 0:22:49 | 0:14:29 | 0:08:20 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 | |
fail | 7552768 | 2024-02-08 23:20:13 | 2024-02-09 03:55:25 | 2024-02-09 04:47:22 | 0:51:57 | 0:41:37 | 0:10:20 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
"2024-02-09T04:31:26.433763+0000 mon.a (mon.0) 1383 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log |
||||||||||||||
pass | 7552769 | 2024-02-08 23:20:13 | 2024-02-09 03:55:26 | 2024-02-09 04:29:05 | 0:33:39 | 0:25:44 | 0:07:55 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 3 | |
pass | 7552770 | 2024-02-08 23:20:14 | 2024-02-09 03:55:26 | 2024-02-09 04:17:06 | 0:21:40 | 0:12:56 | 0:08:44 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552771 | 2024-02-08 23:20:15 | 2024-02-09 03:55:27 | 2024-02-09 04:37:26 | 0:41:59 | 0:34:55 | 0:07:04 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big} | 2 | |
fail | 7552772 | 2024-02-08 23:20:16 | 2024-02-09 03:55:27 | 2024-02-09 04:37:27 | 0:42:00 | 0:31:23 | 0:10:37 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason:
"2024-02-09T04:12:43.213750+0000 mon.a (mon.0) 121 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log |
||||||||||||||
pass | 7552773 | 2024-02-08 23:20:17 | 2024-02-09 03:55:28 | 2024-02-09 04:24:29 | 0:29:01 | 0:23:32 | 0:05:29 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} | 2 | |
pass | 7552774 | 2024-02-08 23:20:18 | 2024-02-09 03:55:28 | 2024-02-09 04:30:21 | 0:34:53 | 0:26:29 | 0:08:24 | smithi | main | centos | 8.stream | rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7552775 | 2024-02-08 23:20:19 | 2024-02-09 03:55:28 | 2024-02-09 04:31:50 | 0:36:22 | 0:22:39 | 0:13:43 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7552776 | 2024-02-08 23:20:19 | 2024-02-09 03:57:29 | 2024-02-09 04:19:32 | 0:22:03 | 0:11:40 | 0:10:23 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} | 1 | |
pass | 7552777 | 2024-02-08 23:20:20 | 2024-02-09 03:58:00 | 2024-02-09 04:44:22 | 0:46:22 | 0:35:18 | 0:11:04 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
pass | 7552778 | 2024-02-08 23:20:21 | 2024-02-09 03:59:11 | 2024-02-09 04:19:10 | 0:19:59 | 0:10:18 | 0:09:41 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
pass | 7552779 | 2024-02-08 23:20:22 | 2024-02-09 03:59:11 | 2024-02-09 04:31:32 | 0:32:21 | 0:20:30 | 0:11:51 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
pass | 7552780 | 2024-02-08 23:20:23 | 2024-02-09 04:00:32 | 2024-02-09 04:26:31 | 0:25:59 | 0:14:51 | 0:11:08 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/failover} | 2 | |
pass | 7552781 | 2024-02-08 23:20:23 | 2024-02-09 04:01:12 | 2024-02-09 04:32:37 | 0:31:25 | 0:23:02 | 0:08:23 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7552782 | 2024-02-08 23:20:24 | 2024-02-09 04:02:53 | 2024-02-09 04:28:20 | 0:25:27 | 0:16:52 | 0:08:35 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7552783 | 2024-02-08 23:20:25 | 2024-02-09 04:02:54 | 2024-02-09 04:46:31 | 0:43:37 | 0:34:08 | 0:09:29 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7552784 | 2024-02-08 23:20:26 | 2024-02-09 04:03:04 | 2024-02-09 04:30:39 | 0:27:35 | 0:17:57 | 0:09:38 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552785 | 2024-02-08 23:20:27 | 2024-02-09 04:03:14 | 2024-02-09 04:40:15 | 0:37:01 | 0:29:25 | 0:07:36 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} | 2 | |
fail | 7552786 | 2024-02-08 23:20:28 | 2024-02-09 04:05:05 | 2024-02-09 04:31:02 | 0:25:57 | 0:16:02 | 0:09:55 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi047 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0e714d9a4bd2a821113e6318adb87bd06cf81ec1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 724f7a0a-c702-11ee-95b6-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\'' |
||||||||||||||
pass | 7552787 | 2024-02-08 23:20:28 | 2024-02-09 04:05:46 | 2024-02-09 04:30:47 | 0:25:01 | 0:18:58 | 0:06:03 | smithi | main | rhel | 8.6 | rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552788 | 2024-02-08 23:20:29 | 2024-02-09 04:05:46 | 2024-02-09 04:35:02 | 0:29:16 | 0:19:07 | 0:10:09 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
||||||||||||||
pass | 7552789 | 2024-02-08 23:20:30 | 2024-02-09 04:06:27 | 2024-02-09 04:52:09 | 0:45:42 | 0:37:28 | 0:08:14 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7552790 | 2024-02-08 23:20:31 | 2024-02-09 04:06:57 | 2024-02-09 06:50:19 | 2:43:22 | 2:35:01 | 0:08:21 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}} | 1 | |
pass | 7552791 | 2024-02-08 23:20:32 | 2024-02-09 04:06:58 | 2024-02-09 04:41:17 | 0:34:19 | 0:24:06 | 0:10:13 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
fail | 7552792 | 2024-02-08 23:20:32 | 2024-02-09 04:07:58 | 2024-02-09 04:35:49 | 0:27:51 | 0:21:11 | 0:06:40 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-02-09T04:27:58.334844+0000 mon.a (mon.0) 158 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
||||||||||||||
pass | 7552793 | 2024-02-08 23:20:33 | 2024-02-09 04:08:09 | 2024-02-09 04:53:26 | 0:45:17 | 0:32:42 | 0:12:35 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
pass | 7552794 | 2024-02-08 23:20:34 | 2024-02-09 04:08:50 | 2024-02-09 04:41:03 | 0:32:13 | 0:19:15 | 0:12:58 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
fail | 7552795 | 2024-02-08 23:20:35 | 2024-02-09 04:10:00 | 2024-02-09 04:47:31 | 0:37:31 | 0:26:47 | 0:10:44 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-b480be38-c703-11ee-95b6-87774f69a715@mon.smithi043' |
||||||||||||||
pass | 7552796 | 2024-02-08 23:20:36 | 2024-02-09 04:10:41 | 2024-02-09 04:44:03 | 0:33:22 | 0:21:46 | 0:11:36 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} | 2 | |
pass | 7552797 | 2024-02-08 23:20:37 | 2024-02-09 04:11:01 | 2024-02-09 04:46:06 | 0:35:05 | 0:23:00 | 0:12:05 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
pass | 7552798 | 2024-02-08 23:20:38 | 2024-02-09 04:11:02 | 2024-02-09 04:28:54 | 0:17:52 | 0:07:37 | 0:10:15 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7552799 | 2024-02-08 23:20:39 | 2024-02-09 04:11:02 | 2024-02-09 04:51:02 | 0:40:00 | 0:30:55 | 0:09:05 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7552800 | 2024-02-08 23:20:39 | 2024-02-09 04:11:03 | 2024-02-09 04:35:59 | 0:24:56 | 0:18:40 | 0:06:16 | smithi | main | rhel | 8.6 | rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552801 | 2024-02-08 23:20:40 | 2024-02-09 04:11:03 | 2024-02-09 05:07:00 | 0:55:57 | 0:35:33 | 0:20:24 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} | 3 | |
fail | 7552802 | 2024-02-08 23:20:41 | 2024-02-09 04:11:03 | 2024-02-09 05:58:13 | 1:47:10 | 1:37:06 | 0:10:04 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7c86b0d2-c703-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
||||||||||||||
fail | 7552803 | 2024-02-08 23:20:42 | 2024-02-09 04:11:04 | 2024-02-09 05:05:03 | 0:53:59 | 0:43:19 | 0:10:40 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds |
||||||||||||||
pass | 7552804 | 2024-02-08 23:20:43 | 2024-02-09 04:11:04 | 2024-02-09 04:49:04 | 0:38:00 | 0:28:20 | 0:09:40 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7552805 | 2024-02-08 23:20:44 | 2024-02-09 04:11:05 | 2024-02-09 04:43:07 | 0:32:02 | 0:21:18 | 0:10:44 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} | 2 | |
pass | 7552806 | 2024-02-08 23:20:45 | 2024-02-09 04:11:05 | 2024-02-09 05:02:03 | 0:50:58 | 0:43:55 | 0:07:03 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} | 1 | |
fail | 7552807 | 2024-02-08 23:20:45 | 2024-02-09 04:11:06 | 2024-02-09 05:01:13 | 0:50:07 | 0:38:16 | 0:11:51 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
"2024-02-09T04:34:51.375795+0000 mon.a (mon.0) 158 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
||||||||||||||
pass | 7552808 | 2024-02-08 23:20:46 | 2024-02-09 04:11:06 | 2024-02-09 04:43:54 | 0:32:48 | 0:21:14 | 0:11:34 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
pass | 7552809 | 2024-02-08 23:20:47 | 2024-02-09 04:11:06 | 2024-02-09 04:47:31 | 0:36:25 | 0:29:13 | 0:07:12 | smithi | main | rhel | 8.6 | rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552810 | 2024-02-08 23:20:48 | 2024-02-09 04:11:07 | 2024-02-09 04:51:44 | 0:40:37 | 0:32:49 | 0:07:48 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} | 2 | |
fail | 7552811 | 2024-02-08 23:20:49 | 2024-02-09 04:12:48 | 2024-02-09 05:07:54 | 0:55:06 | 0:46:56 | 0:08:10 | smithi | main | rhel | 8.6 | rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
"2024-02-09T04:50:00.000137+0000 mon.a (mon.0) 1125 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled" in cluster log |
||||||||||||||
pass | 7552812 | 2024-02-08 23:20:49 | 2024-02-09 04:14:48 | 2024-02-09 04:43:45 | 0:28:57 | 0:21:02 | 0:07:55 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7552813 | 2024-02-08 23:20:50 | 2024-02-09 04:15:09 | 2024-02-09 04:41:28 | 0:26:19 | 0:18:07 | 0:08:12 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"2024-02-09T04:38:37.603579+0000 mon.a (mon.0) 675 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log |
||||||||||||||
pass | 7552814 | 2024-02-08 23:20:51 | 2024-02-09 04:16:10 | 2024-02-09 04:35:10 | 0:19:00 | 0:08:44 | 0:10:16 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
pass | 7552815 | 2024-02-08 23:20:52 | 2024-02-09 04:17:10 | 2024-02-09 04:45:00 | 0:27:50 | 0:16:47 | 0:11:03 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
fail | 7552816 | 2024-02-08 23:20:53 | 2024-02-09 04:18:01 | 2024-02-09 04:47:16 | 0:29:15 | 0:18:27 | 0:10:48 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"2024-02-09T04:44:04.769674+0000 mon.smithi096 (mon.0) 643 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
||||||||||||||
fail | 7552817 | 2024-02-08 23:20:54 | 2024-02-09 04:19:11 | 2024-02-09 05:08:39 | 0:49:28 | 0:43:20 | 0:06:08 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds |
||||||||||||||
pass | 7552818 | 2024-02-08 23:20:54 | 2024-02-09 04:19:42 | 2024-02-09 04:40:10 | 0:20:28 | 0:09:41 | 0:10:47 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552819 | 2024-02-08 23:20:55 | 2024-02-09 04:19:42 | 2024-02-09 04:57:12 | 0:37:30 | 0:25:20 | 0:12:10 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 7552820 | 2024-02-08 23:20:56 | 2024-02-09 04:21:43 | 2024-02-09 04:53:54 | 0:32:11 | 0:21:10 | 0:11:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-snaps} | 2 | |
pass | 7552821 | 2024-02-08 23:20:57 | 2024-02-09 04:21:44 | 2024-02-09 04:47:13 | 0:25:29 | 0:13:43 | 0:11:46 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} tasks/insights} | 2 | |
pass | 7552822 | 2024-02-08 23:20:58 | 2024-02-09 04:23:55 | 2024-02-09 04:51:43 | 0:27:48 | 0:14:34 | 0:13:14 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7552823 | 2024-02-08 23:20:58 | 2024-02-09 04:26:36 | 2024-02-09 05:03:48 | 0:37:12 | 0:25:52 | 0:11:20 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-16acd05e-c706-11ee-95b6-87774f69a715@mon.smithi140'
pass | 7552824 | 2024-02-08 23:20:59 | 2024-02-09 04:26:36 | 2024-02-09 04:49:14 | 0:22:38 | 0:11:28 | 0:11:10 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} | 2 | |
pass | 7552825 | 2024-02-08 23:21:00 | 2024-02-09 04:26:37 | 2024-02-09 04:47:02 | 0:20:25 | 0:09:18 | 0:11:07 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552826 | 2024-02-08 23:21:01 | 2024-02-09 04:26:37 | 2024-02-09 05:25:42 | 0:59:05 | 0:47:32 | 0:11:33 | smithi | main | centos | 8.stream | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
Failure Reason:
"2024-02-09T04:45:21.162750+0000 mon.a (mon.0) 140 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
fail | 7552827 | 2024-02-08 23:21:02 | 2024-02-09 04:26:38 | 2024-02-09 04:46:25 | 0:19:47 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |||
Failure Reason:
Failed to reconnect to smithi170
fail | 7552828 | 2024-02-08 23:21:03 | 2024-02-09 04:26:38 | 2024-02-09 05:03:48 | 0:37:10 | 0:26:56 | 0:10:14 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
"2024-02-09T04:51:23.908851+0000 mon.a (mon.0) 497 : cluster [WRN] Replacing daemon mds.a.smithi181.csvkng as rank 0 with standby daemon mds.user_test_fs.smithi181.iswphm" in cluster log
pass | 7552829 | 2024-02-08 23:21:03 | 2024-02-09 04:26:38 | 2024-02-09 04:55:53 | 0:29:15 | 0:21:20 | 0:07:55 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7552830 | 2024-02-08 23:21:04 | 2024-02-09 04:26:39 | 2024-02-09 04:53:52 | 0:27:13 | 0:19:01 | 0:08:12 | smithi | main | rhel | 8.6 | rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552831 | 2024-02-08 23:21:05 | 2024-02-09 04:28:10 | 2024-02-09 05:10:34 | 0:42:24 | 0:21:47 | 0:20:37 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-02-09T04:55:41.143942+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7552832 | 2024-02-08 23:21:06 | 2024-02-09 04:28:30 | 2024-02-09 04:51:01 | 0:22:31 | 0:12:16 | 0:10:15 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache} | 2 | |
fail | 7552833 | 2024-02-08 23:21:07 | 2024-02-09 04:29:01 | 2024-02-09 05:14:32 | 0:45:31 | 0:38:14 | 0:07:17 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds
pass | 7552834 | 2024-02-08 23:21:08 | 2024-02-09 04:29:11 | 2024-02-09 05:11:46 | 0:42:35 | 0:30:52 | 0:11:43 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552835 | 2024-02-08 23:21:08 | 2024-02-09 04:29:12 | 2024-02-09 05:31:35 | 1:02:23 | 0:49:45 | 0:12:38 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason:
"2024-02-09T04:56:22.848138+0000 mon.a (mon.0) 542 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7552836 | 2024-02-08 23:21:09 | 2024-02-09 04:30:22 | 2024-02-09 07:22:37 | 2:52:15 | 2:40:26 | 0:11:49 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552837 | 2024-02-08 23:21:10 | 2024-02-09 04:30:43 | 2024-02-09 05:10:09 | 0:39:26 | 0:26:54 | 0:12:32 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mostlyread} | 2 | |
pass | 7552838 | 2024-02-08 23:21:11 | 2024-02-09 04:31:33 | 2024-02-09 04:53:29 | 0:21:56 | 0:12:25 | 0:09:31 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
fail | 7552839 | 2024-02-08 23:21:12 | 2024-02-09 04:31:34 | 2024-02-09 05:10:56 | 0:39:22 | 0:32:21 | 0:07:01 | smithi | main | rhel | 8.6 | rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
"2024-02-09T04:58:06.407235+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7552840 | 2024-02-08 23:21:13 | 2024-02-09 04:31:54 | 2024-02-09 05:04:12 | 0:32:18 | 0:20:43 | 0:11:35 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
pass | 7552841 | 2024-02-08 23:21:14 | 2024-02-09 04:31:55 | 2024-02-09 04:58:33 | 0:26:38 | 0:19:06 | 0:07:32 | smithi | main | rhel | 8.6 | rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552842 | 2024-02-08 23:21:14 | 2024-02-09 04:32:45 | 2024-02-09 04:57:51 | 0:25:06 | 0:15:11 | 0:09:55 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
"2024-02-09T04:55:01.971461+0000 mon.a (mon.0) 408 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552843 | 2024-02-08 23:21:15 | 2024-02-09 04:32:46 | 2024-02-09 05:14:32 | 0:41:46 | 0:32:30 | 0:09:16 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552844 | 2024-02-08 23:21:16 | 2024-02-09 04:32:46 | 2024-02-09 05:22:33 | 0:49:47 | 0:40:07 | 0:09:40 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds
pass | 7552845 | 2024-02-08 23:21:17 | 2024-02-09 04:33:07 | 2024-02-09 05:16:19 | 0:43:12 | 0:30:07 | 0:13:05 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
fail | 7552846 | 2024-02-08 23:21:18 | 2024-02-09 04:35:18 | 2024-02-09 05:12:28 | 0:37:10 | 0:26:02 | 0:11:08 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-2fe1e158-c707-11ee-95b6-87774f69a715@mon.smithi187'
pass | 7552847 | 2024-02-08 23:21:19 | 2024-02-09 04:35:18 | 2024-02-09 05:19:29 | 0:44:11 | 0:35:20 | 0:08:51 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7552848 | 2024-02-08 23:21:20 | 2024-02-09 04:37:29 | 2024-02-09 04:59:24 | 0:21:55 | 0:12:14 | 0:09:41 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552849 | 2024-02-08 23:21:20 | 2024-02-09 04:37:29 | 2024-02-09 05:03:09 | 0:25:40 | 0:15:02 | 0:10:38 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
"2024-02-09T04:59:41.672630+0000 mon.smithi086 (mon.0) 612 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552850 | 2024-02-08 23:21:21 | 2024-02-09 04:38:00 | 2024-02-09 05:09:29 | 0:31:29 | 0:21:55 | 0:09:34 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 7552851 | 2024-02-08 23:21:22 | 2024-02-09 05:26:20 | 0:36:05 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} | 3 | ||||
Failure Reason:
"2024-02-09T04:59:45.782774+0000 mon.a (mon.0) 178 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7552852 | 2024-02-08 23:21:23 | 2024-02-09 04:40:21 | 2024-02-09 05:06:56 | 0:26:35 | 0:19:24 | 0:07:11 | smithi | main | rhel | 8.6 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552853 | 2024-02-08 23:21:24 | 2024-02-09 04:41:12 | 2024-02-09 05:15:46 | 0:34:34 | 0:23:10 | 0:11:24 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"2024-02-09T05:00:51.250284+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 7552854 | 2024-02-08 23:21:25 | 2024-02-09 04:41:23 | 2024-02-09 05:00:40 | 0:19:17 | 0:08:19 | 0:10:58 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} | 2 | |
fail | 7552855 | 2024-02-08 23:21:25 | 2024-02-09 04:42:13 | 2024-02-09 05:24:21 | 0:42:08 | 0:29:16 | 0:12:52 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason:
"2024-02-09T05:18:03.393228+0000 mon.a (mon.0) 966 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log
pass | 7552856 | 2024-02-08 23:21:26 | 2024-02-09 04:42:14 | 2024-02-09 05:25:45 | 0:43:31 | 0:33:50 | 0:09:41 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests} | 2 | |
pass | 7552857 | 2024-02-08 23:21:27 | 2024-02-09 04:42:14 | 2024-02-09 05:10:05 | 0:27:51 | 0:18:37 | 0:09:14 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 7552858 | 2024-02-08 23:21:28 | 2024-02-09 04:42:15 | 2024-02-09 05:47:14 | 1:04:59 | 0:55:27 | 0:09:32 | smithi | main | centos | 8.stream | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/dashboard} | 2 | |
pass | 7552859 | 2024-02-08 23:21:29 | 2024-02-09 04:42:15 | 2024-02-09 05:10:12 | 0:27:57 | 0:18:45 | 0:09:12 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552860 | 2024-02-08 23:21:30 | 2024-02-09 04:42:15 | 2024-02-09 05:09:13 | 0:26:58 | 0:16:48 | 0:10:10 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/mon-stretch} | 1 | |
pass | 7552861 | 2024-02-08 23:21:30 | 2024-02-09 04:42:16 | 2024-02-09 08:05:53 | 3:23:37 | 3:04:06 | 0:19:31 | smithi | main | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} | 4 | |
fail | 7552862 | 2024-02-08 23:21:31 | 2024-02-09 04:42:16 | 2024-02-09 05:45:23 | 1:03:07 | 0:52:04 | 0:11:03 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
"2024-02-09T05:03:19.920300+0000 mon.a (mon.0) 172 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
pass | 7552863 | 2024-02-08 23:21:32 | 2024-02-09 05:28:51 | 2024-02-09 07:14:04 | 1:45:13 | 1:39:25 | 0:05:48 | smithi | main | rhel | 8.6 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552864 | 2024-02-08 23:21:33 | 2024-02-09 05:28:51 | 2024-02-09 07:35:49 | 2:06:58 | 1:53:02 | 0:13:56 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 7552865 | 2024-02-08 23:21:34 | 2024-02-09 05:28:52 | 2024-02-09 06:03:52 | 0:35:00 | 0:23:20 | 0:11:40 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/module_selftest} | 2 | |
pass | 7552866 | 2024-02-08 23:21:34 | 2024-02-09 05:28:52 | 2024-02-09 05:57:18 | 0:28:26 | 0:15:01 | 0:13:25 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7552867 | 2024-02-08 23:21:35 | 2024-02-09 05:28:52 | 2024-02-09 06:06:20 | 0:37:28 | 0:24:42 | 0:12:46 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason:
"2024-02-09T06:01:26.115945+0000 mon.a (mon.0) 376 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log
pass | 7552868 | 2024-02-08 23:21:36 | 2024-02-09 05:28:53 | 2024-02-09 06:16:07 | 0:47:14 | 0:36:50 | 0:10:24 | smithi | main | rhel | 8.6 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_api_tests} | 2 | |
fail | 7552869 | 2024-02-08 23:21:37 | 2024-02-09 05:28:53 | 2024-02-09 06:11:53 | 0:43:00 | 0:28:33 | 0:14:27 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi146 with status 5: 'sudo systemctl stop ceph-4efea582-c70f-11ee-95b6-87774f69a715@mon.smithi146'
pass | 7552870 | 2024-02-08 23:21:38 | 2024-02-09 05:28:54 | 2024-02-09 05:48:41 | 0:19:47 | 0:10:28 | 0:09:19 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
fail | 7552871 | 2024-02-08 23:21:39 | 2024-02-09 05:28:54 | 2024-02-09 06:04:20 | 0:35:26 | 0:22:46 | 0:12:40 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-849443a6-c70e-11ee-95b6-87774f69a715@mon.smithi149'
pass | 7552872 | 2024-02-08 23:21:39 | 2024-02-09 05:28:55 | 2024-02-09 06:39:46 | 1:10:51 | 0:58:23 | 0:12:28 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7552873 | 2024-02-08 23:21:40 | 2024-02-09 05:28:55 | 2024-02-09 05:55:42 | 0:26:47 | 0:18:20 | 0:08:27 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/readwrite} | 2 | |
pass | 7552874 | 2024-02-08 23:21:41 | 2024-02-09 05:28:55 | 2024-02-09 05:56:18 | 0:27:23 | 0:12:31 | 0:14:52 | smithi | main | ubuntu | 18.04 | rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} | 2 | |
pass | 7552875 | 2024-02-08 23:21:42 | 2024-02-09 05:28:56 | 2024-02-09 06:11:15 | 0:42:19 | 0:28:51 | 0:13:28 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 7552876 | 2024-02-08 23:21:43 | 2024-02-09 05:28:56 | 2024-02-09 05:57:17 | 0:28:21 | 0:19:41 | 0:08:40 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552877 | 2024-02-08 23:21:43 | 2024-02-09 05:28:57 | 2024-02-09 05:54:10 | 0:25:13 | 0:12:16 | 0:12:57 | smithi | main | centos | 8.stream | rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} | 3 | |
Failure Reason:
'package_manager_version'
pass | 7552878 | 2024-02-08 23:21:44 | 2024-02-09 05:28:57 | 2024-02-09 06:28:36 | 0:59:39 | 0:49:57 | 0:09:42 | smithi | main | rhel | 8.6 | rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552879 | 2024-02-08 23:21:45 | 2024-02-09 05:28:58 | 2024-02-09 05:57:33 | 0:28:35 | 0:17:42 | 0:10:53 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-02-09T05:53:45.928199+0000 mon.a (mon.0) 667 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log
pass | 7552880 | 2024-02-08 23:21:46 | 2024-02-09 05:28:58 | 2024-02-09 05:58:54 | 0:29:56 | 0:15:37 | 0:14:19 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7552881 | 2024-02-08 23:21:47 | 2024-02-09 05:28:58 | 2024-02-09 05:55:30 | 0:26:32 | 0:16:29 | 0:10:03 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
pass | 7552882 | 2024-02-08 23:21:47 | 2024-02-09 05:28:59 | 2024-02-09 06:29:01 | 1:00:02 | 0:49:03 | 0:10:59 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/radosbench} | 2 | |
pass | 7552883 | 2024-02-08 23:21:48 | 2024-02-09 05:28:59 | 2024-02-09 06:02:45 | 0:33:46 | 0:21:14 | 0:12:32 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
fail | 7552884 | 2024-02-08 23:21:49 | 2024-02-09 05:29:00 | 2024-02-09 06:17:06 | 0:48:06 | 0:36:37 | 0:11:29 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
"2024-02-09T05:56:24.012881+0000 mon.a (mon.0) 543 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7552885 | 2024-02-08 23:21:50 | 2024-02-09 05:29:00 | 2024-02-09 05:57:57 | 0:28:57 | 0:18:00 | 0:10:57 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552886 | 2024-02-08 23:21:51 | 2024-02-09 05:29:01 | 2024-02-09 05:50:54 | 0:21:53 | 0:08:21 | 0:13:32 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552887 | 2024-02-08 23:21:52 | 2024-02-09 05:29:01 | 2024-02-09 06:02:39 | 0:33:38 | 0:19:56 | 0:13:42 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason:
"2024-02-09T05:50:42.397352+0000 mon.a (mon.0) 244 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
fail | 7552888 | 2024-02-08 23:21:52 | 2024-02-09 05:29:01 | 2024-02-09 05:55:51 | 0:26:50 | 0:13:36 | 0:13:14 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
"2024-02-09T05:49:51.588527+0000 mon.a (mon.0) 180 : cluster [WRN] Health check failed: Degraded data redundancy: 2/52 objects degraded (3.846%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 7552889 | 2024-02-08 23:21:53 | 2024-02-09 05:29:02 | 2024-02-09 05:50:35 | 0:21:33 | smithi | main | ubuntu | 18.04 | rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |||
Failure Reason:
Failed to reconnect to smithi134
fail | 7552890 | 2024-02-08 23:21:54 | 2024-02-09 05:29:02 | 2024-02-09 05:40:16 | 0:11:14 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |||
Failure Reason:
Command failed on smithi067 with status 100: 'sudo apt-get clean'
dead | 7552891 | 2024-02-08 23:21:55 | 2024-02-09 05:29:03 | 2024-02-09 05:29:05 | 0:00:02 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} | 1 | |||
Failure Reason:
Error reimaging machines: 500 Server Error: Internal Server Error for url: http://fog.front.sepia.ceph.com/fog/host/172/task
pass | 7552892 | 2024-02-08 23:21:56 | 2024-02-09 05:29:03 | 2024-02-09 05:54:30 | 0:25:27 | 0:16:02 | 0:09:25 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect} | 2 | |
fail | 7552893 | 2024-02-08 23:21:57 | 2024-02-09 05:29:04 | 2024-02-09 05:58:58 | 0:29:54 | 0:19:46 | 0:10:08 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
"2024-02-09T05:54:15.951135+0000 mon.smithi113 (mon.0) 611 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7552894 | 2024-02-08 23:21:57 | 2024-02-09 05:29:04 | 2024-02-09 06:19:42 | 0:50:38 | 0:42:03 | 0:08:35 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds
pass | 7552895 | 2024-02-08 23:21:58 | 2024-02-09 05:29:04 | 2024-02-09 06:44:48 | 1:15:44 | 1:05:12 | 0:10:32 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
dead | 7552896 | 2024-02-08 23:21:59 | 2024-02-09 05:29:05 | 2024-02-09 05:40:16 | 0:11:11 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |||
Failure Reason:
SSH connection to smithi067 was lost: 'sudo apt-get update'
fail | 7552897 | 2024-02-08 23:22:00 | 2024-02-09 05:29:06 | 2024-02-09 05:58:25 | 0:29:19 | 0:18:22 | 0:10:57 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 7552898 | 2024-02-08 23:22:01 | 2024-02-09 05:29:06 | 2024-02-09 05:56:35 | 0:27:29 | 0:20:03 | 0:07:26 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552899 | 2024-02-08 23:22:02 | 2024-02-09 05:29:07 | 2024-02-09 06:01:42 | 0:32:35 | 0:22:06 | 0:10:29 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 2 | |
fail | 7552900 | 2024-02-08 23:22:02 | 2024-02-09 05:29:07 | 2024-02-09 05:58:27 | 0:29:20 | 0:22:09 | 0:07:11 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"2024-02-09T05:49:23.456606+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7552901 | 2024-02-08 23:22:03 | 2024-02-09 05:29:08 | 2024-02-09 06:16:43 | 0:47:35 | 0:34:08 | 0:13:27 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7552902 | 2024-02-08 23:22:04 | 2024-02-09 05:29:08 | 2024-02-09 06:00:13 | 0:31:05 | 0:17:54 | 0:13:11 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 | |
pass | 7552903 | 2024-02-08 23:22:05 | 2024-02-09 05:29:09 | 2024-02-09 06:05:37 | 0:36:28 | 0:20:27 | 0:16:01 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |
pass | 7552904 | 2024-02-08 23:22:06 | 2024-02-09 05:29:09 | 2024-02-09 05:52:51 | 0:23:42 | 0:10:15 | 0:13:27 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/per_module_finisher_stats} | 2 | |
pass | 7552905 | 2024-02-08 23:22:07 | 2024-02-09 05:29:09 | 2024-02-09 05:57:18 | 0:28:09 | 0:13:06 | 0:15:03 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7552906 | 2024-02-08 23:22:08 | 2024-02-09 05:29:10 | 2024-02-09 06:13:13 | 0:44:03 | 0:28:41 | 0:15:22 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-6c013f6e-c70f-11ee-95b6-87774f69a715@mon.smithi140' |
pass | 7552907 | 2024-02-08 23:22:08 | 2024-02-09 05:34:11 | 2024-02-09 06:03:17 | 0:29:06 | 0:14:33 | 0:14:33 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 7552908 | 2024-02-08 23:22:09 | 2024-02-09 05:35:32 | 2024-02-09 06:25:33 | 0:50:01 | 0:40:08 | 0:09:53 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552909 | 2024-02-08 23:22:10 | 2024-02-09 05:35:32 | 2024-02-09 06:48:09 | 1:12:37 | 0:59:26 | 0:13:11 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} | 2 | |
pass | 7552910 | 2024-02-08 23:22:11 | 2024-02-09 05:38:33 | 2024-02-09 06:10:43 | 0:32:10 | 0:20:05 | 0:12:05 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7552911 | 2024-02-08 23:22:12 | 2024-02-09 05:43:34 | 2024-02-09 06:01:13 | 0:17:39 | 0:07:31 | 0:10:08 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7552912 | 2024-02-08 23:22:13 | 2024-02-09 05:44:25 | 2024-02-09 06:21:23 | 0:36:58 | 0:30:29 | 0:06:29 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 7552913 | 2024-02-08 23:22:13 | 2024-02-09 05:44:25 | 2024-02-09 06:11:02 | 0:26:37 | 0:19:38 | 0:06:59 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552914 | 2024-02-08 23:22:14 | 2024-02-09 05:44:26 | 2024-02-09 06:35:38 | 0:51:12 | 0:41:28 | 0:09:44 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
"2024-02-09T06:11:51.905818+0000 mon.a (mon.0) 717 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
pass | 7552915 | 2024-02-08 23:22:15 | 2024-02-09 05:44:26 | 2024-02-09 06:49:22 | 1:04:56 | 0:59:16 | 0:05:40 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} | 1 | |
pass | 7552916 | 2024-02-08 23:22:16 | 2024-02-09 05:44:26 | 2024-02-09 06:24:44 | 0:40:18 | 0:30:12 | 0:10:06 | smithi | main | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 7552917 | 2024-02-08 23:22:17 | 2024-02-09 05:44:27 | 2024-02-09 06:08:14 | 0:23:47 | 0:11:24 | 0:12:23 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
pass | 7552918 | 2024-02-08 23:22:18 | 2024-02-09 05:44:27 | 2024-02-09 06:09:38 | 0:25:11 | 0:15:31 | 0:09:40 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} | 2 | |
pass | 7552919 | 2024-02-08 23:22:18 | 2024-02-09 05:44:28 | 2024-02-09 06:11:30 | 0:27:02 | 0:20:16 | 0:06:46 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7552920 | 2024-02-08 23:22:19 | 2024-02-09 05:44:28 | 2024-02-09 06:09:40 | 0:25:12 | 0:18:47 | 0:06:25 | smithi | main | rhel | 8.6 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552921 | 2024-02-08 23:22:20 | 2024-02-09 05:44:29 | 2024-02-09 06:10:43 | 0:26:14 | 0:18:44 | 0:07:30 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-02-09T06:08:17.640392+0000 mon.a (mon.0) 675 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7552922 | 2024-02-08 23:22:21 | 2024-02-09 05:44:39 | 2024-02-09 06:10:01 | 0:25:22 | 0:15:57 | 0:09:25 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7552923 | 2024-02-08 23:22:22 | 2024-02-09 05:44:40 | 2024-02-09 06:03:35 | 0:18:55 | 0:10:38 | 0:08:17 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552924 | 2024-02-08 23:22:23 | 2024-02-09 05:44:40 | 2024-02-09 06:15:23 | 0:30:43 | 0:20:41 | 0:10:02 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects} | 2 | |
fail | 7552925 | 2024-02-08 23:22:24 | 2024-02-09 05:45:21 | 2024-02-09 06:41:02 | 0:55:41 | 0:44:11 | 0:11:30 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason:
"2024-02-09T06:20:00.000163+0000 mon.a (mon.0) 786 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log |
pass | 7552926 | 2024-02-08 23:22:24 | 2024-02-09 05:47:21 | 2024-02-09 06:12:39 | 0:25:18 | 0:12:45 | 0:12:33 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
pass | 7552927 | 2024-02-08 23:22:25 | 2024-02-09 05:48:42 | 2024-02-09 06:18:50 | 0:30:08 | 0:16:52 | 0:13:16 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
pass | 7552928 | 2024-02-08 23:22:26 | 2024-02-09 05:51:10 | 2024-02-09 06:16:00 | 0:24:50 | 0:14:25 | 0:10:25 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552929 | 2024-02-08 23:22:27 | 2024-02-09 06:23:43 | 0:16:05 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/set-chunks-read} | 2 | ||||
fail | 7552930 | 2024-02-08 23:22:28 | 2024-02-09 05:54:32 | 2024-02-09 06:33:28 | 0:38:56 | 0:26:19 | 0:12:37 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi189 with status 5: 'sudo systemctl stop ceph-7c0664fe-c712-11ee-95b6-87774f69a715@mon.smithi189' |
pass | 7552931 | 2024-02-08 23:22:29 | 2024-02-09 05:55:33 | 2024-02-09 06:24:51 | 0:29:18 | 0:22:38 | 0:06:40 | smithi | main | rhel | 8.6 | rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552932 | 2024-02-08 23:22:29 | 2024-02-09 05:55:43 | 2024-02-09 06:32:28 | 0:36:45 | 0:26:45 | 0:10:00 | smithi | main | centos | 8.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
fail | 7552933 | 2024-02-08 23:22:30 | 2024-02-09 05:55:43 | 2024-02-09 06:59:22 | 1:03:39 | 0:51:01 | 0:12:38 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} | 2 | |
Failure Reason:
"2024-02-09T06:15:30.381079+0000 mon.a (mon.0) 115 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log |
pass | 7552934 | 2024-02-08 23:22:31 | 2024-02-09 05:56:24 | 2024-02-09 06:36:34 | 0:40:10 | 0:33:07 | 0:07:03 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7552935 | 2024-02-08 23:22:32 | 2024-02-09 05:57:24 | 2024-02-09 06:25:32 | 0:28:08 | 0:18:39 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7552936 | 2024-02-08 23:22:33 | 2024-02-09 05:57:25 | 2024-02-09 06:19:38 | 0:22:13 | 0:12:34 | 0:09:39 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test} | 2 | |
pass | 7552937 | 2024-02-08 23:22:33 | 2024-02-09 05:57:25 | 2024-02-09 06:15:33 | 0:18:08 | 0:08:54 | 0:09:14 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7552938 | 2024-02-08 23:22:34 | 2024-02-09 05:57:25 | 2024-02-09 06:34:35 | 0:37:10 | 0:26:41 | 0:10:29 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
"2024-02-09T06:20:11.102070+0000 mon.a (mon.0) 500 : cluster [WRN] Replacing daemon mds.a.smithi080.rltkzw as rank 0 with standby daemon mds.user_test_fs.smithi080.hohsbs" in cluster log |
fail | 7552939 | 2024-02-08 23:22:35 | 2024-02-09 05:57:26 | 2024-02-09 06:29:01 | 0:31:35 | 0:21:34 | 0:10:01 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"2024-02-09T06:14:56.056634+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7552940 | 2024-02-08 23:22:36 | 2024-02-09 05:57:26 | 2024-02-09 06:20:59 | 0:23:33 | 0:14:56 | 0:08:37 | smithi | main | centos | 8.stream | rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552941 | 2024-02-08 23:22:37 | 2024-02-09 05:58:07 | 2024-02-09 06:32:08 | 0:34:01 | 0:20:20 | 0:13:41 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7552942 | 2024-02-08 23:22:38 | 2024-02-09 05:58:57 | 2024-02-09 06:27:05 | 0:28:08 | 0:19:29 | 0:08:39 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} | 3 | |
pass | 7552943 | 2024-02-08 23:22:38 | 2024-02-09 06:00:08 | 2024-02-09 07:12:29 | 1:12:21 | 1:02:54 | 0:09:27 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 | |
fail | 7552944 | 2024-02-08 23:22:39 | 2024-02-09 06:00:18 | 2024-02-09 06:35:55 | 0:35:37 | 0:24:38 | 0:10:59 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi164 with status 5: 'sudo systemctl stop ceph-a988dd44-c712-11ee-95b6-87774f69a715@mon.smithi164' |
pass | 7552945 | 2024-02-08 23:22:40 | 2024-02-09 06:00:19 | 2024-02-09 06:35:07 | 0:34:48 | 0:28:48 | 0:06:00 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_8} tasks/progress} | 2 | |
pass | 7552946 | 2024-02-08 23:22:41 | 2024-02-09 06:00:19 | 2024-02-09 06:26:06 | 0:25:47 | 0:12:46 | 0:13:01 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7552947 | 2024-02-08 23:22:42 | 2024-02-09 06:00:20 | 2024-02-09 06:45:14 | 0:44:54 | 0:34:03 | 0:10:51 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552948 | 2024-02-08 23:22:43 | 2024-02-09 06:00:20 | 2024-02-09 06:26:29 | 0:26:09 | 0:15:13 | 0:10:56 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7552949 | 2024-02-08 23:22:44 | 2024-02-09 06:00:20 | 2024-02-09 07:11:21 | 1:11:01 | 1:01:47 | 0:09:14 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7552950 | 2024-02-08 23:22:45 | 2024-02-09 06:00:21 | 2024-02-09 06:27:49 | 0:27:28 | 0:20:45 | 0:06:43 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 7552951 | 2024-02-08 23:22:45 | 2024-02-09 06:00:21 | 2024-02-09 06:47:56 | 0:47:35 | 0:38:00 | 0:09:35 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason:
"2024-02-09T06:28:01.954331+0000 mon.a (mon.0) 776 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
pass | 7552952 | 2024-02-08 23:22:46 | 2024-02-09 06:00:21 | 2024-02-09 06:43:20 | 0:42:59 | 0:33:29 | 0:09:30 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits} | 2 | |
pass | 7552953 | 2024-02-08 23:22:47 | 2024-02-09 06:00:22 | 2024-02-09 06:23:27 | 0:23:05 | 0:13:42 | 0:09:23 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7552954 | 2024-02-08 23:22:48 | 2024-02-09 06:00:22 | 2024-02-09 06:50:05 | 0:49:43 | 0:37:14 | 0:12:29 | smithi | main | centos | 8.stream | rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
"2024-02-09T06:40:00.000145+0000 mon.a (mon.0) 2656 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 2 pool(s) do not have an application enabled" in cluster log |
pass | 7552955 | 2024-02-08 23:22:49 | 2024-02-09 06:02:53 | 2024-02-09 06:36:57 | 0:34:04 | 0:22:55 | 0:11:09 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 7552956 | 2024-02-08 23:22:50 | 2024-02-09 06:03:23 | 2024-02-09 06:28:19 | 0:24:56 | 0:15:50 | 0:09:06 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
"2024-02-09T06:25:51.403007+0000 mon.a (mon.0) 405 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 7552957 | 2024-02-08 23:22:51 | 2024-02-09 06:03:24 | 2024-02-09 06:27:01 | 0:23:37 | 0:16:53 | 0:06:44 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"2024-02-09T06:24:38.300493+0000 mon.smithi022 (mon.0) 568 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7552958 | 2024-02-08 23:22:51 | 2024-02-09 06:03:54 | 2024-02-09 06:25:45 | 0:21:51 | 0:11:19 | 0:10:32 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
pass | 7552959 | 2024-02-08 23:22:52 | 2024-02-09 06:03:54 | 2024-02-09 06:44:45 | 0:40:51 | 0:26:31 | 0:14:20 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 7552960 | 2024-02-08 23:22:53 | 2024-02-09 06:08:15 | 2024-02-09 06:45:34 | 0:37:19 | 0:26:23 | 0:10:56 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason:
Command failed on smithi161 with status 5: 'sudo systemctl stop ceph-2755853c-c714-11ee-95b6-87774f69a715@mon.smithi161' |
pass | 7552961 | 2024-02-08 23:22:54 | 2024-02-09 06:08:16 | 2024-02-09 06:29:52 | 0:21:36 | 0:09:29 | 0:12:07 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552962 | 2024-02-08 23:22:55 | 2024-02-09 06:09:46 | 2024-02-09 06:32:50 | 0:23:04 | 0:13:15 | 0:09:49 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7552963 | 2024-02-08 23:22:55 | 2024-02-09 06:09:47 | 2024-02-09 06:44:36 | 0:34:49 | 0:23:02 | 0:11:47 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-02-09T06:29:41.515788+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7552964 | 2024-02-08 23:22:56 | 2024-02-09 06:10:07 | 2024-02-09 06:38:02 | 0:27:55 | 0:20:30 | 0:07:25 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7552965 | 2024-02-08 23:22:57 | 2024-02-09 06:10:48 | 2024-02-09 06:50:22 | 0:39:34 | 0:32:26 | 0:07:08 | smithi | main | rhel | 8.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects} | 2 | |
fail | 7552966 | 2024-02-08 23:22:58 | 2024-02-09 06:11:08 | 2024-02-09 06:48:48 | 0:37:40 | 0:28:03 | 0:09:37 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
"2024-02-09T06:35:31.139581+0000 mon.a (mon.0) 234 : cluster [WRN] Health check failed: 1/5 mons down, quorum a,e,c,d (MON_DOWN)" in cluster log |
pass | 7552967 | 2024-02-08 23:22:59 | 2024-02-09 06:11:39 | 2024-02-09 06:54:25 | 0:42:46 | 0:29:17 | 0:13:29 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
pass | 7552968 | 2024-02-08 23:23:00 | 2024-02-09 06:15:30 | 2024-02-09 09:49:29 | 3:33:59 | 3:24:51 | 0:09:08 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
pass | 7552969 | 2024-02-08 23:23:00 | 2024-02-09 06:15:30 | 2024-02-09 06:40:41 | 0:25:11 | 0:18:01 | 0:07:10 | smithi | main | rhel | 8.6 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/libcephsqlite} | 2 | |
fail | 7552970 | 2024-02-08 23:23:01 | 2024-02-09 06:15:41 | 2024-02-09 07:01:25 | 0:45:44 | 0:34:58 | 0:10:46 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
"2024-02-09T06:34:52.582787+0000 mon.a (mon.0) 177 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log |
pass | 7552971 | 2024-02-08 23:23:02 | 2024-02-09 06:15:41 | 2024-02-09 06:49:41 | 0:34:00 | 0:24:27 | 0:09:33 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
dead | 7552972 | 2024-02-08 23:23:03 | 2024-02-09 06:15:41 | 2024-02-09 06:35:17 | 0:19:36 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds |
pass | 7552973 | 2024-02-08 23:23:04 | 2024-02-09 06:15:52 | 2024-02-09 06:43:46 | 0:27:54 | 0:16:36 | 0:11:18 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7552974 | 2024-02-08 23:23:04 | 2024-02-09 06:15:52 | 2024-02-09 06:50:01 | 0:34:09 | 0:23:19 | 0:10:50 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |
Failure Reason:
"2024-02-09T06:47:10.367717+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7552975 | 2024-02-08 23:23:05 | 2024-02-09 06:15:53 | 2024-02-09 06:36:32 | 0:20:39 | 0:12:06 | 0:08:33 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552976 | 2024-02-08 23:23:06 | 2024-02-09 06:15:53 | 2024-02-09 06:53:36 | 0:37:43 | 0:26:57 | 0:10:46 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7552977 | 2024-02-08 23:23:07 | 2024-02-09 06:15:53 | 2024-02-09 06:53:34 | 0:37:41 | 0:26:14 | 0:11:27 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-46850850-c715-11ee-95b6-87774f69a715@mon.smithi149'
pass | 7552978 | 2024-02-08 23:23:08 | 2024-02-09 06:15:54 | 2024-02-09 06:53:53 | 0:37:59 | 0:26:46 | 0:11:13 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 7552979 | 2024-02-08 23:23:08 | 2024-02-09 06:15:54 | 2024-02-09 06:44:00 | 0:28:06 | 0:17:48 | 0:10:18 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-09T06:40:07.546257+0000 mon.a (mon.0) 688 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)" in cluster log
pass | 7552980 | 2024-02-08 23:23:09 | 2024-02-09 06:15:55 | 2024-02-09 06:55:57 | 0:40:02 | 0:29:55 | 0:10:07 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7552981 | 2024-02-08 23:23:10 | 2024-02-09 06:45:06 | 1058 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | ||||
pass | 7552982 | 2024-02-08 23:23:11 | 2024-02-09 06:16:46 | 2024-02-09 06:37:51 | 0:21:05 | 0:11:27 | 0:09:38 | smithi | main | centos | 8.stream | rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7552983 | 2024-02-08 23:23:12 | 2024-02-09 06:16:46 | 2024-02-09 06:45:20 | 0:28:34 | 0:18:45 | 0:09:49 | smithi | main | rhel | 8.6 | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} | 3 | |
fail | 7552984 | 2024-02-08 23:23:13 | 2024-02-09 06:19:47 | 2024-02-09 07:22:10 | 1:02:23 | 0:51:57 | 0:10:26 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: "2024-02-09T06:48:37.097365+0000 mon.a (mon.0) 771 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 7552985 | 2024-02-08 23:23:13 | 2024-02-09 06:21:07 | 2024-02-09 07:18:34 | 0:57:27 | 0:48:12 | 0:09:15 | smithi | main | centos | 8.stream | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: "2024-02-09T06:38:45.125190+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 7552986 | 2024-02-08 23:23:14 | 2024-02-09 06:21:28 | 2024-02-09 06:45:46 | 0:24:18 | 0:11:21 | 0:12:57 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
pass | 7552987 | 2024-02-08 23:23:15 | 2024-02-09 06:23:29 | 2024-02-09 06:54:32 | 0:31:03 | 0:24:07 | 0:06:56 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7552988 | 2024-02-08 23:23:16 | 2024-02-09 06:23:49 | 2024-02-09 06:52:27 | 0:28:38 | 0:20:48 | 0:07:50 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/prometheus} | 2 | |
pass | 7552989 | 2024-02-08 23:23:17 | 2024-02-09 06:24:50 | 2024-02-09 06:54:59 | 0:30:09 | 0:21:52 | 0:08:17 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7552990 | 2024-02-08 23:23:17 | 2024-02-09 06:25:41 | 2024-02-09 07:02:03 | 0:36:22 | 0:26:13 | 0:10:09 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 7552991 | 2024-02-08 23:23:18 | 2024-02-09 06:25:51 | 2024-02-09 07:08:01 | 0:42:10 | 0:35:44 | 0:06:26 | smithi | main | rhel | 8.6 | rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7552992 | 2024-02-08 23:23:19 | 2024-02-09 06:26:11 | 2024-02-09 06:53:55 | 0:27:44 | 0:18:03 | 0:09:41 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 7552993 | 2024-02-08 23:23:20 | 2024-02-09 06:26:12 | 2024-02-09 06:48:14 | 0:22:02 | 0:11:20 | 0:10:42 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} | 1 | |
pass | 7552994 | 2024-02-08 23:23:21 | 2024-02-09 06:26:32 | 2024-02-09 07:05:06 | 0:38:34 | 0:27:07 | 0:11:27 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
fail | 7552995 | 2024-02-08 23:23:22 | 2024-02-09 06:27:13 | 2024-02-09 07:03:04 | 0:35:51 | 0:16:49 | 0:19:02 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-02-09T07:00:00.845906+0000 mon.smithi170 (mon.0) 610 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7552996 | 2024-02-08 23:23:22 | 2024-02-09 06:27:13 | 2024-02-09 07:23:05 | 0:55:52 | 0:45:16 | 0:10:36 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} | 1 | |
fail | 7552997 | 2024-02-08 23:23:23 | 2024-02-09 06:27:54 | 2024-02-09 07:20:37 | 0:52:43 | 0:41:58 | 0:10:45 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
fail | 7552998 | 2024-02-08 23:23:24 | 2024-02-09 06:29:04 | 2024-02-09 07:34:31 | 1:05:27 | 0:53:13 | 0:12:14 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: "2024-02-09T07:10:00.000133+0000 mon.a (mon.0) 1477 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log
pass | 7552999 | 2024-02-08 23:23:25 | 2024-02-09 06:31:15 | 2024-02-09 06:56:23 | 0:25:08 | 0:16:16 | 0:08:52 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7553000 | 2024-02-08 23:23:26 | 2024-02-09 06:31:16 | 2024-02-09 07:05:33 | 0:34:17 | 0:24:37 | 0:09:40 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 7553001 | 2024-02-08 23:23:27 | 2024-02-09 06:31:26 | 2024-02-09 06:54:15 | 0:22:49 | 0:10:50 | 0:11:59 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 7553002 | 2024-02-08 23:23:27 | 2024-02-09 06:31:27 | 2024-02-09 06:58:23 | 0:26:56 | 0:18:10 | 0:08:46 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi089 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 7553003 | 2024-02-08 23:23:28 | 2024-02-09 06:31:27 | 2024-02-09 07:07:29 | 0:36:02 | 0:26:25 | 0:09:37 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} | 2 | |
fail | 7553004 | 2024-02-08 23:23:29 | 2024-02-09 06:31:28 | 2024-02-09 06:58:38 | 0:27:10 | 0:21:08 | 0:06:02 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-09T06:55:30.008179+0000 mon.a (mon.0) 658 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)" in cluster log
pass | 7553005 | 2024-02-08 23:23:30 | 2024-02-09 06:31:28 | 2024-02-09 07:13:12 | 0:41:44 | 0:32:58 | 0:08:46 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7553006 | 2024-02-08 23:23:31 | 2024-02-09 06:31:29 | 2024-02-09 07:27:10 | 0:55:41 | 0:43:30 | 0:12:11 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
fail | 7553007 | 2024-02-08 23:23:31 | 2024-02-09 06:32:59 | 2024-02-09 07:12:09 | 0:39:10 | 0:27:20 | 0:11:50 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
Failure Reason: Command failed on smithi183 with status 5: 'sudo systemctl stop ceph-ceb2f776-c717-11ee-95b6-87774f69a715@mon.smithi183'
pass | 7553008 | 2024-02-08 23:23:32 | 2024-02-09 06:35:10 | 2024-02-09 07:12:37 | 0:37:27 | 0:28:41 | 0:08:46 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 2 | |
pass | 7553009 | 2024-02-08 23:23:33 | 2024-02-09 06:35:30 | 2024-02-09 06:53:33 | 0:18:03 | 0:07:56 | 0:10:07 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7553010 | 2024-02-08 23:23:34 | 2024-02-09 06:36:41 | 2024-02-09 07:10:51 | 0:34:10 | 0:23:01 | 0:11:09 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
pass | 7553011 | 2024-02-08 23:23:35 | 2024-02-09 06:36:41 | 2024-02-09 07:03:29 | 0:26:48 | 0:14:59 | 0:11:49 | smithi | main | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
fail | 7553012 | 2024-02-08 23:23:36 | 2024-02-09 06:37:02 | 2024-02-09 06:58:04 | 0:21:02 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} | 3 | |||
Failure Reason: Failed to reconnect to smithi043
fail | 7553013 | 2024-02-08 23:23:36 | 2024-02-09 06:38:03 | 2024-02-09 08:03:07 | 1:25:04 | 1:14:19 | 0:10:45 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0d6c3338-c718-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 7553014 | 2024-02-08 23:23:37 | 2024-02-09 06:39:53 | 2024-02-09 07:09:52 | 0:29:59 | 0:20:44 | 0:09:15 | smithi | main | centos | 8.stream | rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} | 1 | |
pass | 7553015 | 2024-02-08 23:23:38 | 2024-02-09 06:40:44 | 2024-02-09 07:02:34 | 0:21:50 | 0:12:00 | 0:09:50 | smithi | main | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
pass | 7553016 | 2024-02-08 23:23:39 | 2024-02-09 06:40:44 | 2024-02-09 07:19:19 | 0:38:35 | 0:25:16 | 0:13:19 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7553017 | 2024-02-08 23:23:40 | 2024-02-09 06:43:25 | 2024-02-09 07:40:46 | 0:57:21 | 0:45:05 | 0:12:16 | smithi | main | centos | 8.stream | rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
fail | 7553018 | 2024-02-08 23:23:41 | 2024-02-09 06:44:46 | 2024-02-09 07:32:40 | 0:47:54 | 0:36:47 | 0:11:07 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: "2024-02-09T07:13:08.373725+0000 mon.a (mon.0) 727 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
pass | 7553019 | 2024-02-08 23:23:41 | 2024-02-09 06:44:46 | 2024-02-09 08:37:48 | 1:53:02 | 1:43:17 | 0:09:45 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} | 1 | |
fail | 7553020 | 2024-02-08 23:23:42 | 2024-02-09 06:44:56 | 2024-02-09 07:34:52 | 0:49:56 | 0:42:36 | 0:07:20 | smithi | main | rhel | 8.6 | rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: "2024-02-09T07:16:03.410422+0000 mon.a (mon.0) 770 : cluster [WRN] Health check failed: 17 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log