Every job in this teuthology run shares the same branch and machine values:

Ceph Branch: wip-yuri10-testing-2024-02-08-0854-pacific
Suite Branch: wip-yuri10-testing-2024-02-08-0854-pacific
Teuthology Branch: main
Machine: smithi

Each entry below lists the job's OS, its Description, and, for jobs that failed, its Failure Reason.

OS: centos 8.stream
rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests}
"2024-02-09T02:43:39.629103+0000 mon.a (mon.0) 160 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

OS: ubuntu 20.04
rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench}
Failed to reconnect to smithi120

OS: rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start}
"2024-02-09T02:44:24.885221+0000 mon.a (mon.0) 712 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

OS: ubuntu 20.04
rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}}
hit max job timeout

OS: ubuntu 20.04
rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery}

OS: ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}}

OS: centos 8.stream
rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}}

OS: rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final}

OS: ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/radosbench}

OS: centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-09944264-c6f5-11ee-95b6-87774f69a715@mon.smithi138'

OS: ubuntu 20.04
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity}
Command failed on smithi184 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e39d6e1e-c6f4-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

OS: ubuntu 20.04
rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}

OS: ubuntu 20.04
rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_mon_osdmap_prune}

OS: rhel 8.6
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/rados_api_tests}

OS: centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind}

OS: centos 8.stream
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites}

OS: ubuntu 20.04
rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add}
"2024-02-09T02:55:21.880558+0000 mon.smithi100 (mon.0) 637 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS: ubuntu 20.04
rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls}

OS: ubuntu 20.04
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/progress}

OS: ubuntu 20.04
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs}
"2024-02-09T02:54:19.539145+0000 mon.a (mon.0) 499 : cluster [WRN] Replacing daemon mds.a.smithi142.shtlel as rank 0 with standby daemon mds.user_test_fs.smithi142.ijudtb" in cluster log

OS: ubuntu 20.04
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}

OS: centos 8.stream
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/redirect}

OS: ubuntu 18.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start}
"2024-02-09T02:52:51.323009+0000 mon.a (mon.0) 162 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

OS: rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final}

OS: rhel 8.6
rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{rhel_8}}

OS: rhel 8.6
rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}}

OS: centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root}
"2024-02-09T03:04:48.174436+0000 mon.a (mon.0) 711 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

OS: rhel 8.6
rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python}
"2024-02-09T03:05:17.160021+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

OS: ubuntu 20.04
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli}
"2024-02-09T02:59:49.887783+0000 mon.a (mon.0) 420 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS: ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests}

OS: ubuntu 18.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final}
Failed to reconnect to smithi062

OS: ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write}

OS: centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi174 with status 5: 'sudo systemctl stop ceph-985d51ce-c6f6-11ee-95b6-87774f69a715@mon.smithi174'

OS: rhel 8.6
rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{rhel_8}}

OS: centos 8.stream
rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}}

OS: rhel 8.6
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects}
"2024-02-09T03:20:00.000168+0000 mon.a (mon.0) 1187 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log

OS: ubuntu 20.04
rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}}

OS: ubuntu 20.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start}
"2024-02-09T03:06:56.698212+0000 mon.a (mon.0) 438 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

OS: centos 8.stream
rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag}
"2024-02-09T03:08:06.265019+0000 mon.smithi063 (mon.0) 626 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS: rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_set_object}

OS: rhel 8.6
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_cls_all}
"2024-02-09T03:10:42.361288+0000 mon.a (mon.0) 529 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon}
"2024-02-09T03:20:09.742041+0000 mon.a (mon.0) 979 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

OS: ubuntu 20.04
rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery}

OS: ubuntu 20.04
rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}}

OS: ubuntu 20.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final}
reached maximum tries (301) after waiting for 300 seconds

OS: centos 8.stream
rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/dashboard}

OS: rhel 8.6
rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}}

OS: ubuntu 20.04
rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}}

OS: ubuntu 20.04
rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}}

OS: centos 8.stream
rados/standalone/{supported-random-distro$/{centos_8} workloads/crush}

OS: ubuntu 18.04
rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04}
Failed to reconnect to smithi022

OS: centos 8.stream
rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest}

OS: centos 8.stream
rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e}
"2024-02-09T03:23:28.625879+0000 mon.a (mon.0) 371 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log

OS: centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi171 with status 5: 'sudo systemctl stop ceph-f16835de-c6f8-11ee-95b6-87774f69a715@mon.smithi171'

OS: ubuntu 20.04
rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits}

OS: centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep}

OS: centos 8.stream
rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final}
Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-f6c6ad6c-c6f8-11ee-95b6-87774f69a715@mon.smithi190'

OS: centos 8.stream
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/set-chunks-read}

OS: centos 8.stream
rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr orchestrator_cli}

OS: rhel 8.6
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/filestore-xfs supported-random-distro$/{rhel_8} tasks/prometheus}

OS: rhel 8.6
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}

OS: rhel 8.6
rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}}

OS: centos 8.stream
rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi}
'package_manager_version'

OS: centos 8.stream
rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_8}}

OS: rhel 8.6
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3}

OS: centos 8.stream
rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start}
"2024-02-09T03:24:26.876069+0000 mon.a (mon.0) 523 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

OS: ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read}

OS: centos 8.stream
rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final}
reached maximum tries (301) after waiting for 300 seconds

OS: rhel 8.6
rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final}

OS: rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects-balanced}

OS: ubuntu 20.04
rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}}

OS: centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root}
"2024-02-09T03:35:23.670998+0000 mon.a (mon.0) 718 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

OS: centos 8.stream
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1}

OS: centos 8.stream
rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic}
Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9c640ed0-c6fa-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

OS: ubuntu 20.04
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites}

OS: centos 8.stream
rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}}

OS: rhel 8.6
rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
"2024-02-09T03:39:22.281272+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

OS: centos 8.stream
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/rados_python}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption}

OS: ubuntu 20.04
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api}
"2024-02-09T03:28:11.514710+0000 mon.a (mon.0) 178 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log

OS: ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/small-objects-localized}

OS: ubuntu 20.04
rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}}

OS: rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final}
reached maximum tries (301) after waiting for 300 seconds

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm}
Command failed (workunit test cephadm/test_cephadm.sh) on smithi037 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

OS: ubuntu 20.04
rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}

OS: rhel 8.6
rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait}
"2024-02-09T03:36:27.959937+0000 mon.smithi043 (mon.0) 607 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS: centos 8.stream
rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews}

OS: rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start}
"2024-02-09T03:38:30.735199+0000 mon.a (mon.0) 658 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

OS: centos 8.stream
rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}}

OS: rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/small-objects}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps}
"2024-02-09T03:36:47.379496+0000 mon.a (mon.0) 181 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log

OS: ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read}

OS: centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi114 with status 5: 'sudo systemctl stop ceph-630c6ce8-c6fc-11ee-95b6-87774f69a715@mon.smithi114'

OS: rhel 8.6
rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits}

OS: centos 8.stream
rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{centos_8}}

OS: centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind}

OS: rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos}

OS: centos 8.stream
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/workunits}

OS: rhel 8.6
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}

OS: rhel 8.6
rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{rhel_8}}

OS: centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root}
"2024-02-09T03:47:07.102580+0000 mon.a (mon.0) 530 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

OS: rhel 8.6
rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}}

OS: ubuntu 20.04
rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code}

OS: rhel 8.6
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced}

OS: ubuntu 18.04
rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python}
Failed to reconnect to smithi120

OS: centos 8.stream
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}

OS: rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start}
"2024-02-09T03:45:40.192489+0000 mon.a (mon.0) 704 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log

OS: ubuntu 18.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final}
Failed to reconnect to smithi059

OS: centos 8.stream
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_stress_watch}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}}

OS: ubuntu 20.04
rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}

OS: centos 8.stream
rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}}

OS: centos 8.stream
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench}
"2024-02-09T04:10:00.000161+0000 mon.a (mon.0) 1492 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log

OS: ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects-localized}

OS: rhel 8.6
rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
"2024-02-09T03:48:19.812348+0000 mon.smithi129 (mon.0) 569 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS: centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi163 with status 5: 'sudo systemctl stop ceph-f4007216-c6fd-11ee-95b6-87774f69a715@mon.smithi163'

OS: ubuntu 20.04
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity}
"2024-02-09T03:47:46.841912+0000 mon.a (mon.0) 249 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log

OS: ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read}

OS: centos 8.stream
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}

OS: ubuntu 20.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final}
reached maximum tries (301) after waiting for 300 seconds

OS: rhel 8.6
rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}}

OS: rhel 8.6
rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}}
Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs}
"2024-02-09T03:56:00.293078+0000 mon.a (mon.0) 498 : cluster [WRN] Replacing daemon mds.a.smithi136.lxcebu as rank 0 with standby daemon mds.user_test_fs.smithi136.guyzpf" in cluster log

OS: rhel 8.6
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects}

OS: ubuntu 18.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start}
"2024-02-09T03:56:48.018723+0000 mon.a (mon.0) 481 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

OS: rhel 8.6
rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews}

OS: ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls}

OS: centos 8.stream
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites}

OS: centos 8.stream
rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final}
Command failed on smithi070 with status 5: 'sudo systemctl stop ceph-bcfb0104-c6fe-11ee-95b6-87774f69a715@mon.smithi070'

OS: rhel 8.6
rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}}

OS: centos 8.stream
rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final}

OS: ubuntu 20.04
rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}

OS: centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root}
"2024-02-09T04:05:09.654440+0000 mon.a (mon.0) 524 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

OS: centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep}
"2024-02-09T04:04:11.374180+0000 mon.a (mon.0) 524 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

OS: rhel 8.6
rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/one workloads/snaps-few-objects}

OS: rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed}

OS: centos 8.stream
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/crash}

OS: centos 8.stream
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}

OS: ubuntu 20.04
rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests}
"2024-02-09T04:12:12.826919+0000 mon.a (mon.0) 696 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

OS: centos 8.stream
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_striper}

OS: ubuntu 20.04
rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}}

OS: centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli}
"2024-02-09T04:05:51.312567+0000 mon.a (mon.0) 403 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS: ubuntu 20.04
rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}}

OS: rhel 8.6
rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}}

OS: centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi137 with status 5: 'sudo systemctl stop ceph-1bdf0cf0-c700-11ee-95b6-87774f69a715@mon.smithi137'

OS: ubuntu 20.04
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3}

OS: rhel 8.6
rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all}

OS: rhel 8.6
rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{rhel_8}}

OS: ubuntu 20.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start}
"2024-02-09T04:09:53.018329+0000 mon.a (mon.0) 521 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon}
"2024-02-09T04:24:47.971011+0000 mon.a (mon.0) 969 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects}
"2024-02-09T04:30:00.000119+0000 mon.a (mon.0) 1418 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e}
"2024-02-09T04:24:08.455862+0000 mon.a (mon.0) 499 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/cache-agent-big}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-065f7bc4-c702-11ee-95b6-87774f69a715@mon.smithi027'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start}
"2024-02-09T04:12:33.906287+0000 mon.a (mon.0) 160 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root}
"2024-02-09T04:31:26.433763+0000 mon.a (mon.0) 1383 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/mon_recovery}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_big}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic}
"2024-02-09T04:12:43.213750+0000 mon.a (mon.0) 121 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/failover}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add}
Command failed on smithi047 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0e714d9a4bd2a821113e6318adb87bd06cf81ec1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 724f7a0a-c702-11ee-95b6-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
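(For readability: the escaped bash -c payload in the failure reason above, with the shell quoting unwound, is the following rm/zap/re-add sequence for osd.1 — a mechanical de-escaping of the logged command, nothing added:

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done

The exit status 22 is from the cephadm shell invocation wrapping this script.)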
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm}
Command failed (workunit test cephadm/test_cephadm.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start}
"2024-02-09T04:27:58.334844+0000 mon.a (mon.0) 158 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-b480be38-c703-11ee-95b6-87774f69a715@mon.smithi043'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final}
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7c86b0d2-c703-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
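(For readability: with the quoting unwound, the check that failed here is

    ceph versions | jq -e '.overall | length == 1'

i.e. the post-upgrade assertion that all daemons report a single version; jq -e exits nonzero while .overall still lists more than one version.)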
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root}
"2024-02-09T04:34:51.375795+0000 mon.a (mon.0) 158 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-snaps-balanced}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests}
"2024-02-09T04:50:00.000137+0000 mon.a (mon.0) 1125 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start}
"2024-02-09T04:38:37.603579+0000 mon.a (mon.0) 675 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag}
"2024-02-09T04:44:04.769674+0000 mon.smithi096 (mon.0) 643 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-snaps}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} tasks/insights}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-16acd05e-c706-11ee-95b6-87774f69a715@mon.smithi140'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity}
"2024-02-09T04:45:21.162750+0000 mon.a (mon.0) 140 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench}
Failed to reconnect to smithi170
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs}
"2024-02-09T04:51:23.908851+0000 mon.a (mon.0) 497 : cluster [WRN] Replacing daemon mds.a.smithi181.csvkng as rank 0 with standby daemon mds.user_test_fs.smithi181.iswphm" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start}
"2024-02-09T04:55:41.143942+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root}
"2024-02-09T04:56:22.848138+0000 mon.a (mon.0) 542 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mostlyread}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python}
"2024-02-09T04:58:06.407235+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli}
"2024-02-09T04:55:01.971461+0000 mon.a (mon.0) 408 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-2fe1e158-c707-11ee-95b6-87774f69a715@mon.smithi187'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait}
"2024-02-09T04:59:41.672630+0000 mon.smithi086 (mon.0) 612 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls}
"2024-02-09T04:59:45.782774+0000 mon.a (mon.0) 178 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start}
"2024-02-09T05:00:51.250284+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon}
"2024-02-09T05:18:03.393228+0000 mon.a (mon.0) 966 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/dashboard}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/standalone/{supported-random-distro$/{centos_8} workloads/mon-stretch}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects}
"2024-02-09T05:03:19.920300+0000 mon.a (mon.0) 172 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/module_selftest}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e}
"2024-02-09T06:01:26.115945+0000 mon.a (mon.0) 376 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_api_tests}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi146 with status 5: 'sudo systemctl stop ceph-4efea582-c70f-11ee-95b6-87774f69a715@mon.smithi146'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final}
Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-849443a6-c70e-11ee-95b6-87774f69a715@mon.smithi149'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/readwrite}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi}
'package_manager_version'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start}
"2024-02-09T05:53:45.928199+0000 mon.a (mon.0) 667 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs2 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/radosbench}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root}
"2024-02-09T05:56:24.012881+0000 mon.a (mon.0) 543 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic}
"2024-02-09T05:50:42.397352+0000 mon.a (mon.0) 244 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
"2024-02-09T05:49:51.588527+0000 mon.a (mon.0) 180 : cluster [WRN] Health check failed: Degraded data redundancy: 2/52 objects degraded (3.846%), 1 pg degraded (PG_DEGRADED)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
Failed to reconnect to smithi134
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
Command failed on smithi067 with status 100: 'sudo apt-get clean'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption}
Error reimaging machines: 500 Server Error: Internal Server Error for url: http://fog.front.sepia.ceph.com/fog/host/172/task
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
"2024-02-09T05:54:15.951135+0000 mon.smithi113 (mon.0) 611 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read}
SSH connection to smithi067 was lost: 'sudo apt-get update'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm}
Command failed (workunit test cephadm/test_cephadm.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_recovery}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start}
"2024-02-09T05:49:23.456606+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/repair_test}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/per_module_finisher_stats}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi140 with status 5: 'sudo systemctl stop ceph-6c013f6e-c70f-11ee-95b6-87774f69a715@mon.smithi140'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root}
"2024-02-09T06:11:51.905818+0000 mon.a (mon.0) 717 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start}
"2024-02-09T06:08:17.640392+0000 mon.a (mon.0) 675 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps}
"2024-02-09T06:20:00.000163+0000 mon.a (mon.0) 786 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/set-chunks-read}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi189 with status 5: 'sudo systemctl stop ceph-7c0664fe-c712-11ee-95b6-87774f69a715@mon.smithi189'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity}
"2024-02-09T06:15:30.381079+0000 mon.a (mon.0) 115 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs}
"2024-02-09T06:20:11.102070+0000 mon.a (mon.0) 500 : cluster [WRN] Replacing daemon mds.a.smithi080.rltkzw as rank 0 with standby daemon mds.user_test_fs.smithi080.hohsbs" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start}
"2024-02-09T06:14:56.056634+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/small-objects-balanced}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final}
Command failed on smithi164 with status 5: 'sudo systemctl stop ceph-a988dd44-c712-11ee-95b6-87774f69a715@mon.smithi164'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_8} tasks/progress}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/iscsi 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root}
"2024-02-09T06:28:01.954331+0000 mon.a (mon.0) 776 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests}
"2024-02-09T06:40:00.000145+0000 mon.a (mon.0) 2656 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 2 pool(s) do not have an application enabled" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects-localized}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli}
"2024-02-09T06:25:51.403007+0000 mon.a (mon.0) 405 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add}
"2024-02-09T06:24:38.300493+0000 mon.smithi022 (mon.0) 568 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi161 with status 5: 'sudo systemctl stop ceph-2755853c-c714-11ee-95b6-87774f69a715@mon.smithi161'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start}
"2024-02-09T06:29:41.515788+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon}
"2024-02-09T06:35:31.139581+0000 mon.a (mon.0) 234 : cluster [WRN] Health check failed: 1/5 mons down, quorum a,e,c,d (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/libcephsqlite}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls}
"2024-02-09T06:34:52.582787+0000 mon.a (mon.0) 177 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final}
Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e}
"2024-02-09T06:47:10.367717+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-46850850-c715-11ee-95b6-87774f69a715@mon.smithi149'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start}
"2024-02-09T06:40:07.546257+0000 mon.a (mon.0) 688 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root}
"2024-02-09T06:48:37.097365+0000 mon.a (mon.0) 771 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic}
"2024-02-09T06:38:45.125190+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/prometheus}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag}
"2024-02-09T07:00:00.845906+0000 mon.smithi170 (mon.0) 610 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects}
"2024-02-09T07:10:00.000133+0000 mon.a (mon.0) 1477 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm}
Command failed (workunit test cephadm/test_cephadm.sh) on smithi089 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start}
"2024-02-09T06:55:30.008179+0000 mon.a (mon.0) 658 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}}
Command failed on smithi183 with status 5: 'sudo systemctl stop ceph-ceb2f776-c717-11ee-95b6-87774f69a715@mon.smithi183'
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 18.04
rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api}
Failed to reconnect to smithi043
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final}
Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0d6c3338-c718-11ee-95b6-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
ubuntu 20.04
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final}
reached maximum tries (301) after waiting for 300 seconds
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root}
"2024-02-09T07:13:08.373725+0000 mon.a (mon.0) 727 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
centos 8.stream
rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub}
wip-yuri10-testing-2024-02-08-0854-pacific
wip-yuri10-testing-2024-02-08-0854-pacific
main
smithi
rhel 8.6
rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests}
"2024-02-09T07:16:03.410422+0000 mon.a (mon.0) 770 : cluster [WRN] Health check failed: 17 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log