User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-02-07 14:57:27 | 2024-02-07 15:00:34 | 2024-02-08 03:51:09 | 12:50:35 | rados | wip-yuri2-testing-2024-02-06-1154 | smithi | 720625d | 20 | 45 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7550090 | 2024-02-07 14:58:49 | 2024-02-07 15:00:34 | 2024-02-07 15:33:28 | 0:32:54 | 0:13:26 | 0:19:28 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-02-07T15:27:20.608498+0000 mon.smithi050 (mon.0) 261 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
fail | 7550091 | 2024-02-07 14:58:50 | 2024-02-07 15:00:35 | 2024-02-07 15:52:40 | 0:52:05 | 0:38:43 | 0:13:22 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: "2024-02-07T15:27:24.304975+0000 mon.a (mon.0) 350 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,e" in cluster log
pass | 7550092 | 2024-02-07 14:58:51 | 2024-02-07 15:02:25 | 2024-02-07 15:36:28 | 0:34:03 | 0:24:05 | 0:09:58 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7550093 | 2024-02-07 14:58:52 | 2024-02-07 15:03:16 | 2024-02-07 15:34:30 | 0:31:14 | 0:18:09 | 0:13:05 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 2 | |
fail | 7550094 | 2024-02-07 14:58:53 | 2024-02-07 15:05:37 | 2024-02-07 15:31:02 | 0:25:25 | 0:13:49 | 0:11:36 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "2024-02-07T15:24:28.362946+0000 mon.smithi181 (mon.0) 263 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
fail | 7550095 | 2024-02-07 14:58:54 | 2024-02-07 15:09:25 | 2024-02-07 17:02:24 | 1:52:59 | 1:43:26 | 0:09:33 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7550096 | 2024-02-07 14:58:54 | 2024-02-07 15:09:35 | 2024-02-07 17:18:37 | 2:09:02 | 1:59:16 | 0:09:46 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-test.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=720625d38f7f95e6fa31efce87133247d9d28517 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'
fail | 7550097 | 2024-02-07 14:58:55 | 2024-02-07 15:09:35 | 2024-02-07 19:10:52 | 4:01:17 | 3:51:58 | 0:09:19 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi040 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=720625d38f7f95e6fa31efce87133247d9d28517 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7550098 | 2024-02-07 14:58:56 | 2024-02-07 15:09:36 | 2024-02-07 15:39:17 | 0:29:41 | 0:20:31 | 0:09:10 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-02-07T15:30:08.693443+0000 mon.smithi001 (mon.0) 253 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
fail | 7550099 | 2024-02-07 14:58:57 | 2024-02-07 15:09:36 | 2024-02-07 15:52:48 | 0:43:12 | 0:34:39 | 0:08:33 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "2024-02-07T15:29:21.562183+0000 mon.a (mon.0) 190 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
dead | 7550100 | 2024-02-07 14:58:58 | 2024-02-07 15:09:36 | 2024-02-08 03:20:00 | 12:10:24 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: hit max job timeout
fail | 7550101 | 2024-02-07 14:58:59 | 2024-02-07 15:09:37 | 2024-02-07 17:06:52 | 1:57:15 | 1:46:35 | 0:10:40 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
fail | 7550102 | 2024-02-07 14:58:59 | 2024-02-07 15:09:37 | 2024-02-07 15:33:19 | 0:23:42 | 0:12:45 | 0:10:57 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: "2024-02-07T15:26:17.782077+0000 mon.smithi047 (mon.0) 263 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550103 | 2024-02-07 14:59:00 | 2024-02-07 15:09:38 | 2024-02-07 15:35:37 | 0:25:59 | 0:14:42 | 0:11:17 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7550104 | 2024-02-07 14:59:01 | 2024-02-07 15:09:38 | 2024-02-07 15:32:59 | 0:23:21 | 0:13:27 | 0:09:54 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7550105 | 2024-02-07 14:59:02 | 2024-02-07 15:09:38 | 2024-02-07 15:30:32 | 0:20:54 | 0:10:59 | 0:09:55 | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} | 1 | |
fail | 7550106 | 2024-02-07 14:59:03 | 2024-02-07 15:09:39 | 2024-02-07 15:30:35 | 0:20:56 | 0:08:20 | 0:12:36 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=720625d38f7f95e6fa31efce87133247d9d28517 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 7550107 | 2024-02-07 14:59:04 | 2024-02-07 15:09:39 | 2024-02-07 15:37:47 | 0:28:08 | 0:14:00 | 0:14:08 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "2024-02-07T15:29:41.617724+0000 mon.smithi019 (mon.0) 261 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
fail | 7550108 | 2024-02-07 14:59:04 | 2024-02-07 15:12:30 | 2024-02-07 16:36:37 | 1:24:07 | 1:13:56 | 0:10:11 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-02-07T15:50:00.000423+0000 mon.a (mon.0) 1391 : cluster 3 [WRN] OSDMAP_FLAGS: nodeep-scrub flag(s) set" in cluster log
fail | 7550109 | 2024-02-07 14:59:05 | 2024-02-07 15:12:50 | 2024-02-07 16:19:35 | 1:06:45 | 0:52:43 | 0:14:02 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
pass | 7550110 | 2024-02-07 14:59:06 | 2024-02-07 15:16:21 | 2024-02-07 15:45:16 | 0:28:55 | 0:15:29 | 0:13:26 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
fail | 7550111 | 2024-02-07 14:59:07 | 2024-02-07 15:20:22 | 2024-02-07 15:55:25 | 0:35:03 | 0:19:46 | 0:15:17 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-02-07T15:47:03.255861+0000 mon.smithi027 (mon.0) 252 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550112 | 2024-02-07 14:59:08 | 2024-02-07 15:24:54 | 2024-02-07 15:45:55 | 0:21:01 | 0:11:11 | 0:09:50 | smithi | main | centos | 9.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-bitmap} supported-random-distro$/{centos_latest} tasks/insights} | 2 | |
pass | 7550113 | 2024-02-07 14:59:09 | 2024-02-07 15:24:54 | 2024-02-07 15:57:54 | 0:33:00 | 0:23:08 | 0:09:52 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 7550114 | 2024-02-07 14:59:10 | 2024-02-07 15:24:54 | 2024-02-07 15:53:55 | 0:29:01 | 0:18:35 | 0:10:26 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 | |
Failure Reason: "2024-02-07T15:49:15.151931+0000 mon.a (mon.0) 579 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,b" in cluster log
fail | 7550115 | 2024-02-07 14:59:10 | 2024-02-07 15:24:55 | 2024-02-07 15:47:27 | 0:22:32 | 0:13:22 | 0:09:10 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "2024-02-07T15:40:55.037383+0000 mon.smithi052 (mon.0) 256 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
fail | 7550116 | 2024-02-07 14:59:11 | 2024-02-07 15:24:55 | 2024-02-07 16:01:01 | 0:36:06 | 0:24:27 | 0:11:39 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: "2024-02-07T15:47:07.367213+0000 mon.a (mon.0) 187 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
fail | 7550117 | 2024-02-07 14:59:12 | 2024-02-07 15:24:55 | 2024-02-07 15:41:55 | 0:17:00 | 0:07:23 | 0:09:37 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7550118 | 2024-02-07 14:59:13 | 2024-02-07 15:24:56 | 2024-02-07 15:59:08 | 0:34:12 | 0:23:22 | 0:10:50 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: "2024-02-07T15:45:27.222794+0000 mon.a (mon.0) 221 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,e" in cluster log
pass | 7550119 | 2024-02-07 14:59:14 | 2024-02-07 15:24:56 | 2024-02-07 15:56:11 | 0:31:15 | 0:21:07 | 0:10:08 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7550120 | 2024-02-07 14:59:15 | 2024-02-07 15:24:57 | 2024-02-07 15:48:28 | 0:23:31 | 0:14:27 | 0:09:04 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-02-07T15:41:27.900692+0000 mon.smithi032 (mon.0) 259 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550121 | 2024-02-07 14:59:15 | 2024-02-07 15:25:27 | 2024-02-07 15:52:37 | 0:27:10 | 0:18:27 | 0:08:43 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
fail | 7550122 | 2024-02-07 14:59:16 | 2024-02-07 15:25:28 | 2024-02-07 15:54:58 | 0:29:30 | 0:18:38 | 0:10:52 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7550123 | 2024-02-07 14:59:17 | 2024-02-07 15:27:58 | 2024-02-07 16:09:52 | 0:41:54 | 0:28:42 | 0:13:12 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7550124 | 2024-02-07 14:59:18 | 2024-02-07 15:33:07 | 2024-02-07 16:06:57 | 0:33:50 | 0:22:18 | 0:11:32 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 | |
Failure Reason: "2024-02-07T15:58:08.056872+0000 mon.a (mon.0) 196 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
fail | 7550125 | 2024-02-07 14:59:19 | 2024-02-07 15:34:38 | 2024-02-07 16:21:13 | 0:46:35 | 0:35:30 | 0:11:05 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: "2024-02-07T15:56:47.143898+0000 mon.a (mon.0) 191 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
fail | 7550126 | 2024-02-07 14:59:20 | 2024-02-07 15:35:38 | 2024-02-07 16:04:07 | 0:28:29 | 0:18:35 | 0:09:54 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: "2024-02-07T15:56:23.802844+0000 mon.smithi100 (mon.0) 252 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550127 | 2024-02-07 14:59:20 | 2024-02-07 15:36:29 | 2024-02-07 16:01:34 | 0:25:05 | 0:14:55 | 0:10:10 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
fail | 7550128 | 2024-02-07 14:59:21 | 2024-02-07 15:36:29 | 2024-02-07 16:03:16 | 0:26:47 | 0:13:28 | 0:13:19 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "2024-02-07T15:57:09.353683+0000 mon.smithi017 (mon.0) 261 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550129 | 2024-02-07 14:59:22 | 2024-02-07 15:41:13 | 2024-02-07 16:18:19 | 0:37:06 | 0:28:29 | 0:08:37 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 7550130 | 2024-02-07 14:59:23 | 2024-02-07 15:41:13 | 2024-02-07 16:25:51 | 0:44:38 | 0:26:19 | 0:18:19 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 2 | |
fail | 7550131 | 2024-02-07 14:59:24 | 2024-02-07 15:41:13 | 2024-02-07 16:04:08 | 0:22:55 | 0:13:25 | 0:09:30 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-02-07T15:57:59.992601+0000 mon.smithi001 (mon.0) 260 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
fail | 7550132 | 2024-02-07 14:59:25 | 2024-02-07 15:41:14 | 2024-02-07 16:14:20 | 0:33:06 | 0:23:13 | 0:09:53 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: "2024-02-07T16:01:01.847504+0000 mon.a (mon.0) 188 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
dead | 7550133 | 2024-02-07 14:59:26 | 2024-02-07 15:41:14 | 2024-02-08 03:51:09 | 12:09:55 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: hit max job timeout
fail | 7550134 | 2024-02-07 14:59:26 | 2024-02-07 15:41:15 | 2024-02-07 17:36:13 | 1:54:58 | 1:44:59 | 0:09:59 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=reef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
fail | 7550135 | 2024-02-07 14:59:27 | 2024-02-07 15:41:15 | 2024-02-07 16:14:02 | 0:32:47 | 0:14:32 | 0:18:15 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-07T16:10:00.000129+0000 mon.a (mon.0) 685 : cluster 3 [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled" in cluster log
fail | 7550136 | 2024-02-07 14:59:28 | 2024-02-07 15:41:15 | 2024-02-07 16:04:15 | 0:23:00 | 0:13:15 | 0:09:45 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_timeout} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm_timeout.py) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=720625d38f7f95e6fa31efce87133247d9d28517 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm_timeout.py'
fail | 7550137 | 2024-02-07 14:59:29 | 2024-02-07 15:41:16 | 2024-02-07 17:39:30 | 1:58:14 | 1:47:15 | 0:10:59 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7550138 | 2024-02-07 14:59:30 | 2024-02-07 15:41:16 | 2024-02-07 16:14:56 | 0:33:40 | 0:19:20 | 0:14:20 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "2024-02-07T16:05:57.565277+0000 mon.smithi007 (mon.0) 253 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550139 | 2024-02-07 14:59:31 | 2024-02-07 15:45:17 | 2024-02-07 18:23:16 | 2:37:59 | 2:08:42 | 0:29:17 | smithi | main | centos | 9.stream | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_latest}} | 1 | |
pass | 7550140 | 2024-02-07 14:59:32 | 2024-02-07 15:45:57 | 2024-02-07 16:07:06 | 0:21:09 | 0:10:12 | 0:10:57 | smithi | main | ubuntu | 22.04 | rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} | 3 | |
fail | 7550141 | 2024-02-07 14:59:32 | 2024-02-07 15:47:28 | 2024-02-07 16:45:48 | 0:58:20 | 0:34:25 | 0:23:55 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "2024-02-07T16:30:00.000238+0000 mon.a (mon.0) 994 : cluster 3 [WRN] OSDMAP_FLAGS: noscrub,nodeep-scrub flag(s) set" in cluster log
fail | 7550142 | 2024-02-07 14:59:33 | 2024-02-07 15:52:39 | 2024-02-07 16:19:20 | 0:26:41 | 0:14:15 | 0:12:26 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-02-07T16:11:19.374704+0000 mon.smithi022 (mon.0) 263 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550143 | 2024-02-07 14:59:34 | 2024-02-07 15:54:50 | 2024-02-07 16:28:10 | 0:33:20 | 0:22:25 | 0:10:55 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} | 2 | |
pass | 7550144 | 2024-02-07 14:59:35 | 2024-02-07 15:55:50 | 2024-02-07 16:32:05 | 0:36:15 | 0:25:39 | 0:10:36 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7550145 | 2024-02-07 14:59:36 | 2024-02-07 15:55:50 | 2024-02-07 16:22:51 | 0:27:01 | 0:17:10 | 0:09:51 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
fail | 7550146 | 2024-02-07 14:59:37 | 2024-02-07 15:55:51 | 2024-02-07 16:31:39 | 0:35:48 | 0:25:07 | 0:10:41 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason: "2024-02-07T16:18:05.429669+0000 mon.a (mon.0) 276 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
fail | 7550147 | 2024-02-07 14:59:37 | 2024-02-07 15:55:51 | 2024-02-07 16:19:54 | 0:24:03 | 0:12:58 | 0:11:05 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: "2024-02-07T16:13:49.874501+0000 mon.smithi081 (mon.0) 264 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550148 | 2024-02-07 14:59:38 | 2024-02-07 15:56:12 | 2024-02-07 16:22:27 | 0:26:15 | 0:14:36 | 0:11:39 | smithi | main | centos | 9.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/sync-many workloads/pool-create-delete} | 2 | |
fail | 7550149 | 2024-02-07 14:59:39 | 2024-02-07 15:56:12 | 2024-02-07 16:32:08 | 0:35:56 | 0:25:16 | 0:10:40 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: "2024-02-07T16:17:21.266025+0000 mon.a (mon.0) 318 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,e" in cluster log
fail | 7550150 | 2024-02-07 14:59:40 | 2024-02-07 15:56:13 | 2024-02-07 17:27:53 | 1:31:40 | 1:22:20 | 0:09:20 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-02-07T16:30:00.000140+0000 mon.a (mon.0) 1229 : cluster 3 [WRN] OSDMAP_FLAGS: nodeep-scrub flag(s) set" in cluster log
fail | 7550151 | 2024-02-07 14:59:41 | 2024-02-07 15:56:13 | 2024-02-07 16:26:34 | 0:30:21 | 0:19:44 | 0:10:37 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "2024-02-07T16:17:47.584909+0000 mon.smithi067 (mon.0) 253 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log
pass | 7550152 | 2024-02-07 14:59:42 | 2024-02-07 15:56:13 | 2024-02-07 16:21:54 | 0:25:41 | 0:15:28 | 0:10:13 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 7550153 | 2024-02-07 14:59:43 | 2024-02-07 15:56:14 | 2024-02-07 17:00:54 | 1:04:40 | 0:55:51 | 0:08:49 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7550154 | 2024-02-07 14:59:43 | 2024-02-07 15:56:14 | 2024-02-07 16:19:14 | 0:23:00 | 0:07:26 | 0:15:34 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7550155 | 2024-02-07 14:59:44 | 2024-02-07 15:56:14 | 2024-02-07 16:28:05 | 0:31:51 | 0:19:43 | 0:12:08 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: "2024-02-07T16:20:40.273252+0000 mon.a (mon.0) 196 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log
fail | 7550156 | 2024-02-07 14:59:45 | 2024-02-07 15:57:55 | 2024-02-07 16:26:45 | 0:28:50 | 0:15:24 | 0:13:26 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason: "2024-02-07T16:20:01.066561+0000 mon.a (mon.0) 247 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log