User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-04-21 14:00:04 | 2024-04-21 14:39:24 | 2024-04-22 03:32:39 | 12:53:15 | rados | wip-yuriw-testing-20240419.185239-main | smithi | 36c3715 | 7 | 40 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7666700 | 2024-04-21 14:01:31 | 2024-04-21 14:39:24 | 2024-04-21 15:10:33 | 0:31:09 | 0:24:31 | 0:06:38 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7666702 | 2024-04-21 14:01:32 | 2024-04-21 14:39:25 | 2024-04-21 15:35:24 | 0:55:59 | 0:40:55 | 0:15:04 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: "2024-04-21T15:20:00.000118+0000 mon.a (mon.0) 1665 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log
fail | 7666704 | 2024-04-21 14:01:33 | 2024-04-21 14:42:36 | 2024-04-21 15:11:44 | 0:29:08 | 0:21:52 | 0:07:16 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: "2024-04-21T15:08:12.842658+0000 mon.a (mon.0) 560 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 7666706 | 2024-04-21 14:01:34 | 2024-04-21 14:46:17 | 2024-04-21 15:07:24 | 0:21:07 | 0:14:55 | 0:06:12 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-04-21T14:58:15.815173+0000 mon.a (mon.0) 253 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7666708 | 2024-04-21 14:01:35 | 2024-04-21 14:52:19 | 2024-04-21 15:07:11 | 0:14:52 | 0:07:25 | 0:07:27 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7666710 | 2024-04-21 14:01:36 | 2024-04-21 14:52:21 | 2024-04-21 14:54:20 | 0:01:59 | 0 | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | — |
fail | 7666712 | 2024-04-21 14:01:37 | 2024-04-21 14:52:20 | 2024-04-21 16:49:28 | 1:57:08 | 1:48:41 | 0:08:27 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7666714 | 2024-04-21 14:01:38 | 2024-04-21 14:52:21 | 2024-04-21 16:52:50 | 2:00:29 | 1:49:23 | 0:11:06 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
fail | 7666716 | 2024-04-21 14:01:39 | 2024-04-21 14:52:21 | 2024-04-21 15:13:15 | 0:20:54 | 0:14:14 | 0:06:40 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-04-21T15:09:30.273613+0000 mon.smithi040 (mon.0) 823 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7666718 | 2024-04-21 14:01:40 | 2024-04-21 14:52:22 | 2024-04-21 15:31:19 | 0:38:57 | 0:27:55 | 0:11:02 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 7666720 | 2024-04-21 14:01:42 | 2024-04-21 14:52:23 | 2024-04-21 15:52:38 | 1:00:15 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 |
Failure Reason: Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds
fail | 7666722 | 2024-04-21 14:01:43 | 2024-04-21 14:52:23 | 2024-04-21 15:14:10 | 0:21:47 | 0:13:52 | 0:07:55 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 | |
Failure Reason: "2024-04-21T15:09:09.708926+0000 mon.a (mon.0) 299 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
dead | 7666724 | 2024-04-21 14:01:44 | 2024-04-21 14:52:24 | 2024-04-22 03:02:39 | 12:10:15 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
fail | 7666726 | 2024-04-21 14:01:45 | 2024-04-21 14:52:25 | 2024-04-21 16:46:50 | 1:54:25 | 1:45:50 | 0:08:35 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: "2024-04-21T15:18:12.309521+0000 mon.a (mon.0) 344 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7666728 | 2024-04-21 14:01:46 | 2024-04-21 14:52:25 | 2024-04-21 15:27:42 | 0:35:17 | 0:23:49 | 0:11:28 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 | |
Failure Reason: "2024-04-21T15:25:25.278424+0000 mon.a (mon.0) 738 : cluster [WRN] Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)" in cluster log
fail | 7666730 | 2024-04-21 14:01:47 | 2024-04-21 14:52:28 | 2024-04-21 14:54:27 | 0:01:59 | 0 | | smithi | main | centos | 9.stream | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | — |
fail | 7666732 | 2024-04-21 14:01:48 | 2024-04-21 14:52:27 | 2024-04-21 15:14:09 | 0:21:42 | 0:13:29 | 0:08:13 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason: "2024-04-21T15:08:41.982833+0000 mon.a (mon.0) 265 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7666734 | 2024-04-21 14:01:49 | 2024-04-21 14:52:27 | 2024-04-21 16:16:55 | 1:24:28 | 1:13:01 | 0:11:27 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-04-21T15:40:00.000104+0000 mon.a (mon.0) 1864 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set; Degraded data redundancy: 4529/47638 objects degraded (9.507%), 4 pgs degraded" in cluster log
fail | 7666736 | 2024-04-21 14:01:50 | 2024-04-21 14:52:28 | 2024-04-21 16:01:44 | 1:09:16 | 1:00:00 | 0:09:16 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7666738 | 2024-04-21 14:01:51 | 2024-04-21 14:52:29 | 2024-04-21 15:14:03 | 0:21:34 | 0:14:45 | 0:06:49 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-04-21T15:07:07.051544+0000 mon.a (mon.0) 519 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7666740 | 2024-04-21 14:01:52 | 2024-04-21 14:52:29 | 2024-04-21 15:24:01 | 0:31:32 | 0:19:55 | 0:11:37 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason: "2024-04-21T15:18:12.314105+0000 mon.a (mon.0) 440 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7666742 | 2024-04-21 14:01:53 | 2024-04-21 14:52:30 | 2024-04-21 15:31:43 | 0:39:13 | 0:28:12 | 0:11:01 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi080 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7666744 | 2024-04-21 14:01:54 | 2024-04-21 14:52:31 | 2024-04-21 15:15:49 | 0:23:18 | 0:14:51 | 0:08:27 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7666747 | 2024-04-21 14:01:57 | 2024-04-21 15:07:44 | 2024-04-21 15:39:53 | 0:32:09 | 0:21:02 | 0:11:07 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-04-21T15:34:30.434720+0000 mon.smithi098 (mon.0) 810 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7666749 | 2024-04-21 14:01:58 | 2024-04-21 15:07:47 | 2024-04-21 15:09:46 | 0:01:59 | 0:00:01 | 0:01:58 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | — | |
fail | 7666751 | 2024-04-21 14:01:59 | 2024-04-21 15:07:45 | 2024-04-21 15:30:30 | 0:22:45 | 0:16:42 | 0:06:03 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7666753 | 2024-04-21 14:02:00 | 2024-04-21 15:07:46 | 2024-04-21 15:45:27 | 0:37:41 | 0:30:26 | 0:07:15 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
pass | 7666755 | 2024-04-21 14:02:01 | 2024-04-21 15:07:47 | 2024-04-21 15:54:26 | 0:46:39 | 0:36:03 | 0:10:36 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
fail | 7666757 | 2024-04-21 14:02:02 | 2024-04-21 15:07:47 | 2024-04-21 15:25:32 | 0:17:45 | 0:08:35 | 0:09:10 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi089 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 7666759 | 2024-04-21 14:02:03 | 2024-04-21 15:07:48 | 2024-04-21 15:30:13 | 0:22:25 | 0:15:00 | 0:07:25 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-04-21T15:24:22.698146+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
dead | 7666761 | 2024-04-21 14:02:05 | 2024-04-21 15:07:49 | 2024-04-22 03:17:01 | 12:09:12 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 |
Failure Reason: hit max job timeout
fail | 7666763 | 2024-04-21 14:02:06 | 2024-04-21 15:07:49 | 2024-04-21 15:32:06 | 0:24:17 | 0:16:42 | 0:07:35 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason: "2024-04-21T15:26:04.183124+0000 mon.a (mon.0) 384 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7666765 | 2024-04-21 14:02:07 | 2024-04-21 15:07:50 | 2024-04-21 15:49:51 | 0:42:01 | 0:28:13 | 0:13:48 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} | 4 | |
fail | 7666767 | 2024-04-21 14:02:08 | 2024-04-21 15:08:01 | 2024-04-21 15:46:45 | 0:38:44 | 0:24:21 | 0:14:23 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 7666769 | 2024-04-21 14:02:09 | 2024-04-21 15:17:53 | 2024-04-22 03:32:39 | 12:14:46 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: hit max job timeout
fail | 7666771 | 2024-04-21 14:02:10 | 2024-04-21 15:22:54 | 2024-04-21 17:16:36 | 1:53:42 | 1:46:02 | 0:07:40 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: "2024-04-21T15:47:21.780391+0000 mon.a (mon.0) 362 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7666773 | 2024-04-21 14:02:11 | 2024-04-21 15:23:07 | 2024-04-21 15:25:06 | 0:01:59 | 0:00:01 | 0:01:58 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} | — | |
pass | 7666775 | 2024-04-21 14:02:12 | 2024-04-21 15:23:06 | 2024-04-21 15:40:52 | 0:17:46 | 0:10:44 | 0:07:02 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7666777 | 2024-04-21 14:02:13 | 2024-04-21 15:23:06 | 2024-04-21 17:24:06 | 2:01:00 | 1:54:28 | 0:06:32 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7666779 | 2024-04-21 14:02:14 | 2024-04-21 15:23:07 | 2024-04-21 15:48:54 | 0:25:47 | 0:18:39 | 0:07:08 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason: "2024-04-21T15:40:16.347445+0000 mon.a (mon.0) 439 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7666781 | 2024-04-21 14:02:15 | 2024-04-21 15:23:08 | 2024-04-21 15:42:34 | 0:19:26 | 0:11:22 | 0:08:04 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} | 1 | |
Failure Reason: Command failed (workunit test mon/mkfs.sh) on smithi175 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mkfs.sh'
fail | 7666783 | 2024-04-21 14:02:16 | 2024-04-21 15:23:09 | 2024-04-21 15:44:57 | 0:21:48 | 0:14:29 | 0:07:19 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-04-21T15:41:19.654235+0000 mon.smithi088 (mon.0) 831 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7666785 | 2024-04-21 14:02:17 | 2024-04-21 15:23:09 | 2024-04-21 16:18:08 | 0:54:59 | 0:29:27 | 0:25:32 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 4 | |
fail | 7666787 | 2024-04-21 14:02:18 | 2024-04-21 16:02:58 | 2024-04-21 16:22:59 | 0:20:01 | 0:08:00 | 0:12:01 | smithi | main | centos | 9.stream | rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest}} | 2 | |
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op copy_from 50 --op write_excl 50 --pool base'
fail | 7666789 | 2024-04-21 14:02:19 | 2024-04-21 16:07:19 | 2024-04-21 16:41:03 | 0:33:44 | 0:22:04 | 0:11:40 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason: "2024-04-21T16:32:53.066406+0000 mon.a (mon.0) 449 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7666791 | 2024-04-21 14:02:20 | 2024-04-21 16:09:30 | 2024-04-21 16:30:19 | 0:20:49 | 0:10:32 | 0:10:17 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7666793 | 2024-04-21 14:02:21 | 2024-04-21 16:13:11 | 2024-04-21 16:49:16 | 0:36:05 | 0:25:21 | 0:10:44 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-04-21T16:34:28.402661+0000 mon.a (mon.0) 379 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7666795 | 2024-04-21 14:02:22 | 2024-04-21 16:13:22 | 2024-04-21 16:50:02 | 0:36:40 | 0:28:17 | 0:08:23 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-localized} | 4 | |
pass | 7666797 | 2024-04-21 14:02:23 | 2024-04-21 16:14:53 | 2024-04-21 18:46:12 | 2:31:19 | 2:24:25 | 0:06:54 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/osd-backfill} | 1 | |
fail | 7666799 | 2024-04-21 14:02:24 | 2024-04-21 16:15:14 | 2024-04-21 16:46:19 | 0:31:05 | 0:24:59 | 0:06:06 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=36c371567dddacf6207ea36f2535396ab31415fc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7666801 | 2024-04-21 14:02:25 | 2024-04-21 16:16:45 | 2024-04-21 16:38:33 | 0:21:48 | 0:11:02 | 0:10:46 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
Failure Reason: Command failed on smithi136 with status 5: 'sudo systemctl stop ceph-ff49bc4e-fffc-11ee-bc93-c7b262605968@mon.a'