Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7434842 2023-10-22 14:50:36 2023-10-22 14:51:36 2023-10-22 15:10:41 0:19:05 0:09:24 0:09:41 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

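Jobs 7434842 and 7434847 hit the same crash in ceph_test_lazy_omap_stats on both CentOS 9.stream and Ubuntu 22.04, which points at the test binary rather than the distro. A minimal reproduction sketch, assuming the ceph-test package (which ships the binary) is installed on a node with a running cluster:

    # Sketch only: rerun the crashed binary by hand; the invocation
    # mirrors the teuthology command verbatim.
    sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats
    # On a second crash, coredumpctl (or gdb on the core file) yields the
    # backtrace that the teuthology archive would also contain.
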
fail 7434843 2023-10-22 14:50:36 2023-10-22 14:51:37 2023-10-22 20:01:29 5:09:52 4:59:06 0:10:46 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on smithi082 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4195210f984d2f5c82ddc9ef0cb14f99dc3a4ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

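Status 124 in job 7434843 is GNU timeout's exit code: the cls/test_cls_sdk.sh workunit ran past its 3h limit under the valgrind validater rather than failing an assertion. A minimal sketch of rerunning it by hand from a ceph.git checkout, assuming a running test cluster:

    # Sketch only: status 124 = `timeout 3h` expired, not a test failure.
    git clone https://github.com/ceph/ceph.git
    cd ceph
    timeout 3h ./qa/workunits/cls/test_cls_sdk.sh
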
pass 7434844 2023-10-22 14:50:37 2023-10-22 14:52:17 2023-10-22 15:38:29 0:46:12 0:35:52 0:10:20 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_nfs} 1
fail 7434845 2023-10-22 14:50:38 2023-10-22 14:52:27 2023-10-22 16:01:47 1:09:20 0:59:58 0:09:22 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2023-10-22T15:39:35.832606+0000 mon.a (mon.0) 2396 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

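Jobs 7434845 and 7434850 fail only because this OSD_DOWN health warning appears in the cluster log; since both run under the valgrind validater, an OSD slowed enough to miss heartbeats is a plausible cause. A short triage sketch using standard ceph CLI commands (osd.2 below is hypothetical):

    # Sketch only: standard first steps for an OSD_DOWN warning.
    ceph health detail                      # which OSD went down, and when
    ceph osd tree                           # map the down OSD to its host
    sudo grep -i heartbeat /var/log/ceph/ceph-osd.2.log   # hypothetical osd.2
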
fail 7434846 2023-10-22 14:50:39 2023-10-22 14:52:38 2023-10-22 15:40:07 0:47:29 0:37:04 0:10:25 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4195210f984d2f5c82ddc9ef0cb14f99dc3a4ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

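Jobs 7434846 and 7434852 fail the same cephadm/test_dashboard_e2e.sh workunit with status 1, i.e. the e2e suite itself failed rather than timing out. A minimal sketch of a manual rerun from a ceph.git checkout, assuming a live test cluster with the dashboard module enabled:

    # Sketch only: rerun the dashboard e2e workunit by hand.
    ceph mgr module enable dashboard
    ./qa/workunits/cephadm/test_dashboard_e2e.sh
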
fail 7434847 2023-10-22 14:50:40 2023-10-22 14:52:58 2023-10-22 15:14:31 0:21:33 0:11:44 0:09:49 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7434848 2023-10-22 14:50:41 2023-10-22 14:52:58 2023-10-22 15:24:15 0:31:17 0:20:37 0:10:40 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues

fail 7434849 2023-10-22 14:50:41 2023-10-22 14:53:29 2023-10-22 15:41:37 0:48:08 0:37:37 0:10:31 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

saw valgrind issues

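"saw valgrind issues" is teuthology's generic failure reason when the valgrind XML report for any daemon contains errors or leaks, so jobs 7434848 and 7434849 need the per-daemon reports from the job archive to localize the problem. A minimal sketch of running one daemon under valgrind roughly the way the validater does (paths and the osd id are assumptions):

    # Sketch only: memcheck with XML output, then count reported errors.
    valgrind --tool=memcheck --leak-check=full \
             --xml=yes --xml-file=osd.0.valgrind.xml \
             ceph-osd -f -i 0               # -f keeps the daemon in foreground
    grep -c '<error>' osd.0.valgrind.xml    # nonzero: valgrind found issues
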
fail 7434850 2023-10-22 14:50:42 2023-10-22 14:54:29 2023-10-22 17:05:22 2:10:53 1:59:40 0:11:13 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

"2023-10-22T16:08:53.547952+0000 mon.a (mon.0) 601 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7434851 2023-10-22 14:50:43 2023-10-22 14:55:50 2023-10-22 15:38:08 0:42:18 0:25:36 0:16:42 smithi main centos 8.stream rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 2
fail 7434852 2023-10-22 14:50:44 2023-10-22 14:58:41 2023-10-22 15:48:01 0:49:20 0:32:42 0:16:38 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4195210f984d2f5c82ddc9ef0cb14f99dc3a4ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'