Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
dead | 7352817 | 2023-07-26 05:47:13 | 2023-07-26 08:08:34 | 2023-07-26 08:30:30 | 0:21:56 | - | - | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_extra_daemon_features} | 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail | 7352818 | 2023-07-26 05:47:14 | 2023-07-26 08:10:10 | 2023-07-26 10:45:12 | 2:35:02 | 2:23:18 | 0:11:44 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2
Failure Reason:

"2023-07-26T09:36:51.041668+0000 mon.a (mon.0) 673 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail | 7352819 | 2023-07-26 05:47:14 | 2023-07-26 08:10:47 | 2023-07-26 08:44:16 | 0:33:29 | 0:21:49 | 0:11:40 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_nfs} | 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

dead | 7352820 | 2023-07-26 05:47:15 | 2023-07-26 08:11:23 | 2023-07-26 20:23:22 | 12:11:59 | - | - | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1
Failure Reason:

hit max job timeout

fail | 7352821 | 2023-07-26 05:47:16 | 2023-07-26 08:11:24 | 2023-07-26 09:34:45 | 1:23:21 | 1:07:54 | 0:15:27 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2
Failure Reason:

"2023-07-26T09:08:12.143947+0000 mon.a (mon.0) 1446 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail | 7352822 | 2023-07-26 05:47:17 | 2023-07-26 08:13:28 | 2023-07-26 09:11:17 | 0:57:49 | 0:42:08 | 0:15:41 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cce0a22cc0f74dd39a65fcd51420060decfcbb60 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail | 7352823 | 2023-07-26 05:47:17 | 2023-07-26 08:14:13 | 2023-07-26 08:40:34 | 0:26:21 | 0:13:11 | 0:13:10 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass | 7352824 | 2023-07-26 05:47:18 | 2023-07-26 08:14:28 | 2023-07-26 09:05:16 | 0:50:48 | 0:40:39 | 0:10:09 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1
pass | 7352825 | 2023-07-26 05:47:19 | 2023-07-26 08:15:08 | 2023-07-26 09:10:40 | 0:55:32 | 0:43:37 | 0:11:55 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2
dead | 7352826 | 2023-07-26 05:47:20 | 2023-07-26 08:17:03 | 2023-07-26 08:38:48 | 0:21:45 | - | - | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} | 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds