Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 7265319 2023-05-07 00:43:06 2023-05-07 00:44:49 2023-05-07 01:14:33 0:29:44 0:22:38 0:07:06 smithi main rhel 8.4 fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
pass 7265320 2023-05-07 00:43:07 2023-05-07 00:45:49 2023-05-07 01:22:59 0:37:10 0:27:28 0:09:42 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 7265321 2023-05-07 00:43:08 2023-05-07 00:45:50 2023-05-07 12:58:05 12:12:15 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason: hit max job timeout

pass 7265322 2023-05-07 00:43:09 2023-05-07 00:49:40 2023-05-07 01:47:18 0:57:38 0:47:23 0:10:15 smithi main centos 8.stream fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
fail 7265323 2023-05-07 00:43:10 2023-05-07 00:49:51 2023-05-07 01:42:49 0:52:58 0:47:26 0:05:32 smithi main rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason: "2023-05-07T01:19:12.613263+0000 mon.a (mon.0) 586 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

fail 7265324 2023-05-07 00:43:11 2023-05-07 00:49:51 2023-05-07 01:10:19 0:20:28 0:13:22 0:07:06 smithi main rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
Failure Reason: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

pass 7265325 2023-05-07 00:43:12 2023-05-07 00:50:22 2023-05-07 01:26:20 0:35:58 0:24:55 0:11:03 smithi main ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} 2
fail 7265326 2023-05-07 00:43:12 2023-05-07 00:50:23 2023-05-07 08:30:44 7:40:21 7:31:43 0:08:38 smithi main rhel 8.4 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi029 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8f93a58b82b94b6c9ac48277cc15bd48d4c0a902 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'