Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7219578 2023-03-24 21:04:29 2023-03-24 21:04:35 2023-03-24 23:23:15 2:18:40 2:05:34 0:13:06 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
pass 7219579 2023-03-24 21:04:30 2023-03-24 21:04:36 2023-03-24 21:43:11 0:38:35 0:29:27 0:09:08 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
fail 7219580 2023-03-24 21:04:31 2023-03-24 21:08:03 2023-03-24 21:37:25 0:29:22 0:17:50 0:11:32 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
Failure Reason:

Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
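test_acls exercises POSIX ACL support through the kernel client. Outside teuthology the same case can usually be reproduced with Ceph's vstart_runner helper; a minimal sketch, assuming a local vstart cluster built from the same branch and run from the build directory (paths vary by checkout):

    # hypothetical local repro from ceph/build against a running vstart cluster
    python3 ../qa/tasks/vstart_runner.py tasks.cephfs.test_acls.TestACLs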

fail 7219581 2023-03-24 21:04:32 2023-03-24 21:08:04 2023-03-24 21:39:48 0:31:44 0:21:54 0:09:50 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-full} 2
Failure Reason:

"2023-03-24T21:25:38.398452+0000 osd.5 (osd.5) 3 : cluster [WRN] OSD bench result of 223749.519871 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]." in cluster log

fail 7219582 2023-03-24 21:04:33 2023-03-24 21:10:20 2023-03-24 21:57:06 0:46:46 0:35:27 0:11:19 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

"2023-03-24T21:41:52.553545+0000 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi115:0 (7054), after waiting 48.2051 seconds during MDS startup" in cluster log

fail 7219583 2023-03-24 21:04:34 2023-03-24 21:11:59 2023-03-24 21:31:38 0:19:39 0:12:32 0:07:07 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
Failure Reason:

Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

fail 7219584 2023-03-24 21:04:35 2023-03-24 21:12:29 2023-03-25 04:17:21 7:04:52 6:54:17 0:10:35 smithi main rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi032 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=23eb3b2f0fc65087846571af4e15146a980fc03d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'
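Exit status 124 is what GNU timeout returns when it kills its child, so fsstress ran past the 6-hour workunit cap (plausible with the 1-thrash/osd task degrading I/O) rather than failing outright. A rough sketch of reproducing the invocation by hand, with the mountpoint and checkout path as placeholders:

    # hypothetical manual rerun against an already-mounted CephFS client
    mkdir -p /mnt/cephfs/tmp && cd /mnt/cephfs/tmp
    timeout 6h /path/to/ceph/qa/workunits/suites/fsstress.sh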

pass 7219585 2023-03-24 21:04:35 2023-03-24 21:16:08 2023-03-24 22:18:10 1:02:02 0:49:35 0:12:27 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
fail 7219586 2023-03-24 21:04:36 2023-03-24 21:18:19 2023-03-24 22:17:31 0:59:12 0:52:39 0:06:33 smithi main rhel 8.4 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

"2023-03-24T22:04:45.216486+0000 mon.a (mon.0) 3150 : cluster [WRN] Health check failed: Degraded data redundancy: 2 pgs degraded (PG_DEGRADED)" in cluster log

pass 7219587 2023-03-24 21:04:37 2023-03-24 21:18:20 2023-03-24 22:01:14 0:42:54 0:30:37 0:12:17 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7219588 2023-03-24 21:04:38 2023-03-24 21:18:21 2023-03-24 21:59:50 0:41:29 0:32:34 0:08:55 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2