Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6814166 2022-04-29 20:40:22 2022-04-29 20:40:50 2022-04-29 21:10:39 0:29:49 0:19:32 0:10:17 smithi master rhel 8.5 fs:thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_trivial_sync}} 2
fail 6814167 2022-04-29 20:40:23 2022-04-29 20:41:01 2022-04-29 21:04:27 0:23:26 0:13:54 0:09:32 smithi master centos 8.stream fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi152 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6814168 2022-04-29 20:40:24 2022-04-29 20:41:12 2022-04-29 21:14:47 0:33:35 0:25:19 0:08:16 smithi master rhel 8.5 fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6814169 2022-04-29 20:40:25 2022-04-29 20:43:13 2022-04-29 21:19:33 0:36:20 0:26:46 0:09:34 smithi master rhel 8.5 fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
pass 6814170 2022-04-29 20:40:27 2022-04-29 20:44:23 2022-04-29 21:13:31 0:29:08 0:20:56 0:08:12 smithi master rhel 8.5 fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 6814171 2022-04-29 20:40:28 2022-04-29 20:44:34 2022-04-29 21:17:08 0:32:34 0:24:28 0:08:06 smithi master rhel 8.5 fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
pass 6814172 2022-04-29 20:40:30 2022-04-29 20:46:15 2022-04-29 21:16:02 0:29:47 0:20:05 0:09:42 smithi master rhel 8.5 fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
fail 6814173 2022-04-29 20:40:31 2022-04-29 20:46:35 2022-04-30 00:18:03 3:31:28 3:18:17 0:13:11 smithi master ubuntu 20.04 fs:thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"2022-04-29T21:26:34.291434+0000 mds.e (mds.1) 1 : cluster [WRN] client.4803 isn't responding to mclientcaps(revoke), ino 0x20000000ade pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 300.004415 seconds ago" in cluster log