Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6779101 2022-04-06 12:57:28 2022-04-06 12:59:14 2022-04-06 13:22:53 0:23:39 0:12:41 0:10:58 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 3bc67330-b5ac-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.16 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779102 2022-04-06 12:57:29 2022-04-06 12:59:14 2022-04-06 13:22:14 0:23:00 0:13:48 0:09:12 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
pass 6779103 2022-04-06 12:57:29 2022-04-06 12:59:14 2022-04-06 13:39:27 0:40:13 0:28:41 0:11:32 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
fail 6779104 2022-04-06 12:57:30 2022-04-06 13:00:25 2022-04-06 13:32:25 0:32:00 0:15:18 0:16:42 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 37d7a93c-b5ad-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.93 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779105 2022-04-06 12:57:31 2022-04-06 13:04:46 2022-04-06 13:32:06 0:27:20 0:21:42 0:05:38 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} 2
pass 6779106 2022-04-06 12:57:32 2022-04-06 13:05:06 2022-04-06 14:02:09 0:57:03 0:46:31 0:10:32 smithi master centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
fail 6779107 2022-04-06 12:57:32 2022-04-06 13:05:06 2022-04-06 14:08:15 1:03:09 0:49:07 0:14:02 smithi master centos 8.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
Failure Reason:

SELinux denials found on ubuntu@smithi055.front.sepia.ceph.com: ['type=AVC msg=audit(1649251041.328:202): avc: denied { node_bind } for pid=1460 comm="ping" saddr=172.21.15.55 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']

pass 6779108 2022-04-06 12:57:33 2022-04-06 13:07:37 2022-04-06 13:49:20 0:41:43 0:30:45 0:10:58 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 6779109 2022-04-06 12:57:34 2022-04-06 13:07:57 2022-04-06 15:35:42 2:27:45 2:16:29 0:11:16 smithi master centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
fail 6779110 2022-04-06 12:57:35 2022-04-06 13:08:28 2022-04-06 13:33:58 0:25:30 0:12:31 0:12:59 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

Command failed on smithi080 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid d01791c6-b5ad-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.80 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779111 2022-04-06 12:57:36 2022-04-06 13:10:58 2022-04-06 13:37:39 0:26:41 0:13:02 0:13:39 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 49617236-b5ae-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.5 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6779112 2022-04-06 12:57:37 2022-04-06 13:12:59 2022-04-06 19:53:25 6:40:26 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

dead 6779113 2022-04-06 12:57:37 2022-04-06 13:15:40 2022-04-06 20:17:18 7:01:38 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

fail 6779114 2022-04-06 12:57:38 2022-04-06 13:15:40 2022-04-06 13:39:35 0:23:55 0:13:06 0:10:49 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid ad2b43b4-b5ae-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.37 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779115 2022-04-06 12:57:39 2022-04-06 13:16:20 2022-04-06 13:45:57 0:29:37 0:19:05 0:10:32 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6779116 2022-04-06 12:57:40 2022-04-06 13:16:21 2022-04-06 13:43:22 0:27:01 0:13:00 0:14:01 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
Failure Reason:

Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 0d62679e-b5af-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.67 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779117 2022-04-06 12:57:41 2022-04-06 13:17:51 2022-04-06 13:53:30 0:35:39 0:22:49 0:12:50 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

pass 6779118 2022-04-06 12:57:41 2022-04-06 13:20:32 2022-04-06 13:41:13 0:20:41 0:10:53 0:09:48 smithi master centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-upgrade}} 1
fail 6779119 2022-04-06 12:57:42 2022-04-06 13:20:32 2022-04-06 13:44:36 0:24:04 0:13:03 0:11:01 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3
Failure Reason:

Command failed on smithi025 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 6a64ad58-b5af-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.25 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779120 2022-04-06 12:57:43 2022-04-06 13:20:53 2022-04-06 13:46:06 0:25:13 0:13:34 0:11:39 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 6593edf2-b5af-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.43 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779121 2022-04-06 12:57:44 2022-04-06 13:21:13 2022-04-06 15:59:48 2:38:35 2:27:42 0:10:53 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cfb8f943163b374162da0d7b0240f267dd46e4e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

pass 6779122 2022-04-06 12:57:45 2022-04-06 13:21:33 2022-04-06 13:55:25 0:33:52 0:21:06 0:12:46 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 6779123 2022-04-06 12:57:45 2022-04-06 13:22:24 2022-04-06 13:46:35 0:24:11 0:12:52 0:11:19 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid a4f67b4a-b5af-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.16 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779124 2022-04-06 12:57:46 2022-04-06 13:22:54 2022-04-06 13:54:09 0:31:15 0:19:41 0:11:34 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6779125 2022-04-06 12:57:47 2022-04-06 13:23:15 2022-04-06 13:49:21 0:26:06 0:12:49 0:13:17 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

Command failed on smithi119 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid e097f5c0-b5af-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.119 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779126 2022-04-06 12:57:48 2022-04-06 13:24:15 2022-04-06 14:09:20 0:45:05 0:34:03 0:11:02 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6779127 2022-04-06 12:57:49 2022-04-06 13:24:16 2022-04-06 14:06:55 0:42:39 0:28:21 0:14:18 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

SSH connection to smithi022 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'

fail 6779128 2022-04-06 12:57:49 2022-04-06 13:26:46 2022-04-06 13:52:17 0:25:31 0:12:23 0:13:08 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Command failed on smithi116 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 58e9af6e-b5b0-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.116 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779129 2022-04-06 12:57:50 2022-04-06 13:29:07 2022-04-06 14:06:32 0:37:25 0:26:31 0:10:54 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
fail 6779130 2022-04-06 12:57:51 2022-04-06 13:29:37 2022-04-06 13:57:23 0:27:46 0:13:34 0:14:12 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason:

Command failed on smithi106 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 03973cd8-b5b1-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.106 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779131 2022-04-06 12:57:52 2022-04-06 13:32:08 2022-04-06 13:57:46 0:25:38 0:13:38 0:12:00 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 10264d9a-b5b1-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.93 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6779132 2022-04-06 12:57:53 2022-04-06 13:32:28 2022-04-06 14:19:01 0:46:33 0:33:50 0:12:43 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi169 with status 126: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cfb8f943163b374162da0d7b0240f267dd46e4e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 6779133 2022-04-06 12:57:53 2022-04-06 13:34:09 2022-04-06 13:57:42 0:23:33 0:13:24 0:10:09 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
Failure Reason:

Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 52b35194-b5b1-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.2 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779134 2022-04-06 12:57:54 2022-04-06 13:34:59 2022-04-06 18:24:57 4:49:58 4:38:38 0:11:20 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"2022-04-06T14:35:22.065213+0000 mds.f (mds.0) 1 : cluster [WRN] client.4734 isn't responding to mclientcaps(revoke), ino 0x10000003541 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 300.004658 seconds ago" in cluster log

pass 6779135 2022-04-06 12:57:55 2022-04-06 13:35:40 2022-04-06 14:32:25 0:56:45 0:45:26 0:11:19 smithi master centos 8.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
pass 6779136 2022-04-06 12:57:56 2022-04-06 13:35:40 2022-04-06 14:18:08 0:42:28 0:31:16 0:11:12 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
fail 6779137 2022-04-06 12:57:57 2022-04-06 13:39:32 2022-04-06 14:02:40 0:23:08 0:13:13 0:09:55 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason:

Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid e214dc72-b5b1-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.37 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779138 2022-04-06 12:57:57 2022-04-06 13:39:43 2022-04-06 14:09:00 0:29:17 0:13:35 0:15:42 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
Failure Reason:

Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid bf50e6bc-b5b2-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.67 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779139 2022-04-06 12:57:58 2022-04-06 13:43:24 2022-04-06 14:18:24 0:35:00 0:25:40 0:09:20 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
pass 6779140 2022-04-06 12:57:59 2022-04-06 13:43:24 2022-04-06 14:29:44 0:46:20 0:34:46 0:11:34 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6779141 2022-04-06 12:58:00 2022-04-06 13:44:44 2022-04-06 14:06:58 0:22:14 0:14:12 0:08:02 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/acls} 2
Failure Reason:

Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

pass 6779142 2022-04-06 12:58:01 2022-04-06 13:46:05 2022-04-06 14:37:27 0:51:22 0:38:54 0:12:28 smithi master ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
dead 6779143 2022-04-06 12:58:01 2022-04-06 13:46:15 2022-04-06 20:27:19 6:41:04 smithi master rhel 8.4 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason:

hit max job timeout

pass 6779144 2022-04-06 12:58:02 2022-04-06 13:46:16 2022-04-06 15:40:47 1:54:31 1:41:53 0:12:38 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6779145 2022-04-06 12:58:03 2022-04-06 13:46:36 2022-04-06 14:12:57 0:26:21 0:13:28 0:12:53 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 320d233c-b5b3-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.16 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779146 2022-04-06 12:58:04 2022-04-06 13:48:17 2022-04-06 14:22:34 0:34:17 0:21:00 0:13:17 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 6779147 2022-04-06 12:58:05 2022-04-06 13:49:27 2022-04-06 14:14:22 0:24:55 0:13:01 0:11:54 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
Failure Reason:

Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 77b33ade-b5b3-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.18 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779148 2022-04-06 12:58:05 2022-04-06 13:49:27 2022-04-06 15:40:53 1:51:26 1:40:50 0:10:36 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6779149 2022-04-06 12:58:06 2022-04-06 13:49:28 2022-04-06 14:15:10 0:25:42 0:12:38 0:13:04 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
Failure Reason:

Command failed on smithi116 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid b753b402-b5b3-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.116 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6779150 2022-04-06 12:58:07 2022-04-06 13:52:18 2022-04-06 17:40:56 3:48:38 3:34:36 0:14:02 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi167 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cfb8f943163b374162da0d7b0240f267dd46e4e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 6779151 2022-04-06 12:58:08 2022-04-06 13:52:59 2022-04-06 14:28:34 0:35:35 0:22:13 0:13:22 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6779152 2022-04-06 12:58:08 2022-04-06 13:53:39 2022-04-06 14:21:15 0:27:36 0:12:59 0:14:37 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 62776aea-b5b4-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.71 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779153 2022-04-06 12:58:09 2022-04-06 13:54:10 2022-04-06 14:38:38 0:44:28 0:28:48 0:15:40 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
fail 6779154 2022-04-06 12:58:10 2022-04-06 13:55:30 2022-04-06 14:22:45 0:27:15 0:12:58 0:14:17 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} 3
Failure Reason:

Command failed on smithi065 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 92343c0e-b5b4-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.65 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779155 2022-04-06 12:58:11 2022-04-06 13:55:41 2022-04-06 15:38:57 1:43:16 1:31:55 0:11:21 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6779156 2022-04-06 12:58:12 2022-04-06 13:55:51 2022-04-06 14:22:39 0:26:48 0:12:43 0:14:05 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Command failed on smithi106 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid ba141afa-b5b4-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.106 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6779157 2022-04-06 12:58:12 2022-04-06 13:57:33 2022-04-06 14:22:26 0:24:53 0:11:46 0:13:07 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi138 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 57c9174c-b5b4-11ec-8c36-001a4aab830c -- ceph mon dump -f json'

pass 6779158 2022-04-06 12:58:13 2022-04-06 13:57:44 2022-04-06 16:39:01 2:41:17 2:28:16 0:13:01 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6779159 2022-04-06 12:58:14 2022-04-06 13:57:54 2022-04-06 14:25:33 0:27:39 0:12:52 0:14:47 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 -v bootstrap --fsid 01a29a0e-b5b5-11ec-8c36-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.53 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6779160 2022-04-06 12:58:15 2022-04-07 02:44:19 2022-04-07 03:15:20 0:31:01 0:19:57 0:11:04 smithi master centos 8.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/whitelist_health} 2
pass 6779161 2022-04-06 12:58:15 2022-04-07 02:44:20 2022-04-07 03:48:26 1:04:06 0:52:03 0:12:03 smithi master centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/valgrind} 2
pass 6779162 2022-04-06 12:58:16 2022-04-07 02:45:20 2022-04-07 04:04:47 1:19:27 1:08:33 0:10:54 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
pass 6779163 2022-04-06 12:58:17 2022-04-07 02:45:41 2022-04-07 03:30:06 0:44:25 0:31:24 0:13:01 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 6779164 2022-04-06 12:58:18 2022-04-07 02:46:31 2022-04-07 04:04:12 1:17:41 1:07:40 0:10:01 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 6779165 2022-04-06 12:58:19 2022-04-07 02:46:42 2022-04-07 03:28:53 0:42:11 0:28:57 0:13:14 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
pass 6779166 2022-04-06 12:58:19 2022-04-07 02:48:22 2022-04-07 03:22:57 0:34:35 0:22:33 0:12:02 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 6779167 2022-04-06 12:58:20 2022-04-07 02:48:53 2022-04-07 03:34:41 0:45:48 0:34:51 0:10:57 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
pass 6779168 2022-04-06 12:58:21 2022-04-07 02:49:33 2022-04-07 03:12:33 0:23:00 0:11:24 0:11:36 smithi master centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-upgrade}} 1
dead 6779169 2022-04-06 12:58:22 2022-04-07 02:49:33 2022-04-07 09:25:11 6:35:38 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
pass 6779170 2022-04-06 12:58:22 2022-04-07 02:51:54 2022-04-07 03:45:16 0:53:22 0:43:07 0:10:15 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
pass 6779171 2022-04-06 12:58:23 2022-04-07 02:52:04 2022-04-07 05:18:50 2:26:46 2:14:54 0:11:52 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
dead 6779172 2022-04-06 12:58:24 2022-04-07 02:53:45 2022-04-07 09:33:10 6:39:25 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6779173 2022-04-06 12:58:25 2022-04-07 02:54:45 2022-04-07 03:53:31 0:58:46 0:44:41 0:14:05 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
pass 6779174 2022-04-06 12:58:25 2022-04-07 02:55:36 2022-04-07 03:23:27 0:27:51 0:15:35 0:12:16 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
pass 6779175 2022-04-06 12:58:26 2022-04-07 02:56:27 2022-04-07 04:09:46 1:13:19 1:02:28 0:10:51 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 6779176 2022-04-06 12:58:27 2022-04-07 02:57:07 2022-04-07 04:36:34 1:39:27 1:28:43 0:10:44 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} 3
pass 6779177 2022-04-06 12:58:28 2022-04-07 02:57:37 2022-04-07 03:31:25 0:33:48 0:20:39 0:13:09 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 6779178 2022-04-06 12:58:28 2022-04-07 03:00:18 2022-04-07 03:44:56 0:44:38 0:31:51 0:12:47 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
pass 6779179 2022-04-06 12:58:29 2022-04-07 03:01:49 2022-04-07 03:30:18 0:28:29 0:16:24 0:12:05 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 6779180 2022-04-06 12:58:30 2022-04-07 03:02:29 2022-04-07 03:26:31 0:24:02 0:16:58 0:07:04 smithi master rhel 8.4 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/misc}} 2
pass 6779181 2022-04-06 12:58:31 2022-04-07 03:03:30 2022-04-07 04:55:28 1:51:58 1:39:36 0:12:22 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
pass 6779182 2022-04-06 12:58:31 2022-04-07 03:03:40 2022-04-07 03:49:40 0:46:00 0:33:34 0:12:26 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6779183 2022-04-06 12:58:32 2022-04-07 03:03:50 2022-04-07 03:47:30 0:43:40 0:33:30 0:10:10 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
pass 6779184 2022-04-06 12:58:33 2022-04-07 03:04:21 2022-04-07 03:51:02 0:46:41 0:31:20 0:15:21 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
pass 6779185 2022-04-06 12:58:34 2022-04-07 03:05:41 2022-04-07 04:58:59 1:53:18 1:39:29 0:13:49 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
pass 6779186 2022-04-06 12:58:34 2022-04-07 03:07:52 2022-04-07 03:44:59 0:37:07 0:26:37 0:10:30 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
pass 6779187 2022-04-06 12:58:35 2022-04-07 03:07:52 2022-04-07 04:05:43 0:57:51 0:42:46 0:15:05 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
pass 6779188 2022-04-06 12:58:36 2022-04-07 03:12:43 2022-04-07 04:30:40 1:17:57 1:06:40 0:11:17 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 6779189 2022-04-06 12:58:36 2022-04-07 03:14:04 2022-04-07 03:56:25 0:42:21 0:30:46 0:11:35 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
pass 6779190 2022-04-06 12:58:37 2022-04-07 03:15:24 2022-04-07 03:40:25 0:25:01 0:11:53 0:13:08 smithi master centos 8.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/libcephfs_python} 2
fail 6779191 2022-04-06 12:58:38 2022-04-07 03:17:25 2022-04-07 03:47:11 0:29:46 0:19:38 0:10:08 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} 2
Failure Reason:

Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)

dead 6779192 2022-04-06 12:58:39 2022-04-07 03:17:46 2022-04-07 07:07:57 3:50:11 3:34:13 0:15:58 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi186 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cfb8f943163b374162da0d7b0240f267dd46e4e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 6779193 2022-04-06 12:58:40 2022-04-07 03:22:36 2022-04-07 03:52:55 0:30:19 0:17:33 0:12:46 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

SELinux denials found on ubuntu@smithi049.front.sepia.ceph.com: ['type=AVC msg=audit(1649302415.311:201): avc: denied { node_bind } for pid=1508 comm="ping" saddr=172.21.15.49 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']

pass 6779194 2022-04-06 12:58:40 2022-04-07 03:23:07 2022-04-07 04:01:17 0:38:10 0:26:58 0:11:12 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
pass 6779195 2022-04-06 12:58:41 2022-04-07 03:23:27 2022-04-07 03:54:08 0:30:41 0:18:37 0:12:04 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/multimds_misc} 2
dead 6779196 2022-04-06 12:58:42 2022-04-07 03:25:08 2022-04-07 10:00:31 6:35:23 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6779197 2022-04-06 12:58:43 2022-04-07 03:26:38 2022-04-07 05:20:12 1:53:34 1:39:28 0:14:06 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds

pass 6779198 2022-04-06 12:58:44 2022-04-07 03:28:59 2022-04-07 04:18:07 0:49:08 0:42:03 0:07:05 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
pass 6779199 2022-04-06 12:58:44 2022-04-07 03:28:59 2022-04-07 04:11:54 0:42:55 0:29:01 0:13:54 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 6779200 2022-04-06 12:58:45 2022-04-07 03:30:10 2022-04-07 04:02:09 0:31:59 0:19:47 0:12:12 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
fail 6779201 2022-04-06 12:58:46 2022-04-07 03:30:20 2022-04-07 04:02:07 0:31:47 0:19:02 0:12:45 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

dead 6779202 2022-04-06 12:58:47 2022-04-07 03:31:01 2022-04-07 10:23:42 6:52:41 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason:

hit max job timeout

pass 6779203 2022-04-06 12:58:48 2022-04-07 03:31:31 2022-04-07 04:43:41 1:12:10 0:58:38 0:13:32 smithi master centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 6779204 2022-04-06 12:58:48 2022-04-07 03:34:42 2022-04-07 04:23:00 0:48:18 0:35:11 0:13:07 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6779205 2022-04-06 12:58:49 2022-04-07 03:37:52 2022-04-07 05:01:52 1:24:00 1:09:59 0:14:01 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
pass 6779206 2022-04-06 12:58:50 2022-04-07 03:40:33 2022-04-07 04:12:10 0:31:37 0:16:27 0:15:10 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
pass 6779207 2022-04-06 12:58:51 2022-04-07 03:45:04 2022-04-07 05:08:43 1:23:39 1:14:15 0:09:24 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 6779208 2022-04-06 12:58:51 2022-04-07 03:45:04 2022-04-07 06:28:44 2:43:40 2:37:01 0:06:39 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
pass 6779209 2022-04-06 12:58:52 2022-04-07 03:45:25 2022-04-07 04:24:16 0:38:51 0:27:55 0:10:56 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
pass 6779210 2022-04-06 12:58:53 2022-04-07 03:47:16 2022-04-07 04:19:05 0:31:49 0:21:24 0:10:25 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 6779211 2022-04-06 12:58:54 2022-04-07 03:47:36 2022-04-07 04:09:30 0:21:54 0:14:28 0:07:26 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
pass 6779212 2022-04-06 12:58:55 2022-04-07 03:48:26 2022-04-07 04:33:17 0:44:51 0:32:34 0:12:17 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} 3
pass 6779213 2022-04-06 12:58:55 2022-04-07 03:49:47 2022-04-07 05:56:50 2:07:03 1:56:20 0:10:43 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6779214 2022-04-06 12:58:56 2022-04-07 03:51:07 2022-04-07 04:24:39 0:33:32 0:19:49 0:13:43 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

dead 6779215 2022-04-06 12:58:57 2022-04-07 03:52:58 2022-04-07 10:31:17 6:38:19 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6779216 2022-04-06 12:58:58 2022-04-07 03:53:38 2022-04-07 04:26:03 0:32:25 0:21:15 0:11:10 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
Failure Reason:

Command failed on smithi201 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cfb8f943163b374162da0d7b0240f267dd46e4e1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6a10989c-b629-11ec-8c36-001a4aab830c -- ceph orch daemon add osd smithi201:vg_nvme/lv_1'

fail 6779217 2022-04-06 12:58:59 2022-04-07 03:54:09 2022-04-07 08:49:54 4:55:45 4:43:55 0:11:50 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"2022-04-07T05:31:25.823982+0000 mds.c (mds.0) 25 : cluster [WRN] client.4571 isn't responding to mclientcaps(revoke), ino 0x10000005dd1 pending pAsLsXsFsc issued pAsLsXsFscb, sent 300.004379 seconds ago" in cluster log

pass 6779218 2022-04-06 12:58:59 2022-04-07 03:56:29 2022-04-07 04:50:59 0:54:30 0:38:10 0:16:20 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
fail 6779219 2022-04-06 12:59:00 2022-04-07 04:01:21 2022-04-07 04:54:17 0:52:56 0:46:33 0:06:23 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

"2022-04-07T04:18:17.938470+0000 mon.a (mon.0) 430 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 6779220 2022-04-06 12:59:01 2022-04-07 04:01:21 2022-04-07 04:32:06 0:30:45 0:23:30 0:07:15 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2