Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7014060 2022-09-06 12:25:35 2022-09-06 12:32:03 2022-09-06 12:58:58 0:26:55 0:14:46 0:12:09 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/dir-max-entries} 2
pass 7014061 2022-09-06 12:25:36 2022-09-06 12:33:23 2022-09-06 13:03:36 0:30:13 0:15:12 0:15:01 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 4
pass 7014062 2022-09-06 12:25:37 2022-09-06 12:37:14 2022-09-06 13:37:58 1:00:44 0:50:31 0:10:13 smithi main centos 8.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
pass 7014063 2022-09-06 12:25:38 2022-09-06 12:37:25 2022-09-06 14:07:09 1:29:44 1:19:00 0:10:44 smithi main rhel 8.6 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
fail 7014064 2022-09-06 12:25:38 2022-09-06 12:37:55 2022-09-06 13:50:26 1:12:31 1:00:43 0:11:48 smithi main ubuntu 20.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

pass 7014065 2022-09-06 12:25:39 2022-09-06 12:39:06 2022-09-06 13:03:19 0:24:13 0:12:23 0:11:50 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 7014066 2022-09-06 12:25:40 2022-09-06 12:39:56 2022-09-06 13:27:35 0:47:39 0:36:21 0:11:18 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7014067 2022-09-06 12:25:41 2022-09-06 12:40:47 2022-09-06 13:13:25 0:32:38 0:20:14 0:12:24 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} 2
Failure Reason:

Command failed (workunit test fs/quota/quota.sh) on smithi087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b40ecbf0136791c92ac1badb59c4772694a1940d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/fs/quota/quota.sh'

pass 7014068 2022-09-06 12:25:42 2022-09-06 12:42:37 2022-09-06 14:40:37 1:58:00 1:45:54 0:12:06 smithi main centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/fsstress validater/valgrind} 2
pass 7014069 2022-09-06 12:25:43 2022-09-06 12:43:48 2022-09-06 13:23:05 0:39:17 0:30:58 0:08:19 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/fsx}} 3
pass 7014070 2022-09-06 12:25:44 2022-09-06 12:45:38 2022-09-06 13:15:20 0:29:42 0:18:39 0:11:03 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
pass 7014071 2022-09-06 12:25:45 2022-09-06 12:46:19 2022-09-06 13:08:34 0:22:15 0:14:44 0:07:31 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
pass 7014072 2022-09-06 12:25:46 2022-09-06 12:47:29 2022-09-06 13:15:55 0:28:26 0:19:04 0:09:22 smithi main centos 8.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 7014073 2022-09-06 12:25:47 2022-09-06 12:47:30 2022-09-06 13:19:33 0:32:03 0:14:50 0:17:13 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7014074 2022-09-06 12:25:48 2022-09-06 12:51:41 2022-09-06 13:29:25 0:37:44 0:20:40 0:17:04 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 7014075 2022-09-06 12:25:49 2022-09-06 12:55:52 2022-09-06 13:33:28 0:37:36 0:24:10 0:13:26 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/admin} 2
Failure Reason:

Test failure: test_dump_loads (tasks.cephfs.test_admin.TestAdminCommandDumpLoads)

pass 7014076 2022-09-06 12:25:50 2022-09-06 12:58:33 2022-09-06 13:34:21 0:35:48 0:25:10 0:10:38 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/ffsb}} 2
pass 7014077 2022-09-06 12:25:51 2022-09-06 12:59:03 2022-09-06 13:45:30 0:46:27 0:33:13 0:13:14 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/fsync-tester}} 3
pass 7014078 2022-09-06 12:25:52 2022-09-06 13:02:34 2022-09-06 13:51:11 0:48:37 0:36:52 0:11:45 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7014079 2022-09-06 12:25:53 2022-09-06 13:03:25 2022-09-06 13:35:03 0:31:38 0:20:19 0:11:19 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/alternate-pool} 2
pass 7014080 2022-09-06 12:25:54 2022-09-06 13:03:45 2022-09-06 13:29:31 0:25:46 0:18:27 0:07:19 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
pass 7014081 2022-09-06 12:25:55 2022-09-06 13:03:45 2022-09-06 13:37:04 0:33:19 0:22:51 0:10:28 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/asok_dump_tree} 2
pass 7014082 2022-09-06 12:25:56 2022-09-06 13:07:36 2022-09-06 13:45:32 0:37:56 0:24:06 0:13:50 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7014083 2022-09-06 12:25:57 2022-09-06 13:10:27 2022-09-06 13:36:21 0:25:54 0:12:57 0:12:57 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/auto-repair} 2
Failure Reason:

"1662471050.9764667 mon.a (mon.0) 120 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7014084 2022-09-06 12:25:58 2022-09-06 13:13:28 2022-09-06 13:58:40 0:45:12 0:37:19 0:07:53 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/fs/test_o_trunc}} 3
fail 7014085 2022-09-06 12:25:59 2022-09-06 13:15:29 2022-09-06 17:49:18 4:33:49 4:21:26 0:12:23 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"1662471939.749849 mds.e (mds.1) 1 : cluster [WRN] client.4742 isn't responding to mclientcaps(revoke), ino 0x2000000192c pending pAsLsXsFscr issued pAsLsXsFscrb, sent 300.005999 seconds ago" in cluster log

fail 7014086 2022-09-06 12:26:00 2022-09-06 13:15:59 2022-09-06 13:27:50 0:11:51 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing

pass 7014087 2022-09-06 12:26:01 2022-09-06 13:19:40 2022-09-06 14:42:44 1:23:04 1:08:32 0:14:32 smithi main rhel 8.6 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 7014088 2022-09-06 12:26:02 2022-09-06 13:23:11 2022-09-06 13:47:04 0:23:53 0:11:31 0:12:22 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/backtrace} 2
pass 7014089 2022-09-06 12:26:03 2022-09-06 13:23:11 2022-09-06 14:16:47 0:53:36 0:36:16 0:17:20 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7014090 2022-09-06 12:26:04 2022-09-06 13:27:42 2022-09-06 13:59:37 0:31:55 0:16:57 0:14:58 smithi main ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 7014091 2022-09-06 12:26:05 2022-09-06 13:27:53 2022-09-06 13:52:32 0:24:39 0:11:49 0:12:50 smithi main centos 8.stream fs/bugs/client_trim_caps/{begin/{0-install 1-ceph 2-logrotate} centos_latest clusters/small-cluster conf/{client mds mon osd} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/trim-i24137} 1
fail 7014092 2022-09-06 12:26:06 2022-09-06 13:29:33 2022-09-06 13:55:28 0:25:55 0:12:35 0:13:20 smithi main centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
Failure Reason:

Command failed on smithi171 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b40ecbf0136791c92ac1badb59c4772694a1940d shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b99fdcea-2dea-11ed-8431-001a4aab830c -- ceph mon dump -f json'

pass 7014093 2022-09-06 12:26:07 2022-09-06 13:29:33 2022-09-06 14:09:03 0:39:30 0:26:38 0:12:52 smithi main rhel 8.6 fs/full/{begin/{0-install 1-ceph 2-logrotate} clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mgr-osd-full} 1
pass 7014094 2022-09-06 12:26:08 2022-09-06 13:29:34 2022-09-06 13:56:51 0:27:17 0:11:55 0:15:22 smithi main ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client} 2
pass 7014095 2022-09-06 12:26:09 2022-09-06 13:33:39 2022-09-06 16:05:07 2:31:28 2:19:40 0:11:48 smithi main rhel 8.6 fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{rhel_8} tasks/mirror} 1
pass 7014096 2022-09-06 12:26:10 2022-09-06 13:33:40 2022-09-06 14:19:58 0:46:18 0:35:22 0:10:56 smithi main rhel 8.6 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{rhel_8} workloads/cephfs-mirror-ha-workunit} 1
pass 7014097 2022-09-06 12:26:11 2022-09-06 13:34:30 2022-09-06 14:50:19 1:15:49 1:02:06 0:13:43 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
fail 7014098 2022-09-06 12:26:12 2022-09-06 13:36:31 2022-09-06 14:29:09 0:52:38 0:38:45 0:13:53 smithi main ubuntu 20.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
Failure Reason:

"1662473316.075791 mon.a (mon.0) 1364 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7014099 2022-09-06 12:26:13 2022-09-06 13:37:11 2022-09-06 14:02:34 0:25:23 0:14:20 0:11:03 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7014100 2022-09-06 12:26:14 2022-09-06 13:38:02 2022-09-06 14:41:59 1:03:57 0:50:00 0:13:57 smithi main ubuntu 20.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
pass 7014101 2022-09-06 12:26:15 2022-09-06 13:45:38 2022-09-06 15:06:41 1:21:03 1:10:38 0:10:25 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 7014102 2022-09-06 12:26:16 2022-09-06 13:45:39 2022-09-06 14:07:08 0:21:29 0:10:31 0:10:58 smithi main ubuntu 20.04 fs/top/{begin/{0-install 1-ceph 2-logrotate} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/ignorelist_health supported-random-distros$/{ubuntu_latest} tasks/fstop} 1
fail 7014103 2022-09-06 12:26:17 2022-09-06 13:45:39 2022-09-06 15:26:39 1:41:00 1:30:52 0:10:08 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

pass 7014104 2022-09-06 12:26:18 2022-09-06 13:45:39 2022-09-06 14:20:43 0:35:04 0:22:07 0:12:57 smithi main centos 8.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/misc}} 2
pass 7014105 2022-09-06 12:26:19 2022-09-06 13:47:10 2022-09-06 14:25:42 0:38:32 0:25:03 0:13:29 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/iozone}} 2
pass 7014106 2022-09-06 12:26:20 2022-09-06 13:48:40 2022-09-06 14:09:01 0:20:21 0:13:40 0:06:41 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cap-flush} 2
pass 7014107 2022-09-06 12:26:21 2022-09-06 13:49:11 2022-09-06 14:34:39 0:45:28 0:32:49 0:12:39 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
dead 7014108 2022-09-06 12:26:22 2022-09-06 13:49:31 2022-09-07 02:00:37 12:11:06 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/no standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/iogen}} 3
Failure Reason:

hit max job timeout

fail 7014109 2022-09-06 12:26:23 2022-09-06 13:51:22 2022-09-06 14:31:52 0:40:30 0:28:04 0:12:26 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} 2
Failure Reason:

"1662474003.8369 mon.a (mon.0) 448 : cluster [WRN] Health check failed: Degraded data redundancy: 1/2384 objects degraded (0.042%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7014110 2022-09-06 12:26:24 2022-09-06 13:52:43 2022-09-06 15:07:03 1:14:20 1:03:34 0:10:46 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
pass 7014111 2022-09-06 12:26:25 2022-09-06 13:54:03 2022-09-06 14:49:36 0:55:33 0:43:21 0:12:12 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/lockdep} 2
pass 7014112 2022-09-06 12:26:26 2022-09-06 13:55:34 2022-09-06 14:20:14 0:24:40 0:12:08 0:12:32 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-readahead} 2
pass 7014113 2022-09-06 12:26:27 2022-09-06 13:56:54 2022-09-06 14:19:53 0:22:59 0:11:19 0:11:40 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
pass 7014114 2022-09-06 12:26:28 2022-09-06 13:58:45 2022-09-06 14:23:54 0:25:09 0:18:44 0:06:25 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 7014115 2022-09-06 12:26:29 2022-09-06 13:58:45 2022-09-06 14:39:16 0:40:31 0:33:18 0:07:13 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} 2
pass 7014116 2022-09-06 12:26:30 2022-09-06 13:59:46 2022-09-06 14:38:01 0:38:15 0:26:57 0:11:18 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/iozone}} 3
fail 7014117 2022-09-06 12:26:31 2022-09-06 14:30:20 787 smithi main centos 8.stream fs/upgrade/upgraded_client/from_nautilus/{bluestore-bitmap centos_latest clusters/{1-mds-1-client-micro} conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-nautilus 1-client-upgrade 2-client-sanity}} 2
Failure Reason:

Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

pass 7014118 2022-09-06 12:26:32 2022-09-06 14:07:18 2022-09-06 14:42:42 0:35:24 0:22:05 0:13:19 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/damage} 2
pass 7014119 2022-09-06 12:26:33 2022-09-06 14:09:08 2022-09-06 14:31:20 0:22:12 0:11:48 0:10:24 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 7014120 2022-09-06 12:26:34 2022-09-06 14:09:09 2022-09-06 14:49:40 0:40:31 0:22:05 0:18:26 smithi main rhel 8.6 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 7014121 2022-09-06 12:26:35 2022-09-06 14:16:30 2022-09-06 14:46:21 0:29:51 0:15:36 0:14:15 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 7014122 2022-09-06 12:26:36 2022-09-06 14:16:50 2022-09-06 15:01:46 0:44:56 0:29:38 0:15:18 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} 2
fail 7014123 2022-09-06 12:26:37 2022-09-06 14:20:01 2022-09-06 14:43:30 0:23:29 0:11:29 0:12:00 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi174 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7e2dcb98-2df1-11ed-8431-001a4aab830c -- ceph mon dump -f json'

fail 7014124 2022-09-06 12:26:38 2022-09-06 14:20:22 2022-09-06 15:07:35 0:47:13 0:32:31 0:14:42 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b40ecbf0136791c92ac1badb59c4772694a1940d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7014125 2022-09-06 12:26:39 2022-09-06 14:24:03 2022-09-06 15:20:45 0:56:42 0:49:07 0:07:35 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
fail 7014126 2022-09-06 12:26:40 2022-09-06 14:25:33 2022-09-06 15:00:54 0:35:21 0:28:53 0:06:28 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/exports} 2
Failure Reason:

Test failure: test_ephemeral_random_failover (tasks.cephfs.test_exports.TestEphemeralPins)

pass 7014127 2022-09-06 12:26:41 2022-09-06 14:25:44 2022-09-06 16:10:54 1:45:10 1:35:03 0:10:07 smithi main rhel 8.6 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
pass 7014128 2022-09-06 12:26:42 2022-09-06 14:29:14 2022-09-06 14:57:20 0:28:06 0:16:36 0:11:30 smithi main centos 8.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
pass 7014129 2022-09-06 12:26:43 2022-09-06 14:30:25 2022-09-06 14:56:10 0:25:45 0:12:36 0:13:09 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 4
pass 7014130 2022-09-06 12:26:44 2022-09-06 14:42:50 2022-09-06 15:42:26 0:59:36 0:52:30 0:07:06 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
pass 7014131 2022-09-06 12:26:45 2022-09-06 14:42:50 2022-09-06 15:48:36 1:05:46 0:59:42 0:06:04 smithi main rhel 8.6 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 7014132 2022-09-06 12:26:46 2022-09-06 14:43:41 2022-09-06 15:25:32 0:41:51 0:33:37 0:08:14 smithi main rhel 8.6 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/snapshot}} 2
pass 7014133 2022-09-06 12:26:47 2022-09-06 14:46:32 2022-09-06 15:17:23 0:30:51 0:15:44 0:15:07 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/forward-scrub} 2
pass 7014134 2022-09-06 12:26:47 2022-09-06 14:49:42 2022-09-06 15:21:35 0:31:53 0:20:40 0:11:13 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7014135 2022-09-06 12:26:48 2022-09-06 14:49:43 2022-09-06 15:22:10 0:32:27 0:19:34 0:12:53 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
pass 7014136 2022-09-06 12:26:49 2022-09-06 14:50:23 2022-09-06 15:21:55 0:31:32 0:20:14 0:11:18 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/fragment} 2
pass 7014137 2022-09-06 12:26:50 2022-09-06 14:50:24 2022-09-06 15:25:30 0:35:06 0:25:24 0:09:42 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/direct_io}} 3
pass 7014138 2022-09-06 12:26:51 2022-09-06 14:54:45 2022-09-06 15:45:22 0:50:37 0:35:37 0:15:00 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7014139 2022-09-06 12:26:52 2022-09-06 21:45:45 2022-09-06 22:09:22 0:23:37 0:16:22 0:07:15 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
pass 7014140 2022-09-06 12:26:53 2022-09-06 21:46:16 2022-09-06 22:11:50 0:25:34 0:19:10 0:06:24 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/journal-repair} 2
pass 7014141 2022-09-06 12:26:54 2022-09-06 21:46:36 2022-09-07 00:02:44 2:16:08 2:04:24 0:11:44 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/valgrind} 2
pass 7014142 2022-09-06 12:26:55 2022-09-06 21:46:57 2022-09-06 22:11:50 0:24:53 0:12:37 0:12:16 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-flush} 2
pass 7014143 2022-09-06 12:26:56 2022-09-06 21:47:37 2022-09-06 22:20:57 0:33:20 0:22:13 0:11:07 smithi main centos 8.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 7014144 2022-09-06 12:26:57 2022-09-06 21:47:48 2022-09-06 22:17:33 0:29:45 0:19:12 0:10:33 smithi main rhel 8.6 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/ino_release_cb} 2
pass 7014145 2022-09-06 12:26:58 2022-09-06 21:48:28 2022-09-06 22:13:21 0:24:53 0:13:49 0:11:04 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7014146 2022-09-06 12:26:59 2022-09-06 21:49:59 2022-09-06 22:30:32 0:40:33 0:25:45 0:14:48 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7014147 2022-09-06 12:27:00 2022-09-06 21:52:49 2022-09-06 22:38:02 0:45:13 0:31:09 0:14:04 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2
pass 7014148 2022-09-06 12:27:01 2022-09-06 21:54:50 2022-09-07 01:26:34 3:31:44 3:19:55 0:11:49 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/no standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/fs/misc}} 3
pass 7014149 2022-09-06 12:27:02 2022-09-06 21:55:01 2022-09-06 22:40:49 0:45:48 0:32:02 0:13:46 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-full} 2
pass 7014150 2022-09-06 12:27:03 2022-09-06 21:55:41 2022-09-06 22:38:12 0:42:31 0:28:29 0:14:02 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 7014151 2022-09-06 12:27:04 2022-09-06 21:56:31 2022-09-06 23:41:30 1:44:59 1:33:30 0:11:29 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 7014152 2022-09-06 12:27:05 2022-09-06 21:56:32 2022-09-06 22:15:57 0:19:25 0:13:22 0:06:03 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds_creation_retry} 2
pass 7014153 2022-09-06 12:27:06 2022-09-06 21:56:42 2022-09-06 22:51:19 0:54:37 0:46:50 0:07:47 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/kernel_untar_build}} 3
fail 7014154 2022-09-06 12:27:07 2022-09-06 21:57:43 2022-09-06 22:51:13 0:53:30 0:41:47 0:11:43 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} 2
Failure Reason:

"1662503705.9584684 mon.a (mon.0) 1451 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log

pass 7014155 2022-09-06 12:27:08 2022-09-06 21:58:43 2022-09-06 23:08:11 1:09:28 0:55:23 0:14:05 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
pass 7014156 2022-09-06 12:27:09 2022-09-06 22:00:04 2022-09-06 22:51:03 0:50:59 0:37:39 0:13:20 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7014157 2022-09-06 12:27:10 2022-09-06 22:01:54 2022-09-06 22:32:21 0:30:27 0:16:53 0:13:34 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 5
pass 7014158 2022-09-06 12:27:11 2022-09-06 22:02:35 2022-09-06 22:50:12 0:47:37 0:35:20 0:12:17 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
pass 7014159 2022-09-06 12:27:12 2022-09-06 22:03:05 2022-09-06 23:02:58 0:59:53 0:47:53 0:12:00 smithi main ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
fail 7014160 2022-09-06 12:27:12 2022-09-06 22:03:26 2022-09-06 22:57:23 0:53:57 0:42:27 0:11:30 smithi main rhel 8.6 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)

pass 7014161 2022-09-06 12:27:13 2022-09-06 22:03:56 2022-09-06 22:39:36 0:35:40 0:19:55 0:15:45 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/multimds_misc} 2
pass 7014162 2022-09-06 12:27:14 2022-09-06 22:07:47 2022-09-06 22:57:29 0:49:42 0:42:57 0:06:45 smithi main rhel 8.6 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
pass 7014163 2022-09-06 12:27:15 2022-09-06 22:08:18 2022-09-06 22:41:58 0:33:40 0:22:55 0:10:45 smithi main ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
pass 7014164 2022-09-06 12:27:16 2022-09-06 22:08:28 2022-09-06 22:33:42 0:25:14 0:18:48 0:06:26 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
pass 7014165 2022-09-06 12:27:17 2022-09-06 22:08:49 2022-09-06 22:32:37 0:23:48 0:16:28 0:07:20 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/openfiletable} 2
pass 7014166 2022-09-06 12:27:18 2022-09-06 22:09:29 2022-09-06 22:40:59 0:31:30 0:21:12 0:10:18 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7014167 2022-09-06 12:27:19 2022-09-06 22:09:40 2022-09-06 23:07:34 0:57:54 0:47:00 0:10:54 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/blogbench}} 3
pass 7014168 2022-09-06 12:27:20 2022-09-06 22:10:30 2022-09-06 22:47:28 0:36:58 0:22:32 0:14:26 smithi main rhel 8.6 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 7014169 2022-09-06 12:27:21 2022-09-06 22:11:51 2022-09-06 22:39:35 0:27:44 0:15:41 0:12:03 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 7014170 2022-09-06 12:27:22 2022-09-06 22:11:51 2022-09-06 22:45:20 0:33:29 0:20:24 0:13:05 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/pool-perm} 2
pass 7014171 2022-09-06 12:27:23 2022-09-06 22:11:52 2022-09-06 22:47:16 0:35:24 0:24:21 0:11:03 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_pjd}} 2
pass 7014172 2022-09-06 12:27:24 2022-09-06 22:12:12 2022-09-06 23:02:25 0:50:13 0:36:39 0:13:34 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7014173 2022-09-06 12:27:25 2022-09-06 22:13:23 2022-09-06 22:47:13 0:33:50 0:20:23 0:13:27 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/quota} 2
pass 7014174 2022-09-06 12:27:26 2022-09-06 22:13:43 2022-09-06 23:26:15 1:12:32 1:01:13 0:11:19 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/lockdep} 2
pass 7014175 2022-09-06 12:27:27 2022-09-06 22:14:34 2022-09-06 23:25:04 1:10:30 1:02:20 0:08:10 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/dbench}} 3
pass 7014176 2022-09-06 12:27:28 2022-09-06 22:15:54 2022-09-06 22:37:18 0:21:24 0:11:28 0:09:56 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
pass 7014177 2022-09-06 12:27:29 2022-09-06 22:16:05 2022-09-06 22:34:59 0:18:54 0:13:31 0:05:23 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/recovery-fs} 2
pass 7014178 2022-09-06 12:27:30 2022-09-06 22:16:05 2022-09-06 22:54:41 0:38:36 0:26:01 0:12:35 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7014179 2022-09-06 12:27:31 2022-09-06 22:17:36 2022-09-06 22:51:26 0:33:50 0:21:30 0:12:20 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/scrub} 2
Failure Reason:

"1662503892.5372937 mon.a (mon.0) 452 : cluster [WRN] Health check failed: Degraded data redundancy: 19/2934 objects degraded (0.648%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7014180 2022-09-06 12:27:32 2022-09-06 22:18:17 2022-09-06 23:02:56 0:44:39 0:32:16 0:12:23 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
dead 7014181 2022-09-06 12:27:33 2022-09-11 12:57:20 2022-09-11 12:57:24 0:00:04 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe09e6a12b0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014182 2022-09-06 12:27:34 2022-09-11 12:57:20 2022-09-11 12:57:25 0:00:05 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/sessionmap} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f98dd497208>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014183 2022-09-06 12:27:34 2022-09-11 12:57:20 2022-09-11 12:57:25 0:00:05 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/no standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/ffsb}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f73a2b27ef0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014184 2022-09-06 12:27:35 2022-09-11 12:57:21 2022-09-11 12:57:25 0:00:04 smithi main centos 8.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f37b7f7d358>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014185 2022-09-06 12:27:36 2022-09-11 12:57:21 2022-09-11 12:57:25 0:00:04 smithi main centos 8.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/ignorelist_health} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6495cad588>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014186 2022-09-06 12:27:37 2022-09-11 12:57:22 2022-09-11 12:57:25 0:00:03 smithi main rhel 8.6 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7eb2b46588>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014187 2022-09-06 12:27:38 2022-09-11 12:57:22 2022-09-11 12:57:25 0:00:03 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9b248d3e10>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014188 2022-09-06 12:27:39 2022-09-11 12:57:22 2022-09-11 12:57:25 0:00:03 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe8749ee588>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014189 2022-09-06 12:27:40 2022-09-11 12:57:23 2022-09-11 12:57:24 0:00:01 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f615f385470>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014190 2022-09-06 12:27:41 2022-09-11 12:57:23 2022-09-11 12:57:28 0:00:05 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f86600ae4e0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014191 2022-09-06 12:27:42 2022-09-11 12:57:24 2022-09-11 12:57:28 0:00:04 smithi main ubuntu 20.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3a0331a4a8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014192 2022-09-06 12:27:43 2022-09-11 12:57:24 2022-09-11 12:57:28 0:00:04 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_trivial_sync}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02b46e6550>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014193 2022-09-06 12:27:44 2022-09-11 12:57:24 2022-09-11 12:57:39 0:00:15 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9408c4b550>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014194 2022-09-06 12:27:45 2022-09-11 12:57:35 2022-09-11 12:57:39 0:00:04 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8f81d41dd8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014195 2022-09-06 12:27:46 2022-09-11 12:57:35 2022-09-11 12:57:39 0:00:04 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap_schedule_snapdir} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f33c1f27358>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014196 2022-09-06 12:27:47 2022-09-11 12:57:35 2022-09-11 12:57:39 0:00:04 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc159df7470>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014197 2022-09-06 12:27:48 2022-09-11 12:57:36 2022-09-11 12:57:39 0:00:03 smithi main ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fba06b684e0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014198 2022-09-06 12:27:49 2022-09-11 12:57:36 2022-09-11 12:57:39 0:00:03 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f40731780b8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014199 2022-09-06 12:27:50 2022-09-11 12:57:37 2022-09-11 12:57:39 0:00:02 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/fs/norstats}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f834a279fd0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014200 2022-09-06 12:27:51 2022-09-11 12:57:37 2022-09-11 12:57:39 0:00:02 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f64f9cef668>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014201 2022-09-06 12:27:52 2022-09-11 12:57:37 2022-09-11 12:57:39 0:00:02 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snapshots} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbbf3328470>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014202 2022-09-06 12:27:53 2022-09-11 12:57:38 2022-09-11 12:57:42 0:00:04 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc49cdbe4a8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014203 2022-09-06 12:27:54 2022-09-11 12:57:38 2022-09-11 12:57:42 0:00:04 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbc389ab240>: Failed to establish a new connection: [Errno 113] No route to host',))

fail 7014204 2022-09-06 12:27:55 2022-09-07 17:21:09 2022-09-07 17:45:45 0:24:36 0:13:35 0:11:01 smithi main centos 8.stream fs/upgrade/upgraded_client/from_nautilus/{bluestore-bitmap centos_latest clusters/{1-mds-1-client-micro} conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-nautilus 1-client-upgrade 2-client-sanity}} 2
Failure Reason:

Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

pass 7014205 2022-09-06 12:27:56 2022-09-07 17:26:40 2022-09-07 17:54:04 0:27:24 0:16:39 0:10:45 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/test_journal_migration} 2
dead 7014206 2022-09-06 12:27:57 2022-09-11 12:57:38 2022-09-11 12:57:42 0:00:04 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/fsstress}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7c5a843fd0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014207 2022-09-06 12:27:58 2022-09-11 12:57:39 2022-09-11 12:57:42 0:00:03 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f837d6fa438>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014208 2022-09-06 12:27:59 2022-09-11 12:57:39 2022-09-11 12:57:42 0:00:03 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/fsstress validater/valgrind} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f49de896438>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014209 2022-09-06 12:28:00 2022-09-11 12:57:40 2022-09-11 12:57:42 0:00:02 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/truncate_delay} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc175f86320>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014210 2022-09-06 12:28:01 2022-09-11 12:57:40 2022-09-11 12:57:42 0:00:02 smithi main centos 8.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9cc417f208>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014211 2022-09-06 12:28:02 2022-09-11 12:57:41 2022-09-11 12:57:45 0:00:04 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fba64a7a4a8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014212 2022-09-06 12:28:03 2022-09-11 12:57:41 2022-09-11 12:57:45 0:00:04 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2e614ec2b0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014213 2022-09-06 12:28:04 2022-09-11 12:57:41 2022-09-11 12:57:46 0:00:05 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/ffsb}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f139e1bb4a8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014214 2022-09-06 12:28:04 2022-09-11 12:57:42 2022-09-11 12:57:45 0:00:03 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/dir-max-entries} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd0a3f09390>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014215 2022-09-06 12:28:05 2022-09-11 12:57:42 2022-09-11 12:57:45 0:00:03 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/fsx}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1bea7adeb8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014216 2022-09-06 12:28:06 2022-09-11 12:57:43 2022-09-11 12:57:46 0:00:03 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7fed6c0fd0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014217 2022-09-06 12:28:07 2022-09-11 12:57:43 2022-09-11 12:57:45 0:00:02 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 5
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f46e83499e8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014218 2022-09-06 12:28:08 2022-09-11 12:57:44 2022-09-11 12:57:48 0:00:04 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f913a597208>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014219 2022-09-06 12:28:09 2022-09-11 12:57:44 2022-09-11 12:57:49 0:00:05 smithi main rhel 8.6 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f10e5f3c208>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014220 2022-09-06 12:28:10 2022-09-11 12:57:45 2022-09-11 12:57:48 0:00:03 smithi main rhel 8.6 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/misc}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb1ba8e6550>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014221 2022-09-06 12:28:11 2022-09-11 12:57:45 2022-09-11 12:57:48 0:00:03 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f754eabf240>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014222 2022-09-06 12:28:12 2022-09-11 12:57:45 2022-09-11 12:57:48 0:00:03 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8f24e425c0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014223 2022-09-06 12:28:13 2022-09-11 12:57:46 2022-09-11 12:57:48 0:00:02 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f20ada12470>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014224 2022-09-06 12:28:14 2022-09-11 12:57:46 2022-09-11 12:57:48 0:00:02 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff3c05002b0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014225 2022-09-06 12:28:15 2022-09-11 12:57:47 2022-09-11 12:57:51 0:00:04 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f555bc22438>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014226 2022-09-06 12:28:16 2022-09-11 12:57:47 2022-09-11 12:57:52 0:00:05 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/fsync-tester}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3bd8c02ef0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014227 2022-09-06 12:28:17 2022-09-11 12:57:48 2022-09-11 12:57:52 0:00:04 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/admin} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa46f8812e8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014228 2022-09-06 12:28:18 2022-09-11 12:57:48 2022-09-11 12:57:52 0:00:04 smithi main rhel 8.6 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3d3c499320>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014229 2022-09-06 12:28:19 2022-09-11 12:57:48 2022-09-11 12:57:51 0:00:03 smithi main centos 8.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff66e949128>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014230 2022-09-06 12:28:20 2022-09-11 12:57:49 2022-09-11 12:57:52 0:00:03 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f63d3ea1dd8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014231 2022-09-06 12:28:21 2022-09-11 12:57:49 2022-09-11 12:57:51 0:00:02 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/alternate-pool} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9fe463b390>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014232 2022-09-06 12:28:22 2022-09-11 12:57:50 2022-09-11 12:57:54 0:00:04 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/iozone}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f39bc392080>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014233 2022-09-06 12:28:23 2022-09-11 12:57:50 2022-09-11 12:57:55 0:00:05 smithi main rhel 8.6 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f085d68b518>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014234 2022-09-06 12:28:24 2022-09-11 12:57:50 2022-09-11 12:57:55 0:00:05 smithi main ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7faa0fbe6588>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014235 2022-09-06 12:28:25 2022-09-11 12:57:51 2022-09-11 12:57:55 0:00:04 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8085cb2128>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014236 2022-09-06 12:28:26 2022-09-11 12:57:51 2022-09-11 12:57:55 0:00:04 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/asok_dump_tree} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1ccedef5f8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014237 2022-09-06 12:28:27 2022-09-11 12:57:52 2022-09-11 12:57:55 0:00:03 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/fs/test_o_trunc}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c8c627f28>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014238 2022-09-06 12:28:28 2022-09-11 12:57:52 2022-09-11 12:57:55 0:00:03 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f32aefe52b0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014239 2022-09-06 12:28:29 2022-09-11 12:57:53 2022-09-11 12:57:58 0:00:05 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f65425272b0>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014240 2022-09-06 12:28:30 2022-09-11 12:57:53 2022-09-11 12:57:58 0:00:05 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/auto-repair} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f00a7e93390>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014241 2022-09-06 12:28:31 2022-09-11 12:57:54 2022-09-11 12:57:58 0:00:04 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/backtrace} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8153e3f240>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014242 2022-09-06 12:28:32 2022-09-11 12:57:54 2022-09-11 12:57:58 0:00:04 smithi main centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/dbench validater/lockdep} 2
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f680ee734a8>: Failed to establish a new connection: [Errno 113] No route to host',))

dead 7014243 2022-09-06 12:28:33 2022-09-11 12:57:55 2022-09-11 12:57:58 0:00:03 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa012454dd8>: Failed to establish a new connection: [Errno 113] No route to host',))