Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6868569 2022-06-08 15:21:31 2022-06-08 16:06:42 2022-06-08 16:45:40 0:38:58 0:30:17 0:08:41 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6868570 2022-06-08 15:21:33 2022-06-08 16:06:42 2022-06-08 17:01:51 0:55:09 0:43:59 0:11:10 smithi main centos 8.stream fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
pass 6868571 2022-06-08 15:21:34 2022-06-08 16:06:43 2022-06-08 16:34:11 0:27:28 0:19:55 0:07:33 smithi main ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
fail 6868572 2022-06-08 15:21:35 2022-06-08 16:06:43 2022-06-08 17:13:23 1:06:40 0:52:34 0:14:06 smithi main centos 8.stream fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Command failed on smithi063 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

fail 6868573 2022-06-08 15:21:36 2022-06-08 16:10:24 2022-06-08 17:17:02 1:06:38 0:54:22 0:12:16 smithi main centos 8.stream fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

"2022-06-08T16:51:29.168708+0000 mon.a (mon.0) 589 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

pass 6868574 2022-06-08 15:21:37 2022-06-08 16:10:45 2022-06-08 16:52:28 0:41:43 0:33:52 0:07:51 smithi main rhel 8.4 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volume-client} 2
pass 6868575 2022-06-08 15:21:39 2022-06-08 16:11:56 2022-06-08 16:53:44 0:41:48 0:30:31 0:11:17 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6868576 2022-06-08 15:21:40 2022-06-08 16:12:56 2022-06-08 16:34:24 0:21:28 0:10:56 0:10:32 smithi main ubuntu 18.04 fs/upgrade/volumes/import-legacy/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} 2
Failure Reason:

Command failed on smithi094 with status 1: "sudo nsenter --net=/var/run/netns/ceph-ns-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' --id vol_data_isolated --client_mountpoint=/volumes/_nogroup/vol_isolated mnt.0"

pass 6868577 2022-06-08 15:21:41 2022-06-08 16:13:17 2022-06-08 17:19:22 1:06:05 0:54:59 0:11:06 smithi main centos 8.stream fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
pass 6868578 2022-06-08 15:21:43 2022-06-08 16:13:47 2022-06-08 17:07:40 0:53:53 0:39:30 0:14:23 smithi main centos 8.stream fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 2