Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6946345 2022-07-24 15:38:44 2022-07-24 15:39:23 2022-07-24 16:52:12 1:12:49 1:03:03 0:09:46 smithi main rhel 8.6 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} 2
pass 6946346 2022-07-24 15:38:45 2022-07-24 15:39:23 2022-07-24 17:42:37 2:03:14 1:52:25 0:10:49 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/dbench}} 3
pass 6946347 2022-07-24 15:38:47 2022-07-24 15:39:23 2022-07-24 16:10:36 0:31:13 0:25:39 0:05:34 smithi main ubuntu 20.04 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{ubuntu_latest} workloads/cephfs-mirror-ha-workunit} 1
pass 6946348 2022-07-24 15:38:48 2022-07-24 15:39:24 2022-07-24 17:48:23 2:08:59 2:03:11 0:05:48 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
pass 6946349 2022-07-24 15:38:49 2022-07-24 15:39:24 2022-07-24 16:19:10 0:39:46 0:28:38 0:11:08 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} 2
fail 6946350 2022-07-24 15:38:50 2022-07-24 15:39:25 2022-07-24 16:37:06 0:57:41 0:49:50 0:07:51 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason:

"2022-07-24T16:00:26.149213+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 6946351 2022-07-24 15:38:51 2022-07-24 15:39:25 2022-07-24 16:35:11 0:55:46 0:46:53 0:08:53 smithi main ubuntu 20.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} 2
fail 6946352 2022-07-24 15:38:53 2022-07-24 15:39:25 2022-07-24 16:17:13 0:37:48 0:28:10 0:09:38 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/no standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=19acc2f7e3edb197f028ffa801e28f62f3698c79 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'