Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6889278 2022-06-21 11:47:48 2022-06-21 12:15:38 2022-06-21 13:22:48 1:07:10 0:55:07 0:12:03 smithi main rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason:

"2022-06-21T12:43:23.244046+0000 mon.a (mon.0) 488 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 6889279 2022-06-21 11:47:49 2022-06-21 12:15:38 2022-06-21 12:49:55 0:34:17 0:25:46 0:08:31 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/no standby-replay subvolume/{with-quota} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi062 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6436acc4b51b52635f8fa0e56cd79ba66c028d81 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
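
A rough sketch for reproducing this failure outside teuthology; the mount point /mnt/cephfs and the checkout path ~/ceph are assumptions (the same pjd.sh failure recurs in jobs 6889282 and 6889287 below):

    # Run the pjd filesystem-semantics workunit against a mounted CephFS,
    # mirroring the failing invocation above (including its 6h timeout).
    mkdir -p /mnt/cephfs/client.0/tmp && cd /mnt/cephfs/client.0/tmp
    timeout 6h ~/ceph/qa/workunits/suites/pjd.sh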

pass 6889280 2022-06-21 11:47:50 2022-06-21 12:15:39 2022-06-21 15:15:20 2:59:41 2:49:00 0:10:41 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
pass 6889281 2022-06-21 11:47:51 2022-06-21 12:15:39 2022-06-21 14:56:09 2:40:30 2:28:49 0:11:41 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/3 scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/dbench}} 3
fail 6889282 2022-06-21 11:47:52 2022-06-21 12:15:40 2022-06-21 12:46:53 0:31:13 0:25:28 0:05:45 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/3 scrub/no standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6436acc4b51b52635f8fa0e56cd79ba66c028d81 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

dead 6889283 2022-06-21 11:47:52 2022-06-21 12:15:40 2022-06-21 12:38:54 0:23:14 0:15:39 0:07:35 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

{'smithi003.front.sepia.ceph.com': {'changed': False, 'msg': 'Failed to connect to the host via ssh: ssh: connect to host smithi003.front.sepia.ceph.com port 22: No route to host', 'unreachable': True},
 'smithi121.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'module_stderr': "Warning: Permanently added 'smithi121.front.sepia.ceph.com,172.21.15.121' (ECDSA) to the list of known hosts.\r\nControlSocket /home/teuthworker/.ansible/cp/a7df4ec898 already exists, disabling multiplexing\r\nConnection to smithi121.front.sepia.ceph.com closed.\r\n", 'module_stdout': "/usr/bin/python: can't open file '/home/ubuntu/.ansible/tmp/ansible-tmp-1655814164.51473-28061-57049058558779/AnsiballZ_apt.py': [Errno 2] No such file or directory\r\n", 'msg': 'MODULE FAILURE\nSee stdout/stderr for the exact error', 'rc': 2}}
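
A quick reachability check before rescheduling, assuming SSH access to the sepia lab network (the timeout value is arbitrary):

    # Confirm whether the dead node is reachable over SSH.
    ssh -o ConnectTimeout=5 smithi003.front.sepia.ceph.com true \
        || echo "smithi003 still unreachable"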

fail 6889284 2022-06-21 11:47:53 2022-06-21 12:15:41 2022-06-21 12:22:53 0:07:12 smithi main ubuntu 20.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Command failed on smithi003 with status 2: 'sudo dpkg -i /tmp/linux-image.deb'
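
A hedged follow-up sketch for a dpkg failure like this, to be run on the affected host; whether the .deb file was actually present or intact is unknown here:

    # Verify the kernel package file is a valid Debian archive.
    dpkg-deb --info /tmp/linux-image.deb
    # Retry the install, then resolve any missing dependencies.
    sudo dpkg -i /tmp/linux-image.deb || sudo apt-get -f install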

pass 6889285 2022-06-21 11:47:54 2022-06-21 12:15:41 2022-06-21 13:27:02 1:11:21 0:59:38 0:11:43 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/ffsb}} 3
fail 6889286 2022-06-21 11:47:55 2022-06-21 12:15:42 2022-06-21 16:06:56 3:51:14 3:41:46 0:09:28 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"2022-06-21T13:04:07.739192+0000 mds.e (mds.0) 24 : cluster [WRN] client.4780 isn't responding to mclientcaps(revoke), ino 0x1000000408a pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 300.127744 seconds ago" in cluster log

fail 6889287 2022-06-21 11:47:56 2022-06-21 12:15:42 2022-06-21 12:58:33 0:42:51 0:32:25 0:10:26 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi001 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6436acc4b51b52635f8fa0e56cd79ba66c028d81 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 6889288 2022-06-21 11:47:57 2022-06-21 12:15:42 2022-06-21 16:09:02 3:53:20 3:42:46 0:10:34 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"2022-06-21T13:06:15.338717+0000 mds.c (mds.1) 1 : cluster [WRN] client.4671 isn't responding to mclientcaps(revoke), ino 0x20000003716 pending pAsLsXsFsc issued pAsLsXsFscb, sent 300.005403 seconds ago" in cluster log

fail 6889289 2022-06-21 11:47:58 2022-06-21 12:15:43 2022-06-21 14:35:43 2:20:00 2:12:28 0:07:32 smithi main rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/3 scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds
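
A sketch for checking whether the forward scrub was still progressing when the thrasher gave up; the filesystem name cephfs and rank 0 are assumptions:

    # Report the status of any ongoing scrub on the given MDS rank.
    ceph tell mds.cephfs:0 scrub status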