Name: smithi059.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2022-09-27 02:28:44.727439
Locked By: scheduled_pdonnell@teuthology
OS Type: rhel
OS Version: 8.6
Arch: x86_64
Description: /home/teuthworker/archive/pdonnell-2022-09-27_02:27:48-fs-wip-pdonnell-testing-20220923.171109-distro-default-smithi/7044540
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7044540 2022-09-27 02:28:02 2022-09-27 02:28:44 2022-09-27 02:44:24 0:15:40 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iogen}} 3
fail 7044530 2022-09-26 23:43:08 2022-09-26 23:43:51 2022-09-27 00:19:21 0:35:30 0:25:10 0:10:20 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds
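
The 'check osd count' failure above is a polling check giving up: 90 tries over a 900-second budget implies roughly a 10-second interval per attempt. A minimal sketch of that kind of retry loop, assuming a fixed interval and a caller-supplied counter (the names wait_for_osd_count and get_osd_count are hypothetical and illustrative only, not the actual teuthology/rook task code):

    import time

    def wait_for_osd_count(get_osd_count, expected, tries=90, interval=10):
        """Poll until the cluster reports at least `expected` OSDs up.

        90 tries at a 10-second interval matches the 900-second budget in
        the failure message above (illustrative values, not the real task).
        """
        for _ in range(tries):
            if get_osd_count() >= expected:  # e.g. parsed from `ceph osd stat`
                return
            time.sleep(interval)
        raise RuntimeError(
            f"'check osd count' reached maximum tries ({tries}) "
            f"after waiting for {tries * interval} seconds"
        )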

fail 7044510 2022-09-26 19:11:59 2022-09-26 19:12:19 2022-09-26 20:53:58 1:41:39 1:31:38 0:10:01 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7044433 2022-09-26 16:35:59 2022-09-26 16:36:47 2022-09-26 17:17:01 0:40:14 0:31:45 0:08:29 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=21fa21e37e88ed873d641d9e5c90110b817d733d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7044364 2022-09-26 06:10:38 2022-09-26 08:56:07 2022-09-26 11:15:14 2:19:07 2:09:07 0:10:00 smithi main centos 8.stream rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-comp-zstd qemu/xfstests supported-random-distro$/{centos_8} workloads/dynamic_features_no_cache} 3
pass 7044353 2022-09-26 06:10:25 2022-09-26 08:40:51 2022-09-26 08:59:13 0:18:22 0:11:02 0:07:20 smithi main rhel 8.4 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/filestore-xfs supported-random-distro$/{rhel_8} thrashers/cache thrashosds-health workloads/rbd_nbd} 3
pass 7044338 2022-09-26 06:10:09 2022-09-26 08:20:54 2022-09-26 08:40:45 0:19:51 0:12:08 0:07:43 smithi main ubuntu 20.04 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rbd_fsx_nbd} 3
pass 7044180 2022-09-26 02:04:44 2022-09-26 04:51:52 2022-09-26 08:20:46 3:28:54 3:20:48 0:08:06 smithi main rhel 8.4 rbd/encryption/{cache/none clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-stupid pool/none supported-random-distro$/{rhel_8} workloads/qemu_xfstests_luks1} 3
pass 7044124 2022-09-26 02:03:39 2022-09-26 03:26:33 2022-09-26 04:53:38 1:27:05 1:21:24 0:05:41 smithi main rhel 8.4 rbd/encryption/{cache/writearound clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/qemu_xfstests_luks2} 3
pass 7044103 2022-09-26 02:03:15 2022-09-26 02:59:22 2022-09-26 03:26:27 0:27:05 0:18:45 0:08:20 smithi main centos 8.stream rbd/singleton-bluestore/{all/issue-20295 objectstore/bluestore-bitmap openstack supported-random-distro$/{centos_8}} 4
pass 7044043 2022-09-26 02:02:05 2022-09-26 02:02:41 2022-09-26 02:59:51 0:57:10 0:51:11 0:05:59 smithi main ubuntu 20.04 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-snappy policy/none rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} 2
pass 7044017 2022-09-25 13:21:41 2022-09-25 14:57:22 2022-09-25 16:14:59 1:17:37 0:59:03 0:18:34 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} 3
fail 7043999 2022-09-25 13:21:18 2022-09-25 14:14:09 2022-09-25 14:58:00 0:43:51 0:35:21 0:08:30 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5cac8001082f21fde5850fe50ea862c12a869554 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7043964 2022-09-25 13:20:36 2022-09-25 13:22:07 2022-09-25 14:17:00 0:54:53 0:40:50 0:14:03 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} 2
Failure Reason:

"1664114889.140172 mon.a (mon.0) 2436 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7043870 2022-09-25 07:09:52 2022-09-25 12:57:31 2022-09-25 13:21:13 0:23:42 0:13:57 0:09:45 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7043845 2022-09-25 07:09:34 2022-09-25 12:38:15 2022-09-25 12:57:41 0:19:26 0:13:36 0:05:50 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/off mon_election/connectivity task/test_adoption} 1
pass 7043785 2022-09-25 07:08:48 2022-09-25 11:54:16 2022-09-25 12:39:02 0:44:46 0:38:31 0:06:15 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7043754 2022-09-25 07:08:25 2022-09-25 11:30:20 2022-09-25 11:54:10 0:23:50 0:10:51 0:12:59 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7043717 2022-09-25 07:07:55 2022-09-25 11:08:02 2022-09-25 11:30:44 0:22:42 0:16:42 0:06:00 smithi main rhel 8.4 orch/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/off mon_election/connectivity task/test_orch_cli} 1
pass 7043684 2022-09-25 07:07:17 2022-09-25 10:45:14 2022-09-25 11:08:00 0:22:46 0:14:15 0:08:31 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-add} 2