Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7418412 2023-10-09 11:13:50 2023-10-09 11:14:33 2023-10-09 12:12:11 0:57:38 0:43:51 0:13:47 smithi main rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} 3
pass 7418413 2023-10-09 11:13:51 2023-10-09 11:14:33 2023-10-09 12:26:46 1:12:13 0:58:05 0:14:08 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/admin} 2
pass 7418414 2023-10-09 11:13:51 2023-10-09 11:14:34 2023-10-09 12:55:57 1:41:23 1:28:40 0:12:43 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
fail 7418415 2023-10-09 11:13:52 2023-10-09 11:14:34 2023-10-09 12:08:02 0:53:28 0:40:42 0:12:46 smithi main centos 8.stream fs/full/{begin/{0-install 1-ceph 2-logrotate} clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mgr-osd-full} 1
Failure Reason:

Command failed (workunit test fs/full/subvolume_snapshot_rm.sh) on smithi022 with status 110: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c68c3f491122d7c1646aa58425d37dc1a8b276ba TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_snapshot_rm.sh'

pass 7418416 2023-10-09 11:13:53 2023-10-09 11:14:34 2023-10-09 11:58:59 0:44:25 0:33:02 0:11:23 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} 2
fail 7418417 2023-10-09 11:13:54 2023-10-09 11:14:35 2023-10-09 13:33:38 2:19:03 2:09:45 0:09:18 smithi main rhel 8.4 fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health} supported-random-distros$/{rhel_8} tasks/mirror} 1
Failure Reason:

"1696852575.5977018 mon.a (mon.0) 1992 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7418418 2023-10-09 11:13:55 2023-10-09 11:14:35 2023-10-09 12:04:16 0:49:41 0:35:06 0:14:35 smithi main centos 8.stream fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{ignorelist_health} supported-random-distro$/{centos_8} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7418419 2023-10-09 11:13:55 2023-10-09 11:14:35 2023-10-09 12:32:49 1:18:14 1:00:58 0:17:16 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
fail 7418420 2023-10-09 11:13:56 2023-10-09 11:14:36 2023-10-09 12:56:52 1:42:16 1:30:22 0:11:54 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/ignorelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 7418421 2023-10-09 11:13:57 2023-10-09 11:14:36 2023-10-09 11:47:32 0:32:56 0:17:23 0:15:33 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 4
Failure Reason:

Command failed on smithi123 with status 8: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7418422 2023-10-09 11:13:58 2023-10-09 11:14:36 2023-10-09 11:36:38 0:22:02 0:08:55 0:13:07 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=08cac0187919fc3a247efdf445f652e2fb30a647

fail 7418423 2023-10-09 11:13:59 2023-10-09 11:14:37 2023-10-09 11:48:27 0:33:50 0:23:19 0:10:31 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-full} 2
Failure Reason:

"1696851350.512159 osd.6 (osd.6) 3 : cluster [WRN] OSD bench result of 219533.586260 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.6. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]." in cluster log

pass 7418424 2023-10-09 11:14:00 2023-10-09 11:14:37 2023-10-09 12:13:53 0:59:16 0:46:54 0:12:22 smithi main rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
fail 7418425 2023-10-09 11:14:00 2023-10-09 11:14:37 2023-10-09 11:49:32 0:34:55 0:24:09 0:10:46 smithi main rhel 8.4 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_volume_rm (tasks.cephfs.test_volumes.TestVolumes)

fail 7418426 2023-10-09 11:14:01 2023-10-09 11:14:38 2023-10-09 13:19:03 2:04:25 1:51:55 0:12:30 smithi main rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason:

"1696856325.3261294 mon.a (mon.0) 1797 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7418427 2023-10-09 11:14:02 2023-10-09 11:14:38 2023-10-09 11:48:42 0:34:04 0:20:10 0:13:54 smithi main ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi008 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c68c3f491122d7c1646aa58425d37dc1a8b276ba TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 7418428 2023-10-09 11:14:03 2023-10-09 11:14:38 2023-10-09 12:02:53 0:48:15 0:35:02 0:13:13 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

fail 7418429 2023-10-09 11:14:04 2023-10-09 11:14:39 2023-10-09 11:38:59 0:24:20 0:09:09 0:15:11 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=08cac0187919fc3a247efdf445f652e2fb30a647

fail 7418430 2023-10-09 11:14:05 2023-10-09 11:14:39 2023-10-09 11:47:09 0:32:30 0:17:45 0:14:45 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 5
Failure Reason:

Command failed on smithi138 with status 8: 'TESTDIR=/home/ubuntu/cephtest bash -s'