Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 7721958 2024-05-23 05:11:19 2024-05-23 05:11:29 2024-05-23 05:12:43 0:01:14 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host overrides/{ignorelist_health pg_health}} 2
Failure Reason: Error reimaging machines: Failed to power on smithi174

fail 7721959 2024-05-23 05:11:20 2024-05-23 05:11:40 2024-05-23 05:38:54 0:27:14 0:15:09 0:12:05 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/client} 2
Failure Reason: Command failed (workunit test client/test.sh) on smithi016 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e115ef42ee752865db609106056952d21b0ec2cf TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/client/test.sh'

fail 7721960 2024-05-23 05:11:21 2024-05-23 05:11:40 2024-05-23 05:51:07 0:39:27 0:29:46 0:09:41 smithi main centos 9.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate 3-modules} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/{ignorelist_health pg_health} tasks/mirror}} 1
Failure Reason: Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

pass 7721961 2024-05-23 05:11:22 2024-05-23 05:12:10 2024-05-23 05:38:23 0:26:13 0:15:11 0:11:02 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-3-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/ior-shared-file} 5
pass 7721962 2024-05-23 05:11:23 2024-05-23 05:12:51 2024-05-23 05:44:06 0:31:15 0:18:16 0:12:59 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 7721963 2024-05-23 05:11:24 2024-05-23 05:13:11 2024-05-23 06:15:15 1:02:04 0:50:53 0:11:11 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/fs/norstats}} 3
dead 7721964 2024-05-23 05:11:25 2024-05-23 05:13:32 2024-05-23 05:15:06 0:01:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/blogbench}} 3
Failure Reason: Error reimaging machines: Failed to power on smithi144

fail 7721965 2024-05-23 05:11:27 2024-05-23 05:14:02 2024-05-23 06:23:54 1:09:52 0:59:42 0:10:10 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/ffsb}} 3
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi096 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e115ef42ee752865db609106056952d21b0ec2cf TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 7721966 2024-05-23 05:11:28 2024-05-23 05:14:33 2024-05-23 06:26:10 1:11:37 1:01:00 0:10:37 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/fsx}} 3
fail 7721967 2024-05-23 05:11:29 2024-05-23 05:15:03 2024-05-23 06:15:29 1:00:26 0:48:34 0:11:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/blogbench}} 3
Failure Reason: error during quiesce thrashing: Error quiescing set '9aa2921f': 110 (ETIMEDOUT)

pass 7721968 2024-05-23 05:11:30 2024-05-23 05:16:04 2024-05-23 05:52:03 0:35:59 0:22:59 0:13:00 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/forward-scrub} 2
dead 7721969 2024-05-23 05:11:31 2024-05-23 05:18:05 2024-05-23 06:53:45 1:35:40 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
fail 7721970 2024-05-23 05:11:32 2024-05-23 05:18:45 2024-05-23 06:20:10 1:01:25 0:52:23 0:09:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/kernel_untar_build}} 3
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}

pass 7721971 2024-05-23 05:11:33 2024-05-23 05:18:45 2024-05-23 06:17:04 0:58:19 0:48:27 0:09:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/ffsb}} 3
fail 7721972 2024-05-23 05:11:34 2024-05-23 05:19:36 2024-05-23 05:53:53 0:34:17 0:19:23 0:14:54 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/pool-perm} 2
Failure Reason: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

pass 7721973 2024-05-23 05:11:35 2024-05-23 05:22:57 2024-05-23 05:54:26 0:31:29 0:17:44 0:13:45 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/quota} 2
fail 7721974 2024-05-23 05:11:36 2024-05-23 05:23:17 2024-05-23 05:56:48 0:33:31 0:22:02 0:11:29 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/acls} 2
Failure Reason: Command failed on smithi029 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

dead 7721975 2024-05-23 05:11:37 2024-05-23 05:23:38 2024-05-23 06:53:48 1:30:10 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/admin} 2
pass 7721976 2024-05-23 05:11:38 2024-05-23 05:25:08 2024-05-23 05:54:16 0:29:08 0:19:53 0:09:15 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/3 tasks/fsstress validater/lockdep} 2
pass 7721977 2024-05-23 05:11:39 2024-05-23 05:25:19 2024-05-23 06:01:45 0:36:26 0:22:13 0:14:13 smithi main centos 8.stream fs/upgrade/upgraded_client/{bluestore-bitmap branch/pacific centos_8.stream clusters/{1-mds-1-client-micro} conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-install 1-mount/mount/fuse 2-clients/fuse-upgrade 3-workload/stress_tests/fsstress}} 2
dead 7721978 2024-05-23 05:11:40 2024-05-23 05:34:05 2024-05-23 06:54:11 1:20:06 smithi main ubuntu 22.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
dead 7721979 2024-05-23 05:11:42 2024-05-23 05:34:05 2024-05-23 05:35:29 0:01:24 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/cephfs-shell} 2
Failure Reason: Error reimaging machines: Failed to power on smithi031

dead 7721980 2024-05-23 05:11:43 2024-05-23 05:34:26 2024-05-23 06:53:28 1:19:02 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7721981 2024-05-23 05:11:44 2024-05-23 05:36:36 2024-05-23 06:41:49 1:05:13 0:54:32 0:10:41 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: "1716445200.0001853 mon.smithi069 (mon.0) 667 : cluster [ERR] fs cephfs is offline because no MDS is active for it." in cluster log

pass 7721982 2024-05-23 05:11:45 2024-05-23 05:36:37 2024-05-23 06:10:03 0:33:26 0:21:05 0:12:21 smithi main ubuntu 22.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_fsstress} 2
fail 7721983 2024-05-23 05:11:46 2024-05-23 05:38:17 2024-05-23 06:47:17 1:09:00 0:54:03 0:14:57 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: reached maximum tries (51) after waiting for 300 seconds