Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7566782 2024-02-19 19:28:37 2024-02-19 21:30:34 2024-02-19 22:14:15 0:43:41 0:33:10 0:10:31 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7566784 2024-02-19 19:28:38 2024-02-19 21:30:35 2024-02-19 22:27:38 0:57:03 0:48:08 0:08:55 smithi main centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/centos_latest} 2
Failure Reason:

"2024-02-19T21:58:47.903693+0000 mon.smithi099 (mon.0) 641 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7566786 2024-02-19 19:28:38 2024-02-19 21:30:36 2024-02-19 22:09:42 0:39:06 0:31:51 0:07:15 smithi main rhel 8.6 fs/full/{begin clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mgr-osd-full} 1
Failure Reason:

Command failed (workunit test fs/full/subvolume_snapshot_rm.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=81bd20d634209c7cb82c18be12b4b5a05643ebf1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_snapshot_rm.sh'

pass 7566788 2024-02-19 19:28:39 2024-02-19 21:33:57 2024-02-19 23:51:17 2:17:20 2:08:00 0:09:20 smithi main centos 8.stream fs/valgrind/{begin centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
fail 7566790 2024-02-19 19:28:40 2024-02-19 21:35:28 2024-02-19 22:12:11 0:36:43 0:26:27 0:10:16 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi086 with status 5: 'sudo systemctl stop ceph-867b6b52-cf71-11ee-95bb-87774f69a715@mon.smithi086'

fail 7566793 2024-02-19 19:28:41 2024-02-19 22:04:12 861 smithi main rhel 8.6 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
Failure Reason:

Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

fail 7566795 2024-02-19 19:28:42 2024-02-19 21:43:31 2024-02-19 22:12:36 0:29:05 0:14:52 0:14:13 smithi main ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 4
Failure Reason:

Command failed on smithi146 with status 8: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7566797 2024-02-19 19:28:43 2024-02-19 21:46:02 2024-02-19 22:27:18 0:41:16 0:30:42 0:10:34 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-810f454c-cf73-11ee-95bb-87774f69a715@mon.smithi160'

fail 7566799 2024-02-19 19:28:43 2024-02-19 21:46:03 2024-02-19 22:29:13 0:43:10 0:33:44 0:09:26 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi189 with status 5: 'sudo systemctl stop ceph-ca40a3e6-cf73-11ee-95bb-87774f69a715@mon.smithi189'

fail 7566801 2024-02-19 19:28:44 2024-02-19 21:46:03 2024-02-19 22:11:57 0:25:54 0:15:16 0:10:38 smithi main ubuntu 20.04 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2
fail 7566803 2024-02-19 19:28:45 2024-02-19 21:46:04 2024-02-19 22:25:59 0:39:55 0:29:25 0:10:30 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi088 with status 5: 'sudo systemctl stop ceph-56739a54-cf73-11ee-95bb-87774f69a715@mon.smithi088'

fail 7566805 2024-02-19 19:28:46 2024-02-19 21:46:05 2024-02-19 22:24:19 0:38:14 0:26:15 0:11:59 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi184 with status 5: 'sudo systemctl stop ceph-36a8f1ec-cf73-11ee-95bb-87774f69a715@mon.smithi184'

fail 7566807 2024-02-19 19:28:47 2024-02-19 21:49:17 2024-02-19 22:18:23 0:29:06 0:14:57 0:14:09 smithi main ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 5
Failure Reason:

Command failed on smithi069 with status 8: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7566808 2024-02-19 19:28:47 2024-02-19 21:52:18 2024-02-19 22:34:47 0:42:29 0:30:06 0:12:23 smithi main centos 8.stream fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi125 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=81bd20d634209c7cb82c18be12b4b5a05643ebf1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

fail 7566809 2024-02-19 19:28:48 2024-02-19 21:53:38 2024-02-19 22:30:58 0:37:20 0:26:39 0:10:41 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi097 with status 5: 'sudo systemctl stop ceph-f6020aec-cf73-11ee-95bb-87774f69a715@mon.smithi097'

fail 7566810 2024-02-19 19:28:49 2024-02-19 21:53:49 2024-02-19 22:30:58 0:37:09 0:26:24 0:10:45 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi144 with status 5: 'sudo systemctl stop ceph-2db841b8-cf74-11ee-95bb-87774f69a715@mon.smithi144'

fail 7566811 2024-02-19 19:28:50 2024-02-19 21:55:49 2024-02-19 23:03:31 1:07:42 0:50:53 0:16:49 smithi main ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/3 scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi134 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=81bd20d634209c7cb82c18be12b4b5a05643ebf1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

fail 7566812 2024-02-19 19:28:50 2024-02-19 22:01:20 2024-02-19 22:43:19 0:41:59 0:29:43 0:12:16 smithi main ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)