Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7238218 2023-04-11 13:50:14 2023-04-11 13:50:59 2023-04-11 14:16:30 0:25:31 smithi main centos 8.stream fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

machine smithi154.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

dead 7238219 2023-04-11 13:50:15 2023-04-11 13:50:59 2023-04-11 14:10:29 0:19:30 smithi main ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

fail 7238220 2023-04-11 13:50:16 2023-04-11 13:51:00 2023-04-11 14:16:33 0:25:33 smithi main rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
Failure Reason:

machine smithi110.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238221 2023-04-11 13:50:17 2023-04-11 13:51:00 2023-04-11 14:16:45 0:25:45 smithi main rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
Failure Reason:

machine smithi131.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238222 2023-04-11 13:50:17 2023-04-11 13:51:00 2023-04-11 14:30:00 0:39:00 0:16:25 0:22:35 smithi main ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

SSH connection to smithi148 was lost: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=681dd7280d36cda3abcedb40f86f672671cd2e37 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/iozone.sh'

dead 7238223 2023-04-11 13:50:18 2023-04-11 13:51:01 2023-04-11 14:07:00 0:15:59 smithi main centos 8.stream fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/backtrace} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

fail 7238224 2023-04-11 13:50:19 2023-04-11 13:51:01 2023-04-11 14:17:11 0:26:10 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

machine smithi019.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238225 2023-04-11 13:50:19 2023-04-11 13:51:01 2023-04-11 14:17:45 0:26:44 smithi main ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} 2
Failure Reason:

machine smithi049.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

dead 7238226 2023-04-11 13:50:20 2023-04-11 13:51:02 2023-04-11 14:06:07 0:15:05 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 7238227 2023-04-11 13:50:21 2023-04-11 13:51:02 2023-04-11 14:27:59 0:36:57 0:15:58 0:20:59 smithi main ubuntu 20.04 fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
fail 7238228 2023-04-11 13:50:21 2023-04-11 13:51:02 2023-04-11 14:11:48 0:20:46 smithi main ubuntu 20.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
Failure Reason:

machine smithi033.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238229 2023-04-11 13:50:22 2023-04-11 13:52:53 2023-04-11 14:15:18 0:22:25 smithi main ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

machine smithi037.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238230 2023-04-11 13:50:23 2023-04-11 13:52:54 2023-04-11 14:11:33 0:18:39 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

machine smithi006.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238231 2023-04-11 13:50:23 2023-04-11 13:53:04 2023-04-11 14:10:57 0:17:53 smithi main centos 8.stream fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} 2
Failure Reason:

machine smithi042.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

fail 7238232 2023-04-11 13:50:24 2023-04-11 13:53:04 2023-04-11 14:11:51 0:18:47 smithi main rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason:

machine smithi149.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology
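A dump like the one above can be tallied mechanically. Below is a minimal sketch, assuming the pulpito-style plain-text layout shown here, where each job row begins with its status token (`pass`, `fail`, or `dead`); the `summarize` helper and the sample string are illustrative, not part of teuthology itself.

```python
from collections import Counter

# Assumption: job rows start with one of these status tokens, as in the
# plain-text run dump above; "Failure Reason" lines and blanks are skipped.
STATUSES = {"pass", "fail", "dead"}

def summarize(dump: str) -> Counter:
    """Count job outcomes in a pulpito-style plain-text run dump."""
    counts = Counter()
    for line in dump.splitlines():
        tokens = line.split(maxsplit=1)
        if tokens and tokens[0] in STATUSES:
            counts[tokens[0]] += 1
    return counts

# Abbreviated sample rows in the same shape as the dump above.
sample = """\
fail 7238218 2023-04-11 13:50:14 smithi main centos 8.stream fs/thrash 2
Failure Reason:

machine smithi154.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_rishabh@teuthology

dead 7238219 2023-04-11 13:50:15 smithi main ubuntu 20.04 fs/multiclient 5
pass 7238227 2023-04-11 13:50:21 smithi main ubuntu 20.04 fs/libcephfs 2
"""
print(summarize(sample))
```

Applied to the full run above, this kind of tally makes the dominant failure mode obvious: most `fail` jobs share the same "machine ... is locked by scheduled_pdonnell@teuthology" reason, i.e. a lock conflict rather than a test failure.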