Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi123.front.sepia.ceph.com smithi True True 2024-03-28 02:03:49.968707 mchangir centos 9 x86_64 /home/teuthworker/archive/mchangir-2024-03-27_03:45:23-fs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/7625294
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 7626308 2024-03-27 13:03:15 2024-03-28 01:38:00 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

hit max job timeout

fail 7626290 2024-03-27 12:59:57 2024-03-27 13:28:50 698 smithi main centos 9.stream fs:fscrypt:/fscrypt-protect/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} distro/{centos_latest} mount/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn} tasks/{0-client 1-tests/fscrypt-protect}} 3
Failure Reason:

Command failed (workunit test fs/fscrypt_protect.sh) on smithi188 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/fscrypt && cd -- /home/ubuntu/cephtest/mnt.1/fscrypt && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=fc9dad12a5ea2402733ec44a0a29aa37b4073e37 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.1/qa/workunits/fs/fscrypt_protect.sh kclient'

fail 7625890 2024-03-27 08:06:20 2024-03-27 08:14:36 2024-03-27 08:43:23 0:28:47 0:19:05 0:09:42 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

"2024-03-27T08:41:04.783057+0000 mon.a (mon.0) 427 : cluster [ERR] Health check failed: 1 scrub errors (OSD_SCRUB_ERRORS)" in cluster log

fail 7625867 2024-03-27 07:38:34 2024-03-27 11:37:25 2024-03-27 12:01:28 0:24:03 0:17:11 0:06:52 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason:

No module named 'tasks.ceph'

fail 7625761 2024-03-27 05:31:57 2024-03-27 09:48:27 2024-03-27 11:29:43 1:41:16 1:29:28 0:11:48 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7625720 2024-03-27 05:31:09 2024-03-27 08:45:48 2024-03-27 09:45:31 0:59:43 0:47:51 0:11:52 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 7625677 2024-03-27 05:30:15 2024-03-27 07:28:00 2024-03-27 08:15:10 0:47:10 0:36:16 0:10:54 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/direct_io}} 3
fail 7625622 2024-03-27 05:29:06 2024-03-27 06:41:11 2024-03-27 07:16:07 0:34:56 0:23:36 0:11:20 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/forward-scrub} 2
Failure Reason:

"2024-03-27T07:09:08.654178+0000 mds.c (mds.0) 1 : cluster [ERR] dir 0x10000000000 object missing on disk; some files may be lost (/dir)" in cluster log

fail 7625562 2024-03-27 05:27:49 2024-03-27 05:33:25 2024-03-27 06:35:41 1:02:16 0:51:08 0:11:08 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
Failure Reason:

"2024-03-27T06:02:14.435084+0000 mon.smithi033 (mon.0) 643 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

running 7625294 2024-03-27 03:46:58 2024-03-28 02:02:39 2024-03-28 09:45:34 7:44:30 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/misc}} 3
pass 7625282 2024-03-27 03:46:46 2024-03-28 01:35:28 2024-03-28 02:03:44 0:28:16 0:15:19 0:12:57 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/openfiletable} 2
pass 7625185 2024-03-27 03:13:59 2024-03-27 04:16:40 2024-03-27 05:01:24 0:44:44 0:33:18 0:11:26 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7625069 2024-03-27 02:30:47 2024-03-27 02:31:03 2024-03-27 02:56:16 0:25:13 0:15:14 0:09:59 smithi main centos 9.stream fs:fscrypt:/fscrypt-protect/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn} tasks/{0-client 1-tests/fscrypt-protect}} 3
pass 7624316 2024-03-26 21:18:48 2024-03-27 12:08:30 2024-03-27 12:59:39 0:51:09 0:41:12 0:09:57 smithi main centos 9.stream rbd/singleton/{all/rbd_mirror conf/{disable-pool-app} objectstore/bluestore-hybrid openstack supported-random-distro$/{centos_latest}} 1
pass 7624253 2024-03-26 21:17:45 2024-03-27 05:00:58 2024-03-27 05:34:33 0:33:35 0:24:08 0:09:27 smithi main centos 9.stream rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec extra-conf/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} workloads/python_api_tests_with_defaults} 3
pass 7624235 2024-03-26 21:17:27 2024-03-27 03:02:50 2024-03-27 04:18:06 1:15:16 1:04:55 0:10:21 smithi main centos 9.stream rbd/maintenance/{base/install clusters/{fixed-3 openstack} conf/{disable-pool-app} objectstore/bluestore-low-osd-mem-target qemu/xfstests supported-random-distro$/{centos_latest} workloads/rebuild_object_map} 3
fail 7624173 2024-03-26 20:33:58 2024-03-26 22:19:45 2043 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

"2024-03-26T21:58:47.110603+0000 mon.a (mon.0) 196 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 7624118 2024-03-26 20:32:45 2024-03-26 20:55:05 2024-03-26 21:26:04 0:30:59 0:19:04 0:11:55 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} 2
Failure Reason:

"2024-03-26T21:21:06.091537+0000 mon.a (mon.0) 289 : cluster [WRN] Health check failed: Degraded data redundancy: 7/21 objects degraded (33.333%), 3 pgs degraded (PG_DEGRADED)" in cluster log

pass 7623825 2024-03-26 17:12:05 2024-03-26 18:33:10 2024-03-26 19:18:26 0:45:16 0:32:12 0:13:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/fsync-tester}} 3
pass 7623773 2024-03-26 17:10:53 2024-03-26 17:42:20 2024-03-26 18:33:08 0:50:48 0:37:47 0:13:01 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3