Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi150.front.sepia.ceph.com smithi True True 2024-05-04 10:36:44.372711 scheduled_yuriw@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/yuriw-2024-05-02_19:07:25-rados-wip-yuri4-testing-2024-04-29-0642-distro-default-smithi/7686179
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7690753 2024-05-04 04:51:24 2024-05-04 09:37:57 2024-05-04 10:32:32 0:54:35 0:42:57 0:11:38 smithi main centos 9.stream fs/shell/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/centos_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/cephfs-shell} 2
fail 7690713 2024-05-04 04:50:37 2024-05-04 09:13:43 2024-05-04 09:33:42 0:19:59 0:07:57 0:12:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/pjd}} 3
Failure Reason:

Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:25eb20a061356442d4a1c711818ce2e5848c382d pull'

pass 7690664 2024-05-04 04:49:34 2024-05-04 08:35:52 2024-05-04 09:14:18 0:38:26 0:22:47 0:15:39 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 7690624 2024-05-04 04:48:44 2024-05-04 08:09:36 2024-05-04 08:24:46 0:15:10 0:06:05 0:09:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi112 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:25eb20a061356442d4a1c711818ce2e5848c382d pull'

pass 7690545 2024-05-04 04:47:05 2024-05-04 07:17:41 2024-05-04 08:09:26 0:51:45 0:39:22 0:12:23 smithi main centos 9.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} 2
fail 7690522 2024-05-04 04:46:36 2024-05-04 07:02:09 2024-05-04 07:15:55 0:13:46 0:06:23 0:07:23 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/blogbench}} 3
Failure Reason:

Command failed on smithi107 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:25eb20a061356442d4a1c711818ce2e5848c382d pull'

fail 7690490 2024-05-04 04:45:56 2024-05-04 06:38:42 2024-05-04 06:52:06 0:13:24 0:06:14 0:07:10 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:25eb20a061356442d4a1c711818ce2e5848c382d pull'

pass 7690323 2024-05-04 01:44:50 2024-05-04 05:44:07 2024-05-04 06:38:32 0:54:25 0:45:06 0:09:19 smithi main ubuntu 22.04 rgw/verify/{0-install accounts$/{main} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
fail 7688642 2024-05-03 22:53:04 2024-05-04 03:10:02 2024-05-04 05:12:00 2:01:58 1:49:32 0:12:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/fs/misc}} 3
Failure Reason:

"2024-05-04T04:09:01.507653+0000 mds.b (mds.0) 105 : cluster [WRN] Scrub error on inode 0x1000000c5ee (/volumes/qa/sv_1/1e654b01-eda5-451a-aee9-41b21838934d/client.0/tmp/payload.1/multiple_rsync_payload.205388/kernel) see mds.b log and `damage ls` output for details" in cluster log

pass 7688594 2024-05-03 22:52:11 2024-05-04 02:31:07 2024-05-04 03:12:37 0:41:30 0:33:33 0:07:57 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/fsync-tester}} 3
pass 7688571 2024-05-03 22:51:45 2024-05-04 02:07:24 2024-05-04 02:31:01 0:23:37 0:13:13 0:10:24 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-2-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/ior-shared-file} 4
fail 7688474 2024-05-03 22:49:54 2024-05-04 00:34:10 2024-05-04 01:55:09 1:20:59 1:13:20 0:07:39 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error quiescing set '4bcf6cd9': 110 (ETIMEDOUT)

fail 7688416 2024-05-03 22:48:48 2024-05-03 23:13:05 2024-05-04 00:30:18 1:17:13 1:07:32 0:09:41 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7687635 2024-05-03 14:03:24 2024-05-04 05:12:48 2024-05-04 05:44:29 0:31:41 0:22:43 0:08:58 smithi main ubuntu 22.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
pass 7687568 2024-05-03 14:01:58 2024-05-03 22:45:42 2024-05-03 23:14:53 0:29:11 0:18:02 0:11:09 smithi main centos 9.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
fail 7687534 2024-05-03 14:00:37 2024-05-03 21:39:09 2024-05-03 22:31:29 0:52:20 0:41:46 0:10:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/suites/fsstress}} 3
Failure Reason:

Command failed on smithi148 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

pass 7687517 2024-05-03 14:00:15 2024-05-03 20:57:56 2024-05-03 21:39:49 0:41:53 0:30:01 0:11:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/direct_io}} 3
fail 7687477 2024-05-03 13:59:26 2024-05-03 19:48:41 2024-05-03 20:56:20 1:07:39 0:54:15 0:13:24 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/iogen}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7687467 2024-05-03 13:59:14 2024-05-03 19:25:04 2024-05-03 19:48:31 0:23:27 0:13:03 0:10:24 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_trivial_sync}} 2
pass 7687421 2024-05-03 13:58:17 2024-05-03 17:52:31 2024-05-03 19:15:53 1:23:22 1:13:16 0:10:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/postgres}} 3