Name:         smithi167.front.sepia.ceph.com
Machine Type: smithi
Up:           True
Locked:       True
Locked Since: 2024-03-19 09:17:06.790197
Locked By:    scheduled_vshankar@teuthology
OS Type:      centos
OS Version:   9
Arch:         x86_64
Description:  /home/teuthworker/archive/vshankar-2024-03-19_04:32:52-fs-wip-vshankar-squid-testing-2024-03-19-0707-squid-testing-default-smithi/7610395
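
For reference, a node locked like this can be inspected or released from a teuthology checkout with the teuthology-lock CLI. A minimal sketch (flag spelling assumed from current teuthology; --owner must match the Locked By field above):

    # Show the lock record for the node (JSON output).
    teuthology-lock --list smithi167

    # Release the node once the scheduled job has finished.
    teuthology-lock --unlock --owner scheduled_vshankar@teuthology smithi167
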
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7610781 2024-03-19 08:31:30 2024-03-19 08:51:27 2024-03-19 09:16:15 0:24:48 0:11:36 0:13:12 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rbd_cli_import_export}} 3
pass 7610601 2024-03-19 04:58:13 2024-03-19 07:32:09 2024-03-19 08:07:36 0:35:27 0:24:49 0:10:38 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} 3
fail 7610564 2024-03-19 04:57:32 2024-03-19 06:44:15 2024-03-19 07:30:08 0:45:53 0:32:25 0:13:28 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

"2024-03-19T07:19:09.153970+0000 mds.f (mds.4) 1 : cluster [WRN] evicting unresponsive client smithi066:x (15333), after 300.476 seconds" in cluster log

running 7610395 2024-03-19 04:34:19 2024-03-19 09:16:26 2024-03-19 10:00:18 0:44:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
pass 7610360 2024-03-19 04:33:49 2024-03-19 08:06:04 2024-03-19 08:54:55 0:48:51 0:37:34 0:11:17 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
fail 7610295 2024-03-19 02:37:21 2024-03-19 02:50:42 2024-03-19 03:50:56 1:00:14 0:41:45 0:18:29 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

Command failed on smithi069 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 1.0 deep-scrub'

fail 7610244 2024-03-19 01:21:23 2024-03-19 04:23:29 2024-03-19 06:38:47 2:15:18 2:03:51 0:11:27 smithi main ubuntu 22.04 rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
Failure Reason:

valgrind error: Leak_PossiblyLost calloc calloc allocate_dtv
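
The calloc-through-allocate_dtv stack is the familiar glibc thread-TLS allocation that valgrind frequently reports as possibly lost. If triaged as a false positive, a suppression along these lines (valgrind suppression syntax; a sketch only, not the suppression file the run actually used) would hide it:

    # dtv.supp: Memcheck suppression for the glibc DTV allocation.
    {
       glibc_allocate_dtv_possible_leak
       Memcheck:Leak
       match-leak-kinds: possible
       fun:calloc
       ...
       fun:allocate_dtv
    }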

pass 7610198 2024-03-19 01:20:44 2024-03-19 03:54:51 2024-03-19 04:24:43 0:29:52 0:19:14 0:10:38 smithi main ubuntu 22.04 rgw/hadoop-s3a/{clusters/fixed-2 hadoop/default ignore-pg-availability overrides s3a-hadoop supported-random-distro$/{ubuntu_latest}} 2
fail 7609347 2024-03-18 21:09:12 2024-03-19 01:34:44 2024-03-19 02:47:49 1:13:05 0:55:44 0:17:21 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi069 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 38cbb5a4-e594-11ee-95c9-87774f69a715 -e sha1=b6ae24918b03b1b8a19c9263a50655bba80bb810 -- bash -c 'ceph orch ps'"

fail 7609297 2024-03-18 21:08:47 2024-03-19 01:27:15 776 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_cd_with_args (tasks.cephfs.test_cephfs_shell.TestCD)
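
Failures in this suite can usually be reproduced outside teuthology with the local test runner against a vstart cluster. A sketch following the Ceph developer-guide workflow (run from the ceph build directory; the exact invocation may differ by branch):

    # With a vstart cluster already running:
    python3 ../qa/tasks/vstart_runner.py tasks.cephfs.test_cephfs_shell.TestCD.test_cd_with_args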

fail 7609203 2024-03-18 21:07:25 2024-03-18 23:00:03 2024-03-19 00:59:27 1:59:24 1:16:06 0:43:18 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable malloc malloc strdup

fail 7609120 2024-03-18 21:06:00 2024-03-18 21:12:04 2024-03-18 22:59:08 1:47:04 1:23:59 0:23:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/iozone}} 3
Failure Reason:

"2024-03-18T22:34:46.851749+0000 mon.a (mon.0) 210 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

fail 7606339 2024-03-16 15:06:24 2024-03-18 19:37:40 2024-03-18 21:07:21 1:29:41 1:20:40 0:09:01 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}
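
Backtrace damage reported by scrub thrashing is normally inspected, repaired, and cleared through the MDS admin interface. A sketch using standard cephfs commands (the fs name "cephfs", rank 0, and the <damage_id> placeholder are assumptions):

    # List the damage-table entries recorded by rank 0.
    ceph tell mds.cephfs:0 damage ls

    # Re-run a repairing scrub over the tree, then drop resolved entries.
    ceph tell mds.cephfs:0 scrub start / recursive,repair
    ceph tell mds.cephfs:0 damage rm <damage_id>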

fail 7606254 2024-03-16 15:05:16 2024-03-18 18:14:05 2024-03-18 19:40:26 1:26:21 0:47:01 0:39:20 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v2 tasks/snapshots} 2
Failure Reason:

SELinux denials found on ubuntu@smithi050.front.sepia.ceph.com: ['type=AVC msg=audit(1710787891.031:200): avc: denied { checkpoint_restore } for pid=1206 comm="agetty" capability=40 scontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tcontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tclass=capability2 permissive=1']

pass 7606196 2024-03-16 15:04:28 2024-03-18 17:44:41 2024-03-18 18:15:06 0:30:25 0:17:11 0:13:14 smithi main ubuntu 22.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_trivial_sync}} 2
fail 7580434 2024-03-03 07:38:05 2024-03-03 07:42:11 2024-03-03 08:22:01 0:39:50 0:26:27 0:13:23 smithi main centos 9.stream fs:functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/quiesce} 2
Failure Reason:

Test failure: test_quiesce_path_release (tasks.cephfs.test_quiesce.TestQuiesce)

dead 7580338 2024-03-03 05:01:23 2024-03-03 05:24:51 2024-03-03 05:25:56 0:01:05 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rbd_python_api_tests}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi104

fail 7580185 2024-03-02 21:18:59 2024-03-02 21:53:17 2024-03-02 22:59:41 1:06:24 0:52:58 0:13:26 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7580123 2024-03-02 17:51:29 2024-03-03 08:31:35 2024-03-03 09:25:37 0:54:02 0:40:54 0:13:08 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7580035 2024-03-02 17:49:56 2024-03-03 07:14:23 2024-03-03 07:44:35 0:30:12 0:19:25 0:10:47 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/lockdep} 2