Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi188.front.sepia.ceph.com smithi True True 2024-05-09 08:01:05.444045 scheduled_teuthology@teuthology centos 9 x86_64 /home/teuthworker/archive/teuthology-2024-05-08_21:24:04-fs-squid-distro-default-smithi/7698663
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7699502 2024-05-09 03:10:33 2024-05-09 03:37:13 2024-05-09 04:16:57 0:39:44 0:24:43 0:15:01 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
pass 7699470 2024-05-09 03:09:59 2024-05-09 03:20:39 2024-05-09 03:40:10 0:19:31 0:10:39 0:08:52 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
fail 7699448 2024-05-09 02:35:26 2024-05-09 02:36:52 2024-05-09 03:07:07 0:30:15 0:19:51 0:10:24 smithi main centos 9.stream rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/cache-snaps-balanced} 4
Failure Reason: reached maximum tries (91) after waiting for 540 seconds
fail 7699437 2024-05-09 00:24:35 2024-05-09 06:02:12 2024-05-09 06:48:16 0:46:04 0:35:05 0:10:59 smithi main centos 9.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: "2024-05-09T06:40:00.000208+0000 mon.smithi160 (mon.0) 608 : cluster [WRN] fs cephfs is degraded" in cluster log
pass 7699131 2024-05-08 22:11:35 2024-05-09 05:25:10 2024-05-09 06:02:49 0:37:39 0:26:28 0:11:11 smithi main centos 9.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 7699085 2024-05-08 22:10:47 2024-05-09 04:38:12 2024-05-09 05:25:21 0:47:09 0:38:50 0:08:19 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/suites/blogbench}} 3
pass 7699059 2024-05-08 22:10:21 2024-05-09 04:16:48 2024-05-09 04:38:12 0:21:24 0:11:09 0:10:15 smithi main centos 9.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
pass 7698883 2024-05-08 22:07:15 2024-05-09 00:14:20 2024-05-09 02:37:39 2:23:19 2:12:50 0:10:29 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/snap-schedule} 2
running 7698663 2024-05-08 21:27:11 2024-05-09 07:59:55 2024-05-09 08:12:26 0:13:09 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
pass 7698588 2024-05-08 21:25:53 2024-05-09 06:56:43 2024-05-09 08:01:02 1:04:19 0:53:04 0:11:15 smithi main centos 9.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} kclient-overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health} tasks/kernel_cfuse_workunits_dbench_iozone} 2
fail 7698174 2024-05-08 19:26:46 2024-05-08 23:11:12 2024-05-09 00:07:00 0:55:48 0:48:06 0:07:42 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
pass 7698069 2024-05-08 19:24:42 2024-05-08 21:26:32 2024-05-08 23:12:07 1:45:35 1:31:54 0:13:41 smithi main rhel 8.6 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/fsstress validater/valgrind} 2
pass 7698027 2024-05-08 19:23:54 2024-05-08 20:38:59 2024-05-08 21:26:34 0:47:35 0:40:15 0:07:20 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
pass 7697965 2024-05-08 19:22:45 2024-05-08 19:38:31 2024-05-08 20:39:13 1:00:42 0:49:37 0:11:05 smithi main centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
pass 7697883 2024-05-08 17:44:36 2024-05-08 18:02:16 2024-05-08 18:42:54 0:40:38 0:23:08 0:17:30 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec s3tests-branch tasks/rgw_s3tests ubuntu_latest} 2
pass 7697810 2024-05-08 16:08:05 2024-05-08 16:11:51 2024-05-08 18:08:47 1:56:56 1:45:59 0:10:57 smithi main ubuntu 22.04 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-snappy 4-supported-random-distro$/{ubuntu_latest} 5-data-pool/replicated 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup conf/{disable-pool-app}} 3
pass 7697643 2024-05-08 15:03:41 2024-05-08 15:47:10 2024-05-08 16:12:38 0:25:28 0:15:23 0:10:05 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
pass 7697603 2024-05-08 15:02:53 2024-05-08 15:29:42 2024-05-08 15:47:19 0:17:37 0:07:56 0:09:41 smithi main centos 9.stream rados/objectstore/{backends/fusestore supported-random-distro$/{centos_latest}} 1
pass 7697529 2024-05-08 15:01:19 2024-05-08 15:02:31 2024-05-08 15:30:17 0:27:46 0:17:59 0:09:47 smithi main centos 9.stream rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{centos_latest}} 1
fail 7697484 2024-05-08 13:39:40 2024-05-08 14:13:42 2024-05-08 14:32:04 0:18:22 0:05:29 0:12:53 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} 2
Failure Reason: Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:7c8f650b36e258f639fa4a83becade57cbfd2009-aarch64 pull'