Name:         smithi018.front.sepia.ceph.com
Machine Type: smithi
Up:           True
Locked:       True
Locked Since: 2024-05-14 05:16:31.617373
Locked By:    scheduled_sjust@teuthology
OS Type:      centos
OS Version:   9
Arch:         x86_64
Description:  /home/teuthworker/archive/sjust-2024-05-14_04:46:37-crimson-rados:thrash-wip-sjust-crimson-testing-2024-05-13-distro-default-smithi/7705481

Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7705481 2024-05-14 04:48:10 2024-05-14 05:16:31 2024-05-14 14:58:08 9:43:21 smithi main centos 9.stream crimson-rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7705428 2024-05-14 00:33:50 2024-05-14 01:26:28 2024-05-14 04:04:59 2:38:31 2:24:28 0:14:03 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
pass 7705361 2024-05-14 00:31:23 2024-05-14 00:35:21 2024-05-14 01:27:32 0:52:11 0:39:27 0:12:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
pass 7705166 2024-05-13 22:08:40 2024-05-14 00:11:47 2024-05-14 00:38:30 0:26:43 0:17:55 0:08:48 smithi main rhel 8.6 orch/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
pass 7705135 2024-05-13 21:32:48 2024-05-13 23:43:29 2024-05-14 00:12:54 0:29:25 0:16:54 0:12:31 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} 4
pass 7705109 2024-05-13 21:32:22 2024-05-13 23:13:54 2024-05-13 23:44:38 0:30:44 0:20:01 0:10:43 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
pass 7705056 2024-05-13 21:11:01 2024-05-13 22:45:02 2024-05-13 23:14:25 0:29:23 0:18:10 0:11:13 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7705021 2024-05-13 21:10:25 2024-05-13 22:14:14 2024-05-13 22:45:05 0:30:51 0:17:15 0:13:36 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_rgw_multisite} 3
fail 7704925 2024-05-13 21:08:54 2024-05-13 21:31:20 2024-05-13 22:17:28 0:46:08 0:33:59 0:12:09 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds
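
This message is the signature of a bounded polling loop: the task repeatedly checks a condition (here, waiting for the freshly deployed nvmeof service to come up) and gives up once its attempt budget is spent. A minimal Python sketch of the pattern that produces the same message shape; the names (`poll_until`, `service_is_up`) are illustrative, not teuthology's actual API:

```python
import time

class MaxTriesError(Exception):
    """Raised when a bounded polling loop exhausts its attempts."""

def poll_until(check, tries=301, sleep=1.0):
    """Call check() up to `tries` times, sleeping `sleep` seconds between
    attempts; 301 tries at 1s spacing waits ~300 seconds, matching the
    failure above."""
    waited = 0.0
    for attempt in range(1, tries + 1):
        if check():
            return attempt
        if attempt < tries:  # no sleep after the final attempt
            time.sleep(sleep)
            waited += sleep
    raise MaxTriesError(
        f"reached maximum tries ({tries}) after waiting for {waited:.0f} seconds"
    )

# Hypothetical usage: poll_until(lambda: service_is_up("nvmeof"))
```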

fail 7704647 2024-05-13 07:43:27 2024-05-13 08:40:09 2024-05-13 09:29:41 0:49:32 0:39:30 0:10:02 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7704607 2024-05-13 07:42:32 2024-05-13 08:12:40 2024-05-13 08:40:41 0:28:01 0:18:02 0:09:59 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7704535 2024-05-13 05:54:37 2024-05-13 05:58:02 2024-05-13 07:42:34 1:44:32 1:34:43 0:09:49 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
pass 7704499 2024-05-13 00:24:53 2024-05-13 02:47:25 2024-05-13 05:07:20 2:19:55 2:06:40 0:13:15 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-quincy 7-final-workload mon_election/classic thrashosds-health ubuntu_20.04} 5
pass 7704393 2024-05-12 22:05:27 2024-05-13 14:56:02 2024-05-13 15:36:16 0:40:14 0:28:34 0:11:40 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
pass 7704324 2024-05-12 22:04:17 2024-05-13 14:24:21 2024-05-13 14:56:32 0:32:11 0:18:12 0:13:59 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
pass 7704241 2024-05-12 22:02:52 2024-05-13 13:43:43 2024-05-13 14:27:47 0:44:04 0:33:06 0:10:58 smithi main centos 8.stream rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
pass 7704182 2024-05-12 22:01:53 2024-05-13 13:08:26 2024-05-13 13:43:42 0:35:16 0:23:11 0:12:05 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/careful thrashosds-health workloads/ec-small-objects-overwrites} 2
fail 7704073 2024-05-12 21:27:27 2024-05-12 23:27:31 2024-05-13 00:27:14 0:59:43 0:48:38 0:11:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}
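
This failure means damage entries of type 'backtrace' appeared in an MDS rank's damage table while scrubs were being thrashed. On a live cluster that table can be read back with the MDS `damage ls` admin command; a hedged sketch, assuming the `ceph` CLI is available and addressing the rank in the `<fsname>:<rank>` form with a placeholder filesystem name:

```python
import json
import subprocess

def mds_damage_types(target="cephfs:0"):
    """Return the distinct damage types reported by an MDS rank.

    `ceph tell mds.<fsname>:<rank> damage ls` prints a JSON array of
    damage entries, each with a `damage_type` field such as
    'backtrace', 'dentry', or 'dir_frag'. The filesystem name
    'cephfs' here is an assumption for illustration.
    """
    out = subprocess.check_output(["ceph", "tell", f"mds.{target}", "damage", "ls"])
    return {entry["damage_type"] for entry in json.loads(out)}

# A check in the spirit of the failure above: any non-empty result,
# e.g. {'backtrace'}, would fail the scrub-thrashing run.
```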

pass 7704017 2024-05-12 21:26:28 2024-05-12 22:46:42 2024-05-12 23:28:01 0:41:19 0:32:08 0:09:11 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/direct_io}} 3
fail 7703917 2024-05-12 21:24:41 2024-05-12 21:33:36 2024-05-12 22:40:42 1:07:06 0:58:04 0:09:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi001 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c50db29d375ad9ca1a200449f9347b89e25fb278 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'
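
The command above shows how the workunit task drives a test script: it makes a scratch directory under the client's mount, cds into it, exports the CEPH_* environment (ref, test dir, client id, clone paths), then runs the script under adjust-ulimits and ceph-coverage with a 3-hour timeout; dbench.sh exiting with status 1 is what failed the job. A simplified sketch of assembling such a command from its pieces (values taken from the log line; the helper itself is illustrative, not teuthology's code):

```python
import shlex

def workunit_command(testdir, ceph_ref, client_id, script, timeout="3h"):
    """Rebuild a workunit invocation shaped like the failing one above."""
    clone = f"{testdir}/clone.client.{client_id}"   # per-client qa tree checkout
    mnt = f"{testdir}/mnt.{client_id}"              # client mount point
    tmp = f"{mnt}/client.{client_id}/tmp"           # scratch dir for the test
    env = {
        "CEPH_CLI_TEST_DUP_COMMAND": "1",
        "CEPH_REF": ceph_ref,
        "TESTDIR": testdir,
        "CEPH_ARGS": "--cluster ceph",
        "CEPH_ID": str(client_id),
        "PATH": "$PATH:/usr/sbin",  # left unquoted so $PATH expands remotely
        "CEPH_BASE": clone,
        "CEPH_ROOT": clone,
        "CEPH_MNT": mnt,
    }
    assigns = " ".join(
        f"{k}={v}" if k == "PATH" else f"{k}={shlex.quote(v)}"
        for k, v in env.items()
    )
    return (
        f"mkdir -p -- {tmp} && cd -- {tmp} && {assigns} "
        f"adjust-ulimits ceph-coverage {testdir}/archive/coverage "
        f"timeout {timeout} {clone}/qa/workunits/{script}"
    )

print(workunit_command("/home/ubuntu/cephtest",
                       "c50db29d375ad9ca1a200449f9347b89e25fb278",
                       0, "suites/dbench.sh"))
```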