Name:         smithi113.front.sepia.ceph.com
Machine Type: smithi
Up:           True
Locked:       True
Locked Since: 2024-05-14 05:27:34.960204
Locked By:    scheduled_sjust@teuthology
OS Type:      centos
OS Version:   9
Arch:         x86_64
Description:  /home/teuthworker/archive/sjust-2024-05-14_04:49:59-rados:thrash-wip-sjust-balanced-read-testing-2024-05-13-distro-default-smithi/7705494
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 7705494 2024-05-14 04:50:28 2024-05-14 05:26:14 2024-05-14 12:27:42 7:03:06 smithi main centos 9.stream rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} 4
pass 7705432 2024-05-14 00:33:54 2024-05-14 01:29:50 2024-05-14 01:54:56 0:25:06 0:13:16 0:11:50 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
pass 7705393 2024-05-14 00:31:59 2024-05-14 00:57:18 2024-05-14 01:31:29 0:34:11 0:23:58 0:10:13 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsync-tester}} 3
fail 7705261 2024-05-13 22:10:07 2024-05-14 02:34:56 2024-05-14 03:28:11 0:53:15 0:42:21 0:10:54 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7705201 2024-05-13 22:09:12 2024-05-14 01:54:38 2024-05-14 02:36:35 0:41:57 0:31:20 0:10:37 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
pass 7705193 2024-05-13 22:09:05 2024-05-14 00:27:19 2024-05-14 00:57:52 0:30:33 0:18:43 0:11:50 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7705140 2024-05-13 21:32:53 2024-05-13 23:48:32 2024-05-14 00:27:12 0:38:40 0:28:18 0:10:22 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/rados_api_tests thrashosds-health} 4
pass 7705121 2024-05-13 21:32:34 2024-05-13 23:22:50 2024-05-13 23:49:33 0:26:43 0:15:36 0:11:07 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} 4
pass 7705073 2024-05-13 21:11:19 2024-05-13 22:49:59 2024-05-13 23:23:02 0:33:03 0:21:55 0:11:08 smithi main ubuntu 22.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
pass 7705023 2024-05-13 21:10:27 2024-05-13 22:17:46 2024-05-13 22:50:59 0:33:13 0:22:19 0:10:54 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7704938 2024-05-13 21:09:07 2024-05-13 21:31:25 2024-05-13 22:15:35 0:44:10 0:30:07 0:14:03 smithi main centos 9.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-05-13T21:54:02.989070+0000 mon.a (mon.0) 209 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log

pass 7704656 2024-05-13 07:43:39 2024-05-13 08:44:43 2024-05-13 09:18:11 0:33:28 0:23:32 0:09:56 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} 2
pass 7704505 2024-05-13 00:24:59 2024-05-13 02:54:49 2024-05-13 06:39:23 3:44:34 3:30:36 0:13:58 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-quincy 8-final-workload/{rbd-python snaps-many-objects} mon_election/classic objectstore/filestore-xfs thrashosds-health ubuntu_20.04} 5
fail 7704439 2024-05-12 22:06:14 2024-05-13 15:17:42 2024-05-13 15:38:35 0:20:53 0:09:26 0:11:27 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-balanced} 2
Failure Reason:

HTTPSConnectionPool(host='4.chacra.ceph.com', port=443): Max retries exceeded with url: /repos/ceph/reef/b806bdbddfddd976c2919d3cca5c05faad473799/ubuntu/jammy/flavors/default/repo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f1002ad06d0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

pass 7704369 2024-05-12 22:05:03 2024-05-13 14:43:41 2024-05-13 15:18:16 0:34:35 0:23:28 0:11:07 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7704307 2024-05-12 22:04:00 2024-05-13 14:17:24 2024-05-13 14:44:49 0:27:25 0:16:51 0:10:34 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7704199 2024-05-12 22:02:10 2024-05-13 13:23:35 2024-05-13 14:17:03 0:53:28 0:42:30 0:10:58 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/osd-erasure-code-profile.sh) on smithi113 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2965081fb1a343a09e00ceeede998525f5c6cb5e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/osd-erasure-code-profile.sh'

fail 7704022 2024-05-12 21:26:33 2024-05-12 22:50:15 2024-05-13 00:55:52 2:05:37 1:56:35 0:09:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7703930 2024-05-12 21:24:56 2024-05-12 21:36:52 2024-05-12 22:50:09 1:13:17 1:01:08 0:12:09 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-3-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cephfs_misc_tests} 5
fail 7703843 2024-05-12 21:05:52 2024-05-13 02:10:24 2024-05-13 02:45:49 0:35:25 0:26:10 0:09:15 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi113 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c9f299128a357326288806b31d29637272fee905 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'