Name:          smithi083.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-05-14 05:14:50.251576
Locked By:     scheduled_sjust@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   /home/teuthworker/archive/sjust-2024-05-14_04:46:37-crimson-rados:thrash-wip-sjust-crimson-testing-2024-05-13-distro-default-smithi/7705478
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 7705478 2024-05-14 04:48:08 2024-05-14 05:12:19 2024-05-14 14:16:11 9:05:42 smithi main centos 9.stream crimson-rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/small-objects-balanced} 2
pass 7705437 2024-05-14 00:34:00 2024-05-14 01:39:03 2024-05-14 02:08:35 0:29:32 0:19:14 0:10:18 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 7705371 2024-05-14 00:31:34 2024-05-14 00:44:17 2024-05-14 01:27:42 0:43:25 0:32:08 0:11:17 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-05-14T01:13:53.848112+0000 mon.smithi083 (mon.0) 266 : cluster [WRN] Health check failed: Degraded data redundancy: 73/372 objects degraded (19.624%), 20 pgs degraded (PG_DEGRADED)" in cluster log

pass 7705295 2024-05-13 22:10:38 2024-05-14 03:00:52 2024-05-14 03:56:18 0:55:26 0:45:11 0:10:15 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7705258 2024-05-13 22:10:04 2024-05-14 02:33:25 2024-05-14 03:00:57 0:27:32 0:17:32 0:10:00 smithi main centos 8.stream orch/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7705220 2024-05-13 22:09:29 2024-05-14 02:08:27 2024-05-14 02:33:46 0:25:19 0:15:46 0:09:33 smithi main centos 8.stream orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/on orchestrator_cli} 2
pass 7705158 2024-05-13 21:33:10 2024-05-14 00:11:23 2024-05-14 00:44:36 0:33:13 0:23:35 0:09:38 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/centos_latest tasks/snaps-many-objects thrashosds-health} 4
pass 7705130 2024-05-13 21:32:43 2024-05-13 23:35:16 2024-05-14 00:11:27 0:36:11 0:23:34 0:12:37 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/snaps-many-objects thrashosds-health} 4
pass 7705096 2024-05-13 21:11:43 2024-05-13 23:04:25 2024-05-13 23:37:52 0:33:27 0:24:37 0:08:50 smithi main ubuntu 22.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7705033 2024-05-13 21:10:37 2024-05-13 22:24:30 2024-05-13 22:59:55 0:35:25 0:25:43 0:09:42 smithi main centos 9.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-05-13T22:41:30.921458+0000 mon.a (mon.0) 209 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log

pass 7704919 2024-05-13 21:08:48 2024-05-13 21:31:18 2024-05-13 22:24:29 0:53:11 0:39:17 0:13:54 smithi main ubuntu 22.04 orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
pass 7704672 2024-05-13 07:44:01 2024-05-13 08:56:50 2024-05-13 09:38:51 0:42:01 0:32:32 0:09:29 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7704624 2024-05-13 07:42:55 2024-05-13 08:25:18 2024-05-13 08:57:08 0:31:50 0:20:08 0:11:42 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
pass 7704586 2024-05-13 07:42:04 2024-05-13 07:51:19 2024-05-13 08:27:20 0:36:01 0:23:27 0:12:34 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} 2
fail 7704545 2024-05-13 05:54:41 2024-05-13 06:11:48 2024-05-13 07:18:11 1:06:23 0:53:03 0:13:20 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3ee2ba724b88bb242428fcf88b7dc576e740e26d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7704490 2024-05-13 00:24:43 2024-05-13 02:41:41 2024-05-13 05:23:31 2:41:50 2:22:04 0:19:46 smithi main centos 8.stream upgrade:octopus-x/stress-split/{0-distro/centos_8.stream_container_tools_crun 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
pass 7704388 2024-05-12 22:05:22 2024-05-13 14:53:29 2024-05-13 15:39:31 0:46:02 0:35:27 0:10:35 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/classic msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
pass 7704327 2024-05-12 22:04:20 2024-05-13 14:32:43 2024-05-13 14:54:00 0:21:17 0:12:10 0:09:07 smithi main centos 8.stream rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
fail 7704286 2024-05-12 22:03:39 2024-05-13 14:06:34 2024-05-13 14:26:42 0:20:08 0:09:29 0:10:39 smithi main ubuntu 22.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} 2
Failure Reason:

HTTPSConnectionPool(host='4.chacra.ceph.com', port=443): Max retries exceeded with url: /repos/ceph/reef/b806bdbddfddd976c2919d3cca5c05faad473799/ubuntu/jammy/flavors/default/repo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f4dbcbb86a0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

pass 7704239 2024-05-12 22:02:50 2024-05-13 13:41:22 2024-05-13 14:06:52 0:25:30 0:13:08 0:12:22 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4