Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 7474179 2023-12-01 16:31:49 2023-12-02 08:38:17 2023-12-02 08:55:30 0:17:13 0:06:01 0:11:12 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi088 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3b8b28b8-90f0-11ee-95a2-87774f69a715 --force'

fail 7474180 2023-12-01 16:31:50 2023-12-02 08:38:17 2023-12-02 08:58:10 0:19:53 0:08:16 0:11:37 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/off mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid a6d0489c-90f0-11ee-95a2-87774f69a715 --force'

pass 7474181 2023-12-01 16:31:50 2023-12-02 08:38:58 2023-12-02 09:18:22 0:39:24 0:29:01 0:10:23 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7474182 2023-12-01 16:31:51 2023-12-02 08:39:18 2023-12-02 09:02:22 0:23:04 0:13:06 0:09:58 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_crun} 2-node-mgr agent/off orchestrator_cli} 2
pass 7474183 2023-12-01 16:31:52 2023-12-02 08:39:29 2023-12-02 09:06:09 0:26:40 0:15:07 0:11:33 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7474184 2023-12-01 16:31:54 2023-12-02 08:39:49 2023-12-02 09:00:33 0:20:44 0:11:18 0:09:26 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
fail 7474185 2023-12-01 16:31:55 2023-12-02 08:39:49 2023-12-02 08:57:29 0:17:40 0:06:11 0:11:29 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
Failure Reason:

Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91949d98-90f0-11ee-95a2-87774f69a715 --force'

pass 7474186 2023-12-01 16:31:55 2023-12-02 08:40:00 2023-12-07 20:37:16 5 days, 11:57:16 0:14:01 5 days, 11:43:15 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7474187 2023-12-01 16:31:56 2023-12-02 08:40:10 2023-12-02 09:17:56 0:37:46 0:27:31 0:10:15 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7474188 2023-12-01 16:31:57 2023-12-02 08:40:40 2023-12-02 09:04:19 0:23:39 0:13:41 0:09:58 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi055 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:790ec80fe8c70e17748ed7354bfa28637b894703 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8763ab2-90f0-11ee-95a2-87774f69a715 -- ceph rgw realm bootstrap -i -'

fail 7474189 2023-12-01 16:31:58 2023-12-02 08:41:11 2023-12-02 09:01:28 0:20:17 0:08:26 0:11:51 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 1857614e-90f1-11ee-95a2-87774f69a715 --force'

pass 7474190 2023-12-01 16:31:59 2023-12-02 08:42:32 2023-12-02 09:07:27 0:24:55 0:13:55 0:11:00 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
fail 7474191 2023-12-01 16:32:00 2023-12-02 08:42:42 2023-12-02 08:59:21 0:16:39 0:06:03 0:10:36 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 05d4cafc-90f1-11ee-95a2-87774f69a715 --force'

pass 7474192 2023-12-01 16:32:01 2023-12-02 08:42:52 2023-12-02 09:20:43 0:37:51 0:27:12 0:10:39 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
fail 7474193 2023-12-01 16:32:01 2023-12-02 08:43:33 2023-12-02 09:03:10 0:19:37 0:09:39 0:09:58 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/off mon_election/classic task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi163 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=790ec80fe8c70e17748ed7354bfa28637b894703 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

pass 7474194 2023-12-01 16:32:02 2023-12-02 08:43:33 2023-12-02 09:07:08 0:23:35 0:12:50 0:10:45 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7474195 2023-12-01 16:32:03 2023-12-02 08:44:03 2023-12-02 12:31:28 3:47:25 3:35:27 0:11:58 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi187 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=790ec80fe8c70e17748ed7354bfa28637b894703 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

pass 7474196 2023-12-01 16:32:04 2023-12-02 08:44:04 2023-12-02 09:13:46 0:29:42 0:15:41 0:14:01 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
fail 7474197 2023-12-01 16:32:05 2023-12-02 08:47:25 2023-12-02 09:03:33 0:16:08 0:05:57 0:10:11 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
Failure Reason:

Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 9d5d4322-90f1-11ee-95a2-87774f69a715 --force'

pass 7474198 2023-12-01 16:32:06 2023-12-02 08:48:25 2023-12-02 09:13:18 0:24:53 0:14:18 0:10:35 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm} 1
fail 7474199 2023-12-01 16:32:07 2023-12-02 08:48:26 2023-12-02 09:03:59 0:15:33 0:05:46 0:09:47 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi119 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 9a36c04c-90f1-11ee-95a2-87774f69a715 --force'

pass 7474200 2023-12-01 16:32:08 2023-12-02 08:48:56 2023-12-02 09:20:20 0:31:24 0:21:04 0:10:20 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
pass 7474201 2023-12-01 16:32:08 2023-12-02 08:49:16 2023-12-02 09:42:37 0:53:21 0:42:48 0:10:33 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
fail 7474202 2023-12-01 16:32:09 2023-12-02 08:49:17 2023-12-02 09:06:12 0:16:55 0:06:55 0:10:00 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=790ec80fe8c70e17748ed7354bfa28637b894703 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 7474203 2023-12-01 16:32:10 2023-12-02 08:49:17 2023-12-02 09:04:27 0:15:10 0:05:46 0:09:24 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi137 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid a9b2d34e-90f1-11ee-95a2-87774f69a715 --force'

fail 7474204 2023-12-01 16:32:11 2023-12-02 08:49:28 2023-12-02 09:08:18 0:18:50 0:08:18 0:10:32 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 15d017d0-90f2-11ee-95a2-87774f69a715 --force'

pass 7474205 2023-12-01 16:32:12 2023-12-02 08:49:38 2023-12-02 09:10:46 0:21:08 0:12:13 0:08:55 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
pass 7474206 2023-12-01 16:32:13 2023-12-02 08:49:38 2023-12-02 09:27:23 0:37:45 0:28:38 0:09:07 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
dead 7474207 2023-12-01 16:32:14 2023-12-02 08:50:29 2023-12-02 09:12:34 0:22:05 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

pass 7474208 2023-12-01 16:32:15 2023-12-02 08:53:30 2023-12-02 09:22:03 0:28:33 0:14:41 0:13:52 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
fail 7474209 2023-12-01 16:32:15 2023-12-02 08:54:20 2023-12-02 09:10:06 0:15:46 0:05:49 0:09:57 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi086 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 706ce984-90f2-11ee-95a2-87774f69a715 --force'

fail 7474210 2023-12-01 16:32:16 2023-12-02 08:55:01 2023-12-02 09:12:09 0:17:08 0:08:19 0:08:49 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi087 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid ca8ed512-90f2-11ee-95a2-87774f69a715 --force'

fail 7474211 2023-12-01 16:32:17 2023-12-02 08:55:01 2023-12-02 09:11:55 0:16:54 0:08:14 0:08:40 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c034bea6-90f2-11ee-95a2-87774f69a715 --force'

pass 7474212 2023-12-01 16:32:18 2023-12-02 08:55:11 2023-12-02 09:18:24 0:23:13 0:12:52 0:10:21 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/on orchestrator_cli} 2
pass 7474213 2023-12-01 16:32:19 2023-12-02 08:55:12 2023-12-02 09:18:31 0:23:19 0:12:35 0:10:44 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
fail 7474214 2023-12-01 16:32:20 2023-12-02 08:55:22 2023-12-02 09:10:20 0:14:58 0:05:43 0:09:15 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_crun} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi120 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 785fadde-90f2-11ee-95a2-87774f69a715 --force'

fail 7474215 2023-12-01 16:32:20 2023-12-02 08:55:22 2023-12-02 09:11:00 0:15:38 0:06:05 0:09:33 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

Command failed on smithi028 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 9870d2ce-90f2-11ee-95a2-87774f69a715 --force'

fail 7474216 2023-12-01 16:32:21 2023-12-02 08:55:43 2023-12-02 09:11:21 0:15:38 0:06:11 0:09:27 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
Failure Reason:

Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid aed3d6ba-90f2-11ee-95a2-87774f69a715 --force'

pass 7474217 2023-12-01 16:32:22 2023-12-02 08:55:43 2023-12-02 09:19:12 0:23:29 0:13:13 0:10:16 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_orch_cli} 1
fail 7474218 2023-12-01 16:32:23 2023-12-02 08:55:44 2023-12-02 09:45:25 0:49:41 0:35:52 0:13:49 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi148 with status 126: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=790ec80fe8c70e17748ed7354bfa28637b894703 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 7474219 2023-12-01 16:32:24 2023-12-02 08:56:14 2023-12-02 09:20:30 0:24:16 0:12:35 0:11:41 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7474220 2023-12-01 16:32:25 2023-12-02 08:56:24 2023-12-02 09:17:00 0:20:36 0:08:48 0:11:48 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4f81ce32-90f3-11ee-95a2-87774f69a715 --force'

pass 7474221 2023-12-01 16:32:26 2023-12-02 08:57:35 2023-12-02 09:22:55 0:25:20 0:14:08 0:11:12 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 7474222 2023-12-01 16:32:27 2023-12-02 08:57:45 2023-12-02 09:15:39 0:17:54 0:06:11 0:11:43 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 14a7963e-90f3-11ee-95a2-87774f69a715 --force'

fail 7474223 2023-12-01 16:32:27 2023-12-02 08:58:16 2023-12-02 09:15:10 0:16:54 0:06:02 0:10:52 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 380b3ef0-90f3-11ee-95a2-87774f69a715 --force'

pass 7474224 2023-12-01 16:32:28 2023-12-02 08:59:26 2023-12-02 09:32:50 0:33:24 0:20:53 0:12:31 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 7474225 2023-12-01 16:32:29 2023-12-02 08:59:47 2023-12-02 09:24:41 0:24:54 0:16:11 0:08:43 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
fail 7474226 2023-12-01 16:32:30 2023-12-02 08:59:47 2023-12-02 09:17:36 0:17:49 0:06:06 0:11:43 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi060 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 537aa61c-90f3-11ee-95a2-87774f69a715 --force'

pass 7474227 2023-12-01 16:32:31 2023-12-02 09:00:38 2023-12-02 09:37:49 0:37:11 0:27:35 0:09:36 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7474228 2023-12-01 16:32:32 2023-12-02 09:00:58 2023-12-02 09:24:36 0:23:38 0:12:59 0:10:39 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
fail 7474229 2023-12-01 16:32:33 2023-12-02 09:01:18 2023-12-02 09:16:31 0:15:13 0:06:12 0:09:01 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
Failure Reason:

Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 670d99c8-90f3-11ee-95a2-87774f69a715 --force'

pass 7474230 2023-12-01 16:32:34 2023-12-02 09:01:29 2023-12-02 10:25:27 1:23:58 1:13:56 0:10:02 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 7474231 2023-12-01 16:32:35 2023-12-02 09:02:29 2023-12-02 09:37:39 0:35:10 0:25:18 0:09:52 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
pass 7474232 2023-12-01 16:32:35 2023-12-02 09:02:30 2023-12-02 09:23:22 0:20:52 0:11:11 0:09:41 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_adoption} 1
fail 7474233 2023-12-01 16:32:36 2023-12-02 09:02:30 2023-12-02 09:18:53 0:16:23 0:05:57 0:10:26 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi093 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b9cc95ba-90f3-11ee-95a2-87774f69a715 --force'

fail 7474234 2023-12-01 16:32:37 2023-12-02 09:03:20 2023-12-02 09:22:14 0:18:54 0:08:48 0:10:06 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_crun agent/on mon_election/classic task/test_ca_signed_key} 2
Failure Reason:

Command failed on smithi079 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid fa362b16-90f3-11ee-95a2-87774f69a715 --force'

fail 7474235 2023-12-01 16:32:38 2023-12-02 09:03:21 2023-12-02 09:20:41 0:17:20 0:08:15 0:09:05 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi136 with status 1: 'sudo cephadm rm-cluster --fsid 02f7ceda-90f4-11ee-95a2-87774f69a715 --force'

pass 7474236 2023-12-01 16:32:39 2023-12-02 09:03:41 2023-12-02 09:26:54 0:23:13 0:12:49 0:10:24 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
fail 7474237 2023-12-01 16:32:40 2023-12-02 09:03:41 2023-12-02 09:20:46 0:17:05 0:07:08 0:09:57 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi145 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=790ec80fe8c70e17748ed7354bfa28637b894703 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 7474238 2023-12-01 16:32:41 2023-12-02 09:04:02 2023-12-02 09:25:14 0:21:12 0:06:05 0:15:07 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_crun 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 9aa8bd20-90f4-11ee-95a2-87774f69a715 --force'

fail 7474239 2023-12-01 16:32:42 2023-12-02 09:04:22 2023-12-02 12:51:25 3:47:03 3:35:28 0:11:35 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi138 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=790ec80fe8c70e17748ed7354bfa28637b894703 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 7474240 2023-12-01 16:32:42 2023-12-02 09:04:23 2023-12-02 09:21:33 0:17:10 0:08:15 0:08:55 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_crun 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi137 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 223d8154-90f4-11ee-95a2-87774f69a715 --force'