Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7116785 2022-12-14 20:50:08 2022-12-15 04:15:55 2022-12-15 04:47:16 0:31:21 0:18:41 0:12:40 smithi main centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass 7116787 2022-12-14 20:50:13 2022-12-15 04:17:57 2022-12-15 04:47:23 0:29:26 0:19:44 0:09:42 smithi main rhel 8.6 rados:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
fail 7116788 2022-12-14 20:50:16 2022-12-15 04:19:51 2022-12-15 04:41:08 0:21:17 0:12:12 0:09:05 smithi main rhel 8.6 rados:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} 1
Failure Reason: Command failed on smithi106 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 00d1e6ae-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi106 /dev/vg_nvme/lv_4 --force'
fail 7116813 2022-12-14 20:50:17 2022-12-15 04:20:14 2022-12-15 04:40:38 0:20:24 0:12:50 0:07:34 smithi main rhel 8.6 rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason: Command failed on smithi044 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 15ca4a24-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi044 /dev/nvme4n1 --force'
dead 7116837 2022-12-14 20:50:18 2022-12-15 04:20:14 2022-12-15 04:22:20 0:02:06 smithi main ubuntu 20.04 rados:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason: Error reimaging machines: Failed to power on smithi183
fail 7116839 2022-12-14 20:50:19 2022-12-15 04:20:20 2022-12-15 04:50:39 0:30:19 0:19:18 0:11:01 smithi main centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
dead 7116840 2022-12-14 20:50:20 2022-12-15 04:21:35 2022-12-15 04:25:02 0:03:27 smithi main rhel 8.6 rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason: Error reimaging machines: Failed to power on smithi202
fail 7116841 2022-12-14 20:50:36 2022-12-15 04:21:51 2022-12-15 04:45:18 0:23:27 0:13:11 0:10:16 smithi main centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason: Command failed on smithi169 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8d9452e8-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi169 /dev/vg_nvme/lv_4 --force'
fail 7116842 2022-12-14 20:50:47 2022-12-15 04:21:51 2022-12-15 04:54:25 0:32:34 0:20:51 0:11:43 smithi main rhel 8.6 rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail 7116843 2022-12-14 20:50:52 2022-12-15 04:23:01 2022-12-15 04:44:49 0:21:48 0:12:56 0:08:52 smithi main centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason: Command failed on smithi183 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 893e43b6-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi183 /dev/vg_nvme/lv_4 --force'
fail 7116844 2022-12-14 20:50:53 2022-12-15 04:23:02 2022-12-15 04:45:59 0:22:57 0:09:20 0:13:37 smithi main ubuntu 20.04 rados:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason: Command failed on smithi074 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4b43a6d2-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi074 /dev/nvme4n1 --force'
fail 7116845 2022-12-14 20:50:59 2022-12-15 04:23:03 2022-12-15 04:56:08 0:33:05 0:20:41 0:12:24 smithi main rhel 8.6 rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail 7116846 2022-12-14 20:51:05 2022-12-15 04:25:01 2022-12-15 04:49:59 0:24:58 0:14:35 0:10:23 smithi main rhel 8.6 rados:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli} 1
Failure Reason: Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2ad62d60-7c33-11ed-8443-001a4aab830c -- ceph orch device zap smithi120 /dev/vg_nvme/lv_4 --force'
fail 7116847 2022-12-14 20:51:15 2022-12-15 04:26:07 2022-12-15 04:45:18 0:19:11 0:10:17 0:08:54 smithi main centos 8.stream rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason: Command failed on smithi154 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c01d89be-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi154 /dev/nvme4n1 --force'
fail 7116848 2022-12-14 20:51:21 2022-12-15 04:26:27 2022-12-15 04:44:34 0:18:07 0:09:56 0:08:11 smithi main centos 8.stream rados:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} 1
Failure Reason: Command failed on smithi092 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a6f0cb54-7c32-11ed-8443-001a4aab830c -- ceph orch device zap smithi092 /dev/vg_nvme/lv_4 --force'
fail 7116849 2022-12-14 20:51:32 2022-12-15 04:26:32 2022-12-15 04:58:14 0:31:42 0:18:20 0:13:22 smithi main rhel 8.6 rados:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason: Command failed on smithi016 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 023444d6-7c34-11ed-8443-001a4aab830c -- ceph orch device zap smithi016 /dev/vg_nvme/lv_4 --force'
fail 7116850 2022-12-14 20:51:48 2022-12-15 04:36:17 2022-12-15 05:06:19 0:30:02 0:19:50 0:10:12 smithi main rhel 8.6 rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail 7116851 2022-12-14 20:51:54 2022-12-15 04:36:33 2022-12-15 04:58:03 0:21:30 0:10:49 0:10:41 smithi main centos 8.stream rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason: Command failed on smithi031 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4dc31a3a-7c34-11ed-8443-001a4aab830c -- ceph orch device zap smithi031 /dev/nvme4n1 --force'
pass 7116852 2022-12-14 20:52:00 2022-12-15 04:36:44 2022-12-15 04:59:35 0:22:51 0:09:59 0:12:52 smithi main ubuntu 20.04 rados:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} 1
fail 7116853 2022-12-14 20:52:05 2022-12-15 04:37:32 2022-12-15 05:22:53 0:45:21 0:31:21 0:14:00 smithi main ubuntu 20.04 rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail 7116854 2022-12-14 20:52:06 2022-12-15 04:38:19 2022-12-15 05:01:59 0:23:40 0:12:42 0:10:58 smithi main rhel 8.6 rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason: Command failed on smithi143 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d76b8a06-7c34-11ed-8443-001a4aab830c -- ceph orch device zap smithi143 /dev/nvme4n1 --force'
pass 7116855 2022-12-14 20:52:07 2022-12-15 04:38:40 2022-12-15 05:06:07 0:27:27 0:18:24 0:09:03 smithi main centos 8.stream rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
fail 7116856 2022-12-14 20:52:13 2022-12-15 04:38:40 2022-12-15 05:08:24 0:29:44 0:19:16 0:10:28 smithi main centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds