User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2022-12-06 14:20:24 | 2022-12-06 23:49:17 | 2022-12-07 01:18:20 | 1:29:03 | rados:cephadm | wip-guits-testing-2022-12-02-0801 | smithi | 2c11cbb | 4 | 19 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7105305 | 2022-12-06 14:20:37 | 2022-12-06 23:49:17 | 2022-12-07 00:15:01 | 0:25:44 | 0:18:32 | 0:07:12 | smithi | main | centos | 8.stream | rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7105306 | 2022-12-06 14:20:38 | 2022-12-06 23:49:17 | 2022-12-07 00:14:01 | 0:24:44 | 0:19:44 | 0:05:00 | smithi | main | rhel | 8.6 | rados:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} | 1 | |
fail | 7105307 | 2022-12-06 14:20:39 | 2022-12-06 23:49:18 | 2022-12-07 00:08:43 | 0:19:25 | 0:11:42 | 0:07:43 | smithi | main | rhel | 8.6 | rados:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi167 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a3d479be-75c2-11ed-843e-001a4aab830c -- ceph orch device zap smithi167 /dev/vg_nvme/lv_4 --force'
fail | 7105308 | 2022-12-06 14:20:41 | 2022-12-06 23:49:28 | 2022-12-07 00:09:36 | 0:20:08 | 0:11:53 | 0:08:15 | smithi | main | rhel | 8.6 | rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi138 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c8eead14-75c2-11ed-843e-001a4aab830c -- ceph orch device zap smithi138 /dev/nvme4n1 --force'
pass | 7105309 | 2022-12-06 14:20:43 | 2022-12-06 23:50:29 | 2022-12-07 00:05:58 | 0:15:29 | 0:06:15 | 0:09:14 | smithi | main | ubuntu | 20.04 | rados:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
fail | 7105310 | 2022-12-06 14:20:45 | 2022-12-06 23:50:29 | 2022-12-07 00:15:37 | 0:25:08 | 0:17:45 | 0:07:23 | smithi | main | centos | 8.stream | rados:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7105311 | 2022-12-06 14:20:46 | 2022-12-06 23:50:30 | 2022-12-07 00:09:37 | 0:19:07 | 0:11:51 | 0:07:16 | smithi | main | rhel | 8.6 | rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi071 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d1888d96-75c2-11ed-843e-001a4aab830c -- ceph orch device zap smithi071 /dev/nvme4n1 --force'
fail | 7105312 | 2022-12-06 14:20:48 | 2022-12-06 23:50:30 | 2022-12-07 00:09:40 | 0:19:10 | 0:12:28 | 0:06:42 | smithi | main | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi102 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 05554fe2-75c3-11ed-843e-001a4aab830c -- ceph orch device zap smithi102 /dev/vg_nvme/lv_4 --force'
fail | 7105313 | 2022-12-06 14:20:49 | 2022-12-06 23:50:40 | 2022-12-07 00:18:36 | 0:27:56 | 0:20:02 | 0:07:54 | smithi | main | rhel | 8.6 | rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7105314 | 2022-12-06 14:20:51 | 2022-12-06 23:51:31 | 2022-12-07 00:11:02 | 0:19:31 | 0:12:28 | 0:07:03 | smithi | main | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi072 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 37be4ace-75c3-11ed-843e-001a4aab830c -- ceph orch device zap smithi072 /dev/vg_nvme/lv_4 --force'
fail | 7105315 | 2022-12-06 14:20:53 | 2022-12-06 23:51:31 | 2022-12-07 00:11:56 | 0:20:25 | 0:09:24 | 0:11:01 | smithi | main | ubuntu | 20.04 | rados:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi163 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f89120a6-75c2-11ed-843e-001a4aab830c -- ceph orch device zap smithi163 /dev/nvme4n1 --force'
fail | 7105316 | 2022-12-06 14:20:54 | 2022-12-06 23:51:42 | 2022-12-07 00:21:08 | 0:29:26 | 0:19:40 | 0:09:46 | smithi | main | rhel | 8.6 | rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7105317 | 2022-12-06 14:20:56 | 2022-12-06 23:54:03 | 2022-12-07 00:16:43 | 0:22:40 | 0:14:32 | 0:08:08 | smithi | main | rhel | 8.6 | rados:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi191 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ef66cf0c-75c3-11ed-843e-001a4aab830c -- ceph orch device zap smithi191 /dev/vg_nvme/lv_4 --force'
fail | 7105318 | 2022-12-06 14:20:58 | 2022-12-06 23:55:43 | 2022-12-07 00:13:01 | 0:17:18 | 0:09:54 | 0:07:24 | smithi | main | centos | 8.stream | rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi154 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5a99d02c-75c3-11ed-843e-001a4aab830c -- ceph orch device zap smithi154 /dev/nvme4n1 --force'
fail | 7105319 | 2022-12-06 14:20:59 | 2022-12-06 23:55:54 | 2022-12-07 00:12:50 | 0:16:56 | 0:09:50 | 0:07:06 | smithi | main | centos | 8.stream | rados:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi064 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 582d1c4a-75c3-11ed-843e-001a4aab830c -- ceph orch device zap smithi064 /dev/vg_nvme/lv_4 --force'
fail | 7105320 | 2022-12-06 14:21:01 | 2022-12-06 23:55:54 | 2022-12-07 00:25:53 | 0:29:59 | 0:16:40 | 0:13:19 | smithi | main | rhel | 8.6 | rados:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 15bf4340-75c5-11ed-843e-001a4aab830c -- ceph orch device zap smithi033 /dev/vg_nvme/lv_4 --force'
fail | 7105321 | 2022-12-06 14:21:03 | 2022-12-07 00:02:46 | 2022-12-07 01:18:20 | 1:15:34 | 1:09:18 | 0:06:16 | smithi | main | rhel | 8.6 | rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7105322 | 2022-12-06 14:21:05 | 2022-12-07 00:03:16 | 2022-12-07 00:20:19 | 0:17:03 | 0:09:53 | 0:07:10 | smithi | main | centos | 8.stream | rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi060 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 601da9c8-75c4-11ed-843e-001a4aab830c -- ceph orch device zap smithi060 /dev/nvme4n1 --force'
pass | 7105323 | 2022-12-06 14:21:06 | 2022-12-07 00:03:27 | 2022-12-07 00:24:03 | 0:20:36 | 0:09:47 | 0:10:49 | smithi | main | ubuntu | 20.04 | rados:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7105324 | 2022-12-06 14:21:08 | 2022-12-07 00:03:57 | 2022-12-07 00:44:49 | 0:40:52 | 0:29:32 | 0:11:20 | smithi | main | ubuntu | 20.04 | rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7105325 | 2022-12-06 14:21:09 | 2022-12-07 00:04:58 | 2022-12-07 00:24:16 | 0:19:18 | 0:11:34 | 0:07:44 | smithi | main | rhel | 8.6 | rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi119 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2c11cbb6d9105458a36f33e9b4b0a80c835cfb3b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cace71c6-75c4-11ed-843e-001a4aab830c -- ceph orch device zap smithi119 /dev/nvme4n1 --force'
pass | 7105326 | 2022-12-06 14:21:11 | 2022-12-07 00:05:18 | 2022-12-07 00:28:24 | 0:23:06 | 0:17:11 | 0:05:55 | smithi | main | centos | 8.stream | rados:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 | |
fail | 7105327 | 2022-12-06 14:21:12 | 2022-12-07 00:05:18 | 2022-12-07 00:30:21 | 0:25:03 | 0:17:36 | 0:07:27 | smithi | main | centos | 8.stream | rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds