User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2022-11-29 17:28:43 | 2022-12-02 06:12:55 | 2022-12-02 18:57:39 | 12:44:44 | orch:cephadm | wip-adk-testing-2022-11-29-0941 | smithi | d770419 | 8 | 85 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7097197 | 2022-11-29 17:28:48 | 2022-12-02 06:12:55 | 2022-12-02 06:40:56 | 0:28:01 | 0:17:24 | 0:10:37 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097198 | 2022-11-29 17:28:49 | 2022-12-02 06:15:56 | 2022-12-02 06:28:56 | 0:13:00 | 0:07:00 | 0:06:00 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason:
Command failed on smithi092 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
fail | 7097199 | 2022-11-29 17:28:50 | 2022-12-02 06:15:56 | 2022-12-02 06:37:58 | 0:22:02 | 0:15:31 | 0:06:31 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi062 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6aeb746e-720b-11ed-843e-001a4aab830c -- ceph orch device zap smithi062 /dev/vg_nvme/lv_4 --force' |
fail | 7097200 | 2022-11-29 17:28:51 | 2022-12-02 06:16:56 | 2022-12-02 06:42:07 | 0:25:11 | 0:17:47 | 0:07:24 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097201 | 2022-11-29 17:28:52 | 2022-12-02 06:17:27 | 2022-12-02 06:39:47 | 0:22:20 | 0:13:11 | 0:09:09 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi161 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a4012f1e-720b-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi161:vg_nvme/lv_4' |
fail | 7097202 | 2022-11-29 17:28:54 | 2022-12-02 06:20:28 | 2022-12-02 06:37:53 | 0:17:25 | 0:10:12 | 0:07:13 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi089 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 42b9a45c-720b-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi089:vg_nvme/lv_4' |
pass | 7097203 | 2022-11-29 17:28:55 | 2022-12-02 06:20:48 | 2022-12-02 06:44:01 | 0:23:13 | 0:17:07 | 0:06:06 | smithi | main | centos | 8.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 7097204 | 2022-11-29 17:28:56 | 2022-12-02 06:20:49 | 2022-12-02 06:39:25 | 0:18:36 | 0:09:59 | 0:08:37 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
Failure Reason:
Command failed on smithi102 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 810cfb8c-720b-11ed-843e-001a4aab830c -- ceph orch device zap smithi102 /dev/vg_nvme/lv_4 --force' |
fail | 7097205 | 2022-11-29 17:28:57 | 2022-12-02 06:21:49 | 2022-12-02 06:41:52 | 0:20:03 | 0:09:04 | 0:10:59 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi157 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8f3d84ce-720b-11ed-843e-001a4aab830c -- ceph orch device zap smithi157 /dev/vg_nvme/lv_4 --force' |
fail | 7097206 | 2022-11-29 17:28:58 | 2022-12-02 06:21:49 | 2022-12-02 06:38:51 | 0:17:02 | 0:09:44 | 0:07:18 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi038 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6d136706-720b-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi038:vg_nvme/lv_4' |
dead | 7097207 | 2022-11-29 17:29:00 | 2022-12-02 06:22:00 | 2022-12-02 06:41:48 | 0:19:48 | | | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
fail | 7097208 | 2022-11-29 17:29:01 | 2022-12-02 06:22:10 | 2022-12-02 06:43:30 | 0:21:20 | 0:14:22 | 0:06:58 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi204 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 288b6f9c-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi204 /dev/vg_nvme/lv_4 --force' |
fail | 7097209 | 2022-11-29 17:29:02 | 2022-12-02 06:22:11 | 2022-12-02 06:47:17 | 0:25:06 | 0:19:33 | 0:05:33 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097210 | 2022-11-29 17:29:03 | 2022-12-02 06:22:11 | 2022-12-02 06:40:15 | 0:18:04 | 0:11:27 | 0:06:37 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi148 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ccd3de1e-720b-11ed-843e-001a4aab830c -- ceph orch device zap smithi148 /dev/nvme4n1 --force' |
fail | 7097211 | 2022-11-29 17:29:04 | 2022-12-02 06:22:41 | 2022-12-02 06:46:22 | 0:23:41 | 0:15:16 | 0:08:25 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi072 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9b17978e-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi072 /dev/vg_nvme/lv_4 --force' |
fail | 7097212 | 2022-11-29 17:29:05 | 2022-12-02 06:24:52 | 2022-12-02 06:46:58 | 0:22:06 | 0:11:06 | 0:11:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi064 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4cf4a150-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi064 /dev/vg_nvme/lv_4 --force' |
fail | 7097213 | 2022-11-29 17:29:07 | 2022-12-02 06:25:33 | 2022-12-02 06:45:16 | 0:19:43 | 0:10:36 | 0:09:07 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi189 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 50777b86-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi189 /dev/vg_nvme/lv_4 --force' |
fail | 7097214 | 2022-11-29 17:29:08 | 2022-12-02 06:25:33 | 2022-12-02 06:52:28 | 0:26:55 | 0:19:17 | 0:07:38 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097215 | 2022-11-29 17:29:09 | 2022-12-02 06:27:04 | 2022-12-02 06:47:56 | 0:20:52 | 0:13:14 | 0:07:38 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9783f572-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi036 /dev/vg_nvme/lv_4 --force' |
fail | 7097216 | 2022-11-29 17:29:10 | 2022-12-02 06:27:14 | 2022-12-02 06:53:09 | 0:25:55 | 0:17:49 | 0:08:06 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097217 | 2022-11-29 17:29:11 | 2022-12-02 06:27:55 | 2022-12-02 06:49:10 | 0:21:15 | 0:13:18 | 0:07:57 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi146 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b3b33000-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi146 /dev/vg_nvme/lv_4 --force' |
fail | 7097218 | 2022-11-29 17:29:12 | 2022-12-02 06:27:55 | 2022-12-02 06:50:43 | 0:22:48 | 0:14:24 | 0:08:24 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 05868ca6-720d-11ed-843e-001a4aab830c -- ceph orch device zap smithi017 /dev/vg_nvme/lv_4 --force' |
fail | 7097219 | 2022-11-29 17:29:14 | 2022-12-02 06:29:36 | 2022-12-02 06:49:08 | 0:19:32 | 0:12:40 | 0:06:52 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi081 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b3a6da-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi081 /dev/vg_nvme/lv_4 --force' |
fail | 7097220 | 2022-11-29 17:29:15 | 2022-12-02 06:30:06 | 2022-12-02 07:12:20 | 0:42:14 | 0:30:45 | 0:11:29 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097221 | 2022-11-29 17:29:16 | 2022-12-02 06:30:37 | 2022-12-02 06:51:53 | 0:21:16 | 0:09:08 | 0:12:08 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi099 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ee0b9350-720c-11ed-843e-001a4aab830c -- ceph orch device zap smithi099 /dev/nvme4n1 --force' |
fail | 7097222 | 2022-11-29 17:29:17 | 2022-12-02 02:47:06 | 2022-12-02 03:08:56 | 0:21:50 | 0:11:17 | 0:10:33 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi074 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f357b9c0-71ed-11ed-843e-001a4aab830c -- ceph orch device zap smithi074 /dev/vg_nvme/lv_4 --force' |
fail | 7097223 | 2022-11-29 17:29:18 | 2022-12-02 06:32:27 | 2022-12-02 06:58:38 | 0:26:11 | 0:15:42 | 0:10:29 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0aae0fd2-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi039 /dev/vg_nvme/lv_4 --force' |
pass | 7097224 | 2022-11-29 17:29:20 | 2022-12-02 06:34:58 | 2022-12-02 06:55:08 | 0:20:10 | 0:12:18 | 0:07:52 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
fail | 7097225 | 2022-11-29 17:29:21 | 2022-12-02 06:36:19 | 2022-12-02 07:02:53 | 0:26:34 | 0:18:42 | 0:07:52 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097226 | 2022-11-29 17:29:22 | 2022-12-02 06:37:59 | 2022-12-02 06:58:52 | 0:20:53 | 0:14:59 | 0:05:54 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi089 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 54c17834-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi089 /dev/vg_nvme/lv_4 --force' |
fail | 7097227 | 2022-11-29 17:29:23 | 2022-12-02 06:38:00 | 2022-12-02 06:59:00 | 0:21:00 | 0:13:40 | 0:07:20 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1ca428c0-720e-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi018:vg_nvme/lv_4' |
fail | 7097228 | 2022-11-29 17:29:24 | 2022-12-02 06:38:20 | 2022-12-02 06:56:06 | 0:17:46 | 0:08:22 | 0:09:24 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi038 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d2739c2c-720d-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi038:vg_nvme/lv_4' |
pass | 7097229 | 2022-11-29 17:29:25 | 2022-12-02 06:39:01 | 2022-12-02 07:05:42 | 0:26:41 | 0:20:50 | 0:05:51 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
fail | 7097230 | 2022-11-29 17:29:27 | 2022-12-02 06:39:01 | 2022-12-02 07:06:57 | 0:27:56 | 0:20:05 | 0:07:51 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097231 | 2022-11-29 17:29:28 | 2022-12-02 06:39:31 | 2022-12-02 07:01:21 | 0:21:50 | 0:11:53 | 0:09:57 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi161 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6503880e-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi161 /dev/vg_nvme/lv_4 --force' |
fail | 7097232 | 2022-11-29 17:29:29 | 2022-12-02 06:39:52 | 2022-12-02 07:05:28 | 0:25:36 | 0:17:51 | 0:07:45 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
pass | 7097233 | 2022-11-29 17:29:30 | 2022-12-02 06:40:12 | 2022-12-02 06:56:57 | 0:16:45 | 0:10:47 | 0:05:58 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
fail | 7097234 | 2022-11-29 17:29:32 | 2022-12-02 06:40:13 | 2022-12-02 06:57:54 | 0:17:41 | 0:10:51 | 0:06:50 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi148 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 21bba392-720e-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi148:vg_nvme/lv_4' |
fail | 7097235 | 2022-11-29 17:29:33 | 2022-12-02 06:40:23 | 2022-12-02 06:57:57 | 0:17:34 | 0:10:15 | 0:07:19 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 20d46ba8-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi120 /dev/nvme4n1 --force' |
fail | 7097236 | 2022-11-29 17:29:34 | 2022-12-02 06:41:04 | 2022-12-02 07:01:32 | 0:20:28 | 0:13:01 | 0:07:27 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi138 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c09ed88a-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi138 /dev/vg_nvme/lv_4 --force' |
fail | 7097237 | 2022-11-29 17:29:36 | 2022-12-02 06:41:44 | 2022-12-02 07:03:26 | 0:21:42 | 0:13:24 | 0:08:18 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi026 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c63a67dc-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi026 /dev/vg_nvme/lv_4 --force' |
dead | 7097238 | 2022-11-29 17:29:37 | 2022-12-02 06:41:55 | 2022-12-02 07:00:30 | 0:18:35 | | | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
fail | 7097239 | 2022-11-29 17:29:38 | 2022-12-02 06:42:05 | 2022-12-02 07:00:45 | 0:18:40 | 0:12:47 | 0:05:53 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason:
Command failed on smithi055 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aab60bec-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi055 /dev/vg_nvme/lv_4 --force' |
fail | 7097240 | 2022-11-29 17:29:39 | 2022-12-02 06:42:15 | 2022-12-02 07:04:42 | 0:22:27 | 0:13:42 | 0:08:45 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ee93232c-720e-11ed-843e-001a4aab830c -- ceph orch device zap smithi032 /dev/vg_nvme/lv_4 --force' |
fail | 7097241 | 2022-11-29 17:29:41 | 2022-12-02 06:43:36 | 2022-12-02 07:03:35 | 0:19:59 | 0:13:01 | 0:06:58 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0cdcba32-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi007 /dev/vg_nvme/lv_4 --force' |
fail | 7097242 | 2022-11-29 17:29:42 | 2022-12-02 06:44:07 | 2022-12-02 07:11:50 | 0:27:43 | 0:19:29 | 0:08:14 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097243 | 2022-11-29 17:29:43 | 2022-12-02 06:46:27 | 2022-12-02 07:10:03 | 0:23:36 | 0:15:37 | 0:07:59 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi064 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a2dff1ca-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi064 /dev/vg_nvme/lv_4 --force' |
fail | 7097244 | 2022-11-29 17:29:45 | 2022-12-02 06:47:08 | 2022-12-02 07:06:16 | 0:19:08 | 0:12:13 | 0:06:55 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi189 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5badb0bc-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi189 /dev/vg_nvme/lv_4 --force' |
fail | 7097245 | 2022-11-29 17:29:46 | 2022-12-02 06:47:08 | 2022-12-02 07:14:17 | 0:27:09 | 0:19:34 | 0:07:35 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
fail | 7097246 | 2022-11-29 17:29:47 | 2022-12-02 06:47:19 | 2022-12-02 07:04:29 | 0:17:10 | 0:10:11 | 0:06:59 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0d19f866-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi036 /dev/nvme4n1 --force' |
fail | 7097247 | 2022-11-29 17:29:49 | 2022-12-02 06:47:59 | 2022-12-02 07:07:21 | 0:19:22 | 0:12:46 | 0:06:36 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi112 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 88ccb458-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi112 /dev/vg_nvme/lv_4 --force' |
fail | 7097248 | 2022-11-29 17:29:50 | 2022-12-02 06:48:20 | 2022-12-02 07:10:03 | 0:21:43 | 0:14:56 | 0:06:47 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi081 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e198dc92-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi081 /dev/vg_nvme/lv_4 --force' |
dead | 7097249 | 2022-11-29 17:29:51 | 2022-12-02 06:49:10 | 2022-12-02 18:57:39 | 12:08:29 | | | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7097250 | 2022-11-29 17:29:52 | 2022-12-02 06:49:21 | 2022-12-02 07:07:58 | 0:18:37 | 0:12:29 | 0:06:08 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi044 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a440ccec-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi044 /dev/vg_nvme/lv_4 --force' |
fail | 7097251 | 2022-11-29 17:29:54 | 2022-12-02 06:49:22 | 2022-12-02 07:09:47 | 0:20:25 | 0:13:10 | 0:07:15 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi115 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d657ecc4-720f-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi115:vg_nvme/lv_4' |
pass | 7097252 | 2022-11-29 17:29:55 | 2022-12-02 06:50:52 | 2022-12-02 07:15:48 | 0:24:56 | 0:19:38 | 0:05:18 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_3.0} 2-node-mgr agent/on orchestrator_cli} | 2 | |
fail | 7097253 | 2022-11-29 17:29:56 | 2022-12-02 06:50:53 | 2022-12-02 07:07:23 | 0:16:30 | 0:09:49 | 0:06:41 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6979d8f6-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi017 /dev/vg_nvme/lv_4 --force'
fail | 7097254 | 2022-11-29 17:29:57 | 2022-12-02 06:50:53 | 2022-12-02 07:08:48 | 0:17:55 | 0:09:33 | 0:08:22 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi099 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9887a952-720f-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi099:vg_nvme/lv_4'
fail | 7097255 | 2022-11-29 17:29:59 | 2022-12-02 06:51:54 | 2022-12-02 07:13:03 | 0:21:09 | 0:11:08 | 0:10:01 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi033 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f412092a-720f-11ed-843e-001a4aab830c -- ceph orch device zap smithi033 /dev/vg_nvme/lv_4 --force'
fail | 7097256 | 2022-11-29 17:30:00 | 2022-12-02 06:51:54 | 2022-12-02 07:17:53 | 0:25:59 | 0:18:25 | 0:07:34 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097257 | 2022-11-29 17:30:01 | 2022-12-02 06:52:25 | 2022-12-02 07:17:54 | 0:25:29 | 0:16:36 | 0:08:53 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi088 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c3f318a0-7210-11ed-843e-001a4aab830c -- ceph orch device zap smithi088 /dev/vg_nvme/lv_4 --force'
fail | 7097258 | 2022-11-29 17:30:02 | 2022-12-02 06:54:36 | 2022-12-02 07:13:29 | 0:18:53 | 0:12:38 | 0:06:15 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi074 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5fa6d71a-7210-11ed-843e-001a4aab830c -- ceph orch device zap smithi074 /dev/vg_nvme/lv_4 --force'
fail | 7097259 | 2022-11-29 17:30:03 | 2022-12-02 06:54:36 | 2022-12-02 07:22:56 | 0:28:20 | 0:19:55 | 0:08:25 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097260 | 2022-11-29 17:30:05 | 2022-12-02 06:56:17 | 2022-12-02 07:16:08 | 0:19:51 | 0:11:37 | 0:08:14 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi102 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7ffec572-7210-11ed-843e-001a4aab830c -- ceph orch device zap smithi102 /dev/nvme4n1 --force'
fail | 7097261 | 2022-11-29 17:30:06 | 2022-12-02 06:57:08 | 2022-12-02 07:23:10 | 0:26:02 | 0:17:41 | 0:08:21 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097262 | 2022-11-29 17:30:07 | 2022-12-02 06:57:58 | 2022-12-02 07:19:29 | 0:21:31 | 0:15:13 | 0:06:18 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi148 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 30df6072-7211-11ed-843e-001a4aab830c -- ceph orch device zap smithi148 /dev/vg_nvme/lv_4 --force'
fail | 7097263 | 2022-11-29 17:30:08 | 2022-12-02 06:57:58 | 2022-12-02 07:17:56 | 0:19:58 | 0:12:53 | 0:07:05 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi039 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 08822970-7211-11ed-843e-001a4aab830c -- ceph orch device zap smithi039 /dev/vg_nvme/lv_4 --force'
pass | 7097264 | 2022-11-29 17:30:09 | 2022-12-02 06:58:39 | 2022-12-02 07:19:43 | 0:21:04 | 0:14:37 | 0:06:27 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7097265 | 2022-11-29 17:30:11 | 2022-12-02 06:58:59 | 2022-12-02 07:19:25 | 0:20:26 | 0:13:36 | 0:06:50 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8193ad8-7210-11ed-843e-001a4aab830c -- ceph orch device zap smithi018 /dev/vg_nvme/lv_4 --force'
dead | 7097266 | 2022-11-29 17:30:12 | 2022-12-02 06:59:10 | 2022-12-02 07:20:16 | 0:21:06 | | | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 |
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
pass | 7097267 | 2022-11-29 17:30:13 | 2022-12-02 07:00:50 | 2022-12-02 07:27:49 | 0:26:59 | 0:18:10 | 0:08:49 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
fail | 7097268 | 2022-11-29 17:30:15 | 2022-12-02 07:00:51 | 2022-12-02 07:28:33 | 0:27:42 | 0:19:55 | 0:07:47 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097269 | 2022-11-29 17:30:16 | 2022-12-02 07:01:31 | 2022-12-02 07:19:13 | 0:17:42 | 0:10:35 | 0:07:07 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi138 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0e4992b2-7211-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi138:vg_nvme/lv_4'
fail | 7097270 | 2022-11-29 17:30:17 | 2022-12-02 07:01:42 | 2022-12-02 07:23:56 | 0:22:14 | 0:15:21 | 0:06:53 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi062 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e18fea40-7211-11ed-843e-001a4aab830c -- ceph orch device zap smithi062 /dev/vg_nvme/lv_4 --force'
pass | 7097271 | 2022-11-29 17:30:18 | 2022-12-02 07:03:02 | 2022-12-02 07:17:54 | 0:14:52 | 0:08:51 | 0:06:01 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7097272 | 2022-11-29 17:30:19 | 2022-12-02 07:03:03 | 2022-12-02 07:31:10 | 0:28:07 | 0:20:00 | 0:08:07 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097273 | 2022-11-29 17:30:21 | 2022-12-02 07:03:33 | 2022-12-02 07:22:55 | 0:19:22 | 0:11:52 | 0:07:30 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 85ffd6d6-7211-11ed-843e-001a4aab830c -- ceph orch device zap smithi007 /dev/nvme4n1 --force'
fail | 7097274 | 2022-11-29 17:30:22 | 2022-12-02 07:03:44 | 2022-12-02 07:24:58 | 0:21:14 | 0:15:31 | 0:05:43 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0a335e00-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi036 /dev/vg_nvme/lv_4 --force'
fail | 7097275 | 2022-11-29 17:30:23 | 2022-12-02 07:04:34 | 2022-12-02 07:26:20 | 0:21:46 | 0:11:12 | 0:10:34 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d2999bda-7211-11ed-843e-001a4aab830c -- ceph orch device zap smithi032 /dev/vg_nvme/lv_4 --force'
fail | 7097276 | 2022-11-29 17:30:24 | 2022-12-02 07:04:45 | 2022-12-02 07:26:33 | 0:21:48 | 0:13:14 | 0:08:34 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi071 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e6aa10aa-7211-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi071:vg_nvme/lv_4'
fail | 7097277 | 2022-11-29 17:30:26 | 2022-12-02 07:05:35 | 2022-12-02 07:23:58 | 0:18:23 | 0:08:23 | 0:10:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c5892c44-7211-11ed-843e-001a4aab830c -- ceph orch daemon add osd smithi006:vg_nvme/lv_4'
fail | 7097278 | 2022-11-29 17:30:27 | 2022-12-02 07:06:26 | 2022-12-02 07:46:56 | 0:40:30 | 0:30:01 | 0:10:29 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097279 | 2022-11-29 17:30:28 | 2022-12-02 07:07:06 | 2022-12-02 07:26:00 | 0:18:54 | 0:12:35 | 0:06:19 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2cfd4054-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi017 /dev/vg_nvme/lv_4 --force'
fail | 7097280 | 2022-11-29 17:30:29 | 2022-12-02 07:07:27 | 2022-12-02 07:26:19 | 0:18:52 | 0:12:46 | 0:06:06 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi112 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2e97d9a6-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi112 /dev/vg_nvme/lv_4 --force'
fail | 7097281 | 2022-11-29 17:30:30 | 2022-12-02 07:07:27 | 2022-12-02 07:49:45 | 0:42:18 | 0:30:37 | 0:11:41 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097282 | 2022-11-29 17:30:32 | 2022-12-02 07:08:08 | 2022-12-02 07:26:36 | 0:18:28 | 0:13:04 | 0:05:24 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 44785462-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi002 /dev/vg_nvme/lv_4 --force'
fail | 7097283 | 2022-11-29 17:30:33 | 2022-12-02 07:08:08 | 2022-12-02 07:29:48 | 0:21:40 | 0:14:23 | 0:07:17 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi181 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9a197b6c-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi181 /dev/vg_nvme/lv_4 --force'
fail | 7097284 | 2022-11-29 17:30:34 | 2022-12-02 07:08:58 | 2022-12-02 07:34:43 | 0:25:45 | 0:17:55 | 0:07:50 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097285 | 2022-11-29 17:30:35 | 2022-12-02 07:09:49 | 2022-12-02 07:29:24 | 0:19:35 | 0:09:37 | 0:09:58 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi081 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 445057f0-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi081 /dev/nvme4n1 --force'
fail | 7097286 | 2022-11-29 17:30:36 | 2022-12-02 07:10:09 | 2022-12-02 07:31:29 | 0:21:20 | 0:11:22 | 0:09:58 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi064 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8e6d2fde-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi064 /dev/vg_nvme/lv_4 --force'
fail | 7097287 | 2022-11-29 17:30:37 | 2022-12-02 07:10:10 | 2022-12-02 07:33:23 | 0:23:13 | 0:14:51 | 0:08:22 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi082 with status 127: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 26b363ee-7213-11ed-843e-001a4aab830c -- ceph mon dump -f json'
fail | 7097288 | 2022-11-29 17:30:39 | 2022-12-02 07:12:00 | 2022-12-02 07:33:33 | 0:21:33 | 0:13:19 | 0:08:14 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi085 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ea316308-7212-11ed-843e-001a4aab830c -- ceph orch device zap smithi085 /dev/vg_nvme/lv_4 --force'
fail | 7097289 | 2022-11-29 17:30:40 | 2022-12-02 07:12:21 | 2022-12-02 07:33:15 | 0:20:54 | 0:14:27 | 0:06:27 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi099 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1cd3ab54-7213-11ed-843e-001a4aab830c -- ceph orch device zap smithi099 /dev/vg_nvme/lv_4 --force'
fail | 7097290 | 2022-11-29 17:30:41 | 2022-12-02 07:12:21 | 2022-12-02 07:37:34 | 0:25:13 | 0:17:56 | 0:07:17 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097291 | 2022-11-29 17:30:42 | 2022-12-02 07:12:32 | 2022-12-02 07:35:57 | 0:23:25 | 0:15:38 | 0:07:47 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4d59a990-7213-11ed-843e-001a4aab830c -- ceph orch device zap smithi033 /dev/vg_nvme/lv_4 --force'
fail | 7097292 | 2022-11-29 17:30:43 | 2022-12-02 07:13:12 | 2022-12-02 07:38:21 | 0:25:09 | 0:18:06 | 0:07:03 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7097293 | 2022-11-29 17:30:45 | 2022-12-02 07:13:32 | 2022-12-02 07:42:02 | 0:28:30 | 0:13:59 | 0:14:31 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi061 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:d7704198107e766cbf004d3a036310b592cf3044 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b8184b24-7213-11ed-843e-001a4aab830c -- ceph orch device zap smithi061 /dev/vg_nvme/lv_4 --force'