User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2023-01-11 19:37:47 | 2023-01-11 19:38:41 | 2023-01-13 03:20:47 | 1 day, 7:42:06 | orch:cephadm | wip-guits-testing-2023-01-11-1536 | smithi | ad68a47 | 8 | 83 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7133491 | 2023-01-11 19:37:52 | 2023-01-11 19:38:39 | 2023-01-11 20:03:35 | 0:24:56 | 0:18:43 | 0:06:13 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133492 | 2023-01-11 19:37:53 | 2023-01-11 19:38:39 | 2023-01-11 19:51:33 | 0:12:54 | 0:06:38 | 0:06:16 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi084 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 7133493 | 2023-01-11 19:37:54 | 2023-01-11 19:38:39 | 2023-01-12 12:41:23 | 17:02:44 | 0:15:20 | 16:47:24 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: [Errno 51] Network is unreachable
fail | 7133494 | 2023-01-11 19:37:55 | 2023-01-11 19:38:39 | 2023-01-11 20:04:57 | 0:26:18 | 0:18:41 | 0:07:37 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133495 | 2023-01-11 19:37:56 | 2023-01-11 19:38:40 | 2023-01-11 20:01:31 | 0:22:51 | 0:13:58 | 0:08:53 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi115 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 12b82e14-91ea-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi115:vg_nvme/lv_4'
fail | 7133496 | 2023-01-11 19:37:57 | 2023-01-11 19:38:40 | 2023-01-11 19:57:34 | 0:18:54 | 0:11:09 | 0:07:45 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi062 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b729f910-91e9-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi062:vg_nvme/lv_4'
pass | 7133497 | 2023-01-11 19:37:58 | 2023-01-11 19:38:40 | 2023-01-11 20:03:38 | 0:24:58 | 0:16:59 | 0:07:59 | smithi | main | centos | 8.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools_crun} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 7133498 | 2023-01-11 19:37:59 | 2023-01-11 19:38:41 | 2023-01-11 19:57:28 | 0:18:47 | 0:10:50 | 0:07:57 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
Failure Reason: Command failed on smithi100 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid afa19194-91e9-11ed-821a-001a4aab830c -- ceph orch device zap smithi100 /dev/vg_nvme/lv_4 --force'
fail | 7133499 | 2023-01-11 19:38:00 | 2023-01-11 19:38:41 | 2023-01-13 03:20:47 | 1 day, 7:42:06 | 14:41:08 | 17:00:58 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3f89634e-9277-11ed-821b-001a4aab830c -- ceph orch device zap smithi007 /dev/vg_nvme/lv_4 --force'
fail | 7133500 | 2023-01-11 19:38:01 | 2023-01-11 19:38:41 | 2023-01-11 19:57:17 | 0:18:36 | 0:10:26 | 0:08:10 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi124 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a59d4d96-91e9-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi124:vg_nvme/lv_4'
fail | 7133501 | 2023-01-11 19:38:02 | 2023-01-11 19:38:41 | 2023-01-11 19:59:39 | 0:20:58 | 0:15:06 | 0:05:52 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi188 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 00f4e938-91ea-11ed-821a-001a4aab830c -- ceph orch device zap smithi188 /dev/vg_nvme/lv_4 --force'
fail | 7133502 | 2023-01-11 19:38:04 | 2023-01-11 19:38:42 | 2023-01-11 19:59:46 | 0:21:04 | 0:14:29 | 0:06:35 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi182 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0075994e-91ea-11ed-821a-001a4aab830c -- ceph orch device zap smithi182 /dev/vg_nvme/lv_4 --force'
dead | 7133503 | 2023-01-11 19:38:05 | 2023-01-11 19:38:42 | 2023-01-11 19:43:50 | 0:05:08 | | | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 |
Failure Reason: Error reimaging machines: 'NoneType' object has no attribute '_fields'
fail | 7133504 | 2023-01-11 19:38:06 | 2023-01-11 19:38:42 | 2023-01-11 19:57:09 | 0:18:27 | 0:11:07 | 0:07:20 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a62cad6a-91e9-11ed-821a-001a4aab830c -- ceph orch device zap smithi027 /dev/nvme4n1 --force'
fail | 7133505 | 2023-01-11 19:38:07 | 2023-01-11 19:38:43 | 2023-01-11 20:03:57 | 0:25:14 | 0:15:30 | 0:09:44 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a8f4b69a-91ea-11ed-821a-001a4aab830c -- ceph orch device zap smithi042 /dev/vg_nvme/lv_4 --force'
fail | 7133506 | 2023-01-11 19:38:08 | 2023-01-11 19:42:43 | 2023-01-11 20:05:45 | 0:23:02 | 0:11:24 | 0:11:38 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi077 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b8cca98-91ea-11ed-821a-001a4aab830c -- ceph orch device zap smithi077 /dev/vg_nvme/lv_4 --force'
fail | 7133507 | 2023-01-11 19:38:09 | 2023-01-11 19:44:04 | 2023-01-11 20:10:29 | 0:26:25 | 0:10:41 | 0:15:44 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi121 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c77a7ac-91eb-11ed-821a-001a4aab830c -- ceph orch device zap smithi121 /dev/vg_nvme/lv_4 --force'
fail | 7133508 | 2023-01-11 19:38:10 | 2023-01-11 19:48:45 | 2023-01-11 20:16:55 | 0:28:10 | 0:19:05 | 0:09:05 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133509 | 2023-01-11 19:38:11 | 2023-01-11 19:52:15 | 2023-01-11 20:15:11 | 0:22:56 | 0:13:38 | 0:09:18 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 05abc30a-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi084 /dev/vg_nvme/lv_4 --force'
fail | 7133510 | 2023-01-11 19:38:12 | 2023-01-11 19:54:06 | 2023-01-11 20:19:34 | 0:25:28 | 0:18:48 | 0:06:40 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133511 | 2023-01-11 19:38:13 | 2023-01-11 19:54:16 | 2023-01-11 20:17:37 | 0:23:21 | 0:14:23 | 0:08:58 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 67f19238-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi017 /dev/vg_nvme/lv_4 --force'
fail | 7133512 | 2023-01-11 19:38:14 | 2023-01-11 19:56:17 | 2023-01-11 20:21:53 | 0:25:36 | 0:15:25 | 0:10:11 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c435520a-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi027 /dev/vg_nvme/lv_4 --force'
fail | 7133513 | 2023-01-11 19:38:16 | 2023-01-11 19:57:28 | 2023-01-11 20:19:10 | 0:21:42 | 0:13:55 | 0:07:47 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi062 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a24fcbb6-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi062 /dev/vg_nvme/lv_4 --force'
fail | 7133514 | 2023-01-11 19:38:17 | 2023-01-11 19:57:38 | 2023-01-11 20:39:43 | 0:42:05 | 0:31:20 | 0:10:45 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133515 | 2023-01-11 19:38:18 | 2023-01-11 19:57:38 | 2023-01-11 20:17:23 | 0:19:45 | 0:09:33 | 0:10:12 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi174 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2d504520-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi174 /dev/nvme4n1 --force'
fail | 7133516 | 2023-01-11 19:38:19 | 2023-01-11 19:57:38 | 2023-01-11 20:20:53 | 0:23:15 | 0:11:58 | 0:11:17 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi005 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adbfd3f6-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi005 /dev/vg_nvme/lv_4 --force'
fail | 7133517 | 2023-01-11 19:38:20 | 2023-01-11 19:58:59 | 2023-01-11 20:21:43 | 0:22:44 | 0:15:47 | 0:06:57 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d5b40dfa-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi007 /dev/vg_nvme/lv_4 --force'
pass | 7133518 | 2023-01-11 19:38:21 | 2023-01-11 19:59:09 | 2023-01-11 20:20:06 | 0:20:57 | 0:13:29 | 0:07:28 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
fail | 7133519 | 2023-01-11 19:38:22 | 2023-01-11 19:59:09 | 2023-01-11 20:25:55 | 0:26:46 | 0:18:58 | 0:07:48 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133520 | 2023-01-11 19:38:23 | 2023-01-11 19:59:30 | 2023-01-11 20:20:37 | 0:21:07 | 0:15:05 | 0:06:02 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi188 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f431a800-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi188 /dev/vg_nvme/lv_4 --force'
fail | 7133521 | 2023-01-11 19:38:24 | 2023-01-11 19:59:40 | 2023-01-11 20:21:57 | 0:22:17 | 0:14:11 | 0:08:06 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi026 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f78ee454-91ec-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi026:vg_nvme/lv_4'
fail | 7133522 | 2023-01-11 19:38:25 | 2023-01-11 20:01:01 | 2023-01-11 20:19:24 | 0:18:23 | 0:08:37 | 0:09:46 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b56ffb08-91ec-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi032:vg_nvme/lv_4'
pass | 7133523 | 2023-01-11 19:38:26 | 2023-01-11 20:01:31 | 2023-01-11 20:25:54 | 0:24:23 | 0:19:44 | 0:04:39 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
dead | 7133524 | 2023-01-11 19:38:27 | 2023-01-11 20:01:31 | 2023-01-12 08:08:45 | 12:07:14 | | | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
Failure Reason: hit max job timeout
fail | 7133525 | 2023-01-11 19:38:28 | 2023-01-11 20:01:32 | 2023-01-11 20:24:17 | 0:22:45 | 0:11:33 | 0:11:12 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi110 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 263fbada-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi110 /dev/vg_nvme/lv_4 --force'
fail | 7133526 | 2023-01-11 19:38:29 | 2023-01-11 20:02:12 | 2023-01-11 20:27:40 | 0:25:28 | 0:18:44 | 0:06:44 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7133527 | 2023-01-11 19:38:30 | 2023-01-11 20:02:42 | 2023-01-11 20:19:42 | 0:17:00 | 0:10:05 | 0:06:55 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
fail | 7133528 | 2023-01-11 19:38:31 | 2023-01-11 20:02:53 | 2023-01-11 20:20:56 | 0:18:03 | 0:10:44 | 0:07:19 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi165 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f98b770e-91ec-11ed-821a-001a4aab830c -- ceph mon dump -f json'
fail | 7133529 | 2023-01-11 19:38:33 | 2023-01-11 20:03:43 | 2023-01-11 20:20:30 | 0:16:47 | 0:10:57 | 0:05:50 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi105 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ebf0e890-91ec-11ed-821a-001a4aab830c -- ceph orch device zap smithi105 /dev/nvme4n1 --force'
fail | 7133530 | 2023-01-11 19:38:34 | 2023-01-11 20:03:43 | 2023-01-11 20:25:41 | 0:21:58 | 0:14:38 | 0:07:20 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 895448e8-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi042 /dev/vg_nvme/lv_4 --force'
fail | 7133531 | 2023-01-11 19:38:35 | 2023-01-11 20:04:04 | 2023-01-11 20:25:49 | 0:21:45 | 0:14:10 | 0:07:35 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi107 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 85186214-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi107 /dev/vg_nvme/lv_4 --force'
fail | 7133532 | 2023-01-11 19:38:36 | 2023-01-11 20:04:34 | 2023-01-11 20:30:02 | 0:25:28 | 0:19:07 | 0:06:21 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133533 | 2023-01-11 19:38:37 | 2023-01-11 20:05:04 | 2023-01-11 20:25:35 | 0:20:31 | 0:14:13 | 0:06:18 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 88625934-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi002 /dev/vg_nvme/lv_4 --force'
fail | 7133534 | 2023-01-11 19:38:38 | 2023-01-11 20:05:05 | 2023-01-11 20:26:20 | 0:21:15 | 0:15:08 | 0:06:07 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 949107c8-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi073 /dev/vg_nvme/lv_4 --force'
fail | 7133535 | 2023-01-11 19:38:39 | 2023-01-11 20:05:25 | 2023-01-11 20:26:45 | 0:21:20 | 0:13:53 | 0:07:27 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi144 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a5ed3744-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi144 /dev/vg_nvme/lv_4 --force'
fail | 7133536 | 2023-01-11 19:38:40 | 2023-01-11 20:30:47 | | 1104 | | | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133537 | 2023-01-11 19:38:41 | 2023-01-11 20:05:56 | 2023-01-11 20:27:24 | 0:21:28 | 0:14:46 | 0:06:42 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi112 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d4143960-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi112 /dev/vg_nvme/lv_4 --force'
fail | 7133538 | 2023-01-11 19:38:42 | 2023-01-11 20:29:28 | | 798 | | | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} | 1 |
Failure Reason: Command failed on smithi040 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0822a9d0-91ee-11ed-821a-001a4aab830c -- ceph orch device zap smithi040 /dev/vg_nvme/lv_4 --force'
fail | 7133539 | 2023-01-11 19:38:43 | 2023-01-11 20:07:16 | 2023-01-11 20:33:10 | 0:25:54 | 0:19:24 | 0:06:30 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133540 | 2023-01-11 19:38:44 | 2023-01-11 20:08:07 | 2023-01-11 20:26:11 | 0:18:04 | 0:10:46 | 0:07:18 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi083 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b3d4c8ea-91ed-11ed-821a-001a4aab830c -- ceph orch device zap smithi083 /dev/nvme4n1 --force'
fail | 7133541 | 2023-01-11 19:38:46 | 2023-01-11 20:09:07 | 2023-01-11 20:32:00 | 0:22:53 | 0:13:37 | 0:09:16 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 62013f8e-91ee-11ed-821a-001a4aab830c -- ceph orch device zap smithi033 /dev/vg_nvme/lv_4 --force'
fail | 7133542 | 2023-01-11 19:38:47 | 2023-01-11 20:10:28 | 2023-01-11 20:31:35 | 0:21:07 | 0:15:09 | 0:05:58 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi035 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7a622c3c-91ee-11ed-821a-001a4aab830c -- ceph orch device zap smithi035 /dev/vg_nvme/lv_4 --force'
fail | 7133543 | 2023-01-11 19:38:48 | 2023-01-11 20:10:38 | 2023-01-11 20:56:34 | 0:45:56 | 0:30:40 | 0:15:16 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133544 | 2023-01-11 19:38:49 | 2023-01-11 20:14:29 | 2023-01-11 20:34:15 | 0:19:46 | 0:12:44 | 0:07:02 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi181 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid eda867c4-91ee-11ed-821a-001a4aab830c -- ceph orch device zap smithi181 /dev/vg_nvme/lv_4 --force'
fail | 7133545 | 2023-01-11 19:38:50 | 2023-01-11 20:15:19 | 2023-01-11 20:38:05 | 0:22:46 | 0:14:14 | 0:08:32 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi016 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 370f8afa-91ef-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi016:vg_nvme/lv_4'
pass | 7133546 | 2023-01-11 19:38:51 | 2023-01-11 20:17:00 | 2023-01-11 20:40:41 | 0:23:41 | 0:13:18 | 0:10:23 | smithi | main | ubuntu | 20.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr agent/on orchestrator_cli} | 2 | |
fail | 7133547 | 2023-01-11 19:38:52 | 2023-01-11 20:17:00 | 2023-01-11 20:33:52 | 0:16:52 | 0:10:22 | 0:06:30 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid be1f1782-91ee-11ed-821a-001a4aab830c -- ceph orch device zap smithi084 /dev/vg_nvme/lv_4 --force'
fail | 7133548 | 2023-01-11 19:38:53 | 2023-01-11 20:17:00 | 2023-01-11 20:34:27 | 0:17:27 | 0:10:21 | 0:07:06 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi174 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d50c6ff8-91ee-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi174:vg_nvme/lv_4'
fail | 7133549 | 2023-01-11 19:38:54 | 2023-01-11 20:17:31 | 2023-01-11 20:39:57 | 0:22:26 | 0:11:22 | 0:11:04 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi017 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 517acdd2-91ef-11ed-821a-001a4aab830c -- ceph orch device zap smithi017 /dev/vg_nvme/lv_4 --force'
fail | 7133550 | 2023-01-11 19:38:55 | 2023-01-11 20:17:41 | 2023-01-11 20:46:50 | 0:29:09 | 0:19:17 | 0:09:52 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133551 | 2023-01-11 19:38:56 | 2023-01-11 20:19:11 | 2023-01-11 20:45:06 | 0:25:55 | 0:17:41 | 0:08:14 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e040961e-91ef-11ed-821a-001a4aab830c -- ceph orch device zap smithi032 /dev/vg_nvme/lv_4 --force'
fail | 7133552 | 2023-01-11 19:38:57 | 2023-01-11 20:19:42 | 2023-01-11 20:41:41 | 0:21:59 | 0:13:40 | 0:08:19 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi100 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ba549590-91ef-11ed-821a-001a4aab830c -- ceph orch device zap smithi100 /dev/vg_nvme/lv_4 --force'
fail | 7133553 | 2023-01-11 19:38:59 | 2023-01-11 20:20:12 | 2023-01-11 20:45:24 | 0:25:12 | 0:18:34 | 0:06:38 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133554 | 2023-01-11 19:39:00 | 2023-01-11 20:20:33 | 2023-01-11 20:37:39 | 0:17:06 | 0:10:51 | 0:06:15 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi188 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3eade478-91ef-11ed-821a-001a4aab830c -- ceph orch device zap smithi188 /dev/nvme4n1 --force'
dead | 7133555 | 2023-01-11 19:39:01 | 2023-01-11 20:20:43 | 2023-01-12 08:28:46 | 12:08:03 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |||
Failure Reason: hit max job timeout
dead | 7133556 | 2023-01-11 19:39:02 | 2023-01-11 20:21:03 | 2023-01-11 20:33:21 | 0:12:18 | 0:07:07 | 0:05:11 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
{'smithi005.front.sepia.ceph.com': {'_ansible_no_log': False, 'attempts': 12, 'changed': True, 'cmd': ['subscription-manager', 'register', '--activationkey=testnode', '--org=Ceph', '--name=smithi005.front.sepia.ceph.com', '--force'], 'delta': '0:00:00.729726', 'end': '2023-01-11 20:31:14.532836', 'failed_when_result': True, 'invocation': {'module_args': {'_raw_params': 'subscription-manager register --activationkey=testnode --org=Ceph --name=smithi005.front.sepia.ceph.com --force', '_uses_shell': False, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 70, 'start': '2023-01-11 20:31:13.803110', 'stderr': 'The DMI UUID of this host (00000000-0000-0000-0000-0CC47A6BFDDE) matches other registered hosts: smithi005 (HTTP error code 422: Unprocessable Entity)', 'stderr_lines': ['The DMI UUID of this host (00000000-0000-0000-0000-0CC47A6BFDDE) matches other registered hosts: smithi005 (HTTP error code 422: Unprocessable Entity)'], 'stdout': '', 'stdout_lines': []}}
fail | 7133557 | 2023-01-11 19:39:03 | 2023-01-11 20:21:03 | 2023-01-11 20:42:27 | 0:21:24 | 0:14:34 | 0:06:50 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi007 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e0040082-91ef-11ed-821a-001a4aab830c -- ceph orch device zap smithi007 /dev/vg_nvme/lv_4 --force'
pass | 7133558 | 2023-01-11 19:39:04 | 2023-01-11 20:21:44 | 2023-01-11 20:42:21 | 0:20:37 | 0:14:19 | 0:06:18 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7133559 | 2023-01-11 19:39:05 | 2023-01-11 20:21:54 | 2023-01-11 20:42:47 | 0:20:53 | 0:14:49 | 0:06:04 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi145 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e60ab7dc-91ef-11ed-821a-001a4aab830c -- ceph orch device zap smithi145 /dev/vg_nvme/lv_4 --force'
dead | 7133560 | 2023-01-11 19:39:06 | 2023-01-11 20:21:54 | 2023-01-11 20:35:23 | 0:13:29 | 0:06:59 | 0:06:30 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
{'smithi027.front.sepia.ceph.com': {'_ansible_no_log': False, 'attempts': 12, 'changed': True, 'cmd': ['subscription-manager', 'register', '--activationkey=testnode', '--org=Ceph', '--name=smithi027.front.sepia.ceph.com', '--force'], 'delta': '0:00:00.699913', 'end': '2023-01-11 20:33:16.603254', 'failed_when_result': True, 'invocation': {'module_args': {'_raw_params': 'subscription-manager register --activationkey=testnode --org=Ceph --name=smithi027.front.sepia.ceph.com --force', '_uses_shell': False, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 70, 'start': '2023-01-11 20:33:15.903341', 'stderr': 'The DMI UUID of this host (00000000-0000-0000-0000-0CC47A6BFEC8) matches other registered hosts: smithi027 (HTTP error code 422: Unprocessable Entity)', 'stderr_lines': ['The DMI UUID of this host (00000000-0000-0000-0000-0CC47A6BFEC8) matches other registered hosts: smithi027 (HTTP error code 422: Unprocessable Entity)'], 'stdout': '', 'stdout_lines': []}}
pass | 7133561 | 2023-01-11 19:39:07 | 2023-01-11 20:21:55 | 2023-01-11 20:51:56 | 0:30:01 | 0:18:55 | 0:11:06 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
fail | 7133562 | 2023-01-11 19:39:08 | 2023-01-11 20:22:05 | 2023-01-11 20:48:48 | 0:26:43 | 0:18:48 | 0:07:55 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133563 | 2023-01-11 19:39:10 | 2023-01-11 20:24:26 | 2023-01-11 20:43:53 | 0:19:27 | 0:11:31 | 0:07:56 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ead56e10-91ef-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi002:vg_nvme/lv_4'
fail | 7133564 | 2023-01-11 19:39:11 | 2023-01-11 20:25:36 | 2023-01-11 20:46:47 | 0:21:11 | 0:15:33 | 0:05:38 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi042 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a54450a4-91f0-11ed-821a-001a4aab830c -- ceph orch device zap smithi042 /dev/vg_nvme/lv_4 --force'
pass | 7133565 | 2023-01-11 19:39:12 | 2023-01-11 20:25:46 | 2023-01-11 20:44:14 | 0:18:28 | 0:09:40 | 0:08:48 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7133566 | 2023-01-11 19:39:13 | 2023-01-11 20:25:57 | 2023-01-11 20:50:56 | 0:24:59 | 0:18:43 | 0:06:16 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133567 | 2023-01-11 19:39:14 | 2023-01-11 20:25:57 | 2023-01-11 20:42:54 | 0:16:57 | 0:10:54 | 0:06:03 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi107 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 087c6ce8-91f0-11ed-821a-001a4aab830c -- ceph orch device zap smithi107 /dev/nvme4n1 --force'
fail | 7133568 | 2023-01-11 19:39:15 | 2023-01-11 20:25:57 | 2023-01-11 20:47:10 | 0:21:13 | 0:14:59 | 0:06:14 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi083 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a18aacba-91f0-11ed-821a-001a4aab830c -- ceph orch device zap smithi083 /dev/vg_nvme/lv_4 --force'
fail | 7133569 | 2023-01-11 19:39:16 | 2023-01-11 20:26:18 | 2023-01-11 20:48:09 | 0:21:51 | 0:11:27 | 0:10:24 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 784e7160-91f0-11ed-821a-001a4aab830c -- ceph orch device zap smithi073 /dev/vg_nvme/lv_4 --force'
fail | 7133570 | 2023-01-11 19:39:17 | 2023-01-11 20:26:28 | 2023-01-11 20:47:56 | 0:21:28 | 0:14:07 | 0:07:21 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi144 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9535bbd0-91f0-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi144:vg_nvme/lv_4'
fail | 7133571 | 2023-01-11 19:39:18 | 2023-01-11 20:26:48 | 2023-01-11 20:47:26 | 0:20:38 | 0:08:32 | 0:12:06 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi112 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5ab7f78e-91f0-11ed-821a-001a4aab830c -- ceph orch daemon add osd smithi112:vg_nvme/lv_4'
fail | 7133572 | 2023-01-11 19:39:19 | 2023-01-11 20:27:29 | 2023-01-11 21:09:35 | 0:42:06 | 0:31:44 | 0:10:22 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133573 | 2023-01-11 19:39:20 | 2023-01-11 20:27:49 | 2023-01-11 20:49:58 | 0:22:09 | 0:13:16 | 0:08:53 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi040 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e75ba730-91f0-11ed-821a-001a4aab830c -- ceph orch device zap smithi040 /dev/vg_nvme/lv_4 --force'
fail | 7133574 | 2023-01-11 19:39:21 | 2023-01-11 20:29:30 | 2023-01-11 20:51:39 | 0:22:09 | 0:13:45 | 0:08:24 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi111 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1b5d8800-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi111 /dev/vg_nvme/lv_4 --force'
fail | 7133575 | 2023-01-11 19:39:22 | 2023-01-11 20:30:10 | 2023-01-11 21:10:31 | 0:40:21 | 0:30:15 | 0:10:06 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133576 | 2023-01-11 19:39:23 | 2023-01-11 20:30:51 | 2023-01-11 20:52:32 | 0:21:41 | 0:13:43 | 0:07:58 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi035 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 43097fe4-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi035 /dev/vg_nvme/lv_4 --force'
dead | 7133577 | 2023-01-11 19:39:24 | 2023-01-11 20:31:41 | 2023-01-11 20:43:01 | 0:11:20 | 0:05:29 | 0:05:51 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
{'smithi157.front.sepia.ceph.com': {'_ansible_no_log': False, 'attempts': 12, 'changed': True, 'cmd': ['subscription-manager', 'register', '--activationkey=testnode', '--org=Ceph', '--name=smithi157', '--force'], 'delta': '0:00:00.659698', 'end': '2023-01-11 20:42:43.139507', 'failed_when_result': True, 'invocation': {'module_args': {'_raw_params': 'subscription-manager register --activationkey=testnode --org=Ceph --name=smithi157 --force', '_uses_shell': False, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 70, 'start': '2023-01-11 20:42:42.479809', 'stderr': 'The DMI UUID of this host (00000000-0000-0000-0000-0CC47AD93798) matches other registered hosts: smithi157.front.sepia.ceph.com (HTTP error code 422: Unprocessable Entity)', 'stderr_lines': ['The DMI UUID of this host (00000000-0000-0000-0000-0CC47AD93798) matches other registered hosts: smithi157.front.sepia.ceph.com (HTTP error code 422: Unprocessable Entity)'], 'stdout': '', 'stdout_lines': []}}
fail | 7133578 | 2023-01-11 19:39:26 | 2023-01-11 20:32:01 | 2023-01-11 20:58:04 | 0:26:03 | 0:18:33 | 0:07:30 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133579 | 2023-01-11 19:39:27 | 2023-01-11 20:33:12 | 2023-01-11 20:53:15 | 0:20:03 | 0:09:28 | 0:10:35 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi005 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 29655770-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi005 /dev/nvme4n1 --force'
fail | 7133580 | 2023-01-11 19:39:28 | 2023-01-11 20:33:22 | 2023-01-11 20:55:39 | 0:22:17 | 0:11:36 | 0:10:41 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 82aec794-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi033 /dev/vg_nvme/lv_4 --force'
fail | 7133581 | 2023-01-11 19:39:29 | 2023-01-11 20:34:02 | 2023-01-11 20:55:23 | 0:21:21 | 0:14:47 | 0:06:34 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi174 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b92f43a2-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi174 /dev/vg_nvme/lv_4 --force'
fail | 7133582 | 2023-01-11 19:39:30 | 2023-01-11 20:34:33 | 2023-01-11 20:57:35 | 0:23:02 | 0:14:35 | 0:08:27 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f4067fea-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi027 /dev/vg_nvme/lv_4 --force'
fail | 7133583 | 2023-01-11 19:39:31 | 2023-01-11 20:35:33 | 2023-01-11 20:55:55 | 0:20:22 | 0:14:06 | 0:06:16 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi181 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cefb5428-91f1-11ed-821a-001a4aab830c -- ceph orch device zap smithi181 /dev/vg_nvme/lv_4 --force'
fail | 7133584 | 2023-01-11 19:39:32 | 2023-01-11 20:35:34 | 2023-01-11 21:02:46 | 0:27:12 | 0:18:33 | 0:08:39 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133585 | 2023-01-11 19:39:33 | 2023-01-11 20:37:44 | 2023-01-11 20:58:46 | 0:21:02 | 0:15:28 | 0:05:34 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi016 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4bc843ee-91f2-11ed-821a-001a4aab830c -- ceph orch device zap smithi016 /dev/vg_nvme/lv_4 --force'
fail | 7133586 | 2023-01-11 19:39:34 | 2023-01-11 20:38:15 | 2023-01-11 21:54:49 | 1:16:34 | 1:08:20 | 0:08:14 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7133587 | 2023-01-11 19:39:35 | 2023-01-11 20:39:45 | 2023-01-11 21:05:04 | 0:25:19 | 0:14:05 | 0:11:14 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad68a4757502b00c0e1571df00a19b1ec13969fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b943be30-91f2-11ed-821a-001a4aab830c -- ceph orch device zap smithi006 /dev/vg_nvme/lv_4 --force'